
Dynamic simulation modelling for lean logistics.

Glossary of terms

AINV      Actual inventory level
APIOBPCS  Automatic pipeline, inventory and order based production control system
AVCON     Average consumption
COMRATE   Completion rate
CONS      Demand consumption
CSL       Customer service level
CUSUM     Cumulative sum of errors
DINV      Desired inventory level
DSS       Decision support system
EDI       Electronic data interchange
EINV      Error in inventory
EWIP      Error in WIP
GA        Genetic algorithm
ITAE      Integral of time multiplied by the absolute error
MRI       Minimum reasonable inventory
ORATE     Order rate
Ta        Time to average consumption
Ti        Time to adjust inventory
Tw        Time to adjust WIP
WIP       Work in progress

Introduction

The Law of Industrial Dynamics was defined by Burbidge[1] as follows: "If demand is transmitted along a series of inventories using stock control ordering, then the amplitude of demand variation will increase with each transfer". The Law of Industrial Dynamics results in excessive inventory, production, labour, capacity and learning curve costs, owing to unnecessary fluctuations in perceived demand. A contributory cause of these on-costs is the time lag between the initiation of an action and its consequence. This cannot be avoided, as it always takes time to produce and distribute goods. However, the effect is often made much worse by poor decision making by production schedulers and distribution managers, who do not understand the nature of the time lags between the ordering of goods and their receipt into stock. This has been amply demonstrated in work by Sterman[2], where a simplified model of a beer production/distribution system was used to convince senior executives that they did not fully understand the concept of the supply chain, especially the effect of system structure on system behaviour.

The objective of an ordering system is to buffer production via the provision of minimum reasonable inventory (MRI)[3], thus providing high customer service levels (CSL) coupled with high stock turnover. However, as shown in Figure 1, a poorly designed system can actually make the situation much worse, since the demand pattern may be amplified with a resulting stockout. To minimize this possibility it is essential to select the appropriate structure for the production ordering system, and then to set the system parameters at their "best" values.

This paper utilizes a well-established production control system model which operates on a knowledge of customer demand, inventory level and unfilled orders. Such a model has been shown both industrially and theoretically to provide a sound basis for an acceptable trade-off between production smoothing and a high level of stock turnover[4]. Here the "best" controller settings are selected via a genetic algorithm (GA) approach in which a directed choice is based on the simulated response to a range of operating scenarios.

The role of simulation in system design

This paper exploits a method of problem solving that has been extended considerably by recent advances in technology. Whenever faced with a problem, most people develop a solution by building a mental picture, or model, of the situation[5], on which various hypotheses are tested in an abstract or idealized manner. Such hypothesis testing on a model is known as simulation. It saves time, as many hypotheses can be simulated more quickly than they could be tested by physical experimentation, and it allows the best route from a crisis to recovery to be determined.

For a model to mimic real-world supply chains completely, a large and complex model has to be built in our minds. This is beyond the capability of most people, but the use of computers can help considerably. In particular, a computer simulation can be built up as a series of building blocks, where each step is of manageable and understandable proportions. This allows decision makers to learn about system structure (how things couple together and the effects of their interaction) and the contribution of individual elements to total system behaviour. When testing such models for their dynamic properties, the conclusions drawn from the test outputs are independent of the input. Dynamic behaviour as discussed in this paper is a function of the structure of the system. Therefore, suitable test inputs are utilized that allow valid comparisons, or benchmarks, to be drawn. Typically, step input signals are utilized, from which inferences can be made about other test responses which are arguably more representative of real life.

Importantly, with simulation a case based reasoning approach may be adopted. An a priori knowledge based on system structures, and their associated test responses, may be developed based on previous problem-solving experiences. Thus, when a new challenge is faced, the specific system requirements are developed and compared with the knowledge base. An existing model may then be utilized, or adapted and added to the knowledge base.

When a model is completed the computer can be used to concentrate on the vast amount of detail, leaving the decision maker to concentrate on the higher inferences and conclusions to be drawn from the simulation output. Thus, it is possible to model a total system that is beyond an individual's capacity. Additionally this exercise is repeatable many times over in search of the best solution.

The purpose of inventory

Consider the following arguments: the primary purpose of inventory holdings is to buffer the customer from time lags and thus to offer greater customer service levels, as goods can be sold straight off the shelf. A stockout is undesirable, as custom will be diverted elsewhere and market share lost. Therefore the inventory has to be large enough to cover the lead time.

The secondary purpose is to buffer the production system from the customer by absorbing the high-frequency content of demand in the inventory, allowing the production system to have a level schedule. This is important, as production on-costs are proportional to the cube of the production rate variation[6]. The production facility should also be asked to operate within the capacity available and, hence, approximate a linear, and thus predictable, response. A non-linear response is undesirable as it results in the double accounting phenomenon. This is caused when excess orders are placed on a production backlog while also being shipped from inventory. Such orders are therefore accounted for in both the inventory signal and the order backlog. Dynamic problems then arise when there is capacity in the plant to reduce the backlog, as the backlog is effectively a false order already accounted for in the inventory levels[7].

There has been a strong trend in the last decade to reduce these inventory holdings as Japanese philosophies such as lean manufacturing have gained in popularity. However, even with a production lead time of zero, inventory is still needed to cover the distribution time, which can often be longer than the production time. This was the case at Toyota[8]. Accepting that some inventory will be maintained in the supply chain, there is a need to ensure that a minimum reasonable inventory (MRI) is held in order to buffer production, maintain CSL and reduce stockholding costs[3].

Build up of the production control system

It can be appreciated that the amount of inventory holding needed to satisfy a CSL is dependent on the decision rule that is used to replenish the inventory holding, as well as on the uncertainties in both demand and lead times. Likewise, the production on-costs are also affected by the chosen decision rule. An example of an input/output representation of an actual human scheduler with overall responsibility for this function is shown in Figure 2, which is taken from Olsmats et al.[9]. The wide-ranging sources of information and the external and internal pressures influencing his decision making are self-evident from the diagram.

The next step is to consider the relative impact of the demand, pipeline and inventory policies on the production control system dynamics.

Demand policy

Current demand is important because, if it is omitted from the scheduling algorithm, it can easily be shown mathematically and experimentally that there is a continuing freefall in inventory levels following a ramp input, and a permanent inventory deficit following a step increase in demand, which is typical of many stock replenishment systems. It can also be seen that using raw demand for scheduling, without some form of averaging, results in excessive fluctuations in production rates, which, as has already been argued, are supposed to be absorbed by the inventory buffers. There is therefore a need to utilize an averaged measure of current market demand in the proposed scheduling algorithm. This leads to the question: how is demand smoothed? There are various methods of smoothing. For example, a simple moving average may be utilized; the smoothed demand is then a function of the last prescribed number of time units. It is often argued that the most recent demands are more indicative of the true demand, so a weighted moving average may be used that gives greater emphasis to the most recent figures.

However, both of these methods require the storage of past demands which, if a company has 1,000 products and the demand is averaged over the last 20 time periods, would require the storage of 20,000 demand figures. A variation of the weighted moving average that requires the storage of only the previous average, called exponential smoothing, overcomes this problem. It is a function of all previously calculated averages and is weighted towards the most recent. Exponential smoothing also has the advantage that it is a close approximation to the first-order lag used in control theory and is readily understood. It is also relatively accurate for short-term forecasts, and will therefore be used in this study. The question remaining is how much weighting should be given to the recent demand figures so as to attenuate the fluctuations in demand while still responding to genuine changes.
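As an illustration, the sketch below gives one plausible discrete-time form of such an update, assuming unit time increments and the smoothing constant 1/(1 + Ta) commonly used to approximate a first-order lag. The function and variable names are ours, chosen to match the glossary; they are not taken from the authors' software.

```python
def exp_smooth(prev_avg: float, demand: float, Ta: float) -> float:
    """One update of exponentially smoothed demand (AVCON).

    Behaves as a discrete first-order lag with time constant Ta:
    the average moves a fraction 1/(1 + Ta) towards the latest demand,
    so only the previous average needs to be stored.
    """
    alpha = 1.0 / (1.0 + Ta)
    return prev_avg + alpha * (demand - prev_avg)

# Usage: smoothing a noisy demand stream over Ta = 16 time units.
avcon = 100.0
for demand in [100, 117, 95, 130, 117, 117]:
    avcon = exp_smooth(avcon, demand, Ta=16.0)
```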

Inventory policy

The inventory policy must be considered because the rate at which inventory deviations are recovered has a profound effect on production target fluctuations. A common but misguided practice in industry is to set production targets to recover the whole inventory deficit in a single time period, even though it may take many more time periods for the product to be manufactured and appear in inventory[10]. This continues for the whole of the production lead time. By the time the products begin to appear in the inventory there is significant excess WIP on the shopfloor, which will inevitably increase the inventory holding beyond the desired level. This then has to be reduced by producing less than the average market demand until market demand has brought the inventory back down to the desired level. However, the same target overshoots recur and the production rates fluctuate continuously. As inventory levels are related to customer service levels it is desirable to correct inventory discrepancies. The question is: how much of the inventory discrepancy should be corrected each time production/distribution requirements are set, to avoid excessive overshoots and undershoots around the target level?

Pipeline policy

The pipeline policy is concerned with how much WIP is present on the shopfloor. The desired WIP level is a function of the average demand and the time it takes to produce the product, i.e. the lead time. Throughout this paper the assumption will be that, when setting targets, the production lead time is known via shopfloor feedback. However, it is not proposed to update the system controller settings in real time during the robustness experiments: this accords with known industrial practice[4].

During periods when there is insufficient WIP, for example, just after a genuine step increase in demand, then it would be beneficial for the pipeline policy to increase the demands on the shopfloor to account for the shortfall in WIP. However, there will be periods when there is excessive WIP on the shopfloor due to the inventory and demand policies not considering the effects of the time delays in the system, and it would then be beneficial for the pipeline policy to reduce the production targets. The question is, how quickly is the pipeline deviation corrected each time the production/distribution requirement is set?

Purpose of the production control system

The purpose of the production control system may be summarized as a decision support system (DSS) to assist the production scheduler to place orders on the factory in such a way as to provide good smoothing of actual demand yet good customer service levels from stock. The DSS is applied to all the products in the "active" range so that the lead times, demand patterns, WIP levels, and inventory levels must be monitored across this range if the company is to achieve international competitiveness. Note that the target CSL will be selected according to the ABC product classification which might vary from 99 per cent for "A" products to 97.5 per cent for "B" products and 95 per cent for "C" products[4].

The production control algorithm

The scheduling algorithm considered is thus defined as: "The production targets placed on the factory are equal to the market demand exponentially smoothed over Ta time increments (hours, days or weeks as appropriate), plus a fraction (1/Ti) of the inventory discrepancy, plus a fraction (1/Tw) of the WIP discrepancy". The response of a poorly designed scheduling algorithm has already been seen in Figure 1. It has no feedforwarding of smoothed current demand, a time to adjust inventory (Ti) of one, and no WIP adjustment law. The simulated random demand signal (CONS) has a mean of 100 widgets and a standard deviation of 34 widgets. At time t = 0 there is a step increase in the mean value to 117 widgets. Consider the poor response. It can be seen that the production target (ORATE) fluctuates more than the market demand. Given that the purpose of the algorithm is to set ORATE in a manner that smooths the sales and provides good inventory recovery, this algorithm performs very badly. Inventory levels (AINV) have permanently dropped below the desired level of 300. At time t = 62 there is an inventory stockout. The gravity of the situation can be seen with greater clarity from the inventory CUSUM plot (CUSUMs are plots of the cumulative sum of errors, in this case the cumulative sum of inventory error from a target of 300[11]). It can be seen that DINV is continuously not being met, and that after the step there is a greater average inventory deficit than before it.

However, the situation can be improved. Consider the following production algorithm, the numbers for which will be justified later: "set production targets equal to the sales exponentially smoothed over 16 time units, plus a seventh of the inventory discrepancy, plus a twenty-fifth of the WIP discrepancy". This scheduling algorithm results in the good response also shown in Figure 1. The demand signal in both cases is exactly the same. It can be seen that ORATE is considerably smoother than the sales demand, so production on-costs have been reduced. Inventory levels are also improved, as there is no permanent drop in inventory holding after the step. Even though the inventory is given less emphasis, there is no stockout, so CSLs have been increased. The situation has thus been dramatically improved by a proper understanding of the control structure.
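The following is a minimal discrete-time sketch of the rule just described, not the authors' simulation code. It assumes unit time steps, models the exponential production delay as a first-order lag of mean Tp time units (so COMRATE = WIP/Tp, where Tp is our label for the production lag), and constrains orders to be non-negative; the other names follow the glossary, and the demand parameters match those quoted for Figure 1.

```python
import random

def simulate(Ta, Ti, Tw, Tp=8, horizon=120, step_at=20, seed=1):
    """Simulate ORATE = AVCON + EINV/Ti + EWIP/Tw under a demand step.

    Demand (CONS) is Gaussian with mean 100 (117 after the step) and
    standard deviation 34; production is a first-order lag of mean Tp.
    """
    rng = random.Random(seed)
    avcon = 100.0                  # smoothed demand (AVCON)
    ainv, dinv = 300.0, 300.0      # actual and desired inventory
    wip = 100.0 * Tp               # start in steady state
    orate_hist, ainv_hist = [], []
    for t in range(horizon):
        mean = 100.0 if t < step_at else 117.0
        cons = max(0.0, rng.gauss(mean, 34.0))   # demand this period
        avcon += (cons - avcon) / (1.0 + Ta)     # demand policy
        einv = dinv - ainv                       # inventory discrepancy
        ewip = avcon * Tp - wip                  # WIP discrepancy
        orate = max(0.0, avcon + einv / Ti + ewip / Tw)
        comrate = wip / Tp                       # exponential delay output
        wip += orate - comrate
        ainv += comrate - cons
        orate_hist.append(orate)
        ainv_hist.append(ainv)
    return orate_hist, ainv_hist

# Recommended settings (justified later in the paper).
good_orate, good_ainv = simulate(Ta=16, Ti=7, Tw=25)
# Approximation to the poor response of Figure 1: no averaging (Ta = 0),
# full inventory correction (Ti = 1), WIP law disabled by a huge Tw.
poor_orate, poor_ainv = simulate(Ta=0, Ti=1, Tw=10**9)
```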

The production/distribution scheduling algorithm described above is termed APIOBPCS (automatic pipeline, inventory and order based production control system) and is represented in causal loop form in Figure 3. This describes, in a form suitable for building a simulation model, the controllers that are used to place production orders[12]. It is also representative of Sterman's work[2], as his heuristics can be directly related to our control parameters Ta, Tw and Ti, via mathematical manipulation[13].

The importance of system structure

The structure of the control system can be readily seen from Figure 3. The system has three controllers: demand, WIP and inventory. Everything else is there to show how these controllers affect the production requirements, how they interact with each other, and how the emphasis can be shifted from one controller to another by altering the values of Ti, Ta and Tw. It can thus be seen by inspection that the structure will affect the response of the system. It will be shown that a proper understanding of the structure leads to informed decisions that greatly improve company competitiveness. The feedforwarding of averaged sales, together with the inventory and WIP feedback loops, are the main influences on the system. These are in turn governed by the time to average sales Ta and the fractions 1/Ti and 1/Tw. The inventory target and pipeline (WIP) target also affect the response, but only in the magnitude of the levels in the system. This will have an effect on the CSL a company can provide, but it will not alter the dynamic inferences drawn from the simulation[4]. When the optimum solution has been found, the target levels may be set to provide a satisfactory CSL, based on the knowledge that the dynamic response of the system will be optimal.

It has already been proven[14] that the addition of feedback loops can increase the robustness of a system. This has been confirmed by Towill[15] for a number of commonly met practical system designs. In the production control model inventory feedback is provided to help counteract drift problems met with stock levels. The WIP loop is then provided to damp down the system response and simultaneously to attenuate the impact of lead time changes.

The effect of controllers' settings on system response

On the production orders

The effect of each controller in making up the production targets (ORATE) is clearly shown in Figure 4. The smoothed sales signal (AVCON) is responsible for feeding forward the change from one level of sales to the other. The inventory feedback signal (EINV/Ti) is the main contributor to ORATE overshoot and oscillatory behaviour. The WIP signal (EWIP/Tw) is self-adjusting, i.e. during times when WIP is too small it bolsters ORATE, and during times when WIP is too great it reduces ORATE. The WIP signal reduces the rise time of ORATE and decreases the percentage overshoot, both favourable traits. However, the cost downside must appear somewhere, in this case in the extended settling time. The more emphasis that is given to WIP feedback, the longer it takes to reach steady state again. This is because the negative WIP signal cancels out the inventory signal, giving more of the responsibility for reaching the steady state to the AVCON contribution.

The problem can now be fully stated. There is a need to determine the optimum setting of the design parameters Ta, Ti and Tw. An optimum setting of the parameters will:

* respond to genuine changes in demand quickly;

* filter out random noise in the sales pattern;

* be robust to unknown changes in production lead time and to changes in production lead-time distributions. Although it is an objective of lean manufacturing to achieve short, consistent lead times, this is more difficult in a multi-product environment; the competition for resources means that achievement in practice is still difficult[16]. It is therefore beneficial to consider the consequences of production variations on the dynamic response in our optimization procedure, and thus ensure that the detrimental effects are minimized.

On the inventory levels

Figure 5 shows the effect of the controller parameters on the actual inventory levels (AINV), following a step increase in sales from 100 to 200 widgets/time unit at time t = 0. Each controller setting has been reduced by 25 per cent of its nominal value. Decreasing Tw slightly reduces the maximum inventory deficit, but it then takes longer for the inventory levels to recover fully. Decreasing Ta has a similar effect to decreasing Ti, but less pronounced. Decreasing Ti reduces the inventory freefall considerably more than decreasing Tw, and the recovery is much quicker. Ti also has the advantage over Ta in that recovery after the freefall is quicker. However, as seen in the example system of Figure 1, too small a value of Ti results in very poor filtering properties in the presence of a random signal.

On system selectivity

The user requires a production control system which has good selectivity. That is, there is a reasonable range of controller settings which will give an acceptable response; at the same time, moving away from the optimum values means that a threshold is reached beyond which performance changes rapidly. Ideally this means a quadratic performance indicator with a "flat" region followed by a steep departure from an acceptable response. A family of responses which demonstrates suitable selectivity is shown in Figure 6, in which the control parameters have been varied by ±25 per cent about the optimum. It shows that the performance variability is relatively small, but that further variation brings a significant penalty.

On system robustness

Uncertainty in production lead times is frequently the case in practice[16]. WIP information is generally more difficult to determine in practice than inventory information. However, with the use of new technologies such as EDI and barcode scanners the process is becoming considerably easier. In the optimization undertaken it is assumed that the nominal production lag is eight time units, but that it could vary anywhere between four and 12 time units. The nominal production distribution is assumed to be exponential, which has been argued to be representative of many industrial situations[17]. However, the implications of a cubic distribution will also be considered, this being a closer approximation to the "worst case" of a pure time delay. Additionally, when assessing robustness, exponential WIP information delays of zero, four and eight time units, coupled with the nominal production lead time and exponential distribution, are considered; these replicate delays in feeding back WIP information to the ordering system. Figure 7 shows the effect of these conditions on the ORATE response to a step input in demand from 100 to 200 widgets per time period.

In the optimization procedure the "robustness" of the system has been assessed to ensure that the changes in behaviour under these conditions are suitably constrained. Whereas there is a desire for the design to be selective, i.e. not to change very much in response to small deviations in controller settings, there is also the requirement for the design to be "robust", i.e. to cope with relatively large changes in the controlled process. So both elements are included in the optimization procedure, which will now be outlined.

Many consultants, academics and practitioners argue that better performance will be gained by reducing the time delays in the system, and this is not disputed. However, better performance can also be achieved by carefully considering the decision process that is used to set production/distribution requirements and, because physical processes are not being altered, the benefits come at low cost and can be implemented without delay. In other words, profitability can be improved without major capital expenditure, and very quickly. After the ordering algorithm has been optimized in this way, methods of reducing the time delays and variability in the system may be considered so as to obtain even better performance.

The optimization procedure

The characteristics of a good design have already been stated, but there is still the need to quantify how a set of design parameters contributes to these characteristics. This is done using three simple tools. The first tool is the AINV ITAE, or the Actual INVentory Integral of Time multiplied by Absolute Error. The second is the ORATE variance, and the third, used for assessing the robustness vectors, is Pythagoras' theorem.

Actual inventory, integral of time multiplied by absolute error

The ITAE criterion favoured by hardware systems designers was developed by Graham and Lathrop[18] and is a measure of the deviation from a target level, weighted in the time domain. It can be visualized as the area under the time x absolute error in inventory curve in Figure 8. The value of the ITAE as time progresses is also shown. It is of particular use as it penalizes both positive and negative errors, and it allows for the freefall in inventory levels that is inevitable shortly after a step. The smaller the ITAE value for a given response, the better.
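In its standard continuous form, with e(t) the inventory error DINV - AINV(t) and T the simulation horizon, the criterion reads:

$$\mathrm{ITAE} = \int_0^T t \,\lvert e(t)\rvert \, dt$$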

It can be seen that (for the controller settings used to generate the ITAE curve) 80 per cent of the answer is obtained within four production lags, but the ITAE has not reached its final value after 12 production lags. A trade-off is to be made between the accuracy of the ITAE required and the simulation time.

Order rate variance

Variance is a statistical measure used to describe the spread of a distribution: it is the mean squared deviation from the mean. The variances of the order rates produced by different scheduling algorithms vary widely, as can be seen in Figure 1. These are two very different responses, one good and one bad, and it does not require sophisticated techniques to determine which one incurs lower production on-costs. The smaller the variance of ORATE, the better the response. In Figure 1 the variance for the good system is approximately 60 for the first 50 time periods (ignoring the superimposed step of 17) and for the poor system approximately 3,000 (under the same conditions).
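Minimal discrete-time versions of the two surrogate measures might read as follows (unit time steps assumed; the function names are ours):

```python
def itae(ainv_hist, dinv=300.0):
    """Discrete ITAE: sum of t * |DINV - AINV(t)| over the run."""
    return sum(t * abs(dinv - ainv) for t, ainv in enumerate(ainv_hist))

def variance(orate_hist):
    """Mean squared deviation of ORATE from its own mean."""
    m = sum(orate_hist) / len(orate_hist)
    return sum((x - m) ** 2 for x in orate_hist) / len(orate_hist)
```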

Trade-off between inventory and order rates

In Figure 9, ORATE variance is plotted on the y-axis and AINV ITAE on the x-axis. The labels on the data points represent the value of Ti changing from 1 to 64; Tw and Ta are kept constant. The scaling factor on the ITAE represents a cost function that determines how much emphasis should be given to the production on-costs surrogate (variance) and how much to the inventory surrogate (ITAE). The cost function should be representative of the costs of ramping production up and down, the cost of keeping inventory, the cost of customer dissatisfaction, and so on. In practice these costs are very difficult to obtain, if they can be obtained at all. The value of the scaling factor was therefore chosen to give equal weighting to the relative improvements in the attenuation of random demands and the relative improvements in inventory recovery.

A solution that is optimal for just two of the five characteristics (following genuine changes in demand, and filtering "noise" from ORATE) is the one at the minimum distance from the origin in Figure 9. It can be seen from Figure 9 that the optimum value of Ti, for these values of Tw and Ta, is approximately Ti = 7.

On the robustness vectors

In Figure 10 each point represents the response of a particular set of control parameters under the various production lags and distributions denoted by the labels. The more robust a set of design parameters is to production variation, the more compact the cluster of points on the ITAE/variance plane. A measure of production robustness is therefore the average distance from the centre of gravity to each of the points, for every combination of circumstances outlined above. The centre of gravity is found by averaging the ITAE and variance values, and the average distance is calculated using Pythagoras' theorem. Separate measures are used for the WIP robustness and the selectivity; they are calculated in exactly the same manner as the production robustness vector.
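A sketch of this calculation is given below, assuming each scenario has already been reduced to an (ITAE, variance) pair on a common scale:

```python
import math

def robustness(points):
    """Average Euclidean distance of scenario points from their
    centre of gravity; points is a list of (itae, variance) pairs."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return sum(math.hypot(p[0] - cx, p[1] - cy) for p in points) / len(points)
```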

The centre of gravity represents a statistical "average" production lead time and distribution. It has already been argued that the true value of the production lead time and its distribution is unlikely to be known; the point represented by 8.1 is used as the datum design in this optimization, and it is interesting to note that the average production lead time and distribution is close to it. Relative to this datum, the true average performance of the algorithm shows a 12.5 per cent increase in ITAE and a 25 per cent increase in ORATE variance.

Now there are five performance vectors that need to be minimized. If only two dimensions were to be minimized, say ITAE and variance, the task would simply be to find the distance from the origin to the point in question, as described above. However, Pythagoras' theorem can be extended to five dimensions. Therefore, the overall score for a set of design parameters is the reciprocal of the square root of the sum of the squares of the ITAE, variance, production robustness, WIP robustness and selectivity. The reciprocal is used so that the higher the score, the better the result in the solution space[19].
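In sketch form, assuming the five vectors have already been scaled on to comparable ranges:

```python
import math

def score(itae, var, prod_rob, wip_rob, selectivity):
    """Overall fitness: reciprocal of the five-dimensional Euclidean
    norm, so that a higher score denotes a better design."""
    return 1.0 / math.sqrt(itae**2 + var**2 + prod_rob**2
                           + wip_rob**2 + selectivity**2)
```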

The search for the optimum set of control parameters

After consideration it was decided that the range of possible values for each of the control parameters would be from 1 to 256, so the number of possible solutions is 256 x 256 x 256 = 16,777,216. An exhaustive search would require considerable computer power. Therefore a common artificial intelligence technique known as the genetic algorithm (GA) is employed. GAs are an attempt to simulate Darwin's theory of evolution. Darwin[20] stipulated that more favourable characteristics in an individual would increase that individual's chance of passing them to the next generation via reproduction. The struggle for life filtered out the weaker individuals, and the strong and healthy survived to pass on the favourable genes/chromosomes. Thus the fitness of the population increases over the generations.

Random mutations in the genes/chromosomes gave some individuals an advantage over others, and these beneficial mutations gradually filtered into the population. The exact nature of the struggle for life in particular areas created differences in local populations, and thus species (branches in the evolutionary tree) emerged. Here, however, the aim is not to create species but to find the optimum design parameters. The struggle for life is replaced with a fitness function (the aggregated score for the five performance vectors) assigned to each set of controller parameters; the term fitness means, specifically, reproductive success. The reproduction of a population of chromosomes (in this case the APIOBPCS parameters encoded as strings of binary numbers) is simulated by randomly selecting the more favourable chromosomes and interchanging lengths of each chromosome (simulated reproduction), and then randomly mutating bits in the chromosome (simulated mutation). The fitness (score) of each chromosome in the new population is then evaluated and the process repeated. After a number of generations the chromosomes in the population should be similar, as they converge on the optimum solution. The highest scoring set of control parameters is then selected as the solution to our problem. A flow diagram of the GA used is shown in Figure 11.
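A minimal GA along these lines is sketched below. It is illustrative rather than the authors' implementation: the 8-bit-per-parameter encoding (which yields the 1 to 256 range), roulette-wheel selection, one-point crossover and bit-flip mutation are standard GA choices, and evaluate() is a placeholder standing in for the five-vector simulation score described above.

```python
import random

BITS = 8        # 8 bits per parameter gives values 1..256
N_PARAMS = 3    # Ta, Ti, Tw

def decode(chrom):
    """Split a 24-bit string into the three integers (Ta, Ti, Tw)."""
    return tuple(int(chrom[i * BITS:(i + 1) * BITS], 2) + 1
                 for i in range(N_PARAMS))

def evaluate(params):
    """Placeholder fitness; in the paper this is the reciprocal
    five-vector score obtained from simulation runs."""
    Ta, Ti, Tw = params
    return 1.0 / (1.0 + abs(Ta - 16) + abs(Ti - 7) + abs(Tw - 25))

def select(pop, scores, rng):
    """Roulette-wheel (fitness proportionate) selection of one parent."""
    r = rng.uniform(0.0, sum(scores))
    acc = 0.0
    for chrom, s in zip(pop, scores):
        acc += s
        if acc >= r:
            return chrom
    return pop[-1]

def ga(pop_size=30, generations=50, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    length = BITS * N_PARAMS
    pop = [''.join(rng.choice('01') for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [evaluate(decode(c)) for c in pop]
        nxt = []
        while len(nxt) < pop_size:
            a, b = select(pop, scores, rng), select(pop, scores, rng)
            cut = rng.randrange(1, length)            # one-point crossover
            child = ''.join(
                bit if rng.random() > p_mut else '10'[int(bit)]
                for bit in a[:cut] + b[cut:])         # bit-flip mutation
            nxt.append(child)
        pop = nxt
    # Best surviving parameter set; converges towards (Ta, Ti, Tw) = (16, 7, 25).
    return max((decode(c) for c in pop), key=evaluate)
```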

The optimum design parameters for the APIOBPCS model

The results from the GA indicate that Ti should be slightly less than the production lag, Ta should be twice the production lag and Tw should be slightly greater than three times the production lag. For a production lag of eight this corresponds to Ti = 7, Ta = 16 and Tw = 25. This corresponds very well with the results of previous studies of a two-term controller, the inventory and order based production control system (IOBPCS), and with a little thought seems perfectly logical. The IOBPCS model is the same as the APIOBPCS model without the WIP feedback loop. The IOBPCS optimum[21] was stated as Ti = 8 and Ta = 16. Ta stays the same, as its ability to filter out demand noise is not altered by the addition of the WIP loop. Ti is naturally decreased slightly, as it now has the help of Tw to increase the damping ratio of ORATE while at the same time decreasing the rise time. Tw is quite large but, recalling the effect of Tw discussed earlier in the paper (Figures 4 and 5), this is not surprising, as small values of Tw result in a longer settling time because they counteract the inventory signal. The inclusion of the WIP feedback loop also increases the robustness of the system to production variations by a factor of three and enhances selectivity.

The optimum is quite close to the optimum solutions defined in the literature[12,21]. It is not surprising that the empirical approach to system optimization as adopted should give good results. The reason is that by adopting an "analogue" approach to system modelling it has proven possible to tap into the rich heritage of existing hardware system designs developed for such purposes as aircraft landing systems, satellite trackers and weapon launchers. Conventionally such designs can be used with confidence as benchmarks when setting control parameters within a given system structure.

It has become clear that the robustness of the algorithm to production variation is improved by the inclusion of WIP information. However, it has been assumed in the analysis that the production lead time is known and is unvarying for the purpose of setting WIP targets. This is a practical problem that has no easy answer. The inventory offset that occurs can only be corrected by accurate estimations of the lead time. However, if a steady state is reached this does provide a method of determining the lead time accurately. The WIP loop also improves the filtering characteristics of the algorithm, because the WIP loop forces the ORATE to be just slightly closer to the AVCON, i.e. it is counteracting the effect of the inventory signal. However, it was shown earlier that the WIP loop does increase the settling time, jeopardizing the CSL, but only slightly.

An understanding of the effect of system structure on dynamic performance initially expounded by Forrester[22] is increasingly becoming part of the lean logistics paradigm. Unfortunately it is taking much longer for an equally important truth to be recognized, which is that robustness is entirely a function of system structure[14]. Hence, once the system structure has been chosen there is very little room for manoeuvre when trying to enhance robustness. In fact, the greater robustness achieved by the APIOBPCS design is readily predicted from feedback theory, since the additional feedback path reduces uncertainty.

Putting theory into practice

The original stimulus for the applied research so far described was the practical development and application of a DSS in a South Wales-based medical manufacturing company. The DSS was designed to aid the master production scheduler in his prime duty of forming aggregate production orders (by part number) for the plant. The operational environment was multi-product (due to a wide and active product catalogue of many thousands of items), batch manufacturing focused, and with very stringent requirements for product availability and customer service levels.

The complex interaction between product mix, product numbers and routeings as well as difficulties caused by shopfloor breakdowns plus labour and machine constraints led to protracted and variable internal lead times. Although the company was in the process of re-engineering the plant to reduce lead times and remove sources of uncertainty, it was realized that such a programme would take time to produce the anticipated improvements. Hence, the system was developed in-house to provide the master production scheduler with a computerized DSS driven by customer service levels (CSLs) set by the marketing department[12].

The apparently pragmatic rules contained within the model in fact incorporate a number of advanced features. In particular, the ordering rule makes use of information about the level of orders still being processed in the "pipeline" and a frequently updated estimate of the "pipeline" delay. The company regards the pipeline as the delay between the generation of a production order by the model and the receipt of that order into finished goods stock, i.e. the WIP.

According to the industry-based designer of the DSS:

The model is ideally suited to a medium sized enterprise, with a large order book, diverse product range, finished goods stock control problems and customers demanding high service levels[23].

The incorporation of the model into the company's overall production and inventory planning and control system was considered by company executives to be key in the achievement of high customer service levels while minimizing stock-holding costs. Similar improvements in company performance as a result of a DSS have also been described elsewhere[24].

A recent material flow survey has shown that, in line with the South Wales medical manufacturer's experience, it is still quite common to find companies operating under high degrees of uncertainty, with variable and protracted lead times[23]. The markets in which the companies operated include the automotive, electronics and construction sectors. Many of these companies are striving to re-engineer their supply and/or production pipelines. The survey also showed that these same companies are gathering pipeline information, usually in order to expedite orders. They are therefore prime candidates for incorporating that information into more robust ordering rules, as typified by the APIOBPCS model using the generic parameter settings outlined earlier in this paper.

Significance of the DSS

The failure to consider the pipeline within ordering algorithms has been suggested as a major cause of costly swings in production and stocks[24]. The cost implications of poor ordering behaviour reinforce the need to incorporate the present state of the pipeline into ordering rules/decisions[25].

The practical experience of the South Wales medical manufacturer has shown that considerable benefits accrued through the use of this DSS tool. High customer service levels for a minimum finished goods level have been claimed. Consequently the novel nature of the DSS raises a number of research issues. How does the model perform in a dynamic sense? Is the inclusion of pipeline information within the DSS advantageous? Could the model be further improved in terms of parameter settings?

Previous research by the authors' research team has confirmed and reinforced the experiential results of Sterman[2] by adopting a structured control engineering and simulation methodology. The research output from the previous analytical work has shown that incorporation of pipeline information within an ordering algorithm can lead to increased customer service levels while minimizing stock holding requirements[26] and leads to improved dynamic behaviour[12].

The research output has already had company-specific relevance. The initial model was developed on control engineering concepts but was restricted to a steady-state design. The dynamic simulation approach herein yielded new model parameter settings and led to improved model performance[12,26]. Previous analytical research results are significant but have primarily focused on minimizing order rate variation[12], minimizing stock-holding costs[2] or maximizing service levels[26]. An example of the latter is given in Figure 12. Baseline refers to the static model representation and hence yields misleadingly optimistic results in terms of stock-holding costs for a given service level requirement. Also shown in Figure 12 are good and bad designs for the dynamic model representation with lead times of four and eight weeks.

The research presented in this paper allows a unified performance criterion, based on suitably weighted measures, to be made available. As the problem addressed is multi-variable, multi-parameter and multi-criteria, GAs have, for the first time, been successfully utilized. Thus, previous analytical results have been further refined, and the a priori knowledge base extended in search of a true optimum.

Conclusions

This paper has demonstrated that the use of a model of a decision support system, coupled with a simulation facility and a genetic algorithm optimization procedure, can yield enhanced performance of the production control function through a full understanding of the nature of the trade-off between factory orders and inventory levels. It thus enables "lean logistics" via "smart modelling". The benefits are much more spectacular, of course, when supply-chain operation is considered, because the effect of poor decision making is multiplicative, not additive[27]. Hence, if there is a three-echelon chain with an amplification of 3:1 across each echelon, the total amplification will be around 27:1. By intelligent design the chain amplification can be reduced to about 1.73:1 (i.e. roughly 1.2:1 per echelon, since 1.2^3 ≈ 1.73), an improvement of more than 15-fold. This is the difference between bankruptcy and world class performance!

References

1. Burbidge, J.L., "Automated production control with a simulation capability", Proceedings of the International Federation for Information Processing Conference, WG 5-7, Copenhagen, 1984.

2. Sterman, J.D., "Modelling managerial behaviour: misconceptions of feedback in a dynamic decision making experiment", Management Science, Vol. 35 No. 3, March 1989, pp. 321-39.

3. Grunwald, H.J. and Fortuin, L., "Many steps towards zero inventory," European Journal of Operational Research, Vol. 59, 1992, pp. 359-69.

4. Cheema, P., "Dynamic analysis of an inventory and production control system with an adaptive lead-time estimator", PhD dissertation, University of Wales College of Cardiff, 1994.

5. Senge, P.M., The Fifth Discipline: The Art and Practice of the Learning Organisation, Century Business, 1990.

6. Stalk, G.H. and Hout, T.M., Competing Against Time: How Time-based Competition is Reshaping Global Markets, Free Press, New York, NY, 1990.

7. Evans, G.N. and Naim, M.M., "The dynamics of capacity constrained supply chains", International Systems Dynamics Conference, 1994, pp. 28-39.

8. Womack, J.P., Jones, D.T. and Roos, D., The Machine that Changed the World, Rawson Associates, New York, NY, 1990.

9. Olsmats, C.M.G., Edghill, J.S. and Towill, D.T., "Industrial dynamics model of a close-coupled production-distribution system", Engineering Costs and Production Economics, Vol. 13, 1988, pp. 295-310.

10. Berry, D. and Towill, D.R., "Reduce costs - use a more intelligent production and inventory planning policy", BPICS Control, November 1995, pp. 26-30.

11. Slater, R. and Ashcroft, P., Quantitative Techniques in Business Context, Chapman and Hall, New York, NY, 1990.

12. John, S., Naim, M.M. and Towill, D.R., "Dynamic analysis of a WIP compensated decision support system", International Journal of Manufacturing System Design, Vol. 1 No. 4, 1994, pp. 283-97.

13. Naim, M.M. and Towill, D.R., "What's in the pipeline?", 2nd International Symposium on Logistics, 11-12 July 1995, Nottingham, UK, pp. 135-42.

14. Horowitz, I.M., Synthesis of Feedback Systems, Academic Press, New York, NY, 1963.

15. Towill, D.R., Coefficient Plane Models for Control Systems Analysis and Design, Research Studies Press, New York, NY, 1981, pp. 178-83.

16. Harrison, J.M., Holloway, C.A. and Patell, J.M., "Measuring delivery performance: a case study from the semiconductor industry", in Kaplan, R.S. (Ed.), Measures for Manufacturing Excellence, Harvard Business School Press, Boston, MA, 1990, pp. 314-21.

17. Edghill, J.S., "The application of aggregate industrial dynamic techniques to manufacturing systems", PhD dissertation, University of Wales College of Cardiff, 1990.

18. Graham, D. and Lathrop, R.C., "The synthesis of optimum transient response: criteria and standard forms", Transactions of the American Institute of Electrical Engineers, Applications and Industry, Vol. 72, 1953, pp. 273-88.

19. Disney, S.M., Naim, M.M. and Towill, D.R., "Development of a spreadsheet based optimisation routine for inventory management and production control", in Naim, M.M. and Evans, G.N. (Eds), "Computer simulated plant: human capital and mobility workshop", MAST occasional papers, No. 31, 1-3 April 1996, pp. 21-33.

20. Darwin, C., The Origin of Species by Means of Natural Selection or the Preservation of Favoured Races in the Struggle for Life, John Murray, 1859.

21. Towill, D.R., "Dynamic analysis of an inventory and order based production control system", International Journal of Production Research, Vol. 20 No. 6, 1982, pp. 671-87.

22. Forrester, J.W., Industrial Dynamics, MIT Press, Cambridge, MA, 1961.

23. Berry, D., Evans, G.N. and Naim, M.M., "Pipeline control: a UK perspective", submitted to the International Journal of Management Science, OMEGA, 1996.

24. Gangneux, P., "A short term forecasting DSS for improving the sales/manufacturing linkage", International Journal of Engineering Costs & Production Economics, Vol. 17, 1989, pp. 369-75.

25. Bonney, M.C., "A possible framework for CAPM", working paper presented at the SERC ACME Directorate CAPM Seminar, Castle Donnington, February 1990.

26. Towill, D.R., Evans, G.N. and Cheema, P., "Analysis and design of an adaptive minimum reasonable inventory control system", to be published in the International Journal of Production Planning and Control, Vol. 8 No. 6, 1997.

27. Towill, D.R. and del Vecchio, A., "The application of filter theory to the study of supply chain dynamics", Production Planning & Control, Vol. 5 No. 1, 1994, pp. 82-96.