Modelling quality-cost dynamics.
Literature abounds on the connection at the organizational level between quality and costs[1-6]. However, as Plunkett and Dale indicate, there are problems attached to the quality-cost models featured in much of this literature. Despite their common underlying principles, these models display wide differences, which in many cases are misleading and inaccurate. The models found by Plunkett and Dale tended to be of uncertain provenance and often appeared to be "notional", i.e. constructed to persuade the reader of the value of a prescribed course of action without being overly concerned with theoretical justification. This paper attempts to partially remedy this deficiency by rigorously examining the behaviour of quality costs through the system dynamics method.
The paper first examines existing quality-cost models, drawing substantially on the review carried out by Plunkett and Dale. The structure of a system dynamics quality-cost model is then described, followed by a presentation of and comment on the simulation results. The paper closes by drawing some general conclusions.
Existing quality-cost models
In focusing on the connection between the factors of quality and costs, the substantial literature[1-6] which exists excludes other factors from the analysis. For example, quality can have an impact on market share and hence, indirectly, on production volume and unit costs. However, this impact is not automatic since prices can be raised to convert increased quality into improved profitability rather than improved market share. Even when an improvement in quality is translated into a market share increase, this often occurs over a longer timescale than the connections between process quality and cost. In keeping with the existing literature this review focuses firmly on quality-cost models.
Among the problems Plunkett and Dale identified in connection with existing quality-cost models was the imprecision surrounding the definition of both quality and costs. Discussions of quality often agree on two main aspects: conformance quality and quality of design[9; 10, p. 10; 11, p. 188]. The main focus in many discussions of quality costs, as demonstrated by Plunkett and Dale's review, is on quality at the process level, i.e. conformance quality. In those situations where quality definitions are clarified, quality is often measured by the number of defects, i.e. conformance quality.
As Plunkett and Dale pointed out, despite the various inconsistencies between models, the prevention, appraisal and failure (PAF) method of categorizing quality costs attributed to Feigenbaum features substantially in the literature. One problem with the PAF categorization is that the last update of BS 6143 advocates the use, where possible, of the more recently promoted division into conformance and non-conformance costs. However, the standard also advises that continued successful use of the PAF approach does not preclude the development of the process cost approach. The suggested link is to identify failure costs with costs of non-conformance while prevention and appraisal can be subsumed within conformance costs.
Typically in quality-cost models the relationship between quality and costs is conveyed graphically. Plunkett and Dale grouped the graphical models they found into five groups labelled A to E. However, for the purposes of this discussion these groups are further aggregated into three. Type one is the predominant model which unites Plunkett and Dale's groups A, D and E by portraying an economic optimum level of quality [ILLUSTRATION FOR FIGURE 1 OMITTED].
Plunkett and Dale's three groups differ mainly in the positioning of the optimum and the slope of the cost curves. However, as they indicate, many of these graphs have unscaled axes and therefore differences in slope may arise from differences in scaling. Such graphs tend to obscure the cause-and-effect relationships which apply, since quality conformance levels are shown on the independent x axis and costs on the dependent y axis. In practice, management directs activities and in so doing determines the levels of prevention and appraisal costs. Defect levels and failure costs arise from these management-directed actions, but lagged in time. As Bowbrick[12, pp. 185-203] indicates, it is not clear what such static portrayals of dynamic relationships are meant to convey. To illustrate, a change in prevention costs from P to Q [ILLUSTRATION FOR FIGURE 1 OMITTED] would take some time to impact on defect rates and hence on failure costs. Initially total quality costs would increase from T to U at defect level X but would presumably reduce along with defect rates to an (equilibrium?) position of V costs at defect level Y.
Type two (Plunkett and Dale's group C) includes those models that describe progression over time through different stages of quality awareness. A unifying strand for many of these models is the initial lowering of failure costs by increasing appraisal activity (and hence appraisal costs). Further reductions in failure costs arise from increased prevention costs. Finally, in some models the lowering of prevention costs is possible. Typical examples of this approach can be found in Veen and Teboul[13, p. 16]. BS 6143 contains a slightly different example of this approach in that instead of discrete stages, costs are plotted against an increasing quality improvement variable [ILLUSTRATION FOR FIGURE 2 OMITTED].
Type three models (Plunkett and Dale's group B) comprise actual quality cost data plotted against time. Unfortunately, reliable, complete data collected over long time periods are not readily accessible[6, p. 417]. Plunkett and Dale cite a number of references from which Campanella and Corcoran have been selected here as representative ([ILLUSTRATION FOR FIGURE 3 OMITTED] is based on their data) notwithstanding some drawbacks associated with their figures.
Making sense of published empirical data (such as the relative magnitudes of the PAF components) is often difficult due to the omission of values for the elapsed time and the quality awareness variables. Further complications arise from the differences between quality costs and the different compositions of the PAF categories for the diverse business and industry types. On the other hand, comparability is often assisted by the common practices of quoting quality costs as a percentage of total revenue and of quoting PAF elements as percentages of the total quality costs. The combination of some of the PAF categories sometimes causes problems, e.g. prevention and appraisal costs are sometimes quoted as a single figure. Notwithstanding the above difficulties, some representative empirical data are given in Table I[1-4, 14-16].
Porter and Rayner provide a useful summary quoting quality costs in relation to sales revenue and show the substantial variation in such figures. Unfortunately many reports of quality costs are of limited utility since they lack accompanying data on quality levels.
[TABULAR DATA FOR TABLE 1 OMITTED]
Returning to consider models of type one, a number of writers cast serious doubts on the concept of an optimum level of quality. An alternative view of the quality-cost relationship can be envisaged [ILLUSTRATION FOR FIGURE 4 OMITTED].
These two views on the behaviour of quality costs are often described as a classic view, i.e. the existence of an optimum level of quality, and a more modern view where the trade-off does not exist[17, pp. 46-7].
However, these two apparently opposed views of the quality-cost relationship are potentially reconcilable by taking account of "third" variables. Both the classic and modern views of quality costs focus on the trade-off, or lack of it, between two variables. Other factors are excluded with the implicit assumption that other things are equal, i.e. ceteris paribus. In practice these general discussions apply to a wide variety of situations where other things are not equal and third variables can and do exert important influences. Bowbrick[12, pp. 183-203] demonstrates through a worked example how manipulating various factors can completely change the nature of interactions between two quality variables. The continuing debate about which one of the two opposing views is correct demonstrates to some extent the lack of sufficiently articulated theory underlying these quality-cost models.
Quality-cost models, more rigorous than the graphically-based models discussed above, have been constructed in the past. However, the numbers are limited and they have tended to follow an analytical modelling approach, as did Fine when making his important contribution. Some criticisms can be made of this approach. Although defining relationships in mathematical formalisms makes for unambiguous specifications, it restricts the variety that can be built into the model. For example, the variables featuring in the model are limited to well-behaved mathematical functions and are related through rigid forms such as equations. The continuous simulation method of system dynamics offers an alternative to the analytical path. System dynamics supplies an accessible, flexible language that has been used to explore similar problem domains; for example software quality. The following describes the use of the system dynamics method to construct a generic quality-cost model.
The quality-cost simulation model
The system dynamics model was built using ithink (the commercial version of the STELLA package) running on a Macintosh. The modelling activity, in keeping with the literature, concentrated on the internal dynamics of conformance quality and cost. The PAF classification was used since much of the literature, as demonstrated in the earlier review, discusses costs and provides empirical data according to that scheme. The simulation model represents an organization by a single process during which good and defective products are produced (see [ILLUSTRATION FOR FIGURE 5 OMITTED] for a simplified view of the model).
Some of the faulty products are discovered through appraisal activity (i.e. internal failures) while others flow through to customers and appear as external failures. The simulation was designed to examine changes over a three-year horizon using a time advance unit of quarters. This fits in with the time frames over which managers have to act and the time scales referred to when reporting changes in quality. The organization was assumed to be producing 10,000 units per quarter. Each unit was priced at £100, with a unit variable cost of £50. The fixed overheads were set at £400,000 per quarter. The total quality costs were assumed to be 10 per cent of the quarterly sales revenue, i.e. £100,000, and split respectively 10, 25 and 65 per cent between the PAF categories. This allocation is consistent with the data in Table I and is often quoted as representative for non-quality-enlightened organizations[10, p. 197].
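As a cross-check on these baseline figures, the quarterly accounts they imply can be reproduced in a few lines. Python is used here purely for illustration; the original model was built in ithink:

```python
# Baseline quarterly figures taken from the model description (all in pounds).
units = 10_000              # units produced per quarter
price = 100                 # selling price per unit
unit_variable_cost = 50
fixed_overheads = 400_000

revenue = units * price                  # 1,000,000 per quarter
quality_costs = revenue * 10 // 100      # 100,000: 10 per cent of revenue

# PAF split of 10/25/65 per cent of total quality costs.
prevention = quality_costs * 10 // 100   # 10,000
appraisal = quality_costs * 25 // 100    # 25,000
failure = quality_costs * 65 // 100      # 65,000

print(prevention, appraisal, failure)    # 10000 25000 65000
```

Note that the margin before quality costs (revenue less variable costs and overheads) is £100,000 per quarter, so at the baseline the entire margin is consumed by quality costs.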
The model proved relatively insensitive to the initial apportionment of failure costs between internal and external categories. Results arising from the 65 per cent failure costs split so that 20 per cent occurred internally and 45 per cent externally were little different from those arising from splits of 45 per cent internal and 20 per cent external (e.g. the quality cost minima at zero appraisal and equilibrium prevention costs differed by less than 5 per cent). Such variations in internal/external cost allocations are evident in published empirical data[4, p. 135]. The ratio of unit external failure costs to unit internal failure costs was set at 2.5. For brevity the justification for this value is not dealt with in depth. Suffice it to say the parameter's value is related to the equilibrium appraisal rate and the empirically determined ratio of the overall external and overall internal costs.
According to statistical quality control theory, process capability is one of the basic attributes affecting quality conformance levels[17, pp. 446-9]. Process capability is defined as the ratio of the design tolerance to the process variability. The relationship between defect levels and process capability [ILLUSTRATION FOR FIGURE 6 OMITTED] can be derived easily providing that process centring issues are assumed to be unimportant.
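Under the assumption of a centred process with normally distributed output, the defect level follows directly from the capability index: the tolerance limits sit at plus or minus 3Cp standard deviations from the mean, so the fraction out of tolerance is 2Φ(-3Cp), where Φ is the standard normal distribution function. A minimal sketch (Python, for illustration only):

```python
import math

def defect_fraction(cp: float) -> float:
    """Fraction of output outside tolerance for a centred normal process.

    With capability Cp the tolerance limits lie at +/- 3*Cp standard
    deviations, so the defect fraction is 2*Phi(-3*Cp), written here
    via the complementary error function: 2*Phi(-x) = erfc(x/sqrt(2)).
    """
    return math.erfc(3.0 * cp / math.sqrt(2.0))

# The two capability levels used later in the simulation experiments:
print(round(defect_fraction(0.66), 3))  # ~0.048, i.e. roughly 5 per cent defectives
print(round(defect_fraction(0.86), 3))  # ~0.010, i.e. roughly 1 per cent defectives
```

These values reproduce the 95 and 99 per cent conformance levels associated with the low and high capability cases examined in the results section.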
Figure 7 shows the fractional change in process variability arising from a given quarterly spend on prevention activities. The profile in Figure 7 implements a diminishing returns effect: an additional pound spent on prevention is proportionately less effective than the previous pound in reducing process variability. The function also has a minimum amount (£10,000 per quarter) that has to be spent on prevention to ensure process stability. Any spend above £10,000 results in process improvement that can be interpreted as a learning effect. The process variability deteriorates if the amount spent per quarter is less than £10,000.
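Although the precise profile of Figure 7 cannot be recovered from the text, its stated properties (a £10,000 per quarter stability threshold, diminishing returns above it, deterioration below it) can be sketched with an assumed exponential form. The 0.3 and 0.2 limits and the £20,000 rate constant below are illustrative assumptions, not values from the original model:

```python
import math

STABILITY_SPEND = 10_000.0  # minimum quarterly prevention spend (pounds)

def variability_multiplier(prevention_spend: float) -> float:
    """Quarterly fractional change in process variability (assumed form).

    Returns 1.0 at the stability threshold, falls below 1.0 (improvement,
    with diminishing returns) as spend rises above the threshold, and rises
    above 1.0 (deterioration) when spend falls short of it.
    """
    gap = prevention_spend - STABILITY_SPEND
    if gap >= 0:
        # Saturating improvement: each extra pound buys proportionately less.
        return 1.0 - 0.3 * (1.0 - math.exp(-gap / 20_000.0))
    # Below the threshold the process destabilizes and variability grows.
    return 1.0 + 0.2 * (1.0 - math.exp(gap / 20_000.0))
```

Any smooth curve with these three properties would serve equally well as a sketch; the behaviour reported in the paper depends on the calibration against empirical cost data described below.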
Figure 8 demonstrates some alternative scenarios in which the amounts spent on prevention vary from quarter to quarter.
The response of process variability to changes in prevention costs was calibrated by reference to changes in quality costs and defect rates. The model was calibrated so that it produced reference mode behaviour comparable to empirical cost data as typified by Campanella and Corcoran. Figure 9 demonstrates how the doubling and tripling of prevention costs can affect defect rates.
The 50 per cent improvement times (the improvement half-life) for these two situations approximate four quarters and eight quarters respectively; figures not untypical of those quoted in the literature. Figure 10 illustrates how the detection rate of defects within the system is related to appraisal costs.
This is a similar form to that relating detection to effort in the model built by Abdel-Hamid and Madnick. A diminishing returns effect is included and the detection rate is asymptotic to the proportion of 0.9. This caters for the inability of 100 per cent inspection to detect 100 per cent of all defects. The response curve was set such that appraisal plus failure costs were at equilibrium for the standard level of prevention costs, i.e. no initial benefit would be gained from either increasing or decreasing appraisal costs away from the standard £25,000.
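The stated properties of the detection curve (diminishing returns, asymptotic to 0.9) can likewise be sketched with an assumed saturating form. The £25,000 rate constant here is an illustrative assumption, not the calibrated value from the original model:

```python
import math

MAX_DETECTION = 0.9  # even 100 per cent inspection misses some defects

def detection_rate(appraisal_spend: float) -> float:
    """Proportion of defects caught internally (assumed saturating form).

    Rises steeply at low appraisal spend, then flattens with diminishing
    returns; asymptotic to 0.9 however much is spent.
    """
    return MAX_DETECTION * (1.0 - math.exp(-appraisal_spend / 25_000.0))
```

In the simulation the defects caught by this mechanism become internal failures, while the remainder flow through to customers as the more expensive external failures.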
The simulation results
Two extreme values of initial process capability were selected to cover the wide range found in practice:
(1) low process capability - 0.66 (i.e. 5 per cent defectives or 95 per cent conformance);
(2) high process capability - 0.86 (i.e. 1 per cent defectives or 99 per cent conformance).
The model's response to changing prevention and appraisal costs for these two capability levels was studied. Prevention costs were varied from 0 to £50,000 in steps of £5,000 while appraisal costs were independently varied in steps of £10,000 (later supplemented by steps of £12,500) from 0 to £100,000. However, to begin with, the quarterly quality costs are discussed here for different levels of prevention costs while maintaining appraisal costs constant [ILLUSTRATION FOR FIGURE 11 OMITTED].
Point F is the initial "equilibrium" position which in this case comprises quarterly PAF costs of £10,000, £25,000 and £65,000 respectively, and a conformance level of 95 per cent. Increasing prevention costs to £15,000 per quarter results in the trajectory FEQ where Q is the quarterly quality cost achieved by the end of the three-year time horizon. Quarterly progression down the approximately straight line EQ is by a reducing amount for each additional quarter, i.e. the early reductions are the more substantial.
The approximately parallel trajectories FDR, FCS, FBT and FAU arise from increased quarterly prevention costs of £20,000, £25,000, £30,000 and £35,000 respectively. Trajectory FGP shows the impact of reducing prevention costs, in that the process variability deteriorates over time causing increasing failure costs. The quality cost envelope of PFQRSTU, with its optimum at S, traces out a similar shape to that of the type one class of models. Figure 12 compares the shape of cost envelopes related to different initial quality levels of 95 and 99 per cent which in turn arise from the different starting process variabilities. The difference in the cost envelope slopes results from the increased unit cost of failure at the higher conformance level to equalize the overall failure, and quality, costs across the two scenarios.
The analysis above centred on varying prevention costs to determine the cost envelopes. Figure 13 comprises, for an initial conformance level of 95 per cent, a family of cost envelopes derived by varying the quarterly appraisal costs.
While Figure 13 shows the quarterly costs against conformance quality levels, Figure 14 contains the cumulative cost profiles over the three-year horizon for various combinations of quarterly appraisal cost and quarterly prevention cost.
Plotting cumulative quality costs against conformance levels gives a graph that is very similar in form to that of Figure 13 and therefore is not shown here. What is useful to examine is the connection between cumulative costs and prevention and appraisal costs as conveyed in Figures 15 and 16.
Figure 15 shows the cumulative costs over the three-year time horizon for various values of prevention and appraisal costs. Figure 16 contains an iso-cost plot derived by projecting the three-dimensional surface of Figure 15 onto its base and adding contours connecting together points of equal cost. These figures illustrate that an optimum policy for minimizing quality costs over the time horizon requires a tripling of prevention costs and the setting of appraisal costs to zero. This policy results in a 23 per cent reduction in total quality costs over the three years compared with the equilibrium position. By the end of the time horizon quarterly quality costs are approximately halved [ILLUSTRATION FOR FIGURE 13 OMITTED].
The research activity successfully integrated the three model types identified in the review into one generic model displaying dynamic behaviour consistent with published empirical data. The three model types were derived by reducing Plunkett and Dale's categories. However, the similar profiles, but different slopes and different positions for the optima evident in Figure 12, provide a post-hoc justification for collecting together Plunkett and Dale's models A, D and E into type one.
Although the analysis illustrated by Figures 15 and 16 examined the impact of varying prevention and appraisal costs, it has done so on the basis of assuming that such costs are constant for each quarter across the three years. A more complex analysis allows such costs to vary across the three-year period. Intuitively such a strategy should produce a better policy and improve on the 23 per cent reduction in quality costs over the three-year horizon. For example, in the early quarters defect levels are high and cash expended on appraisal will pay for itself in reduced failure costs. In later quarters expenditure on appraisal will not be recouped through reduced failure costs because of the lower defect levels and therefore should be set to zero. Models of type two tend to be expressions of such strategies of changing balances of prevention and appraisal costs. Various combinations of levels of prevention and appraisal costs were tried. At best, cumulative costs over the horizon were reduced to 71.5 per cent of the total for the equilibrium position, i.e. an additional 5.5 per cent saving over the policy described above. This policy comprised a reduction of appraisal costs from £25,000 to 0 in six quarters while prevention costs were reduced from £40,000 to £10,000 in three years. The cost reductions identified here represent minimum bounds since the model parameters were set so that appraisal and prevention costs were at minima at the initial (equilibrium) position. For example, any initial appraisal costs above the equilibrium represent inefficient expenditure that could be saved.
The policy derived here is similar in form to the policies advocated in type two models [ILLUSTRATION FOR FIGURE 2 OMITTED] with one obvious difference, namely this policy's promotion of zero appraisal costs. However, this in itself is not an unusual position; Deming recommended the use of either zero or 100 per cent inspection. The policy of zero appraisal can be traced back to the model's core relationships which determine that over a constrained horizon the optimum policy will tend to prevention costs of £10,000 and appraisal costs of zero. Extending the time horizon, e.g. to five years from the existing three years, will make little difference to the above analysis since these "optimum" values are achieved before the end of three years. If the constraint on the time horizon is removed altogether then the optimum of £10,000 prevention costs is called into question, as the following shows.
Spending an extra Δp above the equilibrium prevention costs in the forthcoming time period will result in some reduction Δd in defect levels over all succeeding time periods t. If the cost of an unprevented defect is k₂ then the Δp will be justified if:

Δd k₂ t > Δp

Since all terms are positive, Δp is justified if:

t > Δp / (Δd k₂)
i.e. providing the time horizon t extends far enough into the future, extra expenditure on prevention will always be justified. Hence altering the time horizon demonstrates that the two opposing views of quality-cost interaction, i.e. the classic and the modern, are reconcilable within the one model. However, it needs to be noted that the above analysis ignores the time value of money.
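The break-even condition can be illustrated numerically. The figures below are hypothetical and serve only to show the arithmetic of the inequality t > Δp/(Δd k₂):

```python
# Hypothetical figures, for illustration only.
delta_p = 5_000.0   # extra prevention spend in one quarter (pounds)
delta_d = 50.0      # resulting reduction in defects in each later quarter
k2 = 20.0           # cost of one unprevented defect (pounds)

# Minimum horizon (in quarters) beyond which the extra spend pays back:
# the saving of delta_d * k2 per quarter must accumulate past delta_p.
break_even_quarters = delta_p / (delta_d * k2)
print(break_even_quarters)  # 5.0
```

With these figures a horizon longer than five quarters justifies the extra prevention spend; any finite Δd with a long enough horizon does likewise, which is the reconciliation of the classic and modern views described above.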
The criticisms that quality-cost models extant in the literature are imprecisely defined and poorly argued have been answered in this paper by a clearly specified model and a close scrutiny of simulated behaviour. The constructed model, through its system dynamics nature, incorporates the three major types of model that appear separately in the literature. The model has enabled the rigorous examination of the main relationships connecting conformance quality and costs at the organizational level.
The simulation results indicate, for a three-year time horizon, the form of efficient allocations of expenditure to quality cost categories and illustrate the level of achievable savings (28.5 per cent of existing expenditure).
On a superficial scanning of the graphical results the conclusion could be drawn that the simulation provides support for the classic view of quality-cost behaviour that an optimum level of quality exists. However, it has been demonstrated that this only applies in certain time-constrained situations. If the time horizon is infinite, or above a particular cut-off value, then spending on prevention can always be justified, i.e. the modern view prevails. Hence the classic and modern views are reconciled within the one model. The explicitly stated assumptions comprising the model establish a firmer basis for future debate and present an opportunity to develop further insights into the quality-cost area.
The model was calibrated against available empirical data; in future it would be useful to gain access to more extensive data to enable more robust calibration of the model relationships. In addition a more ambitious project could integrate this model into a wider-scoped model that includes the potential to accommodate market reactions to quality changes.
1. Veen, B., "Quality costs", Quality, Vol. 2, 1974, pp. 55-9.
2. Campanella, J. and Corcoran, F.J., "Principles of quality costs", Quality Progress, Vol. 16 No. 4, 1983, pp. 16-22.
3. Edge, J. and Smith, D.J., "Quality-related costs", in Lock, D. (Ed.), Gower Handbook of Quality Management, Gower Publishing, Aldershot, 1990, pp. 57-70.
4. Dale, B.G. and Plunkett, J.J., Quality Costing, 2nd ed., Chapman and Hall, London, 1995.
5. Porter, L.J. and Rayner, P., "Quality costing for total quality management", International Journal of Production Economics, Vol. 27, 1992, pp. 69-81.
6. Juran, J.M. and Gryna, F.M. (Eds), Juran's Quality Control Handbook, McGraw-Hill, London, 1988.
7. Plunkett, J.J. and Dale, B.G., "Quality costs: a critique of some economic cost of quality models", International Journal of Production Research, Vol. 26 No. 11, 1988, pp. 1713-26.
8. Buzzell, R.D. and Gale, B.J., The PIMS Principles: Linking Strategy to Performance, The Free Press, New York, NY, 1987.
9. BS 6143, Guide to the Economics of Quality, British Standards Institution, London, 1992.
10. Oakland, J.S., Total Quality Management, 2nd ed., Butterworth-Heinemann, Oxford, 1993.
11. Chase, R.B. and Aquilano, N.J., Production and Operations Management, 6th ed., Richard D. Irwin, Homewood, IL, 1992.
12. Bowbrick, P., The Economics of Quality, Grades and Brands, Routledge, London, 1992.
13. Teboul, J., Managing Quality Dynamics, Prentice-Hall, London, 1991.
14. Webb, N.B., "Auditing meat processor quality control costs", Quality Progress, February 1972, pp. 13-15.
15. Burns, C.R., "Quality costing used as a total for cost reduction in the machine tool industry", Quality Assurance, Vol. 2 No. 1, 1976, pp. 25-32.
16. "Quality cost survey", Quality, June 1977, pp. 20-22.
17. Evans, J.R. and Lindsay, W.M., The Management and Control of Quality, 2nd ed., West, New York, NY, 1993.
18. Son, Y.K. and Hsu, L.-F., "A method of measuring quality costs", International Journal of Production Research, Vol. 29 No. 9, 1991, pp. 1785-94.
19. Fine, C.H., "Quality improvement and learning in productive systems", Management Science, Vol. 32 No. 10, 1986, pp. 1301-15.
20. Abdel-Hamid, T.K. and Madnick, S.E., "The elusive silver lining: how we fail to learn from software development failures", Sloan Management Review, Autumn, 1990, pp. 39-48.
21. Gilchrist, W., "Modelling capability", Journal of the Operational Research Society, Vol. 44 No. 9, 1993, pp. 909-23.
22. Stata, R., "Organisational learning - the key to management innovation", Sloan Management Review, Spring 1989, pp. 63-74.
23. Gitlow, H.S. and Gitlow, S.J., The Deming Guide to Quality and Competitive Position, Prentice-Hall, Englewood Cliffs, NJ, 1987.
Publication: International Journal of Quality & Reliability Management
Date: 1 March 1996