
Economic design of X-bar control charts: insights on design variables.

Introduction

Growing competition in the marketplace and a recognition that product quality is a strategic asset have forced managers to re-examine the role of online and offline quality in product design and manufacturing. A direct consequence of this renewed emphasis on product quality is increased investment in product inspection and other quality assurance systems. In several well-publicized efforts (for example, see Klock, 1990; Pena, 1990), firms have set up elaborate systems for data collection and analysis to ensure high output quality. Typically, this involves establishing numerous process control charts to monitor the status of production processes. The objective is to identify shifts in the process from the desired (in-control) state so that remedial action can be taken to restore it to that state. The major decisions in such a scheme involve tradeoffs between the inspection effort, determined by sampling frequency and sample size, the penalty for operating in an out-of-control state, and the cost of restoring the process to the in-control state. While the basic design principles of control charts are well understood, in practice their design is based primarily on convenience and industry norms, and few schemes incorporate the economics of the various costs involved. Many researchers (for example, Montgomery, 1980) have pointed out that this is partly due to the difficulty of obtaining and evaluating cost information. Developments in computer and information technologies and an increasing emphasis on quality costs (for example, see Godfrey, 1988; Juran and Gryna, 1993, pp. 15-38) have resulted in easier access to such data and made this factor less critical. Our experience with a wafer fabrication environment further suggests that a lack of understanding of the interrelationships between the design variables, rather than a lack of data, is the primary reason for such convenience-based design of quality control schemes.

We recognize that the study of control charts, in particular the economic design of X-bar charts, is a well-researched topic with a history extending over three to four decades. The literature on the subject is extensive, and good summaries can be found in the review papers by Ho and Case (1994), Montgomery (1980) and Vance (1983). The primary focus of research in the area has been on the development of computational procedures to determine the design variables. In addition, using a combination of computational experiments and sensitivity analyses, several researchers have obtained qualitative insights into the characteristics of the design variables. It is interesting to note that while these results are fairly comprehensive for single-stage systems (and local quality control), they provide limited guidance for multistage systems. Hence our objective is to develop analytical results which can be used as a framework for application to multistage systems. Since the exact cost functions are intractable, we rely on analyses of an approximate cost function for deriving the necessary insights. Our results are consistent with the observations in previous studies and provide additional support for the approximations made in our models.

Specifically, in this paper our objective is to examine, in detail, the economic design of the X-bar chart. Our choice is influenced by two considerations: the X-bar chart is perhaps the most popular control chart used in industry, and it is the simplest model and thus represents a good starting point for developing qualitative insights to support managerial decision making. The motivation for this research comes from our experience with a semiconductor manufacturer. The manufacturer, like many others in the industry, made substantial investments in setting up elaborate systems for online quality control and monitoring of process status. While the system provided a rich database, its application was rather myopic and limited to local process control. Further, the determination of the control chart variables was not directly related to the company's costs or product requirements. It was widely recognized that utilization of this quality information at the plant level, leading to integrated quality management, would require a better understanding of interprocess effects - the impact of process status on subsequent operations. The work described in this paper represents a first step in this direction and forms the basis for subsequent research examining related issues. For example, in Chen and Tirupati (1995) we report on the application of process control information to improve product inspection decisions. In that study, the results of this paper play a key role in the analysis.

The results of this work are based on the classical Duncan model (1956). While this model is a little dated and a substantial amount of related literature has appeared since Duncan's seminal work, the model captures the basic issues of interest, and the host of variants developed in subsequent research are not particularly useful in providing additional insights. Since the resulting cost functions are complex (even in Duncan's cost model), we focus on relatively stable systems in which the time between failures is large in comparison with the sampling interval. Accordingly, we make simplifying assumptions and derive conclusions based on approximate analyses.

The remainder of the paper is organized as follows: First, we describe the problem and present Duncan's cost model for the economic design of the X-bar chart. The next section focuses on key results that describe the relationships between the sample size, control limits, inspection frequency and other production parameters. Computational experiments to examine the robustness of our conclusions are discussed in the subsequent section. We conclude with a brief summary and some related remarks.

The cost model

Pioneering work in the economic design of control charts is due to Duncan (1956), who developed a cost model for the X-bar control chart and presented a solution procedure to obtain approximately optimal values of the design variables. Since Duncan's model of the X-bar control chart forms the basis for the results of this paper, we review it in some detail. A summary of the notation used in this paper is presented below.

h = the inspection time interval

n = sample size

k = parameter to define the control limits; the upper (lower) control limit is given as [[Mu].sub.0] + k[Sigma]/[square root of n] ([[Mu].sub.0] - k[Sigma]/[square root of n]).

[Lambda] = process failure rate

H = production cycle time, a random variable

D = the time needed to restore the process following an action signal

e = inspection time per unit; the time to inspect a sample of size n is en.

[Tau] = the elapsed time between the last inspection in the in-control period and the process shift to out-of-control state.

[Alpha] = probability of type I error.

[Alpha] = 2[Phi](-k), where [Phi] denotes the standard normal distribution function

[Beta] = probability of type II error

[Beta] = [Phi](k - [Delta][square root of n]) - [Phi](-k - [Delta][square root of n])

W = the costs of finding the assignable cause and restoring the process to an in-control state

M = the penalty per unit time of operating in an out-of-control state

b = fixed cost per sampling

c = variable sampling cost per unit

T = cost of investigating false alarms

The model is based on the following process features:

* The quality of the output is determined by a single measurable parameter. The mean of this parameter depends on the process state.

* The process may be in one of two states: in-control or out-of-control. In the in-control state the process mean is set at the desired value (say [[Mu].sub.0]). A single assignable cause of variability will result in the shift of the mean by a fixed magnitude [Delta][Sigma] to an out-of-control state. Thus, the process mean in out-of-control state is either [[Mu].sub.0]-[Delta][Sigma] (with probability 0.5) or [[Mu].sub.0] + [Delta][Sigma] (with probability 0.5). (The standard deviation of output quality in the out-of-control state continues to be [Sigma].)

* The elapsed time before the shift occurs is exponentially distributed with mean 1/[Lambda].

* Once the process shifts to out-of-control state, it continues to remain in that state until it is reset.

* The penalty for operating in an out-of-control state is assumed to be constant at M per unit time.

Monitoring of the process by the X-bar control chart involves the following steps:

* Choose a random sample of size n at intervals of length h.

* The variable k, together with the sample size n, specifies the control chart. The upper and lower control limits of the chart are respectively denoted as [[Mu].sub.0] + k[Sigma]/[square root of n] and [[Mu].sub.0] - k[Sigma]/[square root of n].

* If the sample mean is outside the control limits, a search will be initiated to determine the process status. If the process is confirmed to be out-of-control, the process will be stopped and corrective action will be taken. During the search, the process is allowed to continue operation.

* The time required to complete an inspection and plot the result is proportional to sample size, i.e., for a sample of size n, the time to complete inspection is en, where e is the sampling time per unit.
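The monitoring scheme above can be sketched as a short simulation. This is an illustrative Monte Carlo rendering of the paper's assumptions, not the authors' code; all parameter values below are made up.

```python
import random
from math import sqrt

def simulate_cycle(rng, *, n, k, h, lam, delta, D):
    """Simulate one production cycle of the monitoring scheme.

    Returns (cycle_length, false_alarms, samples_taken). The cycle ends
    when a sample mean falls outside the +/- k sigma/sqrt(n) limits after
    the shift has occurred; restoring the process then takes D time units.
    """
    shift_time = rng.expovariate(lam)       # exponential time to the assignable cause
    t, false_alarms, samples = 0.0, 0, 0
    while True:
        t += h                              # next sampling epoch
        samples += 1
        shifted = t > shift_time
        # standardized sample mean: N(0, 1) in control, N(delta*sqrt(n), 1) after the shift
        z = rng.gauss(delta * sqrt(n) if shifted else 0.0, 1.0)
        if abs(z) > k:
            if shifted:
                return t + D, false_alarms, samples   # true signal: stop and restore
            false_alarms += 1               # false alarm: process keeps running

# Illustrative check: average cycle length for a stable process
rng = random.Random(42)
lengths = [simulate_cycle(rng, n=4, k=3.0, h=1.0, lam=0.02, delta=2.0, D=1.0)[0]
           for _ in range(5000)]
avg_cycle = sum(lengths) / len(lengths)
```

With these made-up parameters the average cycle length comes out close to 1/[Lambda] + h/(1 - [Beta]) - h/2 + D, the form of the expected cycle length used later in the paper.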

Thus, the cost of a production cycle comprises the following:

* inspection costs;

* costs of investigating false alarms;

* additional product quality loss due to operating in an out-of-control state; and

* cost of detecting the cause for shift and restoring the process to in-control state.

Duncan (1956) provided an expression for the average cost per unit time, AC, comprising the elements above, as a function of the decision variables n, k and h. For the sake of brevity, we do not present the details but provide a brief summary in Appendix 1.

AC = (b + cn)/h + [Alpha]T/([Lambda]hE[H]) + (M/E[H])(h/(1 - [Beta]) - h/2 + [Lambda][h.sup.2]/12 + en + D) + W/E[H] (1)

and E[H] = 1/[Lambda] + h/(1 - [Beta]) - h/2 + [Lambda][h.sup.2]/12 + en + D (2)
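For concreteness, the cost model can be rendered in a few lines of Python. This is a sketch assuming Duncan's standard forms for [Alpha], [Beta], the expected out-of-control time and the cycle length E[H]; the parameter names follow the notation above.

```python
from math import erf, sqrt

def phi_cdf(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def average_cost(n, k, h, *, lam, delta, M, W, T, b, c, e, D):
    """Average cost per unit time, AC, for a candidate design (n, k, h)."""
    alpha = 2.0 * phi_cdf(-k)                                            # type I error
    beta = phi_cdf(k - delta * sqrt(n)) - phi_cdf(-k - delta * sqrt(n))  # type II error
    # expected out-of-control time per cycle: detection delay + inspection + restoration
    out_time = h / (1.0 - beta) - h / 2.0 + lam * h ** 2 / 12.0 + e * n + D
    cycle = 1.0 / lam + out_time             # expected cycle length E[H]
    per_cycle = alpha * T / (lam * h) + M * out_time + W
    return (b + c * n) / h + per_cycle / cycle
```

Minimizing this function over n, k and h is the economic design problem discussed next; for fixed n and k the function is easily checked numerically to be well behaved in h.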

In the economic design of X-bar control charts, the objective is to choose the control variables n, k and h so as to minimize the cost function (1). It may be observed that there are similarities between the analysis presented in this paper and some of the prior research. For example, Goel et al. (1968) and Lorenzen and Vance (1986) employ first order conditions in developing computational procedures. Collani (1986; 1989) obtains characteristics of the design variables based on an analysis of a variant of the cost model. However, there are important differences.

First, the focus in prior work has been on the development of procedures to determine optimal values of n, h and k, and qualitative insights into their behaviour are obtained by sensitivity analyses. In contrast, our objective is to develop results that reveal the relationships between the control variables and the production parameters. Second, we develop a search procedure which provides, in addition to approximate solutions, optimal design variables. The guarantee of optimality is based on bounding procedures developed on the basis of our results. The reader may note that the "exact" solutions presented in the literature for testing approximate/heuristic solutions are based on search procedures with pre-specified ranges for the design variables n and k. While these ranges are usually wide enough to include the optimal solution, the appropriate ranges clearly depend on the production parameters, and thus these methods do not guarantee optimality.

Characterization of design variables

In this section, we focus on developing results which describe the relationships between the process control variables n, k and h and the parameters M, [Lambda], [Delta], b, c, T and e. These results are based on analyses of functions that closely approximate the average cost function (1). The simplifications are necessitated by the mathematical intractability of (1) in providing the desired qualitative insights. However, our approximations are based on practical considerations and should be reasonable in many manufacturing environments. Specifically, we consider relatively stable processes in which the expected time between shifts to the out-of-control state (1/[Lambda]) is large in comparison with the sampling interval (h). We also assume that the time to find the assignable cause and to restore the process (D) is small relative to 1/[Lambda] and that the inspection time (e) is negligible. The implications of these assumptions for simplifying the cost function are discussed briefly below.

* A direct consequence of process stability is that [Lambda]h is small and the term [Lambda][h.sup.2]/12, representing a second order effect, may be ignored. It may be noted that this approximation has been used by several other researchers, most recently by Collani (1989) and Tagaras (1989).

* The major objective of process control charts is to obtain a quick feedback on the process status and in most cases the time lag due to sample inspection (en) is very small. In continuous manufacturing systems the production line is equipped with tools so as to make this time negligible.

* The expected cycle time, E[H], is perhaps the most intractable term in (1) since it is a function of all variables and occurs in the denominator. However, it includes a dominant term (1/[Lambda]) which makes it insensitive to choices of n, k and h and facilitates approximations at several levels. 1/[Lambda] represents the simplest approximation for E[H] and makes it independent of the decision variables. This is reasonable for very stable processes. A natural refinement is to treat E[H] as a constant in deriving optimality conditions for n, k and h, but evaluate it in accordance with equation (2). As a consequence of the latter approximation, we ignore variation in E[H] due to changes in n, k and h and treat the derivative of E[H] as zero.
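To give a feel for the magnitudes involved, here is a quick numerical check of the simplest approximation, assuming E[H] has the standard form of equation (2); the parameter values are illustrative only.

```python
def cycle_length(lam, h, beta, en, D):
    # E[H] per eq. (2): in-control time + detection delay + inspection + restoration
    return 1.0 / lam + h / (1.0 - beta) - h / 2.0 + lam * h ** 2 / 12.0 + en + D

lam, h = 0.01, 1.0                       # stable process: lam * h = 0.01
full = cycle_length(lam, h, beta=0.1, en=0.05, D=1.0)
approx_a2 = 1.0 / lam                    # simplest approximation: E[H] ~= 1/lambda
rel_err = (full - approx_a2) / full      # relative error of the approximation
```

For this stable process the approximation E[H] = 1/[Lambda] is off by under 2 per cent, consistent with treating E[H] as insensitive to the choices of n, k and h.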

Based on the discussion above, we consider two alternatives for E[H] (A1 and A2 presented below) in our analyses. Note that in each case the fourth term in (2), [Lambda][h.sup.2]/12, is ignored. In addition, we examine special cases in which sampling time is negligible (en [congruent] 0) and/or, sampling plans have high discriminating power ([Alpha], [Beta] [congruent] 0). It may be noted that the second alternative A2 implies that en [congruent] 0.

A1: E[H] is treated as constant and [Lambda][h.sup.2]/12 [congruent] 0

A2: E[H] = 1/[Lambda] and [Lambda][h.sup.2]/12 [congruent] 0

Our characterization of the variables n, k and h is based on analyses of first order optimality conditions, which are also sufficient under some conditions. In deriving these results, we treat the sample size n as if it were continuous and assume that the cost function is differentiable with respect to n. Various numerical studies have indicated that the cost function behaves smoothly in the neighbourhood of the optimal n for problems with realistic values of the production parameters (for example, see Goel et al. (1968); Montgomery (1982) and references therein). Based on these observations and our computational results, we believe that this approximation is appropriate for the type of results developed in this paper. In this section, we first present a summary of the key results of our analyses. This is followed by a discussion of the managerial implications of these findings. The details of the proofs are provided in Appendix 2.

Summary of results

Lemma 1: Suppose that conditions in A1 hold, then the average cost function, AC, reduces to the following:

AC = (b + cn)/h + [Alpha]T/([Lambda]hE[H]) + (M/E[H])(h/(1 - [Beta]) - h/2 + en + D) + W/E[H] (3)

Lemma 2: Suppose that conditions in A1 hold and variables n and k are given, then AC is convex in h. Furthermore, optimal h, [h.sup.*] is given by

[h.sup.*] = [square root of ((b + cn)E[H] + [Alpha]T/[Lambda])/(M(1/(1 - [Beta]) - 1/2))]. (4)

Lemma 3: Suppose that conditions in A1 hold and variables n and h are given, then optimal k satisfies the following:

k = [Delta][square root of n]/2 + (1/[Delta][square root of n]) ln(2T[(1 - [Beta]).sup.2]/[Lambda][h.sup.2]M) (5)

Lemma 4: Suppose that conditions in A1 hold and variables h and k are given. If discreteness of n is ignored, first order optimality condition for n reduces to the following:

[Mathematical Expression Omitted] (6)

Proposition 1: Suppose that conditions in A1 or A2 hold and n is fixed, then optimal k is independent of M.

Proposition 2: Suppose that conditions in A2 hold and n is fixed, then optimal k is independent of M and [Lambda].

Proposition 3: Suppose that conditions in A1 hold and the inspection time e is negligible, or conditions in A2 hold, then optimal n and k are independent of M.

Proposition 4: Suppose that conditions in A2 hold, then optimal n and k are independent of M and [Lambda].

Corollary 1: Suppose that conditions in A1 hold and e [congruent] 0, or conditions in A2 hold. Then the upper and lower control limits of the control chart are independent of M.

Corollary 2: Suppose that conditions in A2 hold. Then the upper and lower control limits of the control chart are independent of M and [Lambda].

Corollary 3: Suppose that conditions in A2 hold, n is fixed and [Alpha], [Beta] [congruent] 0, then optimal h is given by

h = [square root of 2(b + cn)/[Lambda]M]. (7)

Corollary 4: Suppose that conditions in A2 hold, n is fixed, and [Alpha], [Beta] [congruent] 0, then optimal k is given by

k = [Delta][square root of n]/2 + (1/[Delta][square root of n]) ln(T/(b + cn)). (8)

Corollary 5: Suppose that conditions in A2 hold, and discreteness of n is ignored. Then, the first order optimality condition for n reduces to the following:

[Mathematical Expression Omitted] (9)

Propositions 1-4 and Corollaries 1-5 characterize approximately the behaviour of the decision variables n, k and h as a function of b, c, T, [Delta], M and E[H]. It may be noted that equations (7) and (8) are the results of the special case in which the control charts have high discriminating power ([Alpha], [Beta] [congruent] 0). It is interesting to note that (7) and (8) provide closed form expressions for h and k as functions of n. As described later, these results are useful in deriving a quick heuristic to determine near optimal values of n, k and h.
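The quick heuristic alluded to above can be sketched as follows: scan candidate sample sizes, set h and k from the closed forms (7) and (8), and keep the cheapest design. The cost evaluation below assumes Duncan's standard model with e [congruent] 0, and all parameter values in the example are made up.

```python
from math import erf, log, sqrt

def phi_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def dm_design(*, lam, delta, M, W, T, b, c, D, n_max=50):
    """Direct-method heuristic: closed-form h and k for each n, best n by cost."""
    best = None
    for n in range(1, n_max + 1):
        h = sqrt(2.0 * (b + c * n) / (lam * M))                               # eq. (7)
        k = delta * sqrt(n) / 2.0 + log(T / (b + c * n)) / (delta * sqrt(n))  # eq. (8)
        if k <= 0:
            continue                     # closed form not meaningful here
        alpha = 2.0 * phi_cdf(-k)
        beta = phi_cdf(k - delta * sqrt(n)) - phi_cdf(-k - delta * sqrt(n))
        out_time = h / (1.0 - beta) - h / 2.0 + D
        cycle = 1.0 / lam + out_time
        cost = (b + c * n) / h + (alpha * T / (lam * h) + M * out_time + W) / cycle
        if best is None or cost < best[0]:
            best = (cost, n, h, k)
    return best                          # (cost, n, h, k)

cost, n, h, k = dm_design(lam=0.01, delta=2.0, M=100, W=25, T=50, b=0.5, c=0.1, D=1.0)
```

Because h and k come directly from (7) and (8), the only search is a one-dimensional scan over n, which reflects the loose coupling of the design variables discussed below.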

Managerial implications

In this section, we discuss the managerial implications of the characteristics described by the findings of the previous section. One of the interesting results in this context is provided by Proposition 4 and Corollary 2, which suggest that the optimal sample size and the upper and lower control limits of the X-bar chart are independent of M and [Lambda]. This may be counterintuitive, since one might expect that the penalty of operating in the out-of-control state should have a strong influence on the design of the control chart. Our result (equation 9) indicates that the sample size is determined primarily by the magnitude of the shift and should be sufficient to discriminate between the process states. The control limits (alternately, the variable k) are set accordingly. Thus, the sample size has an inverse relationship with [Delta]. It should be noted that these results do not imply that the control scheme and inspection effort are independent of M and [Lambda]. In fact, Lemma 2 suggests that the inspection interval is directly influenced by these parameters. We observe that this result is similar to the observations obtained by Collani (1986; 1989).

It may be noted that n and k depend primarily on [Delta]. In contrast, the inspection interval h is influenced by M (the penalty of operating in the out-of-control state), [Lambda] (the process failure rate) and the cost of each inspection (b + cn). It is interesting to note that equations (4) and (7) are similar to the EOQ result from inventory theory. Our results indicate that the optimal values of n, k and h are loosely coupled, and a hierarchical approach (with some feedback) may be used for their determination. This partially explains the success of various approximation schemes presented in the literature. For example, Lorenzen and Vance (1986) use Fibonacci search for determining n. For each n, the optimal k is obtained by golden section search, and given n and k, Newton's method is used to solve for h. While the foregoing conclusions are based on several assumptions and approximations, we expect that they are quite general. Computational results described in the next section support these conclusions and indicate that they are quite robust.

Computational experiments

In this section, we describe the results of computational experiments designed to illustrate the application of the analyses of the previous section. Since a number of assumptions and approximations were made in deriving these results, one of the major objectives of the computational experiment is to examine the robustness of these conclusions over a wide range of parameter values. Second, we also used the test problems to examine the nature of the approximations and the quality of the solutions provided by equations (4), (5), (7) and (8).

The choice of the parameters in our experiments is based on the extensive computations reported by Tagaras (1989). Since M and e are key parameters in our analyses, we used a wider range for these parameters than that used by Tagaras. For all other parameters ([Lambda], b, c, [Delta], T), the range of values is the same as that adopted by Tagaras. Table I presents the details of our experimental design. In total, we generated 432 problems for our experiment.

[TABULAR DATA FOR TABLE I OMITTED]

Clearly, to test the validity of Propositions 1-4, it is necessary to derive optimal solutions for the test problems. As mentioned earlier, while a number of procedures described in the literature provide good solutions, they do not guarantee global optimality. By optimal solution we refer to the values of n, k and h which minimize the cost function in (1). Hence, for these test problems we derived optimal solutions by using a search procedure similar to that presented by Goel et al. (1968), but without prescribing ranges for n and k. Instead, we refined the procedure by computing, in a dynamic fashion, bounds on the variables n and k so as to guarantee optimality and also minimize the computational effort. The basic module in our scheme, which determines the optimal h for given n and k, is similar to that used by Goel et al. (1968) and involves solving a closed form expression for h. In our implementation, described in Figures 2 and 3, we adopt a hierarchical approach for the search over the variables n and k. For each n we search over an appropriate range of k and determine the optimal solution. The procedure is repeated for different values of n to obtain a global optimal solution for the problem. The range of search for n (and k) is limited by the computation of upper bounds on these variables. The bounding procedure is described in detail in Appendix 3.

Discussion of computational results

It is encouraging to note that our computational results support the earlier conclusions and indicate that the approximations are very reasonable over the wide range of parameter values used in our test problems. Some examples of our results describing the behaviour of the variables n and k are shown in Table II.

[TABULAR DATA FOR TABLE II OMITTED]

In this table we present optimal values of n and k for problems defined by the parameter sets c = 0.1, b = 5.0, T = 500 and c = 1.0, b = 0.5, T = 50. It can be seen from this table that for e = 0, the discrepancies due to the approximations have negligible effect and the computational results are in excellent agreement with Proposition 4. Even when the assumptions are significantly violated (for example, when e = 0.01 or 0.05) the conclusions are robust and the variations in n are fairly small. For example, for the case e = 0.01, c = 0.1, b = 5.0, T = 500, [Delta] = 1 and [Lambda] = 0.01, the optimal n decreases from 27 to 23 as M increases from 50 to 1,000, a twentyfold increase. The results with the variable k, which defines the control limits, are quite similar. We note that the results of Table II are representative of the results we obtained with other problems. For the sake of brevity, we do not present the detailed results (which may be obtained from the authors). Instead we present a summary of the results in Table III, which provides the average percentage deviations in n and k for test problems defined by the parameters c, b, T, e and [Delta].

[TABULAR DATA FOR TABLE III OMITTED]

Note that for each data set defined by these parameters, three levels of [Lambda] and M result in nine test problems. The propositions of Section 3 suggest that the optimal values of n and k for the problems in each data set should be close, and variations in the values of these variables represent deviations from the propositions. In the table, we present the average relative absolute deviation for each data set, defined in the following manner

deviation(n) = (1/9) [summation over i] [absolute value of [n.sub.i] - [bar.n]]/[bar.n]

where [n.sub.i] = optimal value of n for problem i in the data set, i = 1, 2, 3, ..., 9, and

[bar.n] = (1/9) [summation over i] [n.sub.i]

The measure for k is defined in a similar manner. Observe that for the nine problems in each data set the value of M varies twentyfold from 50 to 1,000, and [Lambda] varies fivefold from 0.01 to 0.05, which represents a substantial range. The results in the table are encouraging. For example, the largest average deviation of n for e = 0 (an assumption in Propositions 3 and 4) is only 4.44 per cent. The worst corresponding value is 22.98 per cent, when the assumptions are significantly violated. It may be noted that the deviations in k are typically smaller than those observed for n. This is partly due to the discrete nature of n. Also, it is interesting to note that these deviations in n and k do not have any appreciable effect on the costs, as long as the corresponding inspection interval h is optimal. For example, we found that using the values of n and k that are optimal for [Lambda] = 0.02 and M = 100 for the other problems in each data set results in typical cost penalties of less than 1 per cent if the value of h is appropriately determined.
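The deviation measure is simple to compute. The sketch below assumes the measure is the mean of [absolute value of [n.sub.i] - [bar.n]]/[bar.n] over the nine problems in a data set; the sample sizes in the example are made up for illustration.

```python
def avg_rel_abs_dev(values):
    """Average relative absolute deviation of optimal n (or k) across a data set."""
    mean = sum(values) / len(values)
    return sum(abs(v - mean) for v in values) / (len(values) * mean)

# e.g. optimal n for the nine (lambda, M) combinations of one hypothetical data set
optimal_n = [27, 27, 26, 26, 25, 25, 24, 24, 23]
dev = avg_rel_abs_dev(optimal_n)    # small when n is insensitive to lambda and M
```

A value near zero indicates that the optimal sample size barely moves as [Lambda] and M vary, which is the behaviour the propositions predict.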

The behaviour of the variable h is rather interesting and warrants some elaboration. Recall that the approximate value of the optimal h is expressed by equation (7), which is similar to the result of the EOQ model. Since this formula involves a square root, the optimal value of h ([h.sup.*]) is not very sensitive to the estimates of b, c, [Lambda] and M. Similarly, the total cost curve (equation 1) is somewhat flat in the vicinity of [h.sup.*]. This result suggests that rounding off the [h.sup.*] value to facilitate implementation is not likely to result in any significant cost increase. This observation is exemplified by the computational results shown in Figure 4.

Other interesting results relate to the effect of [Delta], c, b and T on the sample size n. In Figure 5 we present illustrative results which correspond to the value M = 100. The results with other problems are similar and are omitted for the sake of brevity. The results in the figure support equation (9) and indicate that as the process shift [Delta] increases, the sample size n decreases. Similarly, an increase in the unit inspection cost (c) leads to a decrease in the sample size n. Likewise, increases in the fixed inspection cost (b) and/or the type I error cost (T) lead to an increase in n. We note that these results are consistent with earlier studies reported in the literature (see, for example, Chiu, 1976; Collani, 1986; 1989).

Other related comments

As mentioned earlier, we used the test problems to examine the quality of the approximate solutions provided by (4), (5), (7) and (8). It may be recalled that (7) and (8) provide closed form results to compute approximately optimal h and k for a given value of the sample size n. We refer to the solution provided by (7) and (8) as the direct method (DM), because h and k are computed directly from closed form equations, and to the solution obtained from (4) and (5) as the iterative method (IM), since it involves the iterative solution of simultaneous equations to determine h and k. For the 432 problems we used the two methods to obtain h and k for the optimal value of n. The results, summarized in Table IV, are very encouraging. First, the approximate values of h and k are close to optimal. Second, the penalty in the cost function due to this suboptimality is negligible. From the table, it can be seen that the average cost error for IM is less than 0.1 per cent and the maximum error is less than 0.5 per cent. The average error with DM is less than 0.4 per cent and the maximum error is 6 per cent. It may be observed that the worst case error corresponds to a [Beta] value of 0.56, an unlikely situation in practice. The small errors are clearly due to the insensitivity of the cost function to h and k near their respective optimal values. These results are consistent with those obtained by Goel et al. (1968) and Panagos et al. (1985). It may be noted that equation (6), which is based on treating n as a continuous variable, may be used to generate an initial choice for the sample size. Our experience suggests that this sample size, together with the IM or DM procedure, provides near optimal results, typically within 1 per cent of the optimal cost.

It is interesting to note that the computational results support the assumptions made to facilitate our analysis. For example, for the 432 problems in the experiment, the range of [Lambda]h is between 0.0041 and 0.256 for the optimal choices of n, k and h. The corresponding range of [Lambda][h.sup.2]/12 is between 0.00011 and 0.108, which is negligible in comparison with 1/[Lambda], the minimum value of which is 20.

[TABULAR DATA FOR TABLE IV OMITTED]

We conclude this section with some brief comments about our solution procedure for determining the optimal values of n, k and h. As described earlier, we used a search procedure that incorporated bounds computed in a dynamic fashion to generate exact solutions. This scheme guarantees optimality, unlike the procedures reported in the literature, which typically rely on prescribed ranges for the variables n and k. Our computational experience indicates that this is an efficient procedure with tight bounds on k and n. For example, in most instances the upper bound on n was within one-third of the optimal value. In the worst case, the upper bound exceeded the optimal value by ten.

Conclusions

Based on several simplifying assumptions and analyses of the resulting approximate cost function, we have derived several interesting properties characterizing the design variables of the X-bar control chart. For example, they suggest that the sample size and the upper and lower limits of the control chart are independent of the penalty for operating in an out-of-control state. Similarly, these variables are independent of the process failure rate. Our computational results demonstrate that these and other related results are fairly robust and hold approximately even when the assumptions are significantly violated. Besides providing a quick procedure to compute optimal values of n, k and h, these results contribute to an understanding of the dynamics of these variables. We believe that such studies help encourage the implementation of the economic approach to the design of process control charts and may provide insights to facilitate the integration of online and offline quality decisions for total quality management.

References

Chen, W.H. and Tirupati, D. (1995), "On-line total quality management: integration of product inspection and process control", Production and Operations Management, Vol. 4 No. 3, Summer, pp. 242-62.

Chiu, W.K. (1976), "On the estimation of data parameters for economic optimum X-bar charts", Metrika, Band 23, pp. 135-47.

Collani, V. (1986), "A simple procedure to determine the economic design of an X-bar control chart", Journal of Quality Technology, Vol. 18 No. 3, pp. 145-51.

von Collani, E. (1989), The Economic Design of Control Charts, B.G. Teubner, Stuttgart.

Duncan, A.J. (1956), "The economic design of X̄ charts used to maintain current control of a process", Journal of the American Statistical Association, Vol. 51 No. 274, pp. 228-42.

Godfrey, J.T. and Pasewark, W.R. (1988), "Controlling quality costs", Management Accounting, March, pp. 48-51.

Goel, A.L., Jain, S.C. and Wu, S.M. (1968), "An algorithm for the determination of the economic design of X̄ charts based on Duncan's model", Journal of the American Statistical Association, Vol. 63 No. 321, pp. 304-20.

Ho, C. and Case, K. (1994), "Economic design of control charts: a literature review for 1981-1991", Journal of Quality Technology, Vol. 26 No. 1, pp. 39-53.

Juran, J.M. and Gryna, F.M. (1993), Quality Planning and Analysis, 3rd ed. (International edition), McGraw-Hill, New York, NY.

Klock, J.J. (1990), "How to manage 3500 (or fewer) suppliers", Quality Progress, June, pp. 43-7.

Lorenzen, T.J. and Vance, L.C. (1986), "The economic design of control charts: a unified approach", Technometrics, Vol. 28 No. 1, pp. 3-10.

Montgomery, D.C. (1980), "The economic design of control charts: a review and literature survey", Journal of Quality Technology, Vol. 12 No. 2, pp. 75-87.

Montgomery, D.C. (1982), "Economic design of an X̄ control chart", Journal of Quality Technology, Vol. 14 No. 1, pp. 40-43.

Panagos, M.R., Heikes, R.G. and Montgomery, D.C. (1985), "Economic design of X̄ control charts for two manufacturing process models", Naval Research Logistics Quarterly, Vol. 32, pp. 631-46.

Pena, E. (1990), "Motorola's secret to total quality control", Quality Progress, October, pp. 43-5.

Tagaras, G. (1989), "Power approximation in the economic design of control charts", Naval Research Logistics Quarterly, Vol. 36, pp. 639-54.

Vance, L.C. (1983), "A bibliography of statistical quality control chart techniques, 1970-1980", Journal of Quality Technology, Vol. 15, pp. 59-62.

Appendix 1: details of the cost model for the X̄ control chart

Expected cycle time E[H]

The cycle time comprises the in-control period and the out-of-control period, as shown in Figure 1. The expected length of the in-control period is 1/λ. The out-of-control period may be partitioned into three segments:

(1) the time from the shift to the out-of-control state until the next sampling inspection (denoted by h - τ);

(2) the time for any additional inspections that may be required to detect the shift to the out-of-control state; and

(3) a deterministic interval (en + D) to find the cause of the shift and restore the process to the in-control state.

Thus, the expected cycle length, E[H], may be expressed as follows:

E[H] = 1/λ + h/(1 - β) - E(τ) + en + D,

where E(τ) is the expected elapsed time between the last inspection in the in-control period and the process shift to the out-of-control state.

It may be noted that τ is a random variable that follows a negative exponential distribution truncated to the interval [0, h], and its density function f_τ(t) may be written as follows:

f_τ(t) = λ exp(-λt) / (1 - exp(-λh)) for 0 ≤ t ≤ h

Therefore, the expected value of τ can be derived as

E(τ) = ∫_0^h t f_τ(t) dt = [1 - (1 + λh) exp(-λh)] / [λ(1 - exp(-λh))] = h/2 - λh²/12 + λ³h⁴/720 + higher-order terms

Ignoring terms of order λ³h⁴ and higher, Duncan (1956) approximates E(τ) as

E(τ) ≅ h/2 - λh²/12.

Thus, E[H] = 1/λ + h/(1 - β) - h/2 + λh²/12 + en + D (2)
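As a quick numerical check of Duncan's approximation, the snippet below compares the exact expression for E(τ) with the two-term series for assumed values λ = 0.05 and h = 1 (λh small, as the approximation requires):

```python
import math

def e_tau_exact(lam, h):
    """E(tau) for an exponential shift time truncated to [0, h]."""
    return (1.0 - (1.0 + lam * h) * math.exp(-lam * h)) / (lam * (1.0 - math.exp(-lam * h)))

def e_tau_duncan(lam, h):
    """Duncan's (1956) two-term approximation h/2 - lam*h^2/12."""
    return h / 2.0 - lam * h ** 2 / 12.0

lam, h = 0.05, 1.0   # assumed values with lam*h small
print(e_tau_exact(lam, h), e_tau_duncan(lam, h))
```

For these values the two agree to roughly the λ³h⁴/720 term, consistent with the series above.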

Average total cost per unit time AC

Average total cost per unit time, AC is defined as the ratio of the expected cycle cost E[TC] to the expected cycle length E[H], i.e.

AC = E[TC] / E[H].

We note that the cycle cost TC comprises the following:

* Inspection costs: these include a fixed component (b) that is independent of the sample size and a variable cost (cn), resulting in a sampling cost of (b + cn) for each inspection.

* Costs of investigating false alarms (type I error cost): this cost is directly proportional to the expected number of inspections during the in-control state, which is given by

Σ_{j≥1} exp(-jλh) = exp(-λh)/(1 - exp(-λh)) ≅ 1/(λh).

Ignoring higher-order terms of λh results in the simple approximation indicated by the last term in the equation above. The corresponding cost due to type I errors is αT/(λh).
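Under the exponential shift model, the expected number of in-control inspections is the geometric sum exp(-λh)/(1 - exp(-λh)); the sketch below, with assumed values of λ and h, checks how close the 1/(λh) approximation comes when λh is small:

```python
import math

lam, h = 0.05, 1.0   # assumed failure rate and sampling interval (lam*h small)

# Expected number of inspections falling in the in-control period
exact = math.exp(-lam * h) / (1.0 - math.exp(-lam * h))   # sum of exp(-j*lam*h), j >= 1
approx = 1.0 / (lam * h)                                  # approximation used in the text
print(exact, approx)
```

The approximation always overstates the exact count slightly (by roughly 1/2 when λh is small), which is negligible for stable processes.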

* Additional product loss due to operating in an out-of-control state:

M[h/(1 - β) - h/2 + λh²/12 + en + D]

* Cost of detecting the cause of the shift and restoring the process to the in-control state: W.

The expected cycle cost E[TC] may be obtained as the sum of the four components described above, and the corresponding average cost per unit time, AC, may be expressed as follows:

AC = (b + cn)/h + Tα/(λhE[H]) + (M/E[H])[h/(1 - β) - h/2 + λh²/12 + en + D] + W/E[H] (1)
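Equation (1) is straightforward to evaluate once α and β are computed from the normal distribution. The following sketch assembles the four cost components at a hypothetical design point (all parameter values are illustrative assumptions, not taken from the paper's test problems):

```python
import math

# Hypothetical design point and cost parameters (illustrative assumptions)
lam, delta, n, k, h = 0.05, 2.0, 5, 3.0, 1.0
b, c, e, D = 1.0, 0.1, 0.05, 2.0
T, W, M = 50.0, 25.0, 100.0

Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
alpha = 2.0 * (1.0 - Phi(k))                                  # type I error probability
beta = Phi(k - delta * math.sqrt(n)) - Phi(-k - delta * math.sqrt(n))  # type II error

ooc = h / (1.0 - beta) - h / 2.0 + lam * h ** 2 / 12.0 + e * n + D  # expected out-of-control time
EH = 1.0 / lam + ooc                                                # equation (2)

# The four cost components of equation (1)
sampling = (b + c * n) / h
false_alarm = T * alpha / (lam * h * EH)
ooc_penalty = M * ooc / EH
restoration = W / EH
AC = sampling + false_alarm + ooc_penalty + restoration
print(AC)
```

Printing the four components separately makes it easy to see which term dominates at a given design point.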

Appendix 2: analysis of the cost function AC - proofs of the results of Section 3

Owing to the mathematical intractability of (1), our analysis is based on simplifying approximations, which are reasonable for stable systems. The major assumptions are: the expected time between shifts to the out-of-control state (1/λ) is large in comparison with the sampling interval (h); the time to restore the process (D) is small relative to 1/λ; and the inspection time (e) is negligible. Specifically, we consider two alternatives for E[H] (A1 and A2, presented below) in our analyses. Note that in each case λh²/12 and higher-order terms are ignored in (1). In addition, we examine special cases in which sampling time is negligible (en ≅ 0) and/or sampling plans have high discriminating power (α, β ≅ 0).

A1: E[H] is treated as a constant and λh²/12 ≅ 0. This approximation recognizes that E[H] is dominated by the term 1/λ, so that variations in E[H] due to changes in n, k and h can be ignored. Thus, the derivative of E[H] can be treated as zero.

A2: E[H] = 1/λ and λh²/12 ≅ 0. This is the simplest approximation for E[H] and makes E[H] independent of n, k and h.

Outline of proof of Lemma 1

Lemma 1 is trivially true. Supposing that the conditions in A1 hold, equation (3) below is obtained by eliminating λh²/12 from equation (1).

AC = (b + cn)/h + αT/(λhE[H]) + (M/E[H])[h/(1 - β) - h/2 + en + D] + W/E[H] (3)

Outline of proof of Lemma 2

From (3), given n and k, we obtain the following:

∂AC/∂h = -(b + cn)/h² - αT/(λE[H]h²) + (M/E[H])[1/(1 - β) - 1/2]

∂²AC/∂h² = 2(b + cn)/h³ + 2αT/(λE[H]h³) > 0

Thus, AC is convex in h. Equation (4) below follows by setting the first-order derivative to zero and solving for h, and Lemma 2 is obtained.

h = √{[E[H](b + cn) + Tα/λ] / [M(1/(1 - β) - 1/2)]} (4)

Outline of proof of Lemma 3

Equating to zero the first derivative of (3) with respect to k, we obtain

[T/(λhE[H])] ∂α/∂k + [Mh/E[H]] ∂/∂k[1/(1 - β)] = 0.

Then, it follows that [T/(λh)] ∂α/∂k + [Mh/(1 - β)²] ∂β/∂k = 0.

Note that α = 2 ∫_k^∞ [exp(-z²/2)/√(2π)] dz and β = ∫_{-k-δ√n}^{k-δ√n} [exp(-z²/2)/√(2π)] dz ≅ ∫_{-∞}^{k-δ√n} [exp(-z²/2)/√(2π)] dz

Thus, ∂α/∂k = -2 exp(-k²/2)/√(2π) and ∂β/∂k = exp(-(δ√n - k)²/2)/√(2π).
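The two derivative formulas can be verified numerically. In the sketch below, β is computed with the one-sided approximation Φ(k - δ√n) used in the proof (the neglected lower tail is negligible at this assumed design point), and the closed-form derivatives are compared with central differences:

```python
import math

delta, n, k = 2.0, 5, 3.0   # assumed design point
Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

alpha = lambda k: 2.0 * (1.0 - Phi(k))
beta = lambda k: Phi(k - delta * math.sqrt(n))   # one-sided approximation from the proof

# Closed-form derivatives from the proof of Lemma 3
dalpha = -2.0 * math.exp(-k ** 2 / 2.0) / math.sqrt(2.0 * math.pi)
dbeta = math.exp(-(delta * math.sqrt(n) - k) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)

# Central-difference checks
eps = 1e-6
num_dalpha = (alpha(k + eps) - alpha(k - eps)) / (2.0 * eps)
num_dbeta = (beta(k + eps) - beta(k - eps)) / (2.0 * eps)
print(dalpha, num_dalpha)
print(dbeta, num_dbeta)
```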

Therefore, [T/(λh)][-2 exp(-k²/2)/√(2π)] + [Mh/(1 - β)²][exp(-(δ√n - k)²/2)/√(2π)] = 0.

Or, -(2T/(λh)) exp(-k²/2) + [Mh/(1 - β)²] exp(-k²/2 + kδ√n - δ²n/2) = 0.

Cancelling out the common terms, we obtain

exp(kδ√n - δ²n/2) = 2T(1 - β)² / (λMh²).

Taking logarithm on both sides, it follows that

kδ√n - δ²n/2 = ln[2T(1 - β)²/(λMh²)],

or k = δ√n/2 + [1/(δ√n)] ln[2T(1 - β)²/(λMh²)]. (5)

Thus, Lemma 3 (equation 5) follows.
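Equation (5) defines k only implicitly, since β on the right-hand side depends on k. A simple fixed-point iteration resolves this; the parameter values below are illustrative assumptions:

```python
import math

# Illustrative assumed values for the quantities appearing in (5)
lam, delta, n, h, T, M = 0.05, 2.0, 5, 1.0, 50.0, 100.0
Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
r = delta * math.sqrt(n)

k = 3.0                      # starting guess
for _ in range(50):          # fixed-point iteration on (5)
    beta = Phi(k - r)        # one-sided approximation for beta
    k = r / 2.0 + math.log(2.0 * T * (1.0 - beta) ** 2 / (lam * h ** 2 * M)) / r
print(k)
```

The map contracts strongly here because ∂β/∂k enters only through a logarithm, so a handful of iterations suffices in practice.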

Outline of proof of Lemma 4

Equating to zero the first derivative of (3) with respect to n, we obtain

c/h - (Mh/E[H]) [exp(-(δ√n - k)²/2)/(√(2π)(1 - β)²)] [δ/(2√n)] + eM/E[H] = 0

i.e., c - (Mh²/E[H]) [exp(-(δ√n - k)²/2)/(√(2π)(1 - β)²)] [δ/(2√n)] + ehM/E[H] = 0 (6)

Thus, Lemma 4 is obtained.

Outline of proof of Proposition 1

Under A1 (or A2) and Lemma 3, the optimal value of k is given by (5). To prove that k is independent of M for a given n, it is sufficient to show that M can be eliminated from (5) so that k is a function of n, h, δ, λ, b, c and T. By Lemma 2, equation (4), it follows that

Mh² = [E[H](b + cn) + Tα/λ] / [1/(1 - β) - 1/2] = 2(1 - β)[E[H](b + cn) + Tα/λ]/(1 + β). (p1)

Substituting the above in (5), we obtain k as a solution to

k = δ√n/2 + [1/(δ√n)] ln{T(1 - β²) / [λE[H](b + cn) + Tα]}. (p2)

Observe that for a given n, (p2) is independent of M, and the proposition follows.

Outline of proof of Proposition 2

Under A2, E[H] is approximated by 1/λ. Substituting this value in (p2), the optimality condition for k reduces to

k = δ√n/2 + [1/(δ√n)] ln[T(1 - β²) / (b + cn + Tα)]. (p3)

Clearly (p3) is independent of M and λ, and we obtain Proposition 2.

Outline of proof of Proposition 3

The proof of Proposition 3 is similar to that of Proposition 1. By substituting the right-hand side of (p1) for Mh² and noting that e ≅ 0, (6) may be rewritten as

c = [b + cn + Tα/(λE[H])] δ/[(1 - β²)√(2πn)] exp(-(δ√n - k)²/2). (p4)

Since the optimal n and k are obtained by simultaneously solving (p2) and (p4), which are independent of M, the proposition follows.

Outline of proof of Proposition 4

Under A2, E[H] is approximated by 1/λ. Substituting this value in (p4), the optimality condition for n reduces to

c = [(b + cn + Tα)δ / ((1 - β²)√(2πn))] exp(-(δ√n - k)²/2) (9)

Since the optimal n and k are obtained by simultaneously solving (p3) and (9), which are independent of M and λ, the proposition follows.
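Propositions 2 and 4 can be illustrated directly: conditions (p3) and (9) involve only δ, b, c and T, so the optimal n and k can be computed without ever specifying M or λ. The sketch below solves (p3) by fixed-point iteration for each candidate n and then selects the n that best satisfies (9) (parameter values are illustrative assumptions):

```python
import math

delta, b, c, T = 2.0, 1.0, 0.1, 50.0    # note: M and lam never appear
Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def k_of_n(n):
    """Solve (p3) for k by fixed-point iteration at a given n."""
    r = delta * math.sqrt(n)
    k = 3.0
    for _ in range(100):
        alpha = 2.0 * (1.0 - Phi(k))
        beta = Phi(k - r)
        k = r / 2.0 + math.log(T * (1.0 - beta ** 2) / (b + c * n + T * alpha)) / r
    return k

def resid9(n):
    """Residual of optimality condition (9): right-hand side minus c."""
    k = k_of_n(n)
    r = delta * math.sqrt(n)
    alpha, beta = 2.0 * (1.0 - Phi(k)), Phi(k - r)
    rhs = ((b + c * n + T * alpha) * delta / ((1.0 - beta ** 2) * math.sqrt(2.0 * math.pi * n))
           * math.exp(-(r - k) ** 2 / 2.0))
    return rhs - c

n_star = min(range(1, 21), key=lambda n: abs(resid9(n)))
print(n_star, k_of_n(n_star))
```

Changing M or λ afterwards would alter only the sampling interval h via (4), leaving this (n, k) pair unchanged, which is exactly the content of Proposition 4.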

Outline of proof of Corollary 1

It is noted that the upper (lower) control limit is μ₀ + kσ/√n (μ₀ - kσ/√n), where μ₀ and σ are assumed to be constant. By Propositions 1 and 3, it is obvious that kσ/√n is independent of M, and Corollary 1 is obtained.

Proof of Corollary 2

This proof is similar to that of Corollary 1 and is omitted.

Proofs of Corollaries 3 and 4

The proofs follow directly from equations (4) and (p2), respectively, with appropriate substitutions for α, β and E[H].

Proof of Corollary 5

The proof follows directly from the proof of Proposition 4.

Appendix 3: results in support of the procedure for obtaining optimal n, k and h

Additional notation:

k(n): Optimal choice of k for a given n

h(n, k): Optimal choice of h for given n and k

h(n): Optimal choice of h for a given n and k(n)

AC(n, k, h): Average cost for given n, k and h (defined by equation (1))

AC(n, k): Average cost for given n, k and h(n, k)

AC(n): Average cost for given n, k(n) and h(n)

α(n, k): The value of α defined by n and k

α(n): The value of α defined by n and k(n)

β(n, k): The value of β defined by n and k

β(n): The value of β defined by n and k(n)

H(n, k, h): Expected cycle time for given n, k and h (defined by equation (2))

H(n, k): Expected cycle time for given n, k and h(n, k)

H(n): Expected cycle time for given n, k(n) and h(n)

n_u: Upper bound on the sample size

k_u: Upper bound on k for a given n

It may be recalled that

H(n, k, h) = 1/λ + en + D + h[1/(1 - β(n, k)) - 1/2 + λh/12]

Hence, H(n, k) = 1/λ + en + D + h(n, k)[1/(1 - β(n, k)) - 1/2 + λh(n, k)/12]

and H(n) = 1/λ + en + D + h(n)[1/(1 - β(n)) - 1/2 + λh(n)/12]

It may be noted that AC(n, k) = min_h AC(n, k, h) and AC(n) = min_k AC(n, k)

Definitions

Upper bound on k. For a given n, k_u is an upper bound on k if there exists a k₁ ≤ k_u such that the following inequality is satisfied:

AC(n, k) ≥ AC(n, k₁) for all k > k_u

Upper bound on sample size. n_u is an upper bound on the sample size n if there exists an n₁ ≤ n_u such that the following inequality is satisfied:

AC(n) ≥ AC(n₁) for all n > n_u

Assumptions:

In what follows, we demonstrate that the procedure described in Section 4 and Figures 2 and 3 provides optimal values of n, k and h under the following conditions. While these conditions hold for the test problems considered in our experiment, we conjecture that they hold in general. However, we do not attempt to prove these results since the algebra involved is rather tedious and not particularly insightful.

(1) The discriminating power of the control chart increases with the sample size, i.e.,

β(n₁) > β(n₂) if n₁ < n₂

(2) The sampling interval h is much smaller than the average time between shifts to the out-of-control state, i.e., h ≪ 1/λ.

(3) The cost of restoring the process to the in-control state (W) is smaller than the penalty for operating the process in the out-of-control state for a period of 1/λ, i.e., M/λ > W. It may be noted that in our test problems the value of (M/λ - W) ranged between 965 and 99,965.

Proposition A1:

Suppose that the optimal h and k for a sample of size n₁ have been determined. Then an upper bound n_u for the optimal sample size may be computed as the smallest value of n that is not smaller than n₁ and satisfies the following condition:

[Mathematical Expression Omitted]

[Mathematical Expression Omitted]

and g(n, h) = 1/λ + en + D + h[1/(1 - β(n₁)) - 1/2 + λh/12]

Outline of proof

From Assumption 1, it follows that 1/(1 - β(n₁)) > 1/(1 - β(n)) for n > n₁, and hence H(n) < g(n, h). The definition of [Mathematical Expression Omitted] implies that

[Mathematical Expression Omitted]

In particular the above is true for h(n) and

[Mathematical Expression Omitted] (a1)

It may be noted that for all n

[Mathematical Expression Omitted] (a2)

The last inequality follows from (a1) above. The definition of n_u implies that

[Mathematical Expression Omitted] (a3)

Inequalities (a2) and (a3) together imply that AC(n) > AC(n₁) for all n > n_u, and hence n_u represents a valid upper bound on the sample size.

Proposition A2:

Given n and k, a lower bound on h(n,k), denoted by [Mathematical Expression Omitted], is given by

[Mathematical Expression Omitted]

Outline of proof

Since n and k are given, we omit these arguments and denote AC(n, k, h) by AC(h) in this proof. Similarly, H(h), β and α respectively denote H(n, k, h), β(n, k) and α(n, k). Using the definition of H(h) (equation (2)), the cost function AC(h) of equation (1) may be expressed as follows:

AC(h) = (b + cn)/h + αT/(λhH(h)) - (M - λW)/(λH(h)) + M (a4)

The proof follows from an analysis of first order optimality condition for (a4). Observe that

dAC(h)/dh = ∂AC(h)/∂h + [∂AC(h)/∂H(h)] dH(h)/dh

Since ∂AC(h)/∂h = -(b + cn)/h² - αT/(λH(h)h²),

∂AC(h)/∂H(h) = (M - λW)/(λH(h)²) - Tα/(λhH(h)²)

and dH(h)/dh = 1/(1 - β) - 1/2 + λh/6,

it follows that dAC(h)/dh = -[(b + cn)/h² - (1/H(h)²)((M - λW)/λ)(1/(1 - β) - 1/2 + λh/6)] - [αT/(λhH(h))][1/h + (1/H(h))(1/(1 - β) - 1/2 + λh/6)], (a5)
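Expression (a5) can be checked against a finite-difference derivative of (a4). In the sketch below, α and β are held as fixed constants (values corresponding to an assumed design point, since n and k are given) and all cost parameters are illustrative assumptions:

```python
import math

# Illustrative assumed parameters; alpha and beta are constants since n and k are given
lam, b, c, n = 0.05, 1.0, 0.1, 5
e, D, T, W, M = 0.05, 2.0, 50.0, 25.0, 100.0
alpha, beta = 0.0027, 0.0706   # roughly n = 5, k = 3 with a 2-sigma shift

def H(h):
    """Expected cycle length as a function of h (equation (2))."""
    return 1.0 / lam + e * n + D + h * (1.0 / (1.0 - beta) - 0.5 + lam * h / 12.0)

def AC(h):
    """Cost function in the form (a4)."""
    return ((b + c * n) / h + alpha * T / (lam * h * H(h))
            - (M - lam * W) / (lam * H(h)) + M)

def dAC(h):
    """Closed-form derivative (a5)."""
    g = 1.0 / (1.0 - beta) - 0.5 + lam * h / 6.0
    return (-((b + c * n) / h ** 2 - (M - lam * W) / (lam * H(h) ** 2) * g)
            - alpha * T / (lam * h * H(h)) * (1.0 / h + g / H(h)))

h, eps = 1.5, 1e-6
num = (AC(h + eps) - AC(h - eps)) / (2.0 * eps)
print(dAC(h), num)
```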

Observe that dAC(h)/dh = 0 implies that

[Mathematical Expression Omitted] (a6)

From (a6) it follows that

[Mathematical Expression Omitted]

Denoting the right-hand side of the last inequality as [Mathematical Expression Omitted], we obtain the proposition, i.e.,

[Mathematical Expression Omitted]

Corollary A1:

Define [Mathematical Expression Omitted].

For a given n, [Mathematical Expression Omitted] and [Mathematical Expression Omitted] is increasing in k.

Outline of proof:

Since [Mathematical Expression Omitted] (from Proposition A2), the first part of Corollary A1 follows.

To prove the second part of the result, it is sufficient to show that [Mathematical Expression Omitted](n, k)/(1 - β(n, k)) is increasing in k and [Mathematical Expression Omitted] is decreasing in k.

Since [Mathematical Expression Omitted]

and observing that β(n, k) is increasing in k for a given n, it follows that [Mathematical Expression Omitted] is increasing in k and [Mathematical Expression Omitted] is decreasing in k. Hence, Corollary A1 follows.

Proposition A3:

The procedure described in Figure 3 provides optimal value of k and the corresponding optimal inspection interval h for a given sample size n.

Outline of proof

The procedure in Figure 3 involves a search over the interval (0, k_u) for the optimal k, denoted by k*, i.e., AC(n, k) ≥ AC(n, k*) for all k ≤ k_u. Hence it is sufficient to show that there exists a k₁ ≤ k_u such that AC(n, k) ≥ AC(n, k₁) for all k > k_u.

Observe that one of the key steps in the procedure involves computation of parameters [Mathematical Expression Omitted] and [Mathematical Expression Omitted] for all values of k for which the search is performed. These parameters are defined as follows:

[Mathematical Expression Omitted] (a7)

[Mathematical Expression Omitted].

In addition, at the termination of the procedure, we identify values of [k.sub.1] and [k.sub.u] as shown in Figure 3 and the following conditions hold.

* [k.sub.u] is the last and the largest value of k examined in the search procedure and represents the effective range of k.

* [k.sub.1] is such that [Mathematical Expression Omitted] for 0 [less than or equal to] k [less than or equal to] [k.sub.u]

* [Mathematical Expression Omitted]

Since [Mathematical Expression Omitted] for k > k_u, and [Mathematical Expression Omitted] from Corollary A1, and noting that (W - M/λ) < 0 from Assumption 3, we have the following:

[Mathematical Expression Omitted]

Substituting the right-hand side of equation (a7) for [Mathematical Expression Omitted] in the inequality above, we obtain

[Mathematical Expression Omitted]

Thus, it follows that AC(n, k) ≥ AC(n, k₁) for all k > k_u, and the proposition follows.
COPYRIGHT 1997 Emerald Group Publishing, Ltd.

Authors: Chen, Wen-Hsien; Tirupati, Devanath
Publication: International Journal of Quality & Reliability Management
Date: Feb 1, 1997