
Part family formation for variety reduction in flexible manufacturing systems.


In spite of the significant presence of parts coding and classification analysis (PCA) systems in industry, reports on how they are used to form machine-part clusters are few and far between. Very few of the well-known PCA systems[1, 2] provide the level of analysis needed to form machine-part groups. Most are coding systems and do not necessarily classify. For many that do classify, the exercise amounts to a mere sequential search or sort of the database for parts with identical codes at specified code fields. However, the concept of part family formation for group technology is based more on part similarity than on identity.

Our objective in this paper is to present a means for converting the weighted codes of the PCA systems into some measure of similarity as used in the production flow analysis (PFA) systems[3]. We believe that such a tool will make the method more robust and amenable to some of the grouping algorithms already developed in the PFA literature. Specifically, the similarity coefficient method and the hierarchical clustering algorithm have been successfully employed to form machine-part groups using 0-1 machine-part incidence matrices of the PFA. We define an appropriate means for converting the weighted codes of the PCA to a measure of similarity and investigate the relative effectiveness of the single linkage clustering (SLC) and average linkage clustering (ALC) techniques in forming part families in flexible manufacturing systems (FMSs) using this measure. The SLC and ALC techniques were chosen because they are the most widely used similarity-coefficient-based clustering algorithms in the literature[4]. The effectiveness measure used is the cost of intercellular materials handling.

The implications of the study are that efficient formation of part families for existing part codes will result in variety and setup time reduction, improved throughput and overall production efficiency of FMSs. Furthermore, one shortcoming of some of the 0-1 based algorithms of the PFA is that they do not have the mechanisms to handle weighted machine-part incidence matrices. Consequently, they cannot be used to address the problems of production volume and machine loading. The similarity measure presented in the paper is suited to deal with weighted machine-part codes and can be used to improve the robustness of some of the PFA-based clustering algorithms.

In the period 1964 to the mid-1970s, interest in group technology research was primarily in the PCA[5, 6]. Little or no attention was given at the time to the PFA and machine grouping. The PFA method of group technology (GT) studies the routeing of parts through the machines in the facility in order to determine the appropriate machine-part groups. Since the pioneering work by Burbidge[7] on PFA, much of the GT literature has emphasized this approach over the traditional PCA shape-based systems. Interestingly, reports in the literature suggest that more and more companies are implementing PCA systems. For example, Hyer and Wemmerlov[8] reported that about 62 per cent of the companies they surveyed indicated that they have coding and classification systems of one form or another. Furthermore, Eckert[9] noted that coding and classification systems are indispensable parts of the group technology process as they try to group similar parts for efficient production. Efficiency in production is a major motivation for the use of group technology in flexible manufacturing systems. The part family concept of GT is especially credited with the modestly successful integration of CAD/CAM[10]. Furthermore, it improves the planning and scheduling of the production process, and reduces setup and materials handling costs.

The reason for the diminished research interest in the coding and classification approach to GT is not known exactly, but the labour intensiveness of the coding process, the costs, and the proprietary nature of most coding systems have been suggested. For example, a project to code and classify 31,000 parts was reported to have cost $177,000, with a net savings of $933,000 over 2.5 years[11, p. 48]. Although the benefit of coding is quite significant, as is evident in this example, the initial cash outlay is discouraging to many potential users, especially because most companies have part databases with thousands of part numbers. To address the labour intensiveness of coding, Kaparthi and Suresh[12] suggest automating the process and present a neural network-based model for identifying part families from part geometries.

The rest of the paper is organized as follows. In section two we present a brief review and classification of some of the better known coding and classification systems. Defining the measure of similarity between any two parts targeted for grouping, and how that measure can be used to find appropriate part families are the subject of the third section. A numerical example of the method suggested is presented in the fourth section using a real world database found in the literature[6]. Our computational experience with the proposed method is presented in section five for several test problems. The results of various tests that compare the ALC, SLC and exhaustive search method are also presented in this section. Finally in section six, we present some concluding remarks.

Parts coding and classification systems

Parts coding and classification analysis is mostly concerned with the use of individual part design features for group formation[13]. Such features include tolerances, materials requirements, and part shapes and sizes. This concept of using design features to describe and group similar parts was introduced by Mitrofanov[14]. Opitz[15-17] later extended the idea to production cells and developed a comprehensive coding and classification system for workpieces. Since these pioneering efforts, several other parts coding and classification systems, such as SAGT[18] and MICLASS[1, 2, 19], have been developed to facilitate part grouping. PCA systems are traditionally design oriented or shape based, although some do incorporate production-based attributes. They fall into one of three categories: monocode, polycode, or hybrid. For further discussion of these categories of the PCA, see Eckert[9].

Part family formation for shape-based systems

One of the reasons production flow analysis-based methodologies for machine-part family formation have been so successfully implemented and researched is the ease with which the 0-1 machine-part incidence matrix can be computerized and a measure of performance determined. Several methodologies have been employed for this purpose, including array-based methods[20-22], mathematical programming, and similarity coefficient-based methods[23-25]. The tremendous amount of reported work in the literature suggests that the similarity coefficient-based methods are by far the most widely used. We introduce the similarity coefficient-based methodology to the parts coding and classification analysis system, and define the similarity between any two parts as follows[3, 26]:

$$S_{ij} = \frac{\sum_{k=1}^{K} s_{ijk}}{\sum_{k=1}^{K} \delta_{ijk}} \qquad (1)$$

$$s_{ijk} = 1 - \frac{|x_{ik} - x_{jk}|}{R_k} \qquad (2)$$

$$\delta_{ijk} = \begin{cases} 1 & \text{if comparison between part } i \text{ and part } j \text{ is possible for attribute } k\\ 0 & \text{otherwise} \end{cases}$$

where

$S_{ij}$ = similarity between part $i$ and part $j$;

$s_{ijk}$ = the score between part $i$ and part $j$ on attribute $k$;

$x_{ik}$ = weight assigned to part $i$ for attribute $k$;

$x_{jk}$ = weight assigned to part $j$ for attribute $k$;

$R_k$ = the range of attribute $k$ taken over the population of parts;

$K$ = number of attributes.

This definition of similarity is such that when $x_{ik} = x_{jk}$, $\forall k \in K$, then $s_{ijk} = 1$ and $S_{ij} = 1$, for maximum similarity. Similarly, when $x_{ik} \neq x_{jk}$ and either $x_{ik} = 0$ (or $R_k$) and $x_{jk} = R_k$ (or 0), $\forall k \in K$, then $s_{ijk} = 0$ and $S_{ij} = 0$, for minimum similarity. If $\delta_{ijk} = 0$, there is no basis for comparing attribute $k$ for parts $i$ and $j$; $s_{ijk}$ is therefore unknown and can be set equal to zero. If $\delta_{ijk} = 0$, $\forall k \in K$, then $S_{ij}$ is undefined but can also be set equal to 0 for convenience. Otherwise $0 \leq S_{ij} \leq 1$, $\forall i \neq j = 1, 2, \ldots, n$, and $S_{ij} = 0$, $\forall i = j = 1, 2, \ldots, n$. However, it is unlikely that $\delta_{ijk} = 0$, $\forall k \in K$, because in practice the same coding system is often used to code all the parts in an installation; the parts will therefore share a common database. If $\delta_{ijk} = 0$, $\forall k \in K$, for every pair of parts, the parts have nothing in common; the number of groups, $G$, equals $M$, the number of parts, and there is therefore no basis for group technology. Burbidge[27] submits, however, that this situation rarely, if ever, occurs in practice.

The last term in equation (2) is a dissimilarity measure that captures the relative difference between the code for part i and any other part j, for attribute k, as compared with part i and any other part l, for the same attribute. The term therefore ensures that dissimilar parts are less likely to be grouped together. The objective of the formulation is then to maximize the sum of the similarity measures.
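Equations (1) and (2) translate directly into code. The sketch below is our own illustration, not the authors' implementation: a part is represented as a vector of weighted code values, the attribute ranges $R_k$ are supplied by the caller, and the comparability flags $\delta_{ijk}$ default to 1.

```python
def similarity(x_i, x_j, R, comparable=None):
    """Similarity S_ij between two coded parts, per equations (1) and (2).

    x_i, x_j   -- vectors of weighted code values for parts i and j
    R          -- R[k], the range of attribute k over the part population
    comparable -- optional delta_ijk flags; attribute k is skipped when 0
    """
    K = len(x_i)
    if comparable is None:
        comparable = [1] * K  # delta_ijk = 1: every attribute is comparable
    # equation (2): per-attribute score s_ijk, summed over comparable attributes
    num = sum(1 - abs(x_i[k] - x_j[k]) / R[k] for k in range(K) if comparable[k])
    den = sum(comparable)
    # equation (1); S_ij is set to 0 when no attribute is comparable
    return num / den if den else 0.0
```

With identical code vectors the function returns 1, the maximum similarity, and with no comparable attributes it returns the conventional 0 discussed above.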

The grouping algorithm

Step 1: Form the part-attribute matrix for the $M$ parts with $K$ attributes using an appropriate coding and classification system.

Step 2: Compute the similarity coefficients $S_{ij}$, $\forall i \neq j$, using (1) and (2) and form the triangular matrix of similarity measures.

Step 3: Initiate the group $G_1$ by selecting the maximum $S_{ij}$ from step 2. If the number of maximum $S_{ij}$ is $g$, $g \geq 2$, initiate the additional groups $G_j$, $j = 2, \ldots, g$.

Step 4(a): The single linkage technique. Recompute the similarity coefficients between the ungrouped parts and the grouped ones using the single linkage criterion. Under this criterion the similarity $S_{(ij)k}$ between parts $i, j \in G_r$ and an entity $k$ is given by $S_{(ij)k} = \max(S_{ik}, S_{jk})$, $\forall k \notin G_r$.

Step 4(b): The average linkage technique. Recompute the similarity coefficients between the ungrouped parts and the grouped ones using the average linkage criterion. The similarity $S_{(ij)k}$ between parts $i, j \in G_r$ and an entity $k$ (a part $k \in M$ or a group $G_s$) is given by

$$S_{(ij)k} = \frac{\sum_{\forall k \in M \lor G_s} (S_{ik} + S_{jk})}{\Psi}, \quad \forall k \notin G_r,$$

with $\Psi = |G_r| \cdot |k \lor G_s|$, defined as the product of the number of parts in $G_r$ and the number in $k$ or $G_s$ (the symbol "$\lor$" is interpreted as an "or").

Step 5: If the maximum $S_{ij}$ is such that $i$ (or $j$) $\in G$, add $j$ (or $i$) to $G$. Otherwise select the maximum $S_{ij}$ and initiate another group $G_j$.

Step 6: If $i \in G$, $\forall i$, STOP. Else, go to step 4.
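The six steps can be condensed into one agglomerative loop. The sketch below is our own illustration (the function name, input layout, and the `threshold` stopping rule standing in for a dendrogram cut are assumptions); the `linkage` argument switches between the step 4(a) and step 4(b) criteria.

```python
import itertools

def cluster(S, n, linkage="average", threshold=0.0):
    """Agglomerative grouping over pairwise similarities S[(i, j)], i < j.

    Starts from singleton groups, repeatedly merges the most similar pair
    of groups (steps 3 and 5), recomputing group similarities with the SLC
    or ALC rule of step 4, and stops when the best available similarity
    falls below `threshold` (step 6).
    """
    groups = [frozenset([i]) for i in range(n)]

    def sim(i, k):
        return S[(min(i, k), max(i, k))]

    def group_sim(a, b):
        pairs = [sim(i, k) for i in a for k in b]
        # single linkage keeps the best pair; average linkage divides the
        # pairwise sum by Psi = |a| * |b|, the number of pairs
        return max(pairs) if linkage == "single" else sum(pairs) / len(pairs)

    while len(groups) > 1:
        a, b = max(itertools.combinations(groups, 2),
                   key=lambda p: group_sim(*p))
        if group_sim(a, b) < threshold:
            break
        groups.remove(a)
        groups.remove(b)
        groups.append(a | b)  # merge the most similar pair of groups
    return groups
```

Running the loop at successively lower thresholds reproduces the levels at which a dendrogram would show parts joining groups.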

Numerical example

Haworth[6] presented real-world data for 2,498 part codes developed from the Opitz system[16]. Initial subgrouping (step 1) of the data resulted in a total of 24 distinct code numbers for the 2,498 parts. For example, 142 parts were identified as having the code number 0 0 2 0 0. We use a sample of 14 of these distinct part codes (Table I) to illustrate the use of the model. We then apply the similarity coefficient measures of equations (1) and (2), and use the SLC (ALC) algorithms to cluster the sample in an attempt to identify the four groups reported by the author.

Step 2

The similarity coefficients for the data are presented in Figure 1. As an example, $s_{121} = s_{122} = s_{124} = s_{125} = 1 - |0 - 0|/9 = 1.00$, and $s_{123} = 1 - |3 - 2|/9 = 0.89$. Therefore,

$$S_{12} = \frac{\sum_{k=1}^{5} s_{12k}}{\sum_{k=1}^{5} \delta_{12k}} = 0.98.$$

Notice that $\delta_{12k} = 1$, $\forall k$, in this example. The rest of the coefficients in the figure are calculated in this manner. Since $S_{ij} = S_{ji}$, $\forall i, j \in M$, only the $S_{ij}$, $i < j$, are presented, resulting in the triangular matrix of similarities. Also, by definition, $S_{ii} = S_{jj} = 0$.
Table I. Codes for part attributes using the Opitz system

Parts         1         2         3         4         5
1             0         0         2         0         0
2             0         0         3         0         0
3             0         1         1         0         2
4             0         2         1         0         4
5             0         2         3         0         4
6             1         0         1         0         0
7             1         1         1         0         0
8             1         2         0         3         0
9             2         0         0         0         0
10            2         3         0         0         0
11            2         5         0         0         0
12            7         0         0         0         3
13            7         0         0         3         3
14            7         0         0         6         3
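The step 2 arithmetic can be checked directly against the Table I codes. Following the worked example, $R_k = 9$ (the full range of a code digit) is used for every attribute, and all five attributes are comparable for every pair.

```python
# Table I: code digits for the 14 sample parts (attributes 1-5)
codes = {
    1:  (0, 0, 2, 0, 0),  2:  (0, 0, 3, 0, 0),  3:  (0, 1, 1, 0, 2),
    4:  (0, 2, 1, 0, 4),  5:  (0, 2, 3, 0, 4),  6:  (1, 0, 1, 0, 0),
    7:  (1, 1, 1, 0, 0),  8:  (1, 2, 0, 3, 0),  9:  (2, 0, 0, 0, 0),
    10: (2, 3, 0, 0, 0),  11: (2, 5, 0, 0, 0),  12: (7, 0, 0, 0, 3),
    13: (7, 0, 0, 3, 3),  14: (7, 0, 0, 6, 3),
}
R = 9  # range of each code digit (0-9), as in the worked example

def S(i, j):
    """Equations (1) and (2) with all five attributes comparable."""
    return sum(1 - abs(a - b) / R for a, b in zip(codes[i], codes[j])) / 5
```

`S(1, 2)` reproduces the 0.98 computed in step 2, and the other coefficients of Figure 1 follow the same way.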


Step 3

Applying the single linkage algorithm shows that initially parts 1 and 2 ($G_1$) and parts 6 and 7 ($G_2$) constitute the first two groups.

Step 4(a): The SLC technique

The recomputed similarities using the SLC technique are presented in Figure 2. For example, the similarity between $G_1$ (parts 1 and 2) and $G_2$ (parts 6 and 7) is found as $\max(S_{16}, S_{17}, S_{26}, S_{27}) = \max(0.96, 0.93, 0.93, 0.91) = 0.96$. The similarities for the other parts are found in this manner.

Step 5

It is seen from Figure 2 that part 9 is joined to group 2 (parts 6 and 7), and hence to group 1 (parts 1 and 2), at a threshold level of 0.96. Also, parts 4 and 5 ($G_3$), and parts 10 and 11 ($G_4$), are joined at the same threshold level of 0.96.

Step 6

Since not all the parts are in one group, go to step 4(a), recompute the new similarity measures, and continue. Further application of the algorithm resulted in all the parts joining into one group (i.e. the database) at a 0.82 threshold level. This result is presented as a dendrogram in Figure 3. The threshold level at which each group forms is also evident in the figure. For example, part 8 can belong to the same group as parts 1, 2, 3, 4, 5, 6, 7, 9, 10 and 11 only at a threshold level of 0.89.

Step 4(b): The ALC technique

The ALC differs from the SLC only in the criterion used to recompute the similarity between the entities to be clustered. Instead of the maximum similarity (single linkage) criterion, the average similarity between all pairs of parts in the groups is used. As an example of the recomputation, from Figure 1, parts 1 and 2, and parts 6 and 7, form the first two groups $G_1$ and $G_2$, as in the SLC. However, the similarity between group 1 (parts 1 and 2) and group 2 (parts 6 and 7) is computed as $S_{(12)(67)} = (0.96 + 0.93 + 0.93 + 0.91)/4 = 0.93$. The rest of the similarity measures are recalculated in this manner. Application of this criterion to the problem resulted in the dendrogram in Figure 4.
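The two recomputation criteria differ only in how they combine the four pairwise coefficients between group 1 (parts 1 and 2) and group 2 (parts 6 and 7):

```python
# pairwise similarities between group 1 (parts 1, 2) and group 2 (parts 6, 7),
# read from Figure 1
pairs = [0.96, 0.93, 0.93, 0.91]  # S_16, S_17, S_26, S_27

slc = max(pairs)                  # single linkage: the best single pair
alc = sum(pairs) / len(pairs)     # average linkage: mean over all four pairs
```

Here `slc` is 0.96 and `alc` is approximately 0.93, matching the two worked computations; the lower ALC value is what pushes its merges to lower threshold levels.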

Results from the two dendrograms (Figures 3 and 4) show that the part groups, and the threshold levels at which parts or groups are admitted to an existing group, are better defined and separated with the average linkage method than with the single linkage method. Table II shows the number of groups formed by the respective methods at various threshold levels, with the corresponding average number of parts per group. This is typical of the ALC algorithm. Since its admission criterion favours lower threshold levels, the points at which parts are clustered are better defined than in the SLC. Indeed, when the two methods were employed to form machine-part groups from an 11 x 22 machine-part incidence matrix, the average linkage method came closer than the single linkage method to forming a diagonal pattern of mutually separable clusters (minimizing intercellular materials handling)[4].

In the original problem[28] presented in Table I, the groups are (1,2,3,4,5), (6,7,8), (9,10,11) and (12,13,14). Comparing the two clustering methods, the ALC comes closer to duplicating this solution than does the SLC. Notice that in either method, part 8 is clustered into the same group as parts 10 and 11, at different threshold levels. However, in the solution presented by Gombinski[28], parts 9, 10 and 11 constitute a single group. This discrepancy can be attributed to the fact that those part families were formed based on part identity at specified code number fields, as can be seen in Table I. The measure presented here is based on part similarity rather than identity, because similarity is the basis for part family formation in FMSs.

Computational experience

It is important to analyse the dendrograms beyond their separability and consider their practical implications. It is obvious that the number of groups for the problem is between one, when all the parts belong to the same group, and 14, when they are disjoint. When the number of groups is not a parameter or constraint of the problem, the problem becomes that of determining the number of groups that optimizes a measure of performance. Two of the key measures of performance for flexible manufacturing systems are minimization of setup and intercellular materials handling costs. As was pointed out earlier, the part family grouping philosophy is an essential means for achieving these objectives. Setup is at a minimum when $G = 1$ and at a maximum when $G = M$, the number of parts. On the other hand, intercellular travel is at a maximum when $G = 1$ and at a minimum when $G = M$. As $G$ approaches 1, parts tend to be placed in groups they do not belong to, and this increases the likelihood of intercellular movements. As we discussed earlier, if it is indeed appropriate for $G = M$ then there is no basis for group technology. With setup and intercellular materials handling cost as the measure of performance, we define the following minimization problem:

$$\min \; \sum_{j=1}^{G} c_j t_j + \sum_{i=1}^{G} \sum_{j=1}^{G} n_{ij} d_{ij} c_{ij} \qquad (3)$$
Table II.

Number of groups and parts per group for SLC and ALC

                Single linkage            Average linkage
Threshold   Number of    Number of    Number of     Number of
levels        groups    parts/group     groups     parts/group

0.98            12          1.2           12           1.2
0.95             8          2.8            9           1.6
0.92             3          4.7            7           2.0
0.89             3          4.7            5           2.8
0.86             3          4.7            4           3.4
0.83             2          7.0            2           7.0
0.80             1         14.0            2           7.0
0.70             1         14.0            2           7.0

such that

$$\sum_{j=1}^{M} C_{ij} = 1, \quad \forall i = 1, 2, \ldots, M \qquad (4)$$

$$C_{ij} \in \{0, 1\}, \quad \forall i, j = 1, 2, \ldots, M \qquad (5)$$

where $t_j$ is the setup time for part family $j$, $n_{ij}$ is the number of movements between parts in machine cells $i$ and $j$, $d_{ij}$ is the distance between machine cells $i$ and $j$, and $c_j$ and $c_{ij}$ are the associated costs for setup and intercellular materials movements, respectively. The purpose of constraint (4) is to ensure that each part $i$ belongs to exactly one group, and that of constraint (5) is to ensure integrality. $M$ is the number of parts and $G$ the number of groups.
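The intercellular handling term just defined (number of movements times distance times unit cost, summed over cell pairs) can be sketched as a short routine. The three-cell figures below are hypothetical, invented purely for illustration.

```python
def handling_cost(moves, dist, cost):
    """Intercellular materials-handling term of the cost model:
    the sum of n_ij * d_ij * c_ij over the cell pairs (i, j) that
    appear in `moves`.
    """
    total = 0.0
    for (i, j), n in moves.items():
        total += n * dist[(i, j)] * cost[(i, j)]
    return total

# hypothetical three-cell example: movement counts, distances, unit costs
moves = {(1, 2): 4, (2, 3): 2}
dist  = {(1, 2): 10.0, (2, 3): 25.0}
cost  = {(1, 2): 0.5, (2, 3): 0.5}
```

A grouping that leaves fewer parts visiting machines outside their own cell shrinks the `moves` counts and hence this term, which is the sense in which the clustering methods are compared below.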

In most flexible manufacturing systems it is reasonable to assume that the setup for the SLC and ALC methods will not be significantly different. This is because FMSs are preprogrammed and, with tool changing capabilities, intragroup setups or changeover times (costs) are not often significant and "are reduced to virtually zero"[29, p. 15]. Therefore for the purpose of our discussion, the first term (setup) in (3) is assumed to be approximately equal for both the SLC and ALC methods and the difference in cost between the two methods is driven by intercellular materials movements. We posit in this paper that the average linkage algorithm will usually provide a lower cost schedule than the single linkage algorithm.

To investigate this hypothesis, several problems were randomly generated and solved using both the SLC and ALC techniques. The objective of the analysis was to determine the number of groups contained in each solution at different threshold levels. Table II shows a typical result of the analyses. The larger number of groups in the ALC solutions corresponds to a smaller number of parts per group. Since intercellular movements are more likely the more parts there are in a group, the cost will be greater with SLC than with ALC.

Furthermore, we compared the SLC and ALC algorithms to an exhaustive search method. Batches of 500 runs each were randomly generated for problems ranging in size from four to eight parts. Since the size of the similarity coefficient matrix is independent of the number of part attributes, the latter was held constant at ten. The randomly selected attribute values were then allowed to vary from two to nine. To make a comparison we used a performance measure (PM) computed as follows:

$$PM = \sum_{g=1}^{G} \frac{1}{n_g} \sum_{i_g < j_g} S_{i_g j_g}$$

where

$G$ = number of groups in the solution;

$g$ = index of the groups in the solution, $g = 1, 2, \ldots, G$;

$i_g$ = the $i$th part in group $g$;

$j_g$ = the $j$th part in group $g$;

$n_g$ = number of parts in group $g$;

$S_{i_g j_g}$ = similarity between parts $i_g$ and $j_g$ as computed by (1).

The division by [n.sub.g] is required so that PM is independent of the number of parts in any particular group. For example, if there were equal similarity between all pairs of parts, then the PM would be the same for any arrangement of parts for a given number of groups, which obviously should be the case. If, however, the decision maker wants to give preference to solutions that possess a more equal distribution of the number of parts in each group, perhaps to equalize work loads, then she/he merely has to multiply the PM for each group by an appropriate weighting factor according to the deviation of the number of parts in that group from the average number of parts.
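Reading PM as, for each group, the sum of the pairwise similarities divided by $n_g$ (the reading consistent with the division described above), a sketch of its computation is as follows; the function name and input layout are our own.

```python
from itertools import combinations

def pm(groups, S):
    """Performance measure: for each group g, sum the similarities over
    all part pairs in g and divide by n_g, then total across groups.

    groups -- iterable of part collections
    S      -- similarities keyed by frozenset({i, j})
    """
    total = 0.0
    for g in groups:
        pair_sum = sum(S[frozenset(p)] for p in combinations(g, 2))
        total += pair_sum / len(g)
    return total
```

With all pairwise similarities equal, this PM comes out the same for any arrangement of parts at a fixed number of groups, as the text argues.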

Since PM is always maximum when G = 1, this measure cannot be used to compare solutions with different numbers of groups. Therefore, for each problem of n parts the methods were compared for solutions with two to n - 1 groups. The results of the comparisons are shown in Figures 5 and 6.

Figure 5 shows the results obtained when comparing the ALC algorithm to the SLC algorithm. For example, for problems with six parts in which the desired number of part families was three, the ALC algorithm outperformed the SLC algorithm 226 times out of 500 runs, while the SLC algorithm outperformed the ALC algorithm only 11 times during the same runs. Figure 5 makes it apparent that the ALC algorithm is superior to the SLC algorithm. It should also be noted that as the number of groups increases, the superiority of the ALC tends to improve. This agrees with the results of Table II presented earlier.

Figure 6 shows the results obtained when comparing the ALC with the exhaustive search method. For example, for problems with six parts in which the desired number of part families was three, the performance of the ALC algorithm equalled the exhaustive search 415 times out of 500 runs. Figure 6 indicates that as the problem size increases, the ALC algorithm becomes less effective compared with the exhaustive search method.

Concluding remarks

The similarity coefficient method for group technology has been presented to alleviate the part family formation problem in flexible manufacturing systems. Specifically, a method for converting the weighted codes of the PCA to similarity measures often used in the PFA was presented. Since flexible manufacturing systems rely on the efficient grouping of parts for much of their success in reducing setup, and increasing throughput and flexibility, insights to the part family formation problem can improve these FMS benefits.

A comparison between the SLC and ALC algorithms shows that the former favours fewer groups (more parts per group) than the latter, and therefore is more likely to lead to increased cost due to intercellular movements.
Figure 5. Comparing the performance of the SLC:ALC algorithms

Parts                           Groups
            2          3          4          5          6          7
3         0:0
4         0:106      0:0
5        13:179      0:157      0:0
6        11:198     11:226      0:162      0:0
7        13:250     16:298      7:270      0:147      0:0
8        21:268     12:368      4:344      3:267      0:126      0:0
Figure 6. The number of times the ALC algorithm performed as well as
the exhaustive search method

Parts                          Groups
           2          3          4          5          6           7
3         500
4         415        500
5         346        411        500
6         290        345        430        500
7         238        246        327        451        500
8         211        232        273        348        445        500

Further comparisons between the SLC, ALC and an exhaustive search method of group formation showed that the ALC is superior to the SLC. The comparisons also showed that the ALC equalled the performance of the exhaustive search method for as high as 83 per cent of the time for small problem sizes and smaller number of groups. This performance seemed to diminish as the problem size increased, to as low as 42 per cent for an eight-part two-group problem.

References

1. Houtzeel, A., "Classification and coding: a tool to organize information", Proceedings of IREAPS Technical Symposium, San Diego, CA, 1982, pp. 457-80.

2. Houtzeel, V.F., "An introduction to MICLASS system", Proceedings of CAM-I's Seminar on CAPP Applications, P-75-PPP-01, CAM-I, Arlington, TX, 1975, pp. 159-78.

3. Offodile, O.F., "Application of similarity coefficient method to parts coding and classification analysis in group technology", Journal of Manufacturing Systems, Vol. 10 No. 6, 1991, pp. 442-8.

4. Seifoddini, H.K., "Single linkage versus average linkage clustering in machine cells formation applications", Computers and Industrial Engineering, Vol. 16, 1989, pp. 419-26.

5. Abou-Zeid, M.R., "SAGT: a new coding system and the systematic analysis of metal cutting operations", PhD dissertation, Purdue University, West Lafayette, IN, 1973.

6. Haworth, E.A., "Group technology - using the Opitz system", The Production Engineer, Vol. 47, 1968, pp. 25-35.

7. Burbidge, J.L., "Production flow analysis", The Production Engineer, Vol. 42, 1963.

8. Hyer, N.L. and Wemmerlov, U., "Group technology in US manufacturing industry: a survey of current practices", International Journal of Production Research, Vol. 27, 1989, pp. 1287-304.

9. Eckert, R.L., "Codes and classification systems", American Machinist, Vol. 12, 1975.

10. Ham, I., "Current trends and future prospects of group technology applications related to integrated computer aided manufacturing", Proceedings of Conference on International Manufacturing Engineering, August 1980, Institute of Engineers, Australia.

11. Manufacturing Engineering, Society of Manufacturing Engineers, Ann Arbor, MI, May 1979.

12. Kaparthi, S. and Suresh, N.C., "A neural network system for shape-based classification and coding of rotational parts", International Journal of Production Research, Vol. 29, 1991, pp. 1771-84.

13. Offodile, O.F., "Design and analysis of a coding and classification system for a systematic interactive computer-aided robot selection procedure (CARSP)", unpublished PhD dissertation in Industrial Engineering, Texas Technical University, Lubbock, TX, 1984.

14. Mitrofanov, S.P., Scientific Principles of Group Technology, Part I, National Lending Library for Science and Technology, Boston, MA, 1966.

15. Opitz, H., Eversheim, W. and Wiendahl, H.P., "Workpiece classification and its industrial applications", International Journal of Machine Tool Design and Research, Vol. 9, 1969, pp. 39-50.

16. Opitz, H., A Classification System to Describe Workpieces, Parts I and II, Pergamon Press, New York, NY, 1970.

17. Opitz, H. and Wiendahl, H.P., "Group technology and manufacturing systems in small and medium quantity production", International Journal of Production Research, Vol. 9, 1971, pp. 181-203.

18. Abou-Zeid, M.R., "Group technology", Industrial Engineering, Vol. 7, 1975, pp. 32-9.

19. Hyde, W.F., Improving Productivity by Classification, Coding, and Data Base Standardization: The Key to Maximizing CAD/CAM and Group Technology, Marcel-Dekker, New York, NY, 1981.

20. King, J.R, "Machine-component grouping in production flow analysis: an approach using a rank order clustering algorithm", International Journal of Production Research, Vol. 18, 1980, pp. 213-32.

21. McCormick, W.T., Schweitzer, P.J. and White, T.W., "Problem decomposition and data reorganization by a clustering technique", Operations Research, Vol. 20, 1972, pp. 993-1009.

22. Purcheck, G.J.K., "A mathematical classification as a basis for the design of group technology production cells", Production Engineer, Vol. 53 No. 1, 1975.

23. McAuley, J., "Machine grouping for efficient production", The Production Engineer, Vol. 51, 1972, pp. 53-7.

24. Seifoddini, H.K. and Wolfe, P.M., "Application of the similarity coefficient method in group technology", IIE Transactions, Vol. 19, 1985, pp. 217-77.

25. Tam, K.Y., "An operation sequence based similarity coefficient for part family formation", Journal of Manufacturing Systems, Vol. 9, 1990, pp. 55-68.

26. Gower, J.C., "A general coefficient of similarity and some of its properties", Biometrics, Vol. 27, 1971, pp. 857-81.

27. Burbidge, J.L., "Change to group technology: process organization is obsolete", International Journal of Production Research, Vol. 30 No. 5, 1992, pp. 1209-19.

28. Gombinski, J., "Fundamental aspects of component classification", Annals of the CIRP, Vol. 17, 1969, pp. 367-74.

29. Duncan, W.L., Just-In-Time in American Manufacturing, Society of Manufacturing Engineers, Dearborn, MI, 1988.
COPYRIGHT 1997 Emerald Group Publishing, Ltd.
Author: Offodile, O. Felix; Grznar, John
Publication: International Journal of Operations & Production Management
Date: Mar 1, 1997