A comparative evaluation of nine well-known algorithms for solving the cell formation problem in group technology.

INTRODUCTION

It is generally accepted that group technology, GT, was conceived during the 1940s in the Soviet Union (Gallagher, Grayson and Phillips (1971); Suresh and Meredith (1985)). It has since been developed and used in many other countries as a means for achieving manufacturing efficiencies. GT groups parts into families and machines into groups so that a family of parts can be produced within a group of machines. The problem of determining part families and machine groups is called the cell formation, CF, problem (Wemmerlov and Hyer (1986)).

GT is best suited for a batch-flow production system (Wu, Venugopal and Barash (1986)) where many different products, having relatively low annual volumes, are produced intermittently in small lot sizes. In this production system the volume for any particular part is not high enough to require a dedicated production line for that part. On the other hand, the total volume for a family of similar parts is high enough to efficiently utilize a machine cell.

The benefits from applying GT are many, and are discussed in Burbidge (1975); Ham, Hitomi and Yoshida (1985); Hyer and Wemmerlov (1984); and Suresh and Meredith (1985).

The purpose of this paper is to compare and evaluate nine algorithms that have been developed to solve the CF problem. Sixty large CF problems are randomly generated and eight CF problems are taken from the literature. Each problem is solved by the nine algorithms. A hierarchy of measures is developed to gauge the effectiveness of each algorithm for solving large CF problems.

The organization of this paper is as follows: in the next section the CF problem is reviewed; then each algorithm considered in this investigation is described; functions are developed for measuring the goodness of a solution to the CF problem; the test problems used to assess algorithm performance are described; then the computational results are presented; they then are interpreted and discussed in the next section, where guidelines are offered for using the algorithms; concluding remarks are given in the last section.

THE CELL FORMATION, CF, PROBLEM

The cell formation problem is embedded in a larger cell design process. (See Wemmerlov and Hyer (1986) for a broad discussion of the cell design process.) Many analytical procedures have been developed for solving the CF problem, and a number of taxonomies have been proposed (Burbidge (1975); Gongaware and Ham (1981); King and Nakornchai (1982); and Wemmerlov and Hyer (1986)) for classifying these procedures. Wemmerlov and Hyer's taxonomy (1986, p. 134) divides these procedures into four approaches, one of which "identifies part families and machine groups simultaneously." This can be done manually or by using an algorithm. The CF problem is a difficult problem to solve. King and Nakornchai (1982) note that a relaxed version of the CF problem is NP-complete. Nine well-known algorithms (Table 1) for simultaneously identifying part families and machine groups are compared in this paper. The following example illustrates the spirit of these algorithms.

Example

Figure 1A shows the initial part/machine matrix, [A.sup.(i)], with elements [a.sub.ij] = 0 or 1, i = 1,2,...,M, j = 1,2,...,N, for a problem (taken from Ham et al. (1985)) with N = 8 parts and M = 10 machines. (All [a.sub.ij] = 0 values are shown as blank entries.) Notice that [A.sup.(i)] can represent a single part/machine cell, consisting of ten machines within which eight parts are produced. The cell density, ([SIGMA] [a.sub.ij])/MN = 26/80 = .33, is low.

Figure 1B shows the part/machine matrix after it has been rearranged by the ROC algorithm--the final part/machine matrix, [A.sup.(f)]. The parts and machines have been re-ordered so that the non-zero [a.sub.ij]'s are clustered around the diagonal of the matrix. That is, the part/machine matrix has been rearranged into standard block diagonal form.

[A.sup.(f)] can be partitioned into three diagonal submatrices. The upper submatrix defines a part/machine cell consisting of parts 1, 6, 3, and machines 1, 4, 8, 2. The next diagonal submatrix has parts 8, 5, 2, with machines 3, 6, 7. The lower submatrix consists of parts 7, 4, and machines 10, 5, 9. This is a three-cell solution. With two exceptions, all parts can be produced entirely within their own cells. Part 8, which is assigned to the second cell, requires some processing in the first cell (on machine 1), and part 2, which is also assigned to the second cell, must visit machine 10 in the last cell. Hence, this solution has two instances of intercell travel. The cell densities are 0.92, 0.89 and 0.83 for cells 1, 2 and 3 respectively.

A solution consisting of only two cells could be formed by combining cells two and three. There would be only one instance of intercell travel, but the density of cell two would decrease to 0.47.
TABLE 1
ALGORITHMS FOR OBTAINING A SOLUTION TO THE CF PROBLEM

   Underlying Procedure                Algorithm   Section
1. Rank order clustering               ROC/ROC     3.1
2. Similarity coefficient              SC/ROC      3.2
3. Similarity coefficient              SC/SC       3.2
4. Modified similarity coefficient     MSC/ROC     3.3
5. Modified similarity coefficient     MSC/MSC     3.3
6. Modified rank order clustering      MROC        3.4
7. Seed clustering                     ISNC        3.5
8. Seed clustering                     SC-Seed     3.6
9. Bond energy                         BEA         3.7


ALGORITHMS FOR SOLVING THE CF PROBLEM

The Rank Order Clustering, ROC, Algorithm

King (1980) first proposed that the pattern of entries in each row or column of the part/machine matrix be read as a binary word, and that rows or columns be rearranged in order of decreasing binary word value. Later King and Nakornchai (1982) noted that reading entries as binary words restricted the size of the CF problem to those having less than 47 machines (rows) and 47 parts (columns), because the largest integer representation in many computers was [2.sup.48]-1 or less. To overcome this limitation, element by element comparisons can be used to carry out the row or column ranking. For example, row 1 (10100101) and row 2 (10000101) of the initial part/machine matrix in Figure 1 can be compared by considering each digit successively. Three comparisons are needed in this case before concluding that row 1 has a higher rank than row 2, because the first two digits in each row are the same. An efficient rank order algorithm is outlined in King and Nakornchai (1982).
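To make the element-by-element comparison concrete, the following minimal C sketch (C being the language in which the algorithms in this study were coded) compares two rows without ever forming their binary word values, so the row length is not limited by the computer's integer width. The function name and the data are illustrative only, not the study's implementation.

    #include <stdio.h>

    #define N 8  /* parts (columns) */

    /* Compare two rows digit by digit, leftmost digit most significant.
       Returns >0 if row a ranks higher, <0 if row b ranks higher, and
       0 if the rows are tied.                                          */
    int row_rank_cmp(const int a[N], const int b[N])
    {
        for (int j = 0; j < N; j++)
            if (a[j] != b[j])
                return a[j] - b[j];
        return 0;
    }

    int main(void)
    {
        /* Rows 1 and 2 of the initial part/machine matrix in Figure 1. */
        int row1[N] = {1,0,1,0,0,1,0,1};
        int row2[N] = {1,0,0,0,0,1,0,1};

        /* Three digit comparisons are made before the tie is broken at
           the third digit, so row 1 ranks higher than row 2.           */
        printf("row 1 ranks %s than row 2\n",
               row_rank_cmp(row1, row2) > 0 ? "higher" : "no higher");
        return 0;
    }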

Chandrasekharan and Rajagopalan (1986a, p. 1223) discuss four problems with the rank order algorithm; the two most serious being:

1. Different arrangements of the same initial matrix lead to different final matrices (and hence different cells).

2. While there is a tendency for 1s to collect in the top left corner of the final matrix, the rest of the final matrix may be disorganized.

On the other hand, the ROC algorithm is easy to use and understand, and has modest computational requirements. It can also be modified to form more complex (and hopefully more effective) algorithms. See, for example, the section entitled "The Modified Rank Order Clustering, MROC, Algorithm" and Chandrasekharan and Rajagopalan (1986a).

The ROC algorithm is applied by alternately re-arranging rows (machines) and columns (parts) until no further re-arrangement is possible. We denote this implementation ROC/ROC because both machines and parts are grouped by the ROC algorithm. The algorithm used in this study is listed in the Appendix.

The Similarity Coefficient, SC, Algorithm

McAuley (1972) was the first to employ similarity coefficients to solve CF problems. A similarity matrix contains entries [s.sub.ij], i = 1,2,...,M; j = 1,2,...,M; called similarity coefficients, which measure the degree to which a pair of machines performs operations on the same parts. A pair of machines has a high similarity coefficient when both perform operations on many of the same parts:

[s.sub.ij] = [n.sub.ij]/[m.sub.ij]

where [n.sub.ij] is the number of parts processed on both machines i and j, and [m.sub.ij] is the number of parts processed on either machine i or j. After these coefficients are computed, the pair of machines with the highest similarity coefficient is grouped into a cell. Similarity coefficients between this cell and all other machines are computed from the original similarity coefficients. If K1 is the set of machines in cell k1, then

[s.sub.k1,i] = max {[s.sub.ji] : j [epsilon] K1}, for each machine i not in K1

This is called the "single linkage clustering" formula. After these coefficients have been computed, the pair of machines (or a machine and a cell, or two cells) with the highest similarity coefficient is grouped together. This process continues until the required number of machine cells has been formed. In this paper this single linkage clustering procedure based on similarity coefficients is called the SC algorithm.
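The following C sketch (illustrative data and names, not the study's implementation) computes the equation 1 coefficients for a small matrix and finds the first pair of machines to merge; subsequent merges would apply the equation 2 update, as noted in the comments.

    #include <stdio.h>

    #define M 4   /* machines */
    #define N 6   /* parts    */

    /* Equation 1: s[i][j] = n_ij / m_ij, where n_ij counts parts visiting
       both machines and m_ij counts parts visiting either machine.        */
    void similarity(const int a[M][N], double s[M][M])
    {
        for (int i = 0; i < M; i++)
            for (int j = 0; j < M; j++) {
                int n = 0, m = 0;
                for (int p = 0; p < N; p++) {
                    n += a[i][p] && a[j][p];
                    m += a[i][p] || a[j][p];
                }
                s[i][j] = m ? (double)n / m : 0.0;
            }
    }

    int main(void)
    {
        int a[M][N] = { {1,1,0,0,1,0},
                        {1,1,0,0,0,0},
                        {0,0,1,1,0,1},
                        {0,0,1,0,0,1} };
        double s[M][M];
        similarity(a, s);

        /* First merge of the SC algorithm: the pair with the highest
           coefficient forms a cell. (Later merges would use the single
           linkage update of equation 2, which is not shown here.)      */
        int bi = 0, bj = 1;
        for (int i = 0; i < M; i++)
            for (int j = i + 1; j < M; j++)
                if (s[i][j] > s[bi][bj]) { bi = i; bj = j; }
        printf("merge machines %d and %d (s = %.2f)\n",
               bi + 1, bj + 1, s[bi][bj]);
        return 0;
    }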

The shortcomings of the SC algorithm, and the modifications proposed to remedy them, center on the definitions of the similarity coefficients--equations 1 and 2 (see, for example, King and Nakornchai (1982) or Seifoddini and Wolfe (1986)). Another shortcoming of the SC algorithm is that, while it forms machine cells, it does not prescribe how parts should be assigned to the machine cells. Parts may be assigned to cells by 1) using informed judgement, 2) using the ROC algorithm, or 3) using a version of the SC algorithm for parts, where (see Carrie (1973, p. 404)) the similarity coefficient between a pair of parts, u and v, is

[s.sub.uv] = [n.sub.uv]/[m.sub.uv], u = 1,2,...,N; v = 1,2,...,N

where [n.sub.uv] is the number of machines used by both parts u and v, and [m.sub.uv] is the number of machines used by either part u or v. Hence we denote by SC/ROC the algorithm which uses the SC algorithm to group machines and the ROC algorithm to group parts, and by SC/SC the algorithm which uses the SC algorithm with equations 1 and 2 to group machines and the SC algorithm with equations 2 and 3 to group parts. The SC/ROC and SC/SC algorithms used in this study are listed in the Appendix.

The Modified Similarity Coefficient, MSC, Algorithm

A problem with the SC algorithm, called the "chaining problem," seems to be caused by equation 2. The similarity coefficient between two cells k1 and k2, consisting of the sets of machines K1 and K2, is

[s.sub.k1,k2] = max {[s.sub.ij] : i [epsilon] K1, j [epsilon] K2}

As a result the SC algorithm may join two cells together merely because two machines, i [epsilon] K1 and j [epsilon] K2, have a high similarity coefficient [s.sub.ij] (see, for example, Anderberg (1973) or King and Nakornchai (1982)). A solution that has been proposed for this problem (Anderberg (1973)) replaces equation 2 with

[s.sub.k1,k2] = [n.sub.k1,k2]/[m.sub.k1,k2]

where [n.sub.k1,k2] is the number of parts processed in both cells k1 and k2, and [m.sub.k1,k2] is the number of parts processed in either cell k1 or k2. That is, each cell is considered to be a "virtual" machine which is visited by all parts requiring processing on any machine in the cell. The similarity coefficient thus computed can be thought of as an "average" similarity coefficient. Equation 5 is called the "average linkage clustering" formula. Another approach is to let [s.sub.k1,k2] be equal to the average similarity coefficient (see Seifoddini and Wolfe (1986)):

[s.sub.k1,k2] = ([[SIGMA].sub.i[epsilon]K1] [[SIGMA].sub.j[epsilon]K2] [s.sub.ij]) / (|K1| |K2|)

Like the SC algorithm, the MSC algorithm groups the machines but not the parts. Hence we will evaluate two procedures--MSC/ROC and MSC/MSC. The first procedure uses the MSC algorithm to group machines, after which the ROC algorithm is used to group the parts. The second procedure uses the regular MSC algorithm to group machines, and an adaptation of MSC to group parts. The adaptation consists of replacing equation 5 by

[s.sub.k1,k2] = [n.sub.k1,k2]/[m.sub.k1,k2]

where [n.sub.k1,k2] is the number of machines used by parts in group k1 as well as by parts in k2, and [m.sub.k1,k2] is the number of machines used by either parts in k1 or k2. The MSC/ROC and MSC/MSC algorithms used in this study are listed in the Appendix.
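As a sketch of equation 5's "virtual machine" idea (our illustration, not the study's code), the rows of each cell can be OR-ed into a single aggregated row, after which the equation 1 ratio is applied to the two aggregated rows:

    #include <stdio.h>

    #define N 6  /* parts */

    /* Equation 5 viewed as a "virtual machine": OR together the rows of
       each cell, then apply the equation-1 ratio to the aggregated rows. */
    double cell_similarity(const int *rows1[], int c1,
                           const int *rows2[], int c2)
    {
        int n = 0, m = 0;
        for (int p = 0; p < N; p++) {
            int v1 = 0, v2 = 0;
            for (int r = 0; r < c1; r++) v1 |= rows1[r][p];  /* part visits cell 1? */
            for (int r = 0; r < c2; r++) v2 |= rows2[r][p];  /* part visits cell 2? */
            n += v1 && v2;   /* parts processed in both cells   */
            m += v1 || v2;   /* parts processed in either cell  */
        }
        return m ? (double)n / m : 0.0;
    }

    int main(void)
    {
        int m1[N] = {1,1,0,0,1,0}, m2[N] = {1,1,0,0,0,0};  /* cell k1 */
        int m3[N] = {0,0,1,1,0,1};                          /* cell k2 */
        const int *k1[] = {m1, m2}, *k2[] = {m3};
        printf("s(k1,k2) = %.2f\n", cell_similarity(k1, 2, k2, 1));
        return 0;
    }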

The Modified Rank Order Clustering, MROC, Algorithm

The modified rank order clustering, MROC, algorithm developed by Chandrasekharan and Rajagopalan (1986a) seeks to remedy the problems of the rank order algorithm while still keeping its simplicity. They noticed that the ROC algorithm tends to produce a final part/machine matrix where there are "ones in the top left corner while the rest of the matrix is disorganized" (1986a, p. 1225). The MROC algorithm takes the largest submatrix from the top left corner and groups the machines and parts in the submatrix into a cell. The columns in the submatrix are then removed from the part/machine matrix and the ROC algorithm is executed again. The resulting final matrix will again contain a submatrix in the top left corner from which the next grouping of machines and parts is formed. This process is repeated until the last part/machine group is formed.

This procedure often produces many small part/machine groups, and so a clustering procedure is run to combine the part/machine groups into larger groups. This is done by computing a "measure of association," [[sigma].sub.ij], between pairs of part/machine groups, i and j. [[sigma].sub.ij] is defined to be the number of machines common to both groups i and j, divided by the minimum number of machines in either group i or j. The pair of groups having the largest value of [[sigma].sub.ij] is joined to form a single part/machine group, and the clustering procedure is repeated.
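A minimal C sketch of the measure of association, assuming each group is represented by a 0/1 membership vector over the M machines (an illustrative representation, not the study's data structure):

    #include <stdio.h>

    /* Measure of association between part/machine groups i and j:
       machines common to both groups divided by the smaller group's
       machine count.                                                */
    double assoc(const int *gi, const int *gj, int M)
    {
        int common = 0, ni = 0, nj = 0;
        for (int k = 0; k < M; k++) {
            common += gi[k] && gj[k];
            ni += gi[k];
            nj += gj[k];
        }
        int smaller = ni < nj ? ni : nj;
        return smaller ? (double)common / smaller : 0.0;
    }

    int main(void)
    {
        int g1[] = {1,1,1,0,0}, g2[] = {0,1,1,0,1};
        /* 2 machines in common / min(3, 3) machines = .67 */
        printf("sigma = %.2f\n", assoc(g1, g2, 5));
        return 0;
    }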

The MROC algorithm used in this study is listed in the Appendix.

An Ideal Seed Non-Hierarchical Clustering, ISNC, Algorithm

Chandrasekharan and Rajagopalan (1986b) developed an algorithm which uses graph theory and an iterative non-hierarchical clustering approach to group parts and machines into cells. The algorithm draws upon work done by MacQueen (1967) and Anderberg (1973).

The maximum number of cells, k, for any CF problem is shown to be

[Mathematical Expression Omitted]

Parts and machines are represented as vectors (with elements [a.sub.ij], i = 1,2,...,M, for each part j; and [a.sub.ij], j = 1,2,...,N, for each machine i).

The ISNC algorithm consists of three stages. In the first stage k machine groups and k part groups are constructed by grouping vectors which are close together (in the Euclidean sense). In the second stage machine groups and part groups are matched to form cells in such a way that the "clustering efficiency" (see the section entitled "Measuring the Goodness of a Solution to the CF Problem") is high. In the last stage the cells are adjusted to form "imaginary perfect groups." Machines and parts are then reassigned to the closest imaginary group in an attempt to improve upon the initial grouping. The resulting grouping is the final solution.

The ISNC algorithm is complex and has high computational requirements. The algorithm used in this study is listed in the Appendix.

Using Similarity Coefficients to Select Seeds for Clustering, SC-Seed

This algorithm is an attempt by us to simplify and improve the ISNC algorithm.

The SC-seed algorithm consists of three stages. In the first stage the similarity coefficient algorithm is used to form groups of parts. A seed or centroid is computed for each group that contains more than one part. Parts are then clustered around the nearest seed. In the second stage seeds are determined for the machines, and machine groups are formed by clustering machines around these seeds. The last stage is the same as the last stage in the ISNC algorithm.
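The flavor of the seed-based stages can be sketched in C: each part is a 0/1 vector over the machines, and parts are clustered around the nearest seed (centroid) in the Euclidean sense. The seeds and data below are illustrative assumptions, not the study's implementation.

    #include <stdio.h>

    #define M 4       /* machines: each part is a 0/1 vector of length M */
    #define NPARTS 5
    #define K 2       /* seeds */

    /* Return the index of the seed (centroid) nearest to a part vector,
       using squared Euclidean distance.                                 */
    int nearest_seed(const double part[M], const double seeds[K][M])
    {
        int best = 0;
        double bestd = 1e30;
        for (int s = 0; s < K; s++) {
            double d = 0.0;
            for (int i = 0; i < M; i++) {
                double diff = part[i] - seeds[s][i];
                d += diff * diff;
            }
            if (d < bestd) { bestd = d; best = s; }
        }
        return best;
    }

    int main(void)
    {
        /* Seeds: centroids of two tentative part groups. */
        double seeds[K][M] = { {1.0, 1.0, 0.0, 0.0},
                               {0.0, 0.0, 1.0, 0.5} };
        double parts[NPARTS][M] = { {1,1,0,0}, {1,0,0,0}, {0,0,1,1},
                                    {0,1,1,0}, {0,0,1,0} };
        for (int p = 0; p < NPARTS; p++)
            printf("part %d -> seed %d\n",
                   p + 1, nearest_seed(parts[p], seeds));
        return 0;
    }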

In addition to being faster than the ISNC algorithm, the SC-seed algorithm has the advantage that its starting seeds are not random--they are determined by the similarity coefficient algorithm.

The algorithm is listed in the Appendix.

The Bond Energy Clustering Algorithm, BEA

McCormick, Schweitzer and White (1972) developed an algorithm which reorders the rows and columns of a matrix for the purpose of moving the numerically larger matrix elements closer together. The "total bond energy" of a matrix, A, with M rows and N columns has been defined as:

BE(A) = 1/2 [[SIGMA].sub.i=1,...,M] [[SIGMA].sub.j=1,...,N] [a.sub.ij]([a.sub.i,j+1] + [a.sub.i,j-1] + [a.sub.i+1,j] + [a.sub.i-1,j]), where [a.sub.0,j] = [a.sub.M+1,j] = [a.sub.i,0] = [a.sub.i,N+1] = 0.

McCormick et al. also derive a simpler expression which is equivalent to

BE(A) = [[SIGMA].sub.i=1,...,M] [[SIGMA].sub.j=1,...,N-1] [a.sub.ij][a.sub.i,j+1] + [[SIGMA].sub.j=1,...,N] [[SIGMA].sub.i=1,...,M-1] [a.sub.ij][a.sub.i+1,j]

A search over the M!N! possible matrices, obtained by reordering rows and columns, can be made to find the matrix that maximizes BE. McCormick et al. (1972) present a heuristic procedure, called the nearest neighbor heuristic, that seems to produce near optimal solutions with modest computational effort. This heuristic is used in our study (see Appendix).
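As an illustrative sketch (not McCormick et al.'s code), the following C program computes the bond energy in the simpler adjacent-products form above; the two matrices are made-up data holding the same eight ones, once clustered and once scattered:

    #include <stdio.h>

    #define M 4
    #define N 5

    /* Total bond energy: sum of products of horizontally and vertically
       adjacent entries.                                                  */
    int bond_energy(const int a[M][N])
    {
        int be = 0;
        for (int i = 0; i < M; i++)
            for (int j = 0; j + 1 < N; j++)
                be += a[i][j] * a[i][j + 1];   /* horizontal bonds */
        for (int j = 0; j < N; j++)
            for (int i = 0; i + 1 < M; i++)
                be += a[i][j] * a[i + 1][j];   /* vertical bonds   */
        return be;
    }

    int main(void)
    {
        int clustered[M][N] = { {1,1,0,0,0},
                                {1,1,0,0,0},
                                {0,0,0,1,1},
                                {0,0,0,1,1} };
        int scattered[M][N] = { {1,0,1,0,1},
                                {0,0,0,0,0},
                                {1,0,1,0,1},
                                {0,1,0,1,0} };
        /* The same eight ones give BE = 8 clustered and BE = 0 scattered. */
        printf("clustered BE = %d, scattered BE = %d\n",
               bond_energy(clustered), bond_energy(scattered));
        return 0;
    }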

MEASURING THE GOODNESS OF A SOLUTION TO THE CF PROBLEM

Each algorithm's solution for a CF problem can be presented as a final part/machine matrix, [A.sup.(f)]. To illustrate, we apply the SC/ROC algorithm to the example in Figure 1. As there are M = 10 machines, there can be up to 10 cells (Figure 2). Notice that the solution for k cells is formed from the solution for k + 1 cells when the two cells with the largest similarity coefficient are combined. When two cells are combined, the ordering of the machines within the new cell retains the ordering of the machines in the two cells. For example, the second cell in the two cell solution (Figure 2) retains the ordering of cells two and three from the three cell solution.

[A.sup.(f)] is partitioned into k submatrices {[D.sub.r], r=1,2,...,k} where submatrix [D.sub.r], consisting of machines [M.sub.r] and parts [C.sub.r], is the r'th part/machine cell. The partition is done so that two objectives are achieved:

Objective 1. Within a part/machine group each machine is visited by many parts--that is, there is a high usage of the machines by the parts; and

Objective 2. Few parts require processing on machines in other cells.

Once a final part/machine matrix [A.sup.(f)] has been formed, it is relatively easy to partition it into k part/machine cells so that objectives 1 and 2 are achieved. How this was done in this research is illustrated in Figure 3. Figure 3a shows the initial part/machine matrix for a randomly generated problem. Applying the SC-Seed algorithm gives the final part/machine matrix shown in Figure 3b. The upper left corner and the lower right corner of the final part/machine matrix are then marked. Beginning from the upper left corner, and moving toward the lower right corner, select k points in the matrix which partition the matrix into k + 1 groups in such a way that objectives 1 and 2 are achieved. (How these objectives are measured is discussed later.) For example, in Figure 3c the first point selected is {machine 22, part 4}, which means that the first cell will consist of machines 12, 18, 19, 25, 9 and 22, and parts 9, 1, 29, 2, 10, 19, 33 and 4. Moving from {machine 22, part 4} toward the lower right corner, a second point is selected, say {machine 24, part 24}, which defines a second cell consisting of machines 14, 15, 16, 5, 11, 23, 8, 4, 17, 6, and 24, with parts 13, 16, 20, 7, 5, 21, 12, 6, 15 and 24. The last cell begins after this cell and extends to the lower right of the final part/machine matrix.

In this research, each algorithm's solution to a cell formation problem was displayed as a final part/machine matrix in standard block diagonal form. The computer and the researchers then interactively partitioned the matrix into cells so that the following measure, M1, was maximized.

Primary Measure--The Grouping Measure, M1

Achieving objectives 1 and 2 gives cells in which there is a high usage of machines by parts, and where few parts require processing on machines in more than one cell.

(M1) Given a final part/machine matrix, [A.sup.(f)], select k, {[D.sub.r]}, [M.sub.r] and [C.sub.r], r=1,2,...,k; so that the grouping measure

[[zeta].sub.g] = [[zeta].sub.u] - [[zeta].sub.m], -1 ≤ [[zeta].sub.g] ≤ 1,

is maximized, where

[[zeta].sub.u] is a measure of the usage of the machines by the parts in the part/machine groups.

[[zeta].sub.u] = ([[SIGMA].sub.r=1,...,k] [[SIGMA].sub.i[epsilon][M.sub.r]] [[SIGMA].sub.j[epsilon][C.sub.r]] [a.sub.ij]) / ([[SIGMA].sub.r=1,...,k] |[M.sub.r]||[C.sub.r]|)

(|*| denotes the number of elements in set *.)

[[zeta].sub.m] is a measure of the movement of parts between part/machine groups.

[[zeta].sub.m] = ([[SIGMA].sub.ij] [a.sub.ij] - [[SIGMA].sub.r=1,...,k] [[SIGMA].sub.i[epsilon][M.sub.r]] [[SIGMA].sub.j[epsilon][C.sub.r]] [a.sub.ij]) / [[SIGMA].sub.ij] [a.sub.ij]

Large values of [[zeta].sub.u] occur when each part in a part/machine group uses most of the machines in the group. Therefore, large values of [[zeta].sub.u] are preferred. Small values of [[zeta].sub.m] occur when there are few parts that require processing by machines outside their part/machine group. Therefore, small values of [[zeta].sub.m] are preferred. Large values of [[zeta].sub.g] = [[zeta].sub.u] - [[zeta].sub.m] represent machines used by many parts with little part movement between part/machine groups. Because 0 ≤ [[zeta].sub.u] ≤ 1 and 0 ≤ [[zeta].sub.m] ≤ 1, the grouping measure ranges from -1 to 1.
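The following C sketch computes the grouping measure under the expressions for [[zeta].sub.u] and [[zeta].sub.m] given above; the cell-assignment arrays, size limit, and data are illustrative assumptions, not the study's implementation.

    #include <stdio.h>

    #define M 5
    #define N 6

    /* Grouping measure zeta_g = zeta_u - zeta_m for a block diagonal
       partition. cell_of_machine[i] and cell_of_part[j] assign each row
       and column to one of k cells (k <= 16 in this sketch).            */
    double grouping_measure(const int a[M][N],
                            const int cell_of_machine[M],
                            const int cell_of_part[N], int k)
    {
        int ones_inside = 0, ones_total = 0;
        long block_elems = 0;
        int msize[16] = {0}, psize[16] = {0};

        for (int i = 0; i < M; i++) msize[cell_of_machine[i]]++;
        for (int j = 0; j < N; j++) psize[cell_of_part[j]]++;
        for (int r = 0; r < k; r++) block_elems += (long)msize[r] * psize[r];

        for (int i = 0; i < M; i++)
            for (int j = 0; j < N; j++)
                if (a[i][j]) {
                    ones_total++;
                    if (cell_of_machine[i] == cell_of_part[j]) ones_inside++;
                }

        double zeta_u = block_elems ? (double)ones_inside / block_elems : 0.0;
        double zeta_m = ones_total ? 1.0 - (double)ones_inside / ones_total : 0.0;
        return zeta_u - zeta_m;
    }

    int main(void)
    {
        int a[M][N] = { {1,1,1,0,0,0},
                        {1,0,1,0,0,0},
                        {0,1,1,0,0,1},   /* one exceptional element */
                        {0,0,0,1,1,0},
                        {0,0,0,1,1,0} };
        int cm[M] = {0,0,0,1,1};
        int cp[N] = {0,0,0,1,1,1};
        printf("zeta_g = %.3f\n", grouping_measure(a, cm, cp, 2));
        return 0;
    }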

M1 follows from work done by Chandrasekharan and Rajagopalan (1986b). While our expressions for the usage, [[zeta].sub.u], are the same, our expressions for the inter-group movement, [[zeta].sub.m], and the overall measure, [[zeta].sub.g], are different. The above authors compute the ratio of the number of zeros in the off-diagonal blocks to the total number of elements in the off-diagonal blocks and use this as a measure of the amount of inter-group movement (equation 7 in Chandrasekharan and Rajagopalan (1986b)):

[[zeta].sub.2] = (number of zero entries in the off-diagonal blocks) / (MN - [[SIGMA].sub.r=1,...,k] |[M.sub.r]||[C.sub.r]|)

We use a different measure for three reasons.

1. [[zeta].sub.2] does not satisfy 0 ≤ [[zeta].sub.2] ≤ 1. In the above example, [[zeta].sub.2] = -[infinity] when there is only one group.

2. [[zeta].sub.2] does not have the property that [[zeta].sub.2] = 0 when there is no inter-group movement. Suppose that [a.sub.1,8] = 0 and [a.sub.10,2] = 0 in the above example. Then creating the three cells shown in Table 2 results in no inter-group movement. However, [[zeta].sub.2] = .547, while our [[zeta].sub.m] = 0.

3. [[zeta].sub.2] is more complex than [[zeta].sub.m].

While M1 is a good measure of the quality of a solution to the CF problem, two other measures are considered to enrich the analysis.

Secondary Measure--The Clustering Measure, M2

When a final part/machine matrix is displayed in standard block diagonal form, then often the more "tightly" the non-zero elements are clustered around the diagonal, the better the resulting part/machine cells will be (in terms of usage of machines by parts and instances of part travel between cells). We saw this in Figure 3 where, for example, most of the instances of part travel between cells were non-zero elements located away from the diagonal. In one extreme case, when all the non-zero elements are on the diagonal, each cell will consist of one machine and one part and the grouping measure will be [[zeta].sub.g] = 1.0. The other extreme case is the original part/machine matrix, where the non-zero elements are randomly located over the entire matrix (creating one large cell).

It follows, therefore, that another way of gauging the effectiveness of a solution algorithm is to examine how tightly it clusters the [a.sub.ij]'s around the diagonal of [A.sup.(f)]. That is:

(M2) Given a final part/machine matrix, [A.sup.(f)], calculate the clustering measure

[[zeta].sub.c] = ([SIGMA] [a.sub.ij] [square root of [[delta].sub.h][([a.sub.ij]).sup.2] + [[delta].sub.v][([a.sub.ij]).sup.2]]) / [SIGMA] [a.sub.ij]

where

[[delta].sub.h]([a.sub.ij]) is the horizontal distance between a non-zero element [a.sub.ij] [epsilon] [A.sup.(f)] and the diagonal: [[delta].sub.h]([a.sub.ij]) = i - j(M-1)/(N-1) - (N-M)/(N-1).

[[delta].sub.v]([a.sub.ij]) is the vertical distance between a non-zero element [a.sub.ij] [epsilon] [A.sup.(f)] and the diagonal: [[delta].sub.v]([a.sub.ij]) = j - i(N-1)/(M-1) + (N-M)/(M-1).

[[delta].sub.h]() and [[delta].sub.v]() are illustrated in Figure 4. The denominator in the clustering measure, [SIGMA] [a.sub.ij], normalizes the measure, since the numerator increases as the number of machines and parts increases; [[zeta].sub.c] is therefore the "average Euclidean distance" of a non-zero [a.sub.ij] from the diagonal.
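A small C sketch of the clustering measure follows; combining [[delta].sub.h] and [[delta].sub.v] through the square root shown in M2 is our reading of the omitted expression in the source, and the two matrices are made-up data:

    #include <stdio.h>
    #include <math.h>

    #define M 4
    #define N 6

    /* Average distance of the non-zero entries from the matrix diagonal,
       using the delta_h and delta_v formulas above (rows and columns are
       1-indexed in those formulas, hence the +1 offsets).                 */
    double clustering_measure(const int a[M][N])
    {
        double total = 0.0;
        int ones = 0;
        for (int r = 0; r < M; r++)
            for (int c = 0; c < N; c++)
                if (a[r][c]) {
                    double i = r + 1, j = c + 1;
                    double dh = i - j * (M - 1.0) / (N - 1.0)
                                  - (N - M) / (N - 1.0);
                    double dv = j - i * (N - 1.0) / (M - 1.0)
                                  + (N - M) / (M - 1.0);
                    total += sqrt(dh * dh + dv * dv);
                    ones++;
                }
        return ones ? total / ones : 0.0;
    }

    int main(void)
    {
        int diag[M][N]   = { {1,1,0,0,0,0},
                             {0,0,1,1,0,0},
                             {0,0,0,0,1,0},
                             {0,0,0,0,0,1} };
        int spread[M][N] = { {0,0,0,0,0,1},
                             {0,0,1,0,0,0},
                             {1,0,0,0,1,0},
                             {0,1,0,0,0,0} };
        /* Entries hugging the diagonal score much lower (better). */
        printf("diagonal: %.2f  spread: %.2f\n",
               clustering_measure(diag), clustering_measure(spread));
        return 0;
    }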

To illustrate these measures, consider Figure 1 again. Table 2 shows the grouping measures and clustering measures for different partitions of the part/machine matrix in Figure 1. When two groups are formed, the partition that gives the highest grouping measure has machines [M.sub.1] = {1,4,8,2} and parts [C.sub.1] = {1,6,3} in the first group and machines [M.sub.2] = {3,6,7,10,5,9} and

[TABULAR DATA OMITTED]

parts [C.sub.2] = {8,5,2,7,4} in the second group. The grouping measure is .557. However, when three groups are formed (see [A.sup.(f)] in Figure 1) the grouping measure increases to .812. The clustering measure, [[zeta].sub.c], improves from 4.05 for the initial part/machine matrix to 1.55 for the final matrix.

Secondary Measure--The Bond Energy Measure, M3

A third measure is suggested by the premise underlying the bond energy algorithm--namely, that the non-zero elements of the part/machine matrix should be located close to each other, thus forming clusters.

A bond energy measure, [[zeta].sub.BE], where

[[zeta].sub.BE] = BE([A.sup.(f)]) / [SIGMA] [a.sub.ij]

can be computed for a final part/machine matrix. Larger values of [[zeta].sub.BE] are preferred (see the section entitled "The Bond Energy Clustering Algorithm, BEA"). Notice that [[zeta].sub.BE] is McCormick et al.'s expression for the bond energy of a matrix, equation (9), divided by the number of non-zero elements in the matrix. [[zeta].sub.BE] is, therefore, the "average bond energy" of a part-machine operation (i.e., a non-zero [a.sub.ij]). Normalizing the measure this way permits bond energy measures to be compared across different problems.

Where the final part/machine matrix is in standard block diagonal form, a high bond energy measure leads to a high grouping measure (and a high clustering measure). As we will see in the section entitled "Main Experimental Data Set," at times it will be difficult to interpret a high value of the bond energy measure. However, the bond energy measure still enriches the analysis in that high values of the measure indicate that groups of parts and machines exist which are suitable for cellular manufacturing.

The last column in Table 2 gives the bond energy measures for the example in Figure 1. The initial part/machine matrix has [[zeta].sub.BE] = .19. This increases to [[zeta].sub.BE] = 1.23 for the final matrix.

During the testing of the solution algorithms (in the section "Description of the Test Problems") we compute the grouping measure, [[zeta].sub.g], the clustering measure, [[zeta].sub.c], and the bond energy measure, [[zeta].sub.BE], for each solution to a CF problem, and use these measures to evaluate and compare the algorithms. The grouping measure is the primary measure, with the other measures being used to enrich the analysis.

DESCRIPTION OF THE TEST PROBLEMS

In order to generalize the results of our investigation, both problems from the open literature and randomly generated problems are solved. The characteristics used to generate the random problems are based upon an examination of cell formation problems reported in the open literature, and are experimentally varied over a wide range of representative values.

Problems from the Open Literature

Eight well-known CF problems from the literature are used in this study. The problems are described in Table 3. The first three problems (Burbidge's problem, Carrie's problem and the Black & Decker problem) are large and are adapted from actual production problems. The other problems are small. The large problems have lower densities (18 to 20%) than the small problems (31 to 57%), where density = ([SIGMA] [a.sub.ij])/MN.

[TABULAR DATA OMITTED]

Main Experimental Data Set

This data set contains 60 randomly generated large problems. Problems are characterized by the number of machines, the number of parts and the density in the initial part/machine matrix. Each characteristic assumes one of up to three different values (part 1 of Table 4). The values for each characteristic encompass a range of representative values for that characteristic. Five different problems are randomly generated for each combination of characteristic values (part 2 of Table 4). Figure 3 shows a problem from this data set.

COMPARATIVE RESULTS

Each problem in the two data sets was solved by the nine algorithms described in the section entitled "Algorithms for Solving the CF Problem." The three measures developed in the section entitled "Measuring the Goodness of a Solution to the CF Problem" were used to gauge the goodness of the solutions. All algorithms were coded in C and were run on an IBM PS/2 model 80 microcomputer. A time limit of 30 CPU minutes per problem for each algorithm was set. With the exception of the MSC/MSC algorithm, all algorithms were able to solve each problem within the time limit specified. Unfortunately, MSC/MSC was unable to solve problems having either 50 machines or 50 parts within this time limit. Consequently, this algorithm was judged to be unsuitable for large CF problems. On average, less than one minute of CPU time was required per problem by the other algorithms.

Literature Data Set

Figure 5 depicts in graphical form the solutions to the problems in the literature data set. A graph for each performance measure is presented. Each line on a graph represents one problem from the literature data set, with each point on the line representing the solution produced by a particular solution algorithm.

Primary Measure--Grouping Measure (M1)

Recall that the higher the grouping measure, the better is the solution. Notice that three of the lines are relatively level (Chandrasekharan and Rajagopalan #2; Chan and Milner; and Ham et al.), which indicates that, for these problems, all algorithms produced solutions that were very similar in terms of the grouping measure. Three lines (Burbidge, Carrie, and Chandrasekharan and Rajagopalan #1) show that the ROC/ROC algorithm produced poor solutions for these problems, while lines Burbidge, Black & Decker, and Seifoddini and Wolfe indicate that the "seed" algorithms--ISNC and SC-Seed--can produce good solutions.

Secondary Measure--Clustering Measure (M2)

Recall that the lower the clustering measure, the better is the solution. Again a number of lines are relatively level (Chandrasekharan and Rajagopalan #2; Chan and Milner; Ham et al.; and Seifoddini and Wolfe), indicating that, for these problems, all algorithms produced solutions that were very similar in terms of the clustering measure. In the remaining lines the quality of the solutions varies, but no algorithm or algorithms consistently outperform all the others.

Secondary Measure--Bond Energy Measure (M3)

Recall that the higher the bond energy measure, the better is the solution. As expected, the bond energy algorithm produced the best solutions for all problems. Furthermore, it appears that the solutions produced by the other algorithms were quite similar, in terms of the bond energy measure.

These observations are explored further in the analysis of the problems from the main experimental data set.

MAIN EXPERIMENTAL DATA SET

An analysis of variance was performed for each of the three performance measures.

Primary Measure--Grouping Measure (M1)

In part 1 of Table 5 the variability in the grouping measure is partitioned among the four factors--the solution algorithms, the number of machines, the number of parts and the density of the part/machine matrix--and the interactions between these factors, with the remainder representing variability within the randomly generated problems (not "explained" by the factors). All factors and all interactions between pairs of factors have a significant effect on the grouping measure. The number of machines and the density have large effects. (Notice the large mean squares for these factors in Table 5.) The solution algorithm and the number of parts have smaller effects.

[TABULAR DATA OMITTED]

As we will see, for the problems considered in this data set the number of machines has a larger effect than the number of parts on the grouping measure, as well as on the clustering measure and the bond energy measure. This may be because when the number of machines is increased, for a given number of parts and a given density, each part will visit more machines. The more machines that each part visits, the more difficult it is to form part/machine cells where all the machines required for all the parts are contained within the same cell. One may hypothesize that the number of machines and the number of parts have equal effects on the performance measures, and that the difference observed in Table 5 is because the factor levels for the number of machines are 25 and 50 (a 100% change) compared to 35 and 50 (a 43% change) for the number of parts. However, in part 2 of Table 5, 95% interval estimates are computed for, among other things, the mean grouping measure for problems having 50 machines and for problems having 50 parts. Since the interval for problems having 50 parts is higher than the interval for problems having 50 machines (recall that high grouping measures are preferred) and the intervals do not overlap, we conclude that problems having 50 machines are "more difficult" than problems having 50 parts. (The same result occurs for the clustering measure and for the bond energy measure.) That is, for this data set, the number of machines has a larger effect on the performance measures than does the number of parts.

Notice the small error mean square (MSE = .0005); the factors and the indicated interactions between factors explain 92% (1.0 - .1816/2.2031) of the variability in the randomly generated data.

In part 2 of Table 5, comparisons are made between mean grouping measures within each of the four factors. First, 95% interval estimates are computed for the mean grouping measure of each of the eight solution algorithms. Comparing the eight confidence intervals shows that:

1. The ISNC algorithm is significantly better than all other algorithms.

2. ROC/ROC, SC/ROC, MSC/ROC, BEA and SC-Seed are not significantly different from each other, but are significantly better than MROC, and

3. SC/SC is significantly worse than all other algorithms.

Part 2 of Table 5 also shows that, with respect to the density, the mean grouping measure is significantly better when the density is 10% than when the density is 15 or 20%. (A similar result is obtained for the clustering measure--see Table 6.) This simply reflects the reality that it is very difficult to obtain good part/machine cells when the density is large.

Large densities mean parts visit many machines. Consequently it may not be possible to isolate parts and machines into self-contained cells. What is somewhat surprising is that this may occur at very low densities. Notice, in part 2 of Table 5, that the mean grouping measure drops 30% (from .2867 to .1999) when the density increases five percentage points (from 10 to 15%). That is, it is much more difficult to obtain good solutions for problems where the density is 15 or 20%, compared to problems where the density is 10%. (This result is also obtained for the clustering measure.)

Similar computations for the number of machines and the number of parts reiterate what was revealed in the analysis of variance table: these factors have a significant effect on the mean grouping measure.

Note that using 95% interval estimates to compare the mean grouping measure for each algorithm results in an effective system confidence level which is less than 95%. A procedure such as Tukey's Test (see, for example, Montgomery (1984, p. 69)) would ensure a type 1 error of 1-.95 = .05 for all pairwise comparisons (of algorithms, densities, and numbers of machines and parts), but would not change the results. The simpler confidence intervals are used for clarity of presentation.

Secondary Measure--Clustering Measure (M2)

Part 1 of Table 6 presents the analysis of variance results for the clustering measure. All four factors have a significant effect on this measure. The number of machines and the number of parts (both of which determine the "size" of the problem) have large effects (i.e., large mean squares), while the solution algorithm and the density have smaller effects (i.e., smaller mean squares). The factors and two- and three-way interactions between the factors account for 91% (1.0 - 787.6/9141.6) of the variability in the randomly generated data.

[TABULAR DATA OMITTED]

Part 2 of this table compares mean clustering measures within each factor. With respect to the solution algorithms:

1. ROC/ROC is significantly better than all other algorithms,

2. SC/ROC, MSC/ROC, ISNC and SC-Seed are not significantly different from each other,

3. SC/SC is significantly worse than all other algorithms, and

4. BEA is better than SC/SC but is significantly worse than all other algorithms.

A similar analysis for the density supports the results obtained for the grouping measure: namely, that the mean clustering measure is significantly better when the density is 10% than when the density is 15 or 20%, and that there is no significant difference in mean clustering measures between densities of 15 and 20%. The computations for the number of machines and the number of parts reiterate what was revealed in the analysis of variance table; these factors have a significant effect on the mean clustering measure.

Secondary Measure--Bond Energy Measure (M3)

The analysis of variance results for the bond energy measure are presented in part 1 of Table 7. The solution algorithm and the density have large effects (large mean squares) on this measure.

The mean bond energy measures within each factor are analyzed in part 2 of the table. It is not surprising that the bond energy algorithm outperforms all other solution algorithms on this measure. MROC is the second best algorithm, while SC/SC is the worst solution algorithm. ROC/ROC, SC/ROC, MSC/ROC, ISNC and SC-Seed are not significantly different from each other. With respect to the density of the part/machine matrix, notice that the denser the matrix the higher the bond energy. This follows from the definition of the bond energy measure (in the section "Measuring the Goodness of a Solution to the CF Problem").

[TABULAR DATA OMITTED]

Example

It is interesting to review the differences between solutions developed by algorithms using rank ordering, seeds and bond energy. Figure 6 shows three solutions for problem RP9.2 (see Figure 3a).

The first solution shows how the ROC/ROC algorithm moves entries out of the corners of the part/machine matrix towards the diagonal, creating empty regions in the corners. Since these regions are furthest away from the diagonal, and because the clustering measure computes Euclidean distances from the diagonal, algorithms based on rank ordering perform well on the clustering measure. (The best clustering measure was 7.11 for ROC/ROC, compared to 8.64 for SC-Seed and 11.30 for BEA. Recall that low clustering measures are better than high measures.)

The second solution shows how the SC-Seed algorithm clusters entries around "seeds" located near the diagonal, with the result that "dense" clusters are created. This usually results in a good grouping measure because of the usage term, [[zeta].sub.u], in M1, unless there are many instances of inter-cell travel. Notice in Figure 6 that the SC-Seed algorithm has the best usage term, [[zeta].sub.u] = .223, but because the intercell movement term is large, [[zeta].sub.m] = .244, the overall grouping measure for SC-Seed, .223 - .244 = -.021, is not as good as for the ROC/ROC algorithm, .159 - .151 = .008. (Recall that high grouping measures are better than low grouping measures.) The SC-Seed algorithm is not as successful at moving entries away from the corners of the matrix.

BEA rearranges the entries to form clusters everywhere in the part/machine matrix. When, for example, two or more BEA clusters occupy the same rows, then these clusters will require the same machines. For example, clusters u and v in the BEA solution in Figure 6 would both require machines 24, 23, 18, etc. This is not a problem in situations where not all parts will be selected for cellular manufacturing, or where machines can be duplicated. Where this is not the case, a solution having a high bond energy measure may not be feasible. (More on this follows in the next section.)

DISCUSSION

Five results warrant further discussion.

Problem Size

The size of the problem--as measured by the number of machines, the number of parts and the density of the part/machine matrix--greatly affects the quality of the solution as gauged by all three performance measures. Rarely do large problems have solutions which consist of a number of "dense" cells (i.e., each part visits most of the machines in the cell) with little travel of parts between cells.

This is not necessarily a problem in practice. It is interesting to review how an initial solution (from a CF algorithm) would be improved upon in an actual industrial problem. The first two parts of Figure 6 show sets of part/machine cells for the ROC/ROC algorithm and the SC-Seed algorithm. Since there are many instances of part travel between cells, it may be the case that some parts are not appropriate for cellular manufacturing and some machines may need to be duplicated. Therefore, we might remove from the initial matrix the 10 to 15% of the machines processing the most parts and the 10 to 15% of the parts requiring the most machines, and then re-apply the solution algorithms (see Burbidge (1975); and Wemmerlov and Hyer (1986, p. 143)). This is done in Figure 7. The four machines (15% of the total number of machines) processing at least six parts, and the three parts (9% of the total number of parts) requiring at least five machines are removed from the initial part/machine matrix. Applying the ROC/ROC algorithm and the SC-Seed algorithm gives the final part/machine matrices and cells shown in Figure 7. As expected, the grouping and clustering measures for this reduced problem are much improved. Another remedy is to change the routing for some parts, and to duplicate some bottleneck machines.
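A minimal C sketch of this removal step follows, with made-up data and illustrative thresholds (the 10 to 15% rules above would set them in practice):

    #include <stdio.h>

    #define M 5
    #define N 6

    /* Remove "bottleneck" machines (rows processing many parts) and
       "difficult" parts (columns visiting many machines) before
       re-applying a CF algorithm.                                    */
    int main(void)
    {
        int a[M][N] = { {1,1,1,1,1,1},   /* machine 1 processes everything */
                        {1,1,0,0,0,0},
                        {0,1,1,0,0,0},
                        {0,0,0,1,1,0},
                        {0,0,0,0,1,1} };
        int keep_row[M], keep_col[N];
        int max_parts = 4, max_machines = 3;   /* illustrative thresholds */

        for (int i = 0; i < M; i++) {
            int cnt = 0;
            for (int j = 0; j < N; j++) cnt += a[i][j];
            keep_row[i] = cnt < max_parts;     /* drop machines with many parts */
        }
        for (int j = 0; j < N; j++) {
            int cnt = 0;
            for (int i = 0; i < M; i++) cnt += a[i][j];
            keep_col[j] = cnt < max_machines;  /* drop parts with many machines */
        }
        for (int i = 0; i < M; i++) {          /* print the reduced matrix */
            if (!keep_row[i]) continue;
            for (int j = 0; j < N; j++)
                if (keep_col[j]) printf("%d", a[i][j]);
            printf("\n");
        }
        return 0;
    }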

For the problems considered in the main experimental data set, the number of machines has a larger effect on the quality of the solution than does the number of parts. While there are practical reasons for believing that this is true for many real problems, the experimental design used in this research does not allow this result to be extended beyond the main experimental data set. More research is required on this subject.

Performance Measures

Each of the primary and secondary performance measures--the grouping measure, and the clustering and bond energy measures--provides information that is useful for forming part/machine cells. A grouping measure with a value close to 1.0 means highly "utilized" cells (i.e., each part visits most of the machines in the cell) with little travel between cells. A low value for the clustering measure means that entries are located close to the diagonal, making it easier to assign parts to cells. High bond energy values occur when dense clusters of parts and machines are located throughout the final part/machine matrix.

All measures can be used as follows when analyzing a CF problem. A solution algorithm produces a solution which is displayed as a final part/machine matrix in standard block diagonal form. Values for the clustering measure and the bond energy measure are computed. If both values are sufficiently good, the user (or computer) proceeds to partition the final part/machine matrix into cells in such a way that the value of the grouping measure is maximized (see the section entitled "Measuring the Goodness of a Solution to the CF Problem"). If either the clustering or the bond energy measure value is poor, then another algorithm is selected, or selected parts and/or bottleneck machines are removed, selected routings are changed, etc., and the process is repeated.

ISNC Algorithm

The ISNC algorithm was statistically better than all other algorithms on the primary measure (the grouping measure) and performed well on both secondary measures, for the problems in the randomly generated data set. The ISNC algorithm and the SC-Seed algorithm (which, like the ISNC algorithm, is based on clustering around seeds) both performed well on the primary measure for the problems in the literature data set.

This is not surprising, since algorithms based on clustering around seeds first form dense seeds (which results in a high usage term in the grouping measure and a high bond energy measure) and then cluster the remaining non-zero entries around these seeds (which results in a good clustering measure).

The computational requirements for the ISNC algorithm are higher than those for the other algorithms.

Different Types of Solutions

A careful examination of the final part/machine matrices for the problems considered in both data sets leads to the following observations concerning problems where the solution is well structured, problems where the solution consists of large, loose cells, and problems where the solution consists of small, tight cells. It should be noted that solutions to most problems are some combination of these; that is, a solution may contain a well-structured cell together with other large, loose cells, and so on.

Well Structured Solutions

A careful examination of the final part/machine matrices for the problems in the literature data set showed that when a very well-structured solution exists for a CF problem, all algorithms will find that solution most of the time. (A small exception is the ROC/ROC algorithm, which, in three of the eight problems from the literature data set, found a good solution but did not find the "best" solution.)

Unfortunately, one usually does not know whether a well-structured solution exists for a randomly generated problem (or for a real problem) until after the problem is solved.

We have seen though that there are conditions under which a well-structured solution is extremely unlikely to exist. For example, when there are 50 machines, 50 parts and a density of 15% or more, then the problem is not likely to have a well-structured solution.

Large, Loose Cells

A careful examination of the final part/machine matrices for the problems considered in the main experimental data set showed that some algorithms produced solutions where almost all of the non-zero entries in the final part/machine matrix were located close to the diagonal of the matrix. When these final matrices were partitioned into cells, a small number of relatively large cells resulted. (See, for example, the ROC/ROC solution in Figure 6.) Each cell consisted of a large number of parts and contained a large number of machines. These cells are described as "loose cells" because only a small fraction of the total number of machines within a cell is required to produce any particular part assigned to the cell.

Algorithms that performed well on the clustering measure (i.e., achieved a low value) and were based on rank order clustering, usually produced this type of solution. Two such algorithms were the ROC/ROC and SC/ROC algorithms.

More research is needed to fully understand this behavior and learn how it may be used to advantage in real problems. One possible application may be the following. Suppose a functionally organized job shop has grown so large that it has become congested and difficult to manage, and management wishes to relocate some of the machines and parts to a new facility (which will also be run as a job shop). Algorithms like ROC/ROC and SC/ROC may be used to identify a small number of large, loose cells that can be arranged in the two facilities. (It should be noted that these algorithms have the advantage that they are well-known, easy to understand and use, and have the lowest computational requirements.)

(Note that in Figure 5 there were three problems in the literature data set where the SC/ROC algorithm produced good solutions while the ROC/ROC algorithm did not. These three problems do not fit this case, as their solutions consisted of many small cells where each machine in a cell was visited by many parts.)

Small, Tight Cells

A careful examination of the final part/machine matrices for the problems considered in the main experimental data set showed that some algorithms produced solutions where most of the non-zero entries in the final part/machine matrix were located in dense groups, and those entries that were not part of a dense group were randomly scattered around the matrix. When these final matrices were partitioned into cells, a large number of relatively small cells resulted. (See, for example, the SC-Seed and BEA solutions in Figure 6.) Each cell consisted of a relatively small number of parts and machines.

Algorithms that performed well on the bond energy measure (i.e., achieved a high value) usually produced this type of solution. Two such algorithms were the BEA and MROC algorithms.

Again more research is needed to fully understand this behavior and learn how it may be used to advantage in real problems. One possible application may be the following. Suppose in the functionally organized job shop considered above, management decides that the new facility will not be run as a job shop. Rather it will be run as a cellular manufacturing facility (with an appropriate production planning and control system, multi-skilled employees, and so on.) An algorithm such as the BEA or MROC algorithm may be used to identify the cells that should be created in the new facility. Those parts and machines not selected for cellular manufacturing would remain in the existing job shop.

Extensions

Three areas where more research is needed to further our understanding of solution algorithms for the cell formation problem are the following:

1. An interesting extension to this research would be to examine real cell formation problems from industry to determine whether they are like the problems in the literature data set in that they produce clearly defined part/machine cells, or whether they are more like the problems from the randomly generated data set which do not always produce clearly defined cells.

2. It may be possible to randomly generate routings (non-zero entries for each part in the initial part/machine matrix) which are known a priori to lead to well-structured part/machine cells, but where the cells are not obvious because the non-zero entries are dispersed over the entire initial part/machine matrix. If such routings can be randomly generated, then an interesting study would be to determine which algorithms are best able to find these "hidden" cells.

3. What are the computational requirements and the space requirements (i.e., data storage requirements) for each solution algorithm? Will the requirements be such that some algorithms cannot be used when the problems to be solved contain hundreds of parts and machines?

CONCLUSION

Nine solution algorithms for the CF problem were compared and evaluated by analyzing 64 solutions to problems taken from the open literature and 480 solutions to large, randomly generated problems. A hierarchy of measures was presented for gauging the quality of a solution to the CF problem.

No solution algorithm was found to be better than all other algorithms on all performance measures for both the randomly generated data set and the literature data set. With respect to the primary performance measure, the grouping measure, the ISNC algorithm was significantly better than all other algorithms for the randomly generated data set. It also performed well on the secondary performance measures. Therefore, it appears to be the most appropriate algorithm to use as a general purpose solution algorithm for CF problems having between 25 and 50 machines, 35 and 50 parts, and densities between 10 and 20%. (It may not be appropriate for larger problems since its computational and space requirements appear to be high.) After the ISNC algorithm, there is no significant difference between most of the other algorithms.

However, different kinds of solutions are developed by some of these other algorithms. The well-known SC/ROC and ROC/ROC algorithms develop solutions which consist of a small number of relatively large cells. The BEA and MROC algorithms develop solutions which consist of relatively small cells (where each machine in the cell is visited by most of the parts assigned to that cell) and one large cell for those parts not assigned to the small cells. When very well structured solutions exist all algorithms will find them most of the time. How this behavior can be used to advantage in real problems requires further research.

REFERENCES

Anderberg, M.R. Cluster Analysis for Applications. New York: Academic Press, 1973.

Burbidge, J.L. The Introduction of Group Technology. London: John Wiley and Sons, Inc., 1975.

Carrie, A.S. "Numerical Taxonomy Applied to Group Technology and Plant Layout." International Journal of Production Research, vol. 11, no. 4, 1973, 399-416.

Chan, H.M., and D.A. Milner. "Direct Clustering Algorithm for Group Formation in Cellular Manufacture." Journal of Manufacturing Systems, vol. 1, no. 1, 1982, 65-75.

Chandrasekharan, M.P., and R. Rajagopalan. "MODROC: An Extension of Rank Order Clustering for Group Technology." International Journal of Production Research, vol. 24, no. 5, 1986a, 1221-1233.

Chandrasekharan, M.P., and R. Rajagopalan. "An Ideal Seed Non-Hierarchical Clustering Algorithm for Cellular Manufacturing." International Journal of Production Research, vol. 24, no. 2, 1986b, 451-464.

Gallagher, C.C., T.J. Grayson, and A. Phillips. "A Review of the Development and Application of Group Technology at the University of Birmingham." International Journal of Production Research, vol. 9, 1971, 229.

Gongaware, T., and I. Ham. "Cluster Analysis Applications for Group Technology Manufacturing Systems." Proceedings: Ninth North American Metalworking Research Conference. Dearborn, MI: Society of Manufacturing Engineers, 1981.

Ham, I., K. Hitomi, and T. Yoshida. Group Technology: Applications to Production Management. Boston: Kluwer-Nijhoff Publishing, 1985.

Hyer, N.L., and U. Wemmerlov. "Group Technology and Productivity." Harvard Business Review, July-August, 1984, 140-149.

King, J.R. "Machine-Part Grouping in Production Flow Analysis: An Approach Using a Rank Order Clustering Algorithm." International Journal of Production Research, vol. 18, no. 2, 1980, 213-237.

King, J.R., and V. Nakornchai. "Machine Part Group Formation in Group Technology--Review and Extension." International Journal of Production Research, vol. 20, no. 2, 1982, 117-133.

MacQueen, J.B. "Some Methods for Classification and Analysis of Multivariate Observations." In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. Vol. 1. Berkeley, CA: University of California Press, 1967, 281-288.

McAuley, J. "Machine Grouping for Efficient Production." The Production Engineer, vol. 51, no. 2, 1972, 53-57.

McCormick, W.T., P.J. Schweitzer, and T.W. White. "Problem Decomposition and Data Reorganization by a Clustering Technique." Operations Research, vol. 20, 1972, 993-1009.

Montgomery, D.C. Design and Analysis of Experiments. 2nd ed. New York: John Wiley & Sons, 1984.

Seifoddini, H., and P.M. Wolfe. "Application of the Similarity Coefficient Method in Group Technology." IIE Transactions, vol. 18, 1986, 271-277.

Suresh, N.C., and J.R. Meredith. "Achieving Factory Automation through Group Technology Principles." Journal of Operations Management, vol. 5, no. 2, 1985, 151-167.

Wemmerlov, U., and N.L. Hyer. "Procedures for the Part Family/Machine Group Identification Problem in Cellular Manufacturing." Journal of Operations Management, vol. 6, no. 2, 1986, 125-147.

Wu, H.L., R. Venugopal, and M.M. Barash. "Design of a Cellular Manufacturing System: A Syntactic Pattern Recognition Approach." Journal of Manufacturing Systems, vol. 5, 1986, 81-88.

APPENDIX

ALGORITHMS FOR SOLVING THE CF PROBLEM

Rank Order Clustering Algorithms

ROC/ROC

1. For each row i=1,2,...,M in the current part/machine matrix, read the entries as a binary word, [[SIGMA].sub.j=1,...,N] [2.sup.N-j] [a.sub.ij] (the leftmost column is the most significant digit). Reorder the rows in decreasing binary word value. In the case of ties, keep the original order.

2. For each column j = 1,2,...,N in the current part/machine matrix, read the entries as a binary word, [[SIGMA].sub.i=1,...,M] [2.sup.M-i] [a.sub.ij] (the topmost row is the most significant digit). Reorder the columns in decreasing binary word value. In the case of ties, keep the original order.

3. If the new part/machine matrix is unchanged from the last part/machine matrix, then stop. Otherwise go to step 1.
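
To make the procedure concrete, the following is a minimal Python/NumPy sketch of these three steps (not from the paper; the function name and return values are illustrative). It uses the weights given above, column j carrying weight $2^{j-1}$ and row i weight $2^{i-1}$, and a stable sort so that ties keep their original order.

```python
import numpy as np

def roc(A):
    # ROC/ROC: alternately sort rows and columns by decreasing binary-word
    # value until the matrix stops changing. (Plain int64 weights overflow
    # past ~62 rows or columns; arbitrary-precision weights would be needed
    # beyond that.)
    A = np.asarray(A)
    M, N = A.shape
    row_perm, col_perm = np.arange(M), np.arange(N)
    while True:
        r = np.argsort(-(A @ (2 ** np.arange(N))), kind="stable")   # step 1
        B = A[r]
        c = np.argsort(-((2 ** np.arange(M)) @ B), kind="stable")   # step 2
        B = B[:, c]
        row_perm, col_perm = row_perm[r], col_perm[c]
        if np.array_equal(B, A):                                    # step 3: stop
            return B, row_perm, col_perm
        A = B
```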

Similarity Coefficient Algorithms

SC/ROC

1. Compute the similarity matrix (containing the similarity coefficients for all pairs of machines). Use equation 1.

2. Find the largest similarity coefficient value, [s.sub.i'j'], in the current similarity matrix. Group i' and j' (the two machines, the machine and machine cell, or the two machine cells) into a new machine cell k. Let k(k) = {k(i') U k(j')} be the ordered set of machines in machine cell k, where k(i) = {i} if i is a machine. Remove i' and j' from the similarity matrix. Add machine cell k to the similarity matrix.

3. Update the similarity matrix by computing the similarity coefficients for the new machine cell k using the single linkage clustering formula--equation 2.

4. When the similarity matrix consists of only one machine cell go to step 5. Otherwise go to step 2.

5. Form the part/machine matrix where the machines are ordered according to the elements in k(k) where k is the machine cell from step 4.

6. Rank order the parts in the part/machine matrix. That is, for each column j = 1,2,...,N in the current part/machine matrix, read the entries as a binary word, $\sum_{i=1}^{M} 2^{i-1} a_{ij}$. Reorder the columns in decreasing binary word value. In the case of ties, keep the original order.
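
Steps 1 through 5 amount to agglomerative clustering with single linkage over the machine similarity matrix. The sketch below is illustrative only: equations 1 and 2 appear earlier in the paper and are not reproduced in this appendix, so the Jaccard-type coefficient used by McAuley (1972) and the single-linkage update $s_{kt} = \max(s_{i't}, s_{j't})$ are assumed in their place.

```python
import numpy as np

def machine_similarity(A):
    # Pairwise machine similarity matrix; the Jaccard-type coefficient is
    # assumed here as a stand-in for equation 1.
    inter = (A @ A.T).astype(float)                  # parts visiting both machines
    ones = A.sum(axis=1)
    union = ones[:, None] + ones[None, :] - inter    # parts visiting either machine
    return np.divide(inter, union, out=np.zeros_like(inter), where=union > 0)

def single_linkage_ordering(A):
    # Steps 2-5: repeatedly merge the most similar pair of machine cells and
    # return the ordered machine list k(k) of the final single cell.
    S = machine_similarity(np.asarray(A, dtype=float))
    cells = {i: [i] for i in range(A.shape[0])}      # k(i) = {i} for each machine
    sim = {(i, j): S[i, j] for i in cells for j in cells if i < j}
    while len(cells) > 1:
        i, j = max(sim, key=sim.get)                 # step 2: largest s_{i'j'}
        for t in cells:                              # step 3: single-linkage update
            if t not in (i, j):
                it, jt = (min(i, t), max(i, t)), (min(j, t), max(j, t))
                sim[it] = max(sim[it], sim[jt])
        cells[i].extend(cells.pop(j))                # new cell k(k) = k(i') U k(j')
        sim = {p: v for p, v in sim.items() if j not in p}
    return next(iter(cells.values()))                # machine ordering for step 5
```

Step 6 then rank-orders the columns of the reordered matrix exactly as in step 2 of ROC/ROC.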

SC/SC

1. Order the machines.

1.1 Same as step 1 in SC/ROC.

1.2 Same as step 2 in SC/ROC.

1.3 Same as step 3 in SC/ROC.

1.4 Same as step 4 in SC/ROC.

1.5 The machine ordering is k(k) where k is the single machine cell from step 1.4.

2. Order the parts.

2.1 Compute the similarity matrix (containing the similarity coefficients for all pairs of parts). Use equation 3.

2.2 Find the largest similarity coefficient value, [s.sub.u'v'], in the current similarity matrix. Group u' and v' (the two parts, the part and part family, or the two part families) into a new part family k. Let k(k)={k(u') U k(v')} be the ordered set of parts in family k, where k(u)={u} if u is a part. Remove u' and v' from the similarity matrix. Add family k to the similarity matrix.

2.3 Update the similarity matrix by computing the similarity coefficients for the new part family k using the single linkage clustering formula--equation 2.

2.4 When the similarity matrix consists of only one part family go to step 2.5. Otherwise go to step 2.2.

2.5 The part ordering is k(k) where k is the single part family from step 2.4.

3. Steps 1.5 and 2.5 give the machine and part orderings for the final part/machine matrix.
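
Because steps 2.1 through 2.5 mirror steps 1.1 through 1.5 with parts in place of machines, an implementation can simply reuse the routine sketched under SC/ROC on the transposed matrix. Under the same assumed similarity and linkage formulas:

```python
machine_order = single_linkage_ordering(A)        # step 1: order the machines
part_order = single_linkage_ordering(A.T)         # step 2: same procedure on the parts
A_final = A[np.ix_(machine_order, part_order)]    # step 3: final part/machine matrix
```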

Modified Similarity Coefficient Algorithms

MSC/ROC

1. Same as step 1 in SC/ROC.

2. Same as step 2 in SC/ROC.

3. Same as step 3 in SC/ROC except that equation 4 replaces equation 2.

4. Same as step 4 in SC/ROC.

5. Same as step 5 in SC/ROC.

6. Same as step 6 in SC/ROC.

MSC/MSC

1.1 Same as step 1.1 in SC/SC.

1.2 Same as step 1.2 in SC/SC.

1.3 Same as step 1.3 in SC/SC except that equation 4 replaces equation 2.

1.4 Same as step 1.4 in SC/SC.

2.1 Same as step 2.1 in SC/SC.

2.2 Same as step 2.2 in SC/SC.

2.3 Same as step 2.3 in SC/SC except that equation 6 replaces equation 2.

2.4 Same as step 2.4 in SC/SC.

2.5 Same as step 2.5 in SC/SC.
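
The MSC variants change only the linkage update (equations 4 and 6, not reproduced in this appendix), so an implementation can expose that update as a parameter of the merge loop. The stand-in below shows the idea with a size-weighted average linkage; it is illustrative only and is not the paper's equation 4 or 6.

```python
def weighted_average_linkage(s_it, s_jt, n_i, n_j):
    # Illustrative stand-in: similarity of the merged cell {i', j'} to cell t
    # as a size-weighted average of the two old similarities.
    return (n_i * s_it + n_j * s_jt) / (n_i + n_j)

# In the merge loop of single_linkage_ordering, the single-linkage line
#     sim[it] = max(sim[it], sim[jt])
# would become
#     sim[it] = weighted_average_linkage(sim[it], sim[jt], len(cells[i]), len(cells[j]))
```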

The Modified Rank Order Clustering Algorithm

MROC

0. Set k = 0, where k indexes the part/machine cells.

1. Increment k: k=k+1. Execute the ROC/ROC algorithm.

2. Identify the largest block of non-zero entries in the top left corner of the part/machine matrix from step 1. A block is a set of consecutive columns, beginning with the first column, in which every column consists of consecutive non-zero entries followed by consecutive zero entries. (See Figure A1.) Part/machine cell k consists of the machines and parts in the block.

3. Remove the columns corresponding to the parts in cell k from the part/machine matrix.

4. When no columns remain in the part/machine matrix then go to step 5. Otherwise go to step 1.

5. Compute the measure of association matrix for the part/machine cells. The measure of association between two cells i and j is the number of machines common to cells i and j divided by the number of machines in the smaller of the two cells.

6. Find the largest measure of association in the current matrix. Group cells i' and j' into a new cell k'. Remove i' and j' from the matrix. Add k' to the matrix.

7. Update the matrix by computing the measure of association for the new cell k'.

8. When the matrix consists of only one cell then go to step 9. Otherwise go to step 6.

9. Form the part/machine matrix where the machines and parts are ordered according to the machines and parts in the part/machine cell from step 8.
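
The distinctive operation in MROC is the block test of step 2; steps 5 through 8 then mirror the similarity-coefficient merge loop, with the measure of association in place of a similarity coefficient. A minimal sketch of the block test (illustrative names; the matrix is assumed to have already been processed by ROC/ROC):

```python
import numpy as np

def top_left_block(A):
    # Step 2 of MROC: scan columns left to right, keeping those whose ones
    # are consecutive from the top row; stop at the first column that breaks
    # the pattern. Returns the block's column and machine-row indices.
    block_cols, block_rows = [], set()
    for j in range(A.shape[1]):
        ones = np.flatnonzero(A[:, j])
        # the column must look like [1, ..., 1, 0, ..., 0]
        if len(ones) == 0 or ones[0] != 0 or ones[-1] != len(ones) - 1:
            break
        block_cols.append(j)
        block_rows.update(int(i) for i in ones)
    return block_cols, sorted(block_rows)
```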

The Ideal Seed Non-Hierarchical Clustering Algorithm

ISNC

Stage 1: Clustering Columns and Rows

1. Compute k, the maximum number of cells, from equation 7.

2. Choose k arbitrary seeds $S_\ell$, $\ell = 1,2,\dots,k$, where $S_\ell$ is a column vector from the part/machine matrix with elements $s_{i\ell}$, $i = 1,2,\dots,M$.

3. For all columns $C_v$, $v = 1,2,\dots,N$:

a) Compute the distance between the column and each seed, $D(C_v, S_\ell) = \sum_{i=1}^{M} (a_{iv} - s_{i\ell})^2$,

b) Determine the minimum $D(C_v, S_\ell)$,

c) Allocate column $v$ to cluster $\ell$,

d) Update the seed for cluster $\ell$ by $s_{i\ell} \leftarrow (s_{i\ell} + a_{iv})/2$ for $i = 1,2,\dots,M$.

4. Using the updated seeds $S_\ell$, $\ell = 1,2,\dots,k$, allocate columns to clusters by executing steps 3a, 3b and 3c for all columns $C_v$, $v = 1,2,\dots,N$.

5. Reduce k by eliminating clusters to which no parts have been allocated.

6. Repeat steps 2 to 5 for the machines (rows).

7. Iterate until the number of part clusters equals the number of machine clusters.

8. Reorder rows and columns in order of the cluster membership.
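
Steps 3 and 4 amount to a MacQueen-style allocation pass over the columns. A minimal sketch, assuming the squared Euclidean distance reconstructed above (names are illustrative; k from equation 7 is taken as given):

```python
import numpy as np

def allocate_columns(A, seeds, update_seeds=True):
    # One pass of ISNC stage 1: assign each part column to its nearest seed;
    # when update_seeds is True (step 3), drag the winning seed halfway
    # toward the column. Step 4 repeats the pass with updating turned off.
    labels = np.empty(A.shape[1], dtype=int)
    for v in range(A.shape[1]):
        d = ((seeds - A[:, v][:, None]) ** 2).sum(axis=0)   # D(C_v, S_l) for all l
        l = int(np.argmin(d))
        labels[v] = l
        if update_seeds:
            seeds[:, l] = (seeds[:, l] + A[:, v]) / 2       # step 3d
    return labels, seeds
```

Steps 5 through 7 then drop empty clusters and repeat the same pass on the rows of A until the part and machine cluster counts agree.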

Stage 2: Group Part Clusters and Machine Clusters

1a. Compute $F_i^\ell$ for each column cluster $\ell = 1,2,\dots,k$ not yet allocated, where $i = 1,2,\dots,k$ indexes the machine clusters and

$F_i^\ell = q\,\dfrac{e_i^\ell}{M_i N_\ell} + (1-q)\left[1 - \dfrac{e_i - e_i^\ell}{M_i\,(N - N_\ell)}\right]$

is a measure of part-machine clustering efficiency. Here $e_i$ is the number of ones in the $i$'th machine cluster; $e_i^\ell$ is the number of ones in the $\ell$'th block of the $i$'th machine cluster; $M_i$ is the number of machines in the $i$'th cluster; $N_\ell$ is the number of parts in the $\ell$'th cluster; $N$ is the total number of parts; and $q$ is a weighting factor (set to 0.5).

1b. Determine the maximum $F_i^\ell$. Allocate part cluster $\ell$ to machine cluster $i$.

2. Reorder the columns according to the new order of clusters.
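
With $F_i^\ell$ reconstructed as above, the stage 2 computation is a one-line function of the bookkeeping counts (a sketch with illustrative argument names):

```python
def clustering_efficiency(e_i, e_il, M_i, N_l, N, q=0.5):
    # F_i^l: q weighs the density of block l inside machine cluster i against
    # the sparsity of that cluster outside block l (q = 0.5 in the paper).
    return q * e_il / (M_i * N_l) + (1 - q) * (1 - (e_i - e_il) / (M_i * (N - N_l)))
```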

Stage 3: Clustering by Ideal Seeds

1. Generate ideal seeds for the current column clusters from $s_{i\ell} = 1$ if machine $i \in$ cluster $\ell$; otherwise $s_{i\ell} = 0$.

2. Execute step 3 of stage 1 omitting part d.

3. Repeat steps 1 and 2 for the current row clusters.
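
Generating the binary ideal seeds and re-allocating is then direct; a sketch under the same assumptions as the stage 1 code, with machine_labels[i] giving the cluster of machine i:

```python
import numpy as np

def ideal_seeds(machine_labels, k):
    # Stage 3, step 1: s_il = 1 exactly when machine i lies in cluster l.
    S = np.zeros((len(machine_labels), k))
    S[np.arange(len(machine_labels)), machine_labels] = 1.0
    return S

# Step 2 is step 3 of stage 1 with the seed update omitted:
# labels, _ = allocate_columns(A, ideal_seeds(machine_labels, k), update_seeds=False)
```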

Using Similarity Coefficients to Select Seeds for Clustering

SC-Seed

1. Use similarity coefficients to group parts into k groups.

1.1 Compute the similarity matrix (containing the similarity coefficients for all pairs of parts). Use equation 3.

1.2 Find the largest similarity coefficient value, [s.sub.u'v'], in the current similarity matrix. Group u' and v' (the two parts, the part and part family, or the two part families) into a new part family k. Let k(k)={k(u') U k(v')} be the ordered set of parts in family k, where k(u)={u} if u is a part. Remove u' and v' from the similarity matrix. Add family k to the similarity matrix.

1.3 Update the similarity matrix by computing the similarity coefficients for the new part family k using the single linkage clustering formula--equation 2.

1.4 Repeat steps 1.2 and 1.3 until the parts have been grouped into k distinct groups.

1.5 Calculate the seed $S_\ell$, $\ell = 1,2,\dots,k$, for each group. $S_\ell$ is a column vector with elements

$s_{i\ell} = \dfrac{\sum_{v \in \text{cluster } \ell} a_{iv}}{\text{number of parts in cluster } \ell}$, for $i = 1,2,\dots,M$.

2. Group all column vectors around the seeds.

2.1 Compute the distance between each column $C_v$, $v = 1,2,\dots,N$, and each seed, $D(C_v, S_\ell) = \sum_{i=1}^{M} (a_{iv} - s_{i\ell})^2$.

2.2 Determine the minimum $D(C_v, S_\ell)$ and allocate column $v$ to cluster $\ell$.

3. Group all row vectors around the corresponding row seeds.

3.1 Create $k$ row seeds $S_\ell$, $\ell = 1,2,\dots,k$, each with elements $s_{j\ell} = 1$ if part $j \in$ part cluster $\ell$ and $s_{j\ell} = 0$ otherwise, for $j = 1,2,\dots,N$.

3.2 Compute the distance between each row $R_v$, $v = 1,2,\dots,M$, and each seed, $D(R_v, S_\ell) = \sum_{j=1}^{N} (a_{vj} - s_{j\ell})^2$.

3.3 Determine the minimum $D(R_v, S_\ell)$. Allocate row $v$ to cluster $\ell$.

3.4 After all rows have been allocated to clusters, reorder the rows in each cluster in increasing order of $D(R_v, S_\ell)$.
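
Once the k part groups from steps 1.1 through 1.4 are in hand, steps 1.5 through 3.3 condense to a few array operations. A minimal sketch (illustrative; part_groups is assumed to come from the single-linkage merge, and the squared Euclidean distance is again assumed for the omitted expressions):

```python
import numpy as np

def sc_seed_assign(A, part_groups):
    # Steps 1.5-3.3 of SC-Seed: average each part group's columns into a seed,
    # assign every part to its nearest seed, then build binary row seeds from
    # the part assignment and assign every machine likewise.
    M, N = A.shape
    k = len(part_groups)
    seeds = np.column_stack([A[:, g].mean(axis=1) for g in part_groups])      # step 1.5
    part_label = np.array([np.argmin(((seeds - A[:, v][:, None]) ** 2).sum(axis=0))
                           for v in range(N)])                                # steps 2.1-2.2
    row_seeds = (part_label[None, :] == np.arange(k)[:, None]).astype(float)  # step 3.1
    mach_label = np.array([np.argmin(((row_seeds - A[i][None, :]) ** 2).sum(axis=1))
                           for i in range(M)])                                # steps 3.2-3.3
    return part_label, mach_label
```

Step 3.4's final ordering is then a stable sort of the rows on (cluster, distance to the cluster's seed).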

Bond Energy

1. Arbitrarily select one of the part vectors and place it in the first position. Set $i = 1$.

2. Try placing each of the remaining $N - i$ part vectors individually in each of the positions $1, 2, \dots, i+1$ and compute the bond energy

$\mathrm{BE} = \sum_{u=1}^{M} \sum_{v=1}^{N} a_{uv}\,(a_{u,v-1} + a_{u,v+1})$, where $a_{u,0} = a_{u,N+1} = 0$.

Place the part vector that gives the largest BE in its best position. Increment $i$ by 1 and repeat until $i = N$.

3. When all the columns have been ordered, repeat steps 1 and 2 for the machine vectors.
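
Under the bond energy expression reconstructed above (horizontal bonds only, since those are all a column rearrangement can change), the greedy placement reads as follows. This is a sketch, not the authors' code; order_columns is an illustrative name.

```python
import numpy as np

def bond_energy(A):
    # Sum of a_uv * (a_u,v-1 + a_u,v+1) over the matrix, i.e. twice the sum
    # of products of horizontally adjacent entries.
    return 2 * int((A[:, :-1] * A[:, 1:]).sum())

def order_columns(A):
    # Steps 1-2: start from an arbitrary column, then repeatedly insert the
    # (remaining column, position) pair that maximizes the bond energy.
    remaining = list(range(A.shape[1]))
    placed = [remaining.pop(0)]                    # step 1: arbitrary first column
    while remaining:
        _, c, p = max((bond_energy(A[:, placed[:p] + [c] + placed[p:]]), c, p)
                      for c in remaining for p in range(len(placed) + 1))
        placed.insert(p, c)                        # step 2: best vector, best slot
        remaining.remove(c)
    return placed

# Step 3 repeats the same procedure on the rows: order_columns(A.T).
```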