
This paper explores the use of Data Envelopment Analysis (DEA) to determine university peer groups.

METHODOLOGY

The extended Banker, Charnes, and Cooper (BCC) Data Envelopment Analysis model (1984) maximizes relative efficiency, defined as a set of weighted outputs divided by a set of weighted inputs, and requires that each efficiency measure be less than or equal to unity. The basic model assumes constant returns to scale (CRS); the BCC extension easily permits variable returns to scale (VRS) by requiring that the dual prices of the decision making units (DMUs) sum to unity. Each feasible model solution has one dominant DMU, whose scaled output dominates all others in its contribution to efficient output. This dominant DMU has unitary efficiency and therefore lies on the frontier. It is possible for more than one DMU to be equally dominant; in practice, however, such occurrences should be rare.
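The efficiency measure described above reduces to a small linear program per DMU. As a minimal sketch (not the authors' code), the following solves the input-oriented envelopment form with SciPy on hypothetical data; the `vrs` flag adds the convexity constraint that makes the dual prices sum to unity, converting the CRS model into the BCC/VRS model.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o, vrs=False):
    """Input-oriented envelopment-form efficiency of DMU o.
    X: (n, m) array of inputs; Y: (n, s) array of outputs.
    Returns theta, with theta = 1 for DMUs on the frontier."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_in = np.c_[-X[o], X.T]                          # sum_j lam_j x_ij <= theta x_io
    A_out = np.c_[np.zeros(s), -Y.T]                  # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    # BCC/VRS: require the dual prices (lambdas) to sum to unity
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1) if vrs else None
    b_eq = [1.0] if vrs else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Hypothetical data: DMU 0 produces the same output as DMU 1 with half the input.
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
print(dea_efficiency(X, Y, 0))   # 1.0 under CRS (efficient)
print(dea_efficiency(X, Y, 1))   # 0.5 under CRS
```

Under CRS the second DMU's efficiency is 0.5 because a half-scale copy of DMU 0 would produce its output with half its input.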

The method proposed in this paper is a minor extension of the technique of Andersen and Petersen (1993). Repeatedly collapsing the frontier permits the evaluation of new dominant DMUs on each new frontier curve. The ranking process may encounter an infeasible solution; this occurs in the standard BCC model when a set of DMUs has zero values for a particular output or input. Similar infeasible solutions occur under variable returns to scale.

[ILLUSTRATION OMITTED]

The example presented in this section illustrates how collapsing the efficiency frontier produces a rank-ordered list of peers. Figure 1 illustrates a situation with a fixed level of one input and variable amounts of two outputs. The goal is for each DMU to produce as much of Output 1 and Output 2 as possible with the fixed input. The target entity is DMU d. Three DMUs, a, b, and c, define the frontier region and are considered efficient, even though they provide differing amounts of Output 1 and Output 2. The input and output weights for DMU a differ from the weights for DMU b. DMU d is not efficient and should produce at level d' in order to obtain unitary efficiency. In this case, a linear combination of DMU a (vector [bar.oa]) and DMU b (vector [bar.ob]) gives the optimal output d' (vector [bar.od']) using the same or a lower level of input. Vector [bar.ox] represents the proportion of vector [bar.ob] used in the solution; similarly, vector [bar.xd'] represents the proportion of vector [bar.oa] used. Since vector [bar.ox] is longer than vector [bar.xd'], we conclude that DMU b is dominant. The dual price of each DMU determines its corresponding contribution toward d', as prescribed in the maximal output vector definition.

The ranking procedure follows a step-by-step elimination of dominant DMUs. Since DMU b is dominant, it is eliminated from the frontier, and a new frontier (a-d-c) is constructed as shown in Figure 2. This time DMU d, our target DMU, is efficient and is the dominant DMU. After DMU d is eliminated, the efficiency frontier collapses to (a-c). The focus remains on producing the same output as DMU d; at this point, a higher level of input is required by a and c to meet d's output. Subsequent iterations remove DMUs a and c, leaving only DMUs e and f. Since neither DMU e nor DMU f produces any Output 2, there is no feasible solution, and these two DMUs are tied at the bottom of the list. In this example the ranking results are b-1, d-2, a-3, c-4, e-5.5, f-5.5.
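The collapsing-frontier ranking can be sketched in code. This is an illustration, not the authors' implementation: the coordinates are hypothetical values chosen to mimic the geometry of Figures 1 and 2 (one fixed input, two outputs), the model is an output-oriented CRS linear program solved with SciPy, and the "dominant" DMU at each step is taken to be the one with the largest dual price (lambda), as the text describes.

```python
import numpy as np
from scipy.optimize import linprog

def rank_peers(names, X, Y, target):
    """Rank DMUs by iteratively collapsing the efficiency frontier
    (after Andersen & Petersen, 1993).  At each step an output-oriented
    CRS model is solved for the target's output bundle; the DMU with the
    largest dual price (lambda) is dominant, receives the next rank, and
    is removed.  When no feasible expansion remains, the leftover DMUs
    share the average of the remaining rank positions."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = len(names)
    x_t, y_t = X[names.index(target)], Y[names.index(target)]
    active = list(range(n))
    ranks, next_rank = {}, 1
    while active:
        k = len(active)
        # Variables: [phi, lambda_1..k]; maximize phi -> minimize -phi.
        c = np.r_[-1.0, np.zeros(k)]
        # The composite unit may not use more input than the target...
        A_in = np.c_[np.zeros((X.shape[1], 1)), X[active].T]
        # ...and must cover phi times the target's outputs.
        A_out = np.c_[y_t[:, None], -Y[active].T]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[x_t, np.zeros(Y.shape[1])],
                      bounds=[(0, None)] * (1 + k))
        if not res.success or res.x[0] < 1e-9:
            break                      # frontier has collapsed: infeasible
        dominant = active[int(np.argmax(res.x[1:]))]
        ranks[names[dominant]] = next_rank
        next_rank += 1
        active.remove(dominant)
    tie = (next_rank + n) / 2.0        # average of the unassigned positions
    for j in active:
        ranks[names[j]] = tie
    return ranks

# Hypothetical layout resembling Figure 1: one fixed input, two outputs;
# e and f produce no Output 2, so the final step is infeasible.
names = ["a", "b", "c", "d", "e", "f"]
ranks = rank_peers(names, X=[[1.0]] * 6,
                   Y=[[1, 4], [3, 3], [4, 1], [2.2, 2.9], [1.5, 0], [1, 0]],
                   target="d")
print(ranks)   # reproduces the ordering b-1, d-2, a-3, c-4, e-5.5, f-5.5
```

With these coordinates the composite for d draws more heavily on b than on a, so b is eliminated first, d becomes a frontier vertex on the next pass, and e and f tie once Output 2 can no longer be covered.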

UNIVERSITY PEER RANKING

Detailed information for this university ranking was drawn from the 1994-95 Integrated Postsecondary Education Data System (IPEDS) established by the National Center for Education Statistics. The IPEDS universe includes 10,403 postsecondary institutions in the United States and its outlying areas, with hundreds of variables for each institution. A limited set of these institutions was selected for the study.

Institutions selected from IPEDS offer a full range of baccalaureate programs and are committed to graduate education through the master's degree. They award 40 or more master's degrees annually in three or more disciplines and do not offer any doctoral programs. They report library holdings, operate on a semester basis, have student dormitories, do not have hospital revenues or expenditures, and are publicly funded. These selection criteria resulted in 49 universities and colleges.

Three input variables were identified as significant and consistent measures across all the selected institutions. The first is Total Current Fund Revenues, which includes all private or endowment income and all public appropriations. The second and third are the total number of faculty and the total number of full-time-equivalent undergraduate students.

[ILLUSTRATION OMITTED]

Four output variables were chosen as significant quantity and quality measures of institutional goals: the number of baccalaureate degrees conferred, the number of accredited degree programs, the number of volumes held in the library at the end of the fiscal year, and the number of tenured faculty. Conceptually, the use of levels (e.g., number of baccalaureate degrees conferred) has a far superior interpretation to the use of rates (e.g., graduation rates). If constant returns to scale are assumed, a doubling of inputs doubles the number of baccalaureate degrees conferred, whereas the graduation rate does not double but remains constant. The quality of graduates is also a concern for institutions. Tenured faculty, library holdings, and accredited programs lend quality to an institution and, hopefully, confer quality on its graduates.
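The levels-versus-rates point is simple arithmetic; with hypothetical figures:

```python
# Hypothetical institution: under CRS, doubling inputs doubles the
# level measure (degrees conferred) but leaves the rate unchanged.
students, degrees = 1000, 250
students_2x, degrees_2x = 2 * students, 2 * degrees
print(degrees_2x)                                       # the level doubles: 500
print(degrees_2x / students_2x == degrees / students)   # the rate is constant: True
```

This is why levels, not rates, are the natural output variables for a returns-to-scale model.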

Slippery Rock University of Pennsylvania was chosen as the target institution. Two rankings were determined, one assuming constant returns to scale and one assuming variable returns to scale. The following table lists the top eight DMUs from each method. The CRS method produced a ranking of all forty-nine institutions; the VRS method ranked only eleven institutions before becoming infeasible.

At first glance, the CRS model does an excellent job of selecting institutions with goals and resource utilization similar to Slippery Rock University's. Among the first eight in the ranking, five are from Pennsylvania, two from New York, and one from Massachusetts, an obvious geographic similarity and, most likely, a similar political environment. Lower-ranked institutions show less adherence to Slippery Rock's goals. The VRS model is not as consistent, though the CRS and VRS rankings list the same two schools in the first two positions.

Constant Returns to Scale Ranking (CRS)

1 Bloomsburg Univ. of Pennsylvania
2 Slippery Rock Univ. of Pennsylvania
3 West Chester Univ. of Pennsylvania
4 Fitchburg State College, MA
5 California Univ. of Pennsylvania
6 Kutztown Univ. of Pennsylvania
7 SUNY College at Oneonta
8 SUNY College at Oswego

Variable Returns to Scale Ranking (VRS)

1 Slippery Rock Univ. of Pennsylvania
2 Bloomsburg Univ. of Pennsylvania
3 California State University-Fresno
4 James Madison University, VA
5 Kutztown Univ. of Pennsylvania
6 SUNY College at Buffalo
7 West Chester Univ. of Pennsylvania
8 Southwest Missouri State Univ.

CONCLUSION

The constant returns to scale DEA model appears to accurately select a group of peers that utilize resources in a similar fashion to achieve similar goals and objectives. The VRS model, however, does not seem to select a reasonable set of peers. As with any model, the selection criteria of institutions, selection of resource inputs, and selection of output goals all influence the results.

The exact placement of a peer in the ranking process is subject to interpretation. An institution may appear more efficient because of a one-time large drop in funding. However, large institutions, such as those used in this study, do not experience great fluctuations in enrollment, funding, or staffing.

The ranking process presented here does not create a list of the best colleges and universities. Rather, it is designed to create peer groupings.

REFERENCES

Andersen, P., & Petersen, N. (1993). A procedure for ranking efficient units in data envelopment analysis. Management Science, 39(10), 1261-1264.

Banker, R.D., Charnes, A., & Cooper, W. (1984). Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science, 30(9), 1078-1092.

Stolp, C. (1990). Strengths and weaknesses of data envelopment analysis: An urban and regional perspective. Computers, Environment and Urban Systems, 14, 103-116.

1994-95 Integrated Postsecondary Education Data System (IPEDS) [Machine-readable data file]. National Center for Education Statistics. Washington, D.C.

Ronald T. Konecny, University of Nebraska at Kearney

Sandra Lebsack, University of Nebraska at Kearney
COPYRIGHT 1997 The DreamCatchers Group, LLC

Published in the Academy of Educational Leadership Journal, May 1, 1997.