
Measuring efficiency in higher education: an empirical study using a bootstrapped data envelopment analysis.

Abstract This paper applies data envelopment analysis (DEA) to assess technical efficiency in a large public university. Particular attention is paid to its two main activities, teaching and research, and to two large groups, the Science and Technology (ST) sector and the Humanity and Social Science (HSS) sector. The findings, based on data from 2005 to 2009, suggest that the ST sector is more efficient in terms of quality of research than the HSS sector, which instead achieves higher efficiency in teaching activities. The efficiency estimates strongly depend on the output specification: the use of several quality proxies, such as three alternative research indices and two student questionnaire-based teaching indices, reduces performance and its differentials for both research and teaching activities. A bootstrap technique is also used to provide confidence intervals for efficiency scores and to obtain bias-corrected estimates. The Malmquist index is calculated to measure changes in productivity.

Keywords Teaching and research efficiency * Data envelopment analysis * Quality diversification * Malmquist index * Bootstrap techniques

JEL * I21 * I23 * C14 * C67

Introduction

Quality never ceases to be a key issue in the context of higher education. Substantial reforms have taken place in recent years in order to make "higher education not just bigger but also better" (Giannakou 2006, p. 12). The move towards higher standards has also been pushed because in many countries a substantial part of the funding received by universities is public. As an analysis carried out by the Organization for Economic Co-operation and Development (OECD) remarked, "formulas to allocate public funds to higher education institutions are now related to performance indicators such as graduation or completion rates." Moreover, "research funding has also increasingly been allocated to specific projects through competitive processes rather than block grants" and has been linked "to assessments of research quality" (OECD 2008, p. 49). In other words, universities are financed according to their performance, in order to achieve higher research standards and to promote academic excellence. Following this direction, the Italian higher education system was reformed. (1) Both quantitative and qualitative indicators were developed to accurately evaluate the management of public universities, their productivity in research and teaching, and the overall success of their administration. Borrowing an expression from an OECD report (2008, p. 50), which also describes the situation the Italian universities started to deal with, "higher education institutions have become increasingly accountable for their use of public funds and are required to demonstrate value for money." Clearly, efficiency cannot be the only goal of higher education. Equity considerations play an important role as well. The extent to which the expansion of post-compulsory education has enhanced equality of access, and the distribution of the costs and benefits of public spending on post-compulsory education, are important issues to take into account.
Indeed, "over the past 30 years participation rates in post-compulsory education have increased rapidly. This is reflected in the higher attainment rates of people. A question arises over whether this overall expansion in educational opportunity has been equitably shared" (Blondal et al. 2002, p. 38). In fact, some efficiency loss may be traded off against equity gain, depending on political preferences. Indeed, given the social outcomes of higher education and its contribution to social mobility and equity "efforts to improve student completion and institutional productivity must be carefully undertaken so that they do not further inhibit access and success for sub-populations already underrepresented in higher education" (Tremblay et al. 2012, p. 31).

A growing number of researchers have analysed the efficiency of higher education institutions (HEIs) through non-parametric methods. (2) Using these approaches, the vast majority of the literature has traditionally focused on two types of efficiency evaluation. One type uses data at the institutional level, comparing the efficiency of different universities (see Agasisti and Dal Bianco 2009; Bonaccorsi et al. 2006; Agasisti and Johnes 2010 for an analysis of Italian universities). The other type focuses on measuring efficiency at the department level, both across different HEIs (Madden et al. 1997; Thursby 2000; Johnes and Johnes 1993, 1995; Leitner et al. 2007) and, more significantly for the context of our paper, within the same university (Halkos et al. 2012; Buzzigoli et al. 2010; Tauer et al. 2007; Kao and Hung 2008; Tyagi et al. 2009; Koksal and Nalcaci 2006). (3)

In this paper data envelopment analysis (DEA) is applied to evaluate departments and faculties at the University of Salerno (a large public university in the South of Italy) and to assess, respectively, their research and teaching activities. Some clarification of why teaching and research performances are assessed separately is needed. The question of whether they are joint or separate products is not new in the literature (Chizmar and Zak 1983, 1984). In general, the fact that a university typically engages in both teaching and research suggests that there might be economies from joint production (i.e., economies of scope). In Italy, however, until a reform was made, (4) faculties took care of teaching activities and departments took care of research. In other words, the university undertakes both activities in the same institution, but pursues its institutional teaching and research goals through different structures, namely faculties and departments (see the Research design section for more details on their characteristics and structures).

Beyond the many DEAs of universities already carried out, our contribution brings new evidence on the importance of evaluating the efficiency of different units operating within a tertiary education institution. Such evaluation processes could help university managers shed some light on the effectiveness of the various entities within the university as well as better allocate both human and financial resources. In addition to the standard inputs and outputs already used, we rely on three alternative indices as a measure of research quality and on a student questionnaire-based evaluation as a measure of teaching quality. Specifically, in order to increase the homogeneity of the decision making units (5) (DMUs), departments and faculties have been divided according to their characteristics into two large groups, namely the Science and Technology (ST) sector and the Humanity and Social Science (HSS) sector. Moreover, we contribute to the literature using a bootstrap technique in order to provide more accurate estimates and confidence intervals. Finally, in order to measure changes in productivity across time, a Malmquist index approach has been used. The study shows that the ST sector is more efficient in terms of quality of research than the HSS sector, which in turn achieves higher efficiency in teaching activities. Several output measures are used to highlight the sensitivity of the efficiency analysis, showing that the results strongly depend on the output specification. Bootstrapped bias-corrected estimates are obtained, pointing out the sensitivity of the efficiency scores to sampling variations of the estimated frontier. Finally, the Malmquist index is calculated, revealing that the change in productivity is due to a mixed pattern of technological change (i.e., an outwardly shifting production frontier) and changes in technical efficiency.

Methodology

In the literature, the main methods used to calculate efficiency are non-parametric and parametric. The non-parametric methods, such as DEA and FDH (Free Disposal Hull), proposed by Charnes et al. (1978) and building on the original contribution of Farrell (1957), are based on deterministic frontier models (see also Cazals et al. 2002). The DEA model, extended by Banker et al. (1984), is especially adequate for evaluating the efficiency of non-profit entities that operate outside the market, since for them performance indicators such as income and profitability do not work satisfactorily (for more theoretical details on DEA see Coelli et al. 1998; Cooper et al. 2004; Thanassoulis 2001). Parametric approaches, instead, such as the Stochastic Frontier Approach (SFA), the Distribution-Free Approach (DFA) and the Thick Frontier Approach (TFA), are based on stochastic frontier models (Aigner et al. 1977). This study adopts a non-parametric method, DEA, because it does not require specifying a theoretical production frontier, although it does require certain a priori hypotheses about the technology (free disposability, convexity, constant or variable returns to scale). (6) Moreover, the multiple input-output nature of production in HEIs makes DEA, rather than SFA, a more reliable technique in this context. Indeed, the DEA approach allows us to overcome some well-known problems concerning the computation of technical efficiency in a parametric multi input-output set-up (Greene 1980). A disadvantage of this technique, however, is that it is very sensitive to the presence of outliers: deviations from the efficient frontier generated by extreme values are easily confused with deviations caused by inefficiencies in the production process. (7)

In this paper, we focus on technical efficiency (8) using an output-oriented DEA method with variable returns to scale (VRS). The DEA-VRS model is probably the most reliable in our case, as suggested by Agasisti (2011, p. 205), who argued that the assumption of constant returns to scale (CRS) is restrictive because it is reasonable "that the dimension (number of students, amount of resources, etc.) plays a major role in affecting the efficiency", especially if we consider DMUs achieving pre-determined outputs, given certain inputs. Efficiency estimates have been obtained through an output-oriented model, following Agasisti and Dal Bianco (2009, p. 487), who claimed that "as Italian universities are increasingly concerned with reducing the length of studies, and improving the number of graduates, in order to compete for public resources, the output-oriented model appears the most suitable to analyse higher education teaching efficiency." Moreover, output-oriented models seem to be particularly appropriate in the context of tertiary education because the resources used can be considered fixed and universities cannot influence, at least in the short run, the available human, financial and physical capital (Bonaccorsi et al. 2006). Consequently, we present an output-oriented version of the model.
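The output-oriented VRS model just described reduces, for each DMU, to a small linear program: expand all outputs radially by a factor phi while a convex combination of the observed DMUs still uses no more of each input. The sketch below is a minimal, generic implementation using scipy, for illustration only; the paper itself relies on the FEAR and Benchmarking packages for R.

```python
import numpy as np
from scipy.optimize import linprog

def dea_vrs_output(X, Y):
    """Output-oriented VRS DEA (BCC). X: (n, m) inputs, Y: (n, s) outputs.
    Returns phi >= 1 per DMU; phi == 1 means the DMU is on the frontier."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # decision variables: [phi, lambda_1, ..., lambda_n]; maximise phi
        c = np.r_[-1.0, np.zeros(n)]
        # inputs: sum_j lambda_j * x_ij <= x_io
        A_in = np.hstack([np.zeros((m, 1)), X.T])
        # outputs: phi * y_ro - sum_j lambda_j * y_rj <= 0
        A_out = np.hstack([Y[o][:, None], -Y.T])
        # VRS convexity constraint: sum_j lambda_j = 1
        A_eq = np.r_[0.0, np.ones(n)][None, :]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[X[o], np.zeros(s)],
                      A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n + 1))
        scores[o] = res.x[0]
    return scores
```

With one input and one output, `dea_vrs_output([[2], [4], [6], [4]], [[1], [3], [4], [2]])` scores the fourth DMU at 1.5: its output could be expanded by 50% before reaching the frontier spanned by the second DMU.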

Research Design

Selection of Decision Making Units (DMUs)

This study focuses on measuring both the research and the teaching performances of departments and faculties at the University of Salerno. With respect to research performances, one type of DMU has been considered, the departments. According to university regulations, departments have the role of promoting and developing research. They also gather professors and researchers according to the scientific research activities they are in charge of in the higher education institution. (9) It is debated in the literature whether departments within the same university can be treated as homogeneous. We follow the argument of Tyagi et al. (2009), according to which departments inside a university may be considered homogeneous because they conduct similar activities and pursue similar goals. With respect to teaching performance, one type of DMU has been taken into account, namely the faculties. They group different departments according to similarities. They are organized into different subject areas, each offering a number of degree courses, and aim to coordinate teaching activities. (10) We again treat faculties within the same university as homogeneous because they conduct similar activities, using both academic and non-academic staff for teaching purposes (Tyagi et al. 2009).

Moreover, in order to consider more carefully the homogeneity issue and to explore whether the subject mix might adequately affect and explain the efficiency differentials between DMUs, we divide both departments and faculties, according to their characteristics, into two large groups, namely the ST sector and the HSS sector. (11)

Inputs

The first input (used for measuring the efficiency of both departments and faculties) is what we call the equivalent personnel (EP), namely the total number of academic and non-academic staff (12) available to departments and faculties, respectively, for research and teaching activities. The academic staff has been categorized as professors, associate professors, researchers and assistant professors. We assign weights to each category according to salary and to the number of institutional, educational, and research duties the academic staff has to deal with (Madden et al. 1997), assuming that a professor is expected to produce more research and teaching work than an associate professor, and so on (Carrington et al. 2005). Similarly to Halkos et al. (2012), (13) we use the following aggregate measure of human capital input: (14)

EP = 1*professors + 0.8*associate professors + 0.6*researchers + 0.4*assistant professors + 0.2*non-academic staff (1)
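Eq. (1) is simply a seniority-weighted headcount. A minimal sketch (the category names and the staff numbers in the example are ours; the weights are the paper's):

```python
# Weights from Eq. (1): seniority-weighted headcount ("equivalent personnel").
EP_WEIGHTS = {
    "professors": 1.0,
    "associate_professors": 0.8,
    "researchers": 0.6,
    "assistant_professors": 0.4,
    "non_academic_staff": 0.2,
}

def equivalent_personnel(staff: dict) -> float:
    """staff maps each category to its headcount; returns the EP input."""
    return sum(EP_WEIGHTS[category] * count for category, count in staff.items())
```

For example, a hypothetical department with 10 professors, 5 associate professors, 4 researchers, 2 assistant professors and 5 non-academic staff has EP = 10 + 4 + 2.4 + 0.8 + 1 = 18.2.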

The second input (measuring the efficiency of departments) is the total amount of financial resources the department spends on research activities (ER). The third input (measuring the efficiency of faculties) is the total amount of financial resources the faculty allocates for teaching activities (ET). The fourth input (measuring the efficiency of faculties) is the total number of students enrolled (STU) (15) as a measure of faculty teaching burden.

Outputs

The first output (measuring the efficiency of departments) is the number of publications (NP). We follow Harris (1988) and Halkos et al. (2012), including all articles in refereed journals, (16) in order to deal with the important issue of how many and which journals to include. Publications are categorized as articles published in international journals, articles published in national journals, international books and national books. We assign weights (17) to each category according to the importance of the publication. We apply a procedure similar to the one proposed by Tyagi et al. (2009), using the following aggregate index of publications: (18)

NP = 1*articles in international journals + 0.75*articles in national journals + 0.5*articles in international books + 0.25*articles in national books (2)

The second output (measuring the efficiency of departments) is the total external research funding obtained by the university (FR). (19) It is somewhat controversial, though, whether it should be used as an input or as an output. (20) In agreement with most of the literature (among others Bonaccorsi et al. 2006; Agasisti et al. 2011; Tomkins and Green 1988), and following Buzzigoli et al. (2010), (21) we consider the amount of money received for financing research a good proxy for the value of the research and therefore an output.

The third output (measuring the efficiency of departments) is an alternative way of measuring scientific production. There are three different indices (defined in publications of "Nucleo di Valutazione" of the University of Salerno) which can be used, namely research productivity index (RPI), capacity of attracting resources index (CARI) and research productivity per cost of the academic staff index (22) (RPCASI).

The fourth output (measuring the efficiency of faculties) is the number of graduates weighted by their degree classification (NG), which captures both the quantity and the quality of teaching. (23) According to Catalano et al. (1993), the task assigned to universities is to produce graduates through the utilization and combination of different resources. Madden et al. (1997) used the number of graduates under the hypothesis that the higher the number of graduates, the higher the quality of teaching. (24)

The fifth output (measuring the efficiency of faculties) is represented by two indices calculated from questionnaires administered, respectively, to regular students (the student satisfaction index, SSI) and to students specifically preparing for their degree (the undergraduate satisfaction index, USI). The aim of the surveys was to collect students' opinions on the organization of faculties, on the facilities used (such as libraries, classrooms and computer rooms), and on the classes attended (such as the appeal of the topic studied and teacher quality). (25)

Specification of the Models

The use of different proxies for both research and teaching performances allows us to explore whether the results obtained are sensitive to the specification of the outputs used. For this reason, we implement different models (Appendix Table 1).

When we analyse the departments' performance in the benchmark model (Model 1a, Appendix Table 1), EP and the expenses for research (ER) (26) are used as inputs, while NP is used as output. Keeping the input side constant, we explore first whether FR and then the three research indices might represent an alternative way of measuring department performance in terms of research activities (Models 2a, 3a, 4a, 5a and 6a, Appendix Table 1). On the other hand, in order to analyse faculty performance, in the benchmark model (Model 1b, Appendix Table 1) EP, ET (27) and STU are used as inputs, while NG is used as output. We then use the quality teaching indices as outputs to informally test the reliability of the benchmark teaching output (Models 2b and 3b, Appendix Table 1).

The choice of inputs and outputs, in terms of both quantity (number of inputs and outputs) and quality (type of inputs and outputs), has many implications for the analysis. Unfortunately, as pointed out by Johnes and Johnes (1995, p. 307), "while statistical inference can be used as a means of judging whether or not a variable should be included as a regressor in a statistical analysis, DEA is not a statistical technique, and no such guide is available." However, the number of DMUs relative to the number of input and output performance measures must be large enough to obtain meaningful efficiency estimates. Dyson et al. (2001) claimed that the number of DMUs must be at least 2*m*s, where m is the number of inputs and s the number of outputs. Following Halkos et al. (2012), we use this approach. In our study the required minimum is, at most, 2*2*3 = 12 DMUs when departments are taken into account and 2*3*1 = 6 when faculties are analyzed, indicating that the number of inputs/outputs used is appropriate. (28)
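The Dyson et al. (2001) rule of thumb used above is straightforward to operationalise (a sketch; the function name is ours):

```python
def dyson_min_dmus(m_inputs: int, s_outputs: int) -> int:
    """Dyson et al. (2001) rule of thumb: a DEA should have at least
    2 * m * s decision making units to yield meaningful efficiency scores."""
    return 2 * m_inputs * s_outputs
```

The most demanding department model in the paper (2 inputs, 3 outputs) therefore requires at least `dyson_min_dmus(2, 3) == 12` DMUs, and the faculty models (3 inputs, 1 output) at least 6.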

Data

The dataset used in this paper was constructed using annual publications of the "Nucleo di Valutazione" of the University of Salerno and data which are publicly available on the National Committee for the Evaluation of the University Sector (CNVSU) website (http://www.cnvsu.it). Moreover, for the evaluation of teaching activity we use two CNVSU questionnaires which anonymously collect information and opinions about teaching activity from regular students and from students who are preparing for their degrees. For more details about the scheme of the questionnaires and the way the indices have been constructed, see the CNVSU website. The data refer to the period from 2005 to 2009. All financial data have been deflated to 2007 values using Retail Price Index (RPI) data from the National Institute of Statistics (http://www.istat.it). For more details, see the descriptive statistics in Tables 2 and 3 in the Appendix. (29) In estimating our DEA models, computing the Malmquist index and bootstrapping, we rely on two packages for the free software R (FEAR 1.13, Benchmarking 0.18).

The Empirical Evidence

DEA Efficiency Scores

The DEA method has been applied to estimate technical efficiency of departments (Models 1a, 2a, 3a, 4a, 5a and 6a) and faculties (Models 1b, 2b and 3b) within the University of Salerno over the period 2005-2009, taking into account that technology might change over time (i.e., estimates are carried out year by year). The efficiency estimates are presented in the Appendix (Tables 4 and 5) for both departments and faculties belonging to the HSS sector and ST sector and across time. (30)

The analysis of departments aims to capture the quality of research activity. Starting from the baseline model (Model 1a, Appendix Table 4), where NP is used as output, it is clearly evident that the HSS sector is less efficient than the ST sector. We then use funds obtained for research and the scientific productivity indices in order to capture the quality of output in an alternative way (see Models 2a, 3a, 4a, 5a and 6a, Appendix Table 4). The empirical evidence of Model 1a is confirmed by the results of Model 2a (where FR is used as output), although we note an increase in the efficiency estimates for both the ST and HSS sectors. Interestingly, the empirical evidence of Models 1a and 2a is only partly confirmed by the results of Models 3a, 4a and 5a (where RPI, CARI and RPCASI are used, respectively, as outputs), where the differentials between the ST and HSS sectors are reduced, as also shown by the results of Model 6a, where the three scientific productivity indices are used, at the same time, as outputs. This is particularly evident in Model 3a, where the weighted sum of the publications (RPI) takes into account whether departments belong to the ST or the HSS sector (i.e., departments belonging to the ST sector have a higher number of publications in international journals). (31) To be more precise, the results obtained in Model 5a look as good as those in Models 1a and 2a, giving credit to the research productivity per cost of the academic staff index (RPCASI) used as output. Given the political nature of university funding, this result suggests that the RPCASI output would seem to be as important as any of the other main output measures.

The analysis of faculties aims to capture the quality of teaching. Starting from the baseline model (Model 1b, Appendix Table 5), where the output is represented by NG, we find that the HSS sector is more efficient than the ST sector. (32) This evidence is partially confirmed using the satisfaction indices as output. Nevertheless, considering the SSI (Model 2b), we find that the HSS sector efficiency estimates (Appendix Table 5) are drastically lowered, even if the HSS sector maintains higher scores than the ST sector. Using the USI as output (Model 3b), instead, reduces the differentials between the ST and HSS sector, compared to the baseline model (Model 1b). (33)

Summing up, our evidence suggests that the efficiency estimates strongly depend on the output specification and the use of quality proxies reduces the performance and its differentials for both research and teaching activities. Moreover, it seems that the scientific sector has an impact in explaining efficiency differentials, confirming that "scientific areas tend to differ regarding their teaching and especially their research productivity" (Sarrico 2009, p. 290).

Efficiency Changes Over 2005-2009 Period: A Malmquist Analysis

We perform a productivity analysis using the Malmquist index (see Caves et al. 1982 for more technical details) to disentangle the changes in efficiency due both to pure efficiency improvements (or worsening) and to technological improvements (or worsening), (34) focusing on Models 1a, 2a and 6a and on Models 1b, 2b and 3b described in Appendix Table 1, over the 2005-2009 period. (35) We follow a generalized approach suggested by Fare et al. (1994).

Several issues could be addressed in the computation of the Malmquist indices of productivity growth over the period analyzed. The first one is the measurement of productivity change over the 2005/2006-2008/2009 period (TFPC): if TFPC > 1 productivity gains occur, while if TFPC < 1 productivity losses occur. The second one is the decomposition of changes in productivity into technical efficiency change (E) and technological change (TC), in order to analyse whether the productivity change is due, respectively, to changes in technical efficiency or to an outward shift in the production frontier. (36) The third issue is that E can be further decomposed, to identify the main source of improvement, into pure technical efficiency change (PEFC) and scale efficiency change (SC). (37)
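Under the Fare et al. (1994) approach these quantities are a few ratios of output distance functions evaluated against the period-t and period-t+1 frontiers. A sketch of the TFPC = E * TC decomposition (variable names are ours; the distance values in the usage example are made up):

```python
from math import sqrt

def malmquist_decomposition(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Fare et al. (1994) decomposition of the Malmquist index.
    d_a_b is the output distance function of period-b data measured
    against the period-a frontier. Returns (TFPC, E, TC), TFPC == E * TC."""
    E = d_t1_t1 / d_t_t                                # catch-up to the frontier
    TC = sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))   # shift of the frontier
    return E * TC, E, TC
```

For instance, `malmquist_decomposition(0.8, 1.1, 0.7, 0.95)` gives TFPC of about 1.37 with E of about 1.19 and TC of about 1.15, i.e., a productivity gain driven by both catch-up and frontier shift; the product E * TC always equals the usual geometric-mean form of the index.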

Starting from the analysis of departments, and considering the TFPC, the empirical evidence shows that for both the ST and HSS sectors there is an overall improvement in productivity (TFPC>1, Appendix Table 6). Given that the productivity change can be decomposed into E and TC, we can further establish whether most of the change was due to E (movement towards the frontier) or to TC (outward shift of the efficiency frontier). Take for instance Model 1a (Appendix Table 6): taking into account both sectors (ALL), there is a decrease in TFPC between 2005-2006 and 2008-2009 of around 50 percentage points (from 1.7954 to 1.2850). (38) It seems that in this specific case the fall in productivity was brought about predominantly by a decrease in E rather than by a shift in TC. Indeed, the empirical evidence shows that TC worsened by around 4 percentage points (from 1.0975 to 1.0580) while E decreased by almost 40 percentage points (from 1.6334 to 1.2146). This result is mainly driven by the HSS sector. Indeed, when we analyse the two sectors separately, we still find a rise in productivity of 30 percentage points (from 1.0653 to 1.3694) for the ST sector, while there is a slight fall in productivity of 0.7 percentage points (from 1.9857 to 1.9783) for the HSS sector. For the ST sector the increase in productivity seems to be due more to improvements in efficiency than to an outward shift of the frontier (E increases by around 35 percentage points from 0.9320 to 1.2860 and TC decreases by 8 percentage points from 1.1416 to 1.0649). For the HSS sector the sustained level of productivity is a consequence of technological change rather than technical efficiency change (E decreases by around 15 percentage points from 1.7664 to 1.6136 and TC increases by 13 percentage points from 1.1242 to 1.2561).
Decomposing E into pure technical (PEFC) and scale (SC) efficiency change, overall and for both sectors (Appendix Table 6), the results indicate that PEFC is higher than SC, suggesting that the major source of efficiency change is an improvement in pure technical efficiency rather than in scale efficiency.

Regarding the faculties (Appendix Table 7), the results still show an improvement in productivity (TFPC>1) for Models 2b and 3b. Considering the ST sector, except for Model 1b, where the change in technology seems to drive the improvement in productivity, (39) in Models 2b and 3b the sustained improvement in productivity seems to be more the result of changes in technical efficiency. (40) This mixed evidence is also observed in the HSS sector. Indeed, considering Model 2b, the main source of productivity growth is the improvement in technology, (41) while in Models 1b and 3b the change in technical efficiency explains the rise in productivity. (42) When we decompose E, the empirical evidence shows that PEFC and SC contribute equally to the technical efficiency change.

Overall, the estimates show a mixed pattern of positive (negative) technology change versus negative (positive) efficiency change. While technological change has been found to be the main source of productivity improvement (Flegg et al. 2004; Worthington and Lee 2007; Johnes 2008), in some cases the improvement in efficiency, rather than an outward shift of the frontier, largely drives the positive productivity change. It has to be mentioned, however, that the empirical estimation of this decomposition of the Malmquist productivity change index (Johnes 2008) should be treated with caution, since it mixes VRS and CRS efficiencies in the estimation of its components (Ray and Desli 1997). A possible interpretation of these results is that the university, in order to improve its research (i.e., departments) and teaching (i.e., faculties) performances, relies on technological changes, but without doing so at the price of technical efficiency. Among the most important sources of change in the production activity of universities are information technology and e-learning. As Johnes (2008) underlined, the increased use and application of technology might have positive effects on many aspects of university activities. For instance, information is more accessible to users (i.e., students), changing teaching and increasing administrative efficiency. Such policies should not, however, emphasize financial and individual outcomes over non-financial and social outcomes (i.e., equality of access is not always achieved). Given these social costs and consequences, efforts to improve student completion and institutional productivity must be carefully undertaken so that they do not further inhibit access and success for sub-populations already underrepresented in higher education.
The evidence of changes in technical efficiency indicates that most departments and faculties operate near the best-practice frontier, suggesting the adoption of management, teaching and research practices aimed at improving outputs.

Efficiency Bias Correction and Confidence Intervals Construction

The bootstrapping technique, introduced by Efron (1979) and Efron and Tibshirani (1993), is attractive for analysing the sensitivity of efficiency and productivity measures to sampling variation, and many researchers advocate it (Atkinson and Wilson 1995; Ferrier and Hirschberg 1997; Simar and Wilson 1998). Basically, bootstrapping is particularly useful when little or nothing is known about the underlying data generating process (DGP) for a sample of observations. In the higher education sector this method, which ascertains the precision of the estimates, is still very rare. Following Simar and Wilson (1998, 1999), bootstrapping has been used to calculate confidence intervals for efficiency scores. (43) Our evidence suggests the importance of using a bootstrapped DEA approach (confirming what has been found by Halkos et al. 2012). Indeed, the main results are confirmed, but a strong bias is found in our estimation (Appendix Tables 4 and 5), meaning that the efficiency scores calculated without bootstrapping might be over-estimated. Specifically, when the bias-corrected efficiency estimates for departments are taken into account, it is evident that the HSS sector is less efficient than the ST sector when the scientific productivity indices are used as alternative outputs (i.e., the reduction in the differentials underlined above is less accentuated).
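The idea behind the bias correction can be sketched naively: re-estimate the frontier on resampled DMUs, measure how far the bootstrap scores drift from the original ones, and subtract that drift. The sketch below is deliberately simplified and is not the paper's procedure: Simar and Wilson (1998) draw from a smoothed distribution of the scores precisely because naive resampling is inconsistent at the frontier. The toy one-input, one-output FDH-style scorer and all names are ours:

```python
import numpy as np

def fdh_output_scores(X_ref, Y_ref, X_eval, Y_eval):
    """Toy output-oriented FDH scorer (1 input, 1 output): phi is the best
    output attainable among reference DMUs using no more input, over y0."""
    scores = []
    for x0, y0 in zip(X_eval, Y_eval):
        feas = Y_ref[X_ref <= x0]
        best = max(feas.max(), y0) if feas.size else y0  # keep phi >= 1
        scores.append(best / y0)
    return np.array(scores)

def bootstrap_bias_correction(X, Y, score_fn, B=500, seed=0):
    """Naive resampling bootstrap for frontier bias correction (sketch only;
    Simar and Wilson (1998) use a smoothed bootstrap instead)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    theta_hat = score_fn(X, Y, X, Y)
    boot = np.empty((B, n))
    for b in range(B):
        idx = rng.integers(0, n, n)              # resample the reference set
        boot[b] = score_fn(X[idx], Y[idx], X, Y)
    bias = boot.mean(axis=0) - theta_hat
    return bias, theta_hat - bias                # bias-corrected scores
```

The sign of the bias is the point of the exercise: bootstrap frontiers lie inside the estimated one, so the corrected output-expansion scores are never smaller than the uncorrected ones, i.e., DMUs look at best as efficient as the plain scores suggest.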

Conclusion

This paper analyses the performances (i.e., efficiency) of departments and faculties at the University of Salerno over the 2005-2009 period. We apply a DEA approach in order to estimate efficiency scores using an output-oriented model under the assumption of variable returns to scale. First, we take into account that the tertiary education institution analyzed carries out its two main activities, teaching and research, through different structures, namely faculties and departments. Second, differently from most of the literature and in order to achieve a higher degree of homogeneity of the DMUs, departments and faculties have been divided, according to their characteristics, into the ST sector and the HSS sector. Third, we propose the use of different outputs: three alternative indices as measures of research quality as well as two student questionnaire-based evaluation indices as teaching performance measures. The empirical findings suggest that the ST sector is more efficient in terms of quality of research than the HSS sector, which instead achieves higher efficiency in teaching activities. This suggests the importance of taking into account differences in subject mix when measuring efficiency in higher education, in order to avoid distorted estimates (Sarrico et al. 2009). Our evidence also suggests that the efficiency estimates strongly depend on the output specification, because the use of quality proxies reduces performance and its differentials for both research and teaching activities. When research activities are taken into account, the results suggest that the number of publications and the funds for research seem to better capture scientific production within higher education institutions. On the other hand, when teaching activities are considered, the use of different output measures did narrow the distance between the ST and HSS sectors.
More specifically, it is interesting to note that the distance between sectors narrowed particularly when the student satisfaction index was used as output. Indeed, student opinion is becoming an important qualitative indicator for higher education institutions and reveals the key factors for meeting users' needs. Moreover, higher education is increasingly customer-oriented towards its students. This evidence could be very useful for university administrators and raises the question of whether universities should incorporate some measure of satisfaction in their recruitment initiatives. Fourth, we apply a bootstrapping method in order to investigate the sensitivity of the efficiency scores to sampling variation in the estimated frontier, obtaining bias-corrected efficiency estimates, in contrast to a straightforward application of DEA (Halkos et al. 2012). Finally, the Malmquist index has also been calculated in order to disentangle changes in productivity into pure efficiency improvements (or worsening) and technological improvements (or worsening), finding that the change in productivity is due to a mixed pattern of technological change (i.e., an outwardly shifting production frontier) and changes in technical efficiency (for a comparison with other Malmquist analyses, see Flegg et al. 2004; Worthington and Lee 2008; Johnes 2008). To sum up, with this analysis we contribute to the literature on measuring the performance of HEIs. Universities' regulators might take advantage of such studies and, through appropriate policy decisions (i.e., distributing available additional resources either among more efficient units, as a reward, or among more inefficient units, helping them to improve their efficiency), make the tertiary education system more effective. 
In future work, we plan to investigate more deeply the role of the variables used as quality proxies in the production process, in order to give regulators additional information for more accurate policy decisions.

DOI 10.1007/s11294-015-9558-4

Published online: 5 February 2016

Appendix
Table 1 Empirical models for departments and faculties.
The production set

Departments

Models    1a    2a    3a    4a     5a       6a

Inputs    EP    EP    EP    EP     EP       EP
          ER    ER    ER    ER     ER       ER

Outputs   NP    FR    RPI   CARI   RPCASI   RPI
                                            CARI
                                            RPCASI

Faculties

Models    1b    2b    3b

Inputs    EP    EP    EP
          ET    ET    ET
          STU   STU   STU

Outputs   NG    SSI   USI

EP equivalent personnel, ER expenses for research, NP number of
publications, FR funds for research, RPI research productivity
index, CARI capacity of attracting resources index, RPCASI research
productivity per cost of the academic staff index, ET expenses for
teaching, STU number of students enrolled, NG number of graduates,
SSI student satisfaction index, USI undergraduate satisfaction
index

Table 2 Descriptive statistics. The production set:
Mean values, years 2005-2009--Departments

          EP        ER            NP        FR

ALL
  Total   28.8      669,250.9     38.37     540,858.7
          (10.66)   (674,036.8)   (43.83)   (628,471)
HSS
  Total   25.38     277,471.3     18.09     220,232.5
          (8.2)     (277,381.3)   (10.66)   (261,903.3)
ST
  Total   34.95     1,374,454     74.06     1,117,986
          (11.85)   (599,723.9)   (55.95)   (683,492.3)

          RPI      CARI      RPCASI

ALL
  Total   1.13     18.7      0.66
          (0.86)   (22.76)   (0.36)
HSS
  Total   0.86     7.9       0.52
          (0.49)   (8.86)    (0.27)
ST
  Total   1.62     38.14     0.91
          (1.14)   (26.97)   (0.36)

Source: Own calculations using data from "Nucleo di Valutazione"
of the University of Salerno

All monetary aggregates in thousands of deflated 2007 euros;
standard errors are in parentheses

EP equivalent personnel, ER expenses for research, NP number of
publications, FR funds for research, RPI research productivity index,
CARI capacity of attracting resources index, RPCASI research
productivity per cost of the academic staff index, ALL both sectors,
ST science and technology sector, HSS humanity and social science
sector

Table 3 Descriptive statistics. The production set:
Mean values, years 2005-2009--Faculties

          EP       ET            STU

ALL
  Total   86.3     3,728,326     4,303.822
          (38.6)   (1,768,880)   (1,938.867)

HSS
  Total   75.0     3,203,140     4,682.367
          (30.7)   (1,411,987)   (2,150.825)

ST
  Total   110.4    4,778,696     3,546.733
          (42.1)   (1,982,638)   (1,144.956)

          NG         SSI      USI

ALL
  Total   310.14     90.7     18.8
          (192.51)   (30.3)   (11.9)

HSS
  Total   353.84     100.1    19.0
          (215.85)   (32.1)   (1.4)

ST
  Total   222.75     71.9     18.3
          (87.66)    (13.9)   (4.5)

Source: Own calculations using data from "Nucleo di Valutazione" of
the University of Salerno

All monetary aggregates in thousands of deflated 2007 euros;
standard errors are in parentheses

EP equivalent personnel, ET expenses for teaching, STU number of
students enrolled, NG number of graduates, SSI student satisfaction
index, USI undergraduate satisfaction index, ALL both sectors,
ST science and technology sector, HSS humanity and social science
sector

Table 4 Empirical results. Mean technical efficiency
scores for departments

Departments, Evaluation of efficiency scores

          Bootstrap sample

          Efficiency    Bias      Bias-Corrected    Confidence
                                  Efficiency        interval, 5%

          Model 1a

ST
  2005    0.6032        0.1582    0.4450            0.3792<x<0.5718
  2006    0.5926        0.1485    0.4441            0.3760<x<0.5667
  2007    0.6275        0.1365    0.4910            0.4165<x<0.6031
  2008    0.6357        0.1583    0.4773            0.4057<x<0.6067
  2009    0.6916        0.1698    0.5217            0.4417<x<0.6620

HSS
  2005    0.4742        0.1330    0.3411            0.2868<x<0.4458
  2006    0.4978        0.1405    0.3573            0.2987<x<0.4710
  2007    0.4584        0.1044    0.3540            0.2959<x<0.4435
  2008    0.4830        0.1206    0.3623            0.3065<x<0.4623
  2009    0.4725        0.1286    0.3439            0.2900<x<0.4513

          Model 2a

ST
  2005    0.8167        0.1296    0.6871            0.5950<x<0.8003
  2006    0.6935        0.1002    0.5932            0.5023<x<0.6846
  2007    0.7192        0.1319    0.5873            0.5004<x<0.7014
  2008    0.7900        0.1355    0.6544            0.5535<x<0.7757
  2009    0.6352        0.1069    0.5283            0.4471<x<0.6227

HSS
  2005    0.5652        0.0789    0.4766            0.4113<x<0.5544
  2006    0.6365        0.0876    0.5339            0.4577<x<0.6235
  2007    0.6659        0.0971    0.5308            0.4534<x<0.6473
  2008    0.5889        0.0899    0.4877            0.4160<x<0.5767
  2009    0.6270        0.0948    0.4987            0.4219<x<0.6094

          Model 3a

ST
  2005    0.5461        0.1078    0.4382            0.3812<x<0.5294
  2006    0.5171        0.0919    0.4252            0.3626<x<0.4964
  2007    0.5121        0.0973    0.4148            0.3553<x<0.5010
  2008    0.5404        0.1007    0.4396            0.3736<x<0.5276
  2009    0.5684        0.1300    0.4384            0.3702<x<0.5494

HSS
  2005    0.5837        0.1126    0.4710            0.3991<x<0.5662
  2006    0.4784        0.1374    0.3410            0.2823<x<0.4575
  2007    0.5791        0.1317    0.4474            0.3766<x<0.5557
  2008    0.6177        0.1361    0.4816            0.4081<x<0.5998
  2009    0.4720        0.1296    0.3424            0.2880<x<0.4483

          Model 4a

ST
  2005    0.5303        0.1052    0.4250            0.3566<x<0.5174
  2006    0.4533        0.1011    0.3521            0.2931<x<0.4397
  2007    0.6843        0.1184    0.5659            0.4844<x<0.6690
  2008    0.7057        0.1563    0.5493            0.4635<x<0.6836
  2009    0.4503        0.0953    0.3549            0.2945<x<0.4381

HSS
  2005    0.4724        0.1028    0.3696            0.3150<x<0.4562
  2006    0.5613        0.1271    0.4341            0.3687<x<0.5392
  2007    0.6798        0.1277    0.5521            0.4696<x<0.6649
  2008    0.5128        0.1130    0.3997            0.3403<x<0.4967
  2009    0.5580        0.1264    0.4316            0.3675<x<0.5358

          Model 5a

ST
  2005    0.7481        0.0789    0.6692            0.5944<x<0.7404
  2006    0.7395        0.0636    0.6759            0.6100<x<0.7290
  2007    0.6775        0.0885    0.5890            0.5239<x<0.6630
  2008    0.6981        0.0682    0.6298            0.5605<x<0.6926
  2009    0.7280        0.1211    0.6068            0.5321<x<0.7127

HSS
  2005    0.6332        0.0815    0.5516            0.4705<x<0.6253
  2006    0.5973        0.1075    0.4897            0.4126<x<0.5857
  2007    0.5862        0.0906    0.4956            0.4224<x<0.5772
  2008    0.5811        0.1056    0.4755            0.4058<x<0.5713
  2009    0.6454        0.1279    0.5175            0.4438<x<0.6298

          Model 6a

ST
  2005    0.8306        0.1160    0.7145            0.6135<x<0.8198
  2006    0.7761        0.1099    0.6662            0.5775<x<0.7624
  2007    0.8253        0.1063    0.7190            0.6226<x<0.8178
  2008    0.8522        0.1479    0.7043            0.6014<x<0.8385
  2009    0.7987        0.1195    0.6791            0.5765<x<0.7882

HSS
  2005    0.7165        0.1020    0.6144            0.5320<x<0.7067
  2006    0.6958        0.1152    0.5805            0.4899<x<0.6864
  2007    0.8479        0.1070    0.7408            0.6476<x<0.8396
  2008    0.7186        0.1257    0.5928            0.5034<x<0.7076
  2009    0.7417        0.1185    0.6232            0.5274<x<0.7324

Estimates regarding the ALL sample are not reported, for the sake
of brevity, and are available on request

ALL both sectors, ST science and technology sector, HSS humanity
and social science sector

Table 5 Empirical results. Mean technical efficiency
scores for faculties

Faculties, Evaluation of efficiency scores

         Model 1b

         Bootstrap sample

         Efficiency   Bias        Bias-Corrected   Confidence
                                  Efficiency       interval, 5%

ST
  2005   0.5724       0.0671      0.5052           0.4347<x<0.5659
  2006   0.5313       0.0695      0.4617           0.3871<x<0.5254
  2007   0.5588       0.0726      0.4861           0.4058<x<0.5532
  2008   0.5469       0.0743      0.4725           0.3885<x<0.5408
  2009   0.6283       0.0893      0.5390           0.4616<x<0.6193

HSS
  2005   0.9224       0.1549      0.7674           0.6512<x<0.9127
  2006   0.8923       0.1664      0.7258           0.6090<x<0.8789
  2007   0.8589       0.1671      0.6917           0.5800<x<0.8491
  2008   0.8679       0.1652      0.7026           0.5870<x<0.8566
  2009   0.8533       0.1665      0.6867           0.5695<x<0.8391

         Model 2b

         Bootstrap sample

         Efficiency   Bias        Bias-Corrected   Confidence
                                  Efficiency       interval, 5%

ST
  2005   0.4027       0.0853      0.3173           0.2610<x<0.3946
  2006   0.3720       0.0732      0.2988           0.2430<x<0.3662
  2007   0.5111       0.0724      0.4387           0.3644<x<0.5077
  2008   0.5857       0.0746      0.5111           0.4338<x<0.5819
  2009   0.6038       0.0648      0.5389           0.4650<x<0.6001

HSS
  2005   0.6394       0.1617      0.4776           0.4005<x<0.6235
  2006   0.6404       0.1533      0.4870           0.4048<x<0.6256
  2007   0.7095       0.1266      0.5829           0.4896<x<0.7013
  2008   0.7415       0.1207      0.6208           0.5258<x<0.7338
  2009   0.7437       0.1086      0.6350           0.5376<x<0.7367

         Model 3b

         Bootstrap sample

         Efficiency   Bias        Bias-Corrected   Confidence
                                  Efficiency       interval, 5%

ST
  2005   0.8420       0.0785      0.7634           0.6754<x<0.8334
  2006   0.7075       0.0967      0.6107           0.5128<x<0.6963
  2007   0.4441       0.0998      0.3442           0.2840<x<0.4351
  2008   0.2740       0.0819      0.1920           0.1575<x<0.2618
  2009   0.3111       0.0844      0.2267           0.1850<x<0.2997

HSS
  2005   0.7368       0.0779      0.6588           0.5674<x<0.7300
  2006   0.6289       0.1108      0.5180           0.4320<x<0.6178
  2007   0.4718       0.1291      0.3427           0.2870<x<0.4538
  2008   0.3320       0.1210      0.2110           0.1803<x<0.3035
  2009   0.3630       0.1245      0.2384           0.2014<x<0.3354

Estimates regarding the ALL sample are not reported, for the sake
of brevity, and are available on request

ALL both sectors, ST science and technology sector, HSS humanity
and social science sector

Table 6 Malmquist index over the period 2005-2009

Departments   Model 1a

              E        PEFC     TC       SC       TFPC

ST
  2005-2006   0.9320   1.0367   1.1416   0.9073   1.0653
  2006-2007   1.1083   1.2274   0.9386   0.9021   1.0299
  2007-2008   1.1041   0.9577   1.0514   1.1371   1.1384
  2008-2009   1.2860   1.1235   1.0649   1.1447   1.3694

HSS
  2005-2006   1.7664   1.7548   1.1242   1.0066   1.9857
  2006-2007   1.1155   1.4908   1.0836   0.7441   1.2162
  2007-2008   1.2540   1.3975   1.1060   1.2048   1.3422
  2008-2009   1.6136   1.5501   1.2561   1.1613   1.9783

ALL
  2005-2006   1.6334   1.6112   1.0975   1.0116   1.7954
  2006-2007   1.6764   1.7992   0.9679   0.9499   1.6181
  2007-2008   1.4847   1.6006   1.0473   1.2010   1.5876
  2008-2009   1.2146   1.3654   1.0580   0.8895   1.2850

              Model 2a

              E        PEFC     TC       SC       TFPC

ST
  2005-2006   0.9481   1.0475   0.9829   0.8951   0.9535
  2006-2007   1.1186   1.2185   1.0924   0.8959   1.2382
  2007-2008   1.2810   1.0175   0.7867   1.2128   1.0118
  2008-2009   1.0004   0.8866   0.9240   1.1105   0.9450

HSS
  2005-2006   1.1351   1.2278   0.9250   0.9275   1.0496
  2006-2007   1.2035   1.2279   1.2612   0.9801   1.5178
  2007-2008   1.1585   1.2292   0.7479   1.0286   0.8631
  2008-2009   0.9825   1.0508   1.0320   0.9642   0.9510

ALL
  2005-2006   1.1804   1.2522   0.9149   0.9767   1.0735
  2006-2007   1.2791   1.3918   1.3979   1.0153   1.7995
  2007-2008   1.2481   1.3067   0.8292   1.0324   1.0542
  2008-2009   1.0991   1.1864   0.9170   1.0060   1.0026

              Model 6a

              E        PEFC     TC       SC       TFPC

ST
  2005-2006   1.0053   1.0841   0.8540   0.9274   0.8743
  2006-2007   1.2524   1.1558   0.7031   1.0619   0.9049
  2007-2008   1.1350   1.0573   1.1529   1.0643   1.2572
  2008-2009   1.6653   1.2110   0.5666   1.3589   0.9695

HSS
  2005-2006   1.2346   1.0772   1.1117   1.1181   1.3725
  2006-2007   1.1584   1.1894   1.2855   0.9739   1.4891
  2007-2008   1.4439   1.5126   1.2597   0.9335   1.8188
  2008-2009   1.2793   1.2996   1.0398   0.9759   1.3302

ALL
  2005-2006   1.2673   1.1498   1.1127   1.1021   1.4101
  2006-2007   1.1021   1.1854   1.3613   0.9197   1.6149
  2007-2008   1.2931   1.0547   1.5957   0.9230   1.3639
  2008-2009   1.6224   0.9286   1.3345   0.9374   1.5067

ST science and technology sector, HSS humanity and social science
sector, ALL both science and technology and humanity and social
science sectors, E Technical efficiency change (under a constant
returns-to-scale without convexity constraint): measures the change
in technical efficiency such as DMUs getting closer to or further
away from the efficiency frontier, PEFC Pure efficiency change
(under variable returns-to-scale with convexity constraint):
measures change in pure technical efficiency, TC Technological
change: measures the change in technology (shifts in the efficiency
frontier). Technical progress (regress) has occurred if TC is
greater (less) than one, SC Scale change: It is obtained by
dividing the technical efficiency under a constant returns-to-scale
without convexity constraint (E) by pure technical efficiency under
variable returns-to-scale with convexity constraint (PEFC). It
measures the changes in efficiency due to movement toward or away
from the point of optimal scale, TFPC Total factor productivity
change: measures the change in total output relative to the change
in the usage of all inputs. It indicates the degree of productivity
change; when TFPC is >1 then productivity gains occur, whilst if
TFPC <1 productivity losses occur

Malmquist index calculated on Models 1a, 2a and 6a (see Table 1
for more details on the production set)
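The identities in the table legend (SC = E/PEFC, TFPC = E x TC) follow the decomposition of Fare et al. (1994). The sketch below illustrates it with hypothetical distance-function values (the paper's underlying distances are not reported here); d12 may exceed one because period-2 data can lie above the period-1 frontier.

```python
import math

def malmquist(d11, d12, d21, d22, v11, v22):
    """FGNZ decomposition of the Malmquist index between two periods.

    d11 = CRS efficiency of period-1 data on the period-1 frontier,
    d12 = period-2 data on the period-1 frontier, d21 and d22 likewise;
    v11, v22 = own-period VRS efficiencies."""
    E = d22 / d11                              # technical efficiency change (CRS)
    TC = math.sqrt((d12 / d22) * (d11 / d21))  # technological change (frontier shift)
    PEFC = v22 / v11                           # pure efficiency change (VRS)
    SC = E / PEFC                              # scale change
    TFPC = E * TC                              # total factor productivity change
    return {"E": E, "PEFC": PEFC, "TC": TC, "SC": SC, "TFPC": TFPC}

# Hypothetical values: the unit improves from 0.80 to 0.90 under CRS
# while the frontier itself also shifts outward.
m = malmquist(d11=0.80, d12=1.10, d21=0.70, d22=0.90, v11=0.85, v22=0.95)
```

Here TFPC > 1, i.e., a productivity gain driven jointly by catching up (E > 1) and technical progress (TC > 1), the mixed pattern described in the text.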

Table 7 Malmquist index over the period 2005-2009

Faculties   Model 1b

            E        PEFC     TC       SC       TFPC

ST
  2005-2006   1.0108   1.0231   0.9716   0.9881   0.9825
  2006-2007   1.0752   1.0023   0.8683   1.0728   0.9350
  2007-2008   1.0791   1.0162   0.8724   1.0628   0.9407
  2008-2009   1.0048   1.0137   1.0179   0.9911   1.0234

HSS
  2005-2006   0.8852   0.9595   1.1339   0.9259   1.0044
  2006-2007   0.9482   0.9507   0.9404   0.9945   0.8885
  2007-2008   1.0402   1.0131   0.9206   1.0268   0.9583
  2008-2009   0.9939   0.9686   0.9494   1.0241   0.9361

ALL
  2005-2006   0.9094   0.9514   1.1218   0.9620   1.0200
  2006-2007   0.9929   0.9868   0.9228   1.0036   0.9125
  2007-2008   1.0178   1.0055   0.9564   1.0128   0.9722
  2008-2009   1.0792   1.0310   0.9106   1.0458   0.9710

Faculties   Model 2b

            E        PEFC     TC       SC       TFPC

ST
  2005-2006   1.2356   0.9967   0.9464   1.2117   1.1799
  2006-2007   1.0083   0.9605   1.0981   1.0495   1.1075
  2007-2008   1.0115   0.9348   1.0032   1.0831   1.0147
  2008-2009   1.3031   0.9084   0.9235   1.4425   1.2136

HSS
  2005-2006   1.0567   1.0073   0.9634   1.0484   1.0187
  2006-2007   1.0667   1.1587   0.8944   0.9256   0.9543
  2007-2008   1.1124   1.0509   0.8143   1.0566   0.9062
  2008-2009   1.0161   1.0240   1.0743   0.9930   1.0910

ALL
  2005-2006   1.0225   0.9842   0.9712   1.0401   0.9931
  2006-2007   1.1304   1.2303   0.8948   0.9222   1.0117
  2007-2008   1.1310   1.0806   0.8329   1.0455   0.9430
  2008-2009   0.9988   1.0250   1.0815   0.9752   1.0880

Faculties   Model 3b

            E        PEFC     TC       SC       TFPC

ST
  2005-2006   1.2369   0.9516   0.8802   1.3073   1.0920
  2006-2007   0.7264   1.0094   1.4757   0.7207   1.0630
  2007-2008   0.9750   1.0003   0.9194   0.9746   0.8965
  2008-2009   1.6218   1.1496   0.6221   1.3949   1.0162

HSS
  2005-2006   0.9093   0.8692   1.3236   1.0423   1.2040
  2006-2007   0.7225   0.7098   1.6128   1.0192   1.1633
  2007-2008   0.6255   0.6083   1.3971   1.0338   0.8814
  2008-2009   1.2281   1.2103   0.8758   1.0142   1.0778

ALL
  2005-2006   0.8942   0.8543   1.3059   1.0428   1.1671
  2006-2007   0.6937   0.6998   1.6467   0.9881   1.1384
  2007-2008   0.6159   0.6112   1.4314   1.0116   0.8859
  2008-2009   1.2048   1.1947   0.8762   1.0075   1.0586

ST science and technology sector, HSS humanity and social science
sector, ALL both science and technology and humanity and social
science sectors, E Technical efficiency change (under a constant
returns-to-scale without convexity constraint): measures the change
in technical efficiency such as DMUs getting closer to or further
away from the efficiency frontier, PEFC Pure efficiency change
(under variable returns-to-scale with convexity constraint):
measures change in pure technical efficiency, TC Technological
Change: measures the change in technology (shifts in the efficiency
frontier). Technical progress (regress) has occurred if TC is
greater (less) than one, SC Scale change: It is obtained by
dividing the technical efficiency under a constant returns-to-scale
without convexity constraint (E) by pure technical efficiency under
variable returns-to-scale with convexity constraint (PEFC). It
measures the changes in efficiency due to movement toward or away
from the point of optimal scale, TFPC Total factor productivity
change: measures the change in total output relative to the change
in the usage of all inputs. It indicates the degree of productivity
change; when TFPC is >1 then productivity gains occur, whilst if
TFPC <1 productivity losses occur

Malmquist index calculated on Models 1b, 2b and 3b (see Table 1 for
more details on the production set)


References

Abbott, M., & Doucouliagos, C. (2003). The efficiency of Australian universities: a data envelopment analysis. Economics of Education Review, 22, 89-97.

Agasisti, T. (2011). Performances and spending efficiency in higher education: a European comparison through non-parametric approaches. Education Economics, 19(2), 199-224.

Agasisti, T., & Dal Bianco, A. (2009). Reforming the university sector: effects on teaching efficiency. Evidence from Italy. Higher Education, 57(4), 477-498.

Agasisti, T., & Johnes, G. (2010). Heterogeneity and the evaluation of efficiency: the case of Italian universities. Applied Economics, 42(11), 1365-1375.

Agasisti, T., Dal Bianco, A., Landoni, P., Sala, A., & Salerno, M. (2011). Evaluating the efficiency of research in academic departments: an empirical analysis in an Italian region. Higher Education Quarterly, 65(3), 267-289.

Aigner, D., Lovell, K., & Schmidt, P. (1977). Formulation and estimation of stochastic frontier production function models. Journal of Econometrics, 6, 21-37.

Andersen, P., & Petersen, N. C. (1993). A procedure for ranking efficient units in data envelopment analysis. Management Science, 39, 1261-1264.

Atkinson, S. E., & Wilson, P. W. (1995). Comparing mean efficiency and productivity scores from small samples: a bootstrap methodology. Journal of Productivity Analysis, 6, 137-152.

Banker, R. D., Charnes, A., & Cooper, W. W. (1984). Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science, 30(9), 1078-1092.

Blondal, S., Field, S. and Girouard, N. (2002). Investment in Human Capital Through Post-Compulsory Education and Training: Selected Efficiency and Equity Aspects. OECD Economics Department Working Papers, No. 333, OECD Publishing. 10.1787/778845424272.

Bonaccorsi, A., Daraio, C., & Simar, L. (2006). Advanced indicators of productivity of universities. An application of robust nonparametric methods to Italian data. Scientometrics, 66(2), 389-410.

Buzzigoli, L., Giusti, A., & Viviani, A. (2010). The evaluation of university departments. A case study for Firenze. International Advances in Economic Research, 16, 24-38.

Carrington, R., Coelli, T., & Rao, D. S. P. (2005). The performance of Australian universities: conceptual issues and preliminary results. Economic Papers, 24, 145-163.

Catalano, G., Mori, A., Silvestri, P., & Todeschini, P. (1993). Chi paga l'istruzione universitaria? Dall'esperienza europea una nuova politica di sostegno agli studenti in Italia. Milano: Franco Angeli.

Caves, D. W., Christensen, L. R., & Diewert, W. E. (1982). The economic theory of index numbers and the measurement of input, output, and productivity. Econometrica, 50, 1393-1414.

Cazals, C., Florens, J. P., & Simar, L. (2002). Nonparametric frontier estimation: a robust approach. Journal of Econometrics, 106, 1-25.

Charnes, A., Cooper, W. W., & Rhodes, E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2, 429-444.

Chizmar, J. F., & Zak, T. A. (1983). Modeling multiple outputs in educational production functions. American Economic Review, 73(2), 18-22.

Chizmar, J. F., & Zak, T. A. (1984). Canonical estimation of joint educational production functions. Economics of Education Review, 3(1), 37-43.

Coelli, T., Rao, D. S. P., & Battese, G. E. (1998). An introduction to efficiency and productivity analysis. Boston: Kluwer Academic Publishers.

Cooper, W.W., Seiford, L.M. and Zhu, J. (2004). Handbook on data envelopment analysis. Springer (Kluwer Academic Publishers)

Efron, B. (1979). Bootstrap methods: another look at the jackknife. Annals of Statistics, 7, 1-16.

Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. London: Chapman & Hall.

Elliott, K. M. (2002). Key determinants of student satisfaction. Journal of College Student Retention, 4, 271-279.

Elliott, K. M., & Shin, D. (2002). Student satisfaction: an alternative approach to assessing this important concept. Journal of Higher Education Policy and Management, 24, 197-209.

Fare, R., Grosskopf, S., Norris, M., & Zhang, Z. (1994). Productivity growth, technical progress, and efficiency changes in industrialised countries. American Economic Review, 84, 66-83.

Farrell, M. J. (1957). The measurement of productive efficiency. Journal of the Royal Statistical Society, Series A, 120(3), 253-290.

Ferrier, G. D., & Hirschberg, J. G. (1997). Bootstrapping confidence intervals for linear programming efficiency scores: with an illustration using Italian banking data. Journal of Productivity Analysis, 8, 19-33.

Flegg, A. T., Allen, D. O., Field, K., & Thurlow, T. W. (2004). Measuring the efficiency of British universities: a multi-period data envelopment analysis. Education Economics, 12(3), 231-249.

Giannakou, M. (2006). Chair's Summary, Meeting of OECD Education Ministers: Higher Education--Quality, Equity and Efficiency, Athens, Greece. Available from www.oecd.org/edumin2006

Greene, W. H. (1980). On the estimation of a flexible frontier production model. Journal of Econometrics, 13, 101-115.

Halkos, G., Tzeremes, N. G., & Kourtzidis, S. A. (2012). Measuring public owned university departments' efficiency: a bootstrapped DEA approach. Journal of Economics and Econometrics, 55(2), 1-24.

Harris, G. T. (1988). Research output in Australian university economics departments, 1974-83. Australian Economic Papers, 27, 102-110.

Johnes, J. (2004). Efficiency measurement. In G. Johnes & J. Johnes (Eds.), The international handbook on the economics of education. Cheltenham: Edward Elgar.

Johnes, J. (2008). Efficiency and productivity change in the English higher education sector from 1996/97 to 2004/05. The Manchester School, 76(6), 653-674.

Johnes, G., & Johnes, J. (1993). Measuring the research performance of UK economics departments: an application of data envelopment analysis. Oxford Economic Papers, 45, 332-347.

Johnes, G., & Johnes, J. (1995). Research funding and performance in UK university departments of economics: a frontier analysis. Economics of Education Review, 14(3), 301-314.

Kao, C., & Hung, H. T. (2008). Efficiency analysis of university departments: an empirical study. Omega, 36, 653-664.

Kocher, M. G., Luptacik, M., & Sutter, M. (2006). Measuring productivity of research in economics. A cross-country study using DEA. Socio-Economic Planning Sciences, 40, 314-332.

Koksal, G., & Nalcaci, B. (2006). The relative efficiency of departments at a Turkish engineering college: a data envelopment analysis. Higher Education, 51, 173-289.

Laureti, T. (2008). Modelling exogenous variables in human capital formation through a heteroscedastic stochastic frontier. International Advances in Economic Research, 14(1), 76-89.

Leitner, K.-H., Prikoszovits, J., Schaffhauser-Linzatti, M., Stowasser, R., & Wagner, K. (2007). The impact of size and specialisation on universities' department performance: a DEA analysis applied to Austrian universities. Higher Education, 53, 517-538.

Madden, G., Savage, S., & Kemp, S. (1997). Measuring public sector efficiency: a study of economics departments at Australian universities. Education Economics, 5(2), 153-167.

OECD (2008), Tertiary Education for the Knowledge Society, OECD Publishing, Paris. Available at www. oecd.org/edu/tertiary/review

Ray, S. C., & Desli, E. (1997). Productivity growth, technical progress and efficiency change in industrialized countries: comment. American Economic Review, 87(5), 1033-1039.

Sarrico, C. S., Teixeira, P. N., Rosa, M. J., & Cardoso, M. F. (2009). Subject mix and productivity in Portuguese universities. European Journal of Operational Research, 197(2), 287-295.

Simar, L., & Wilson, P. W. (1998). Sensitivity analysis of efficiency scores: how to bootstrap in non-parametric frontier models. Management Science, 44(1), 49-61.

Simar, L., & Wilson, P. W. (1999). Estimating and bootstrapping Malmquist indices. European Journal of Operational Research, 115, 459-471.

Tauer, L. W., Fried, H. O., & Fry, W. E. (2007). Measuring efficiencies of academic departments within a college. Education Economics, 15, 473-489.

Thursby, J. G. (2000). What do we say about ourselves and what does it mean? Yet another look at economics department research. Journal of Economic Literature, 38, 383-404.

Tomkins, C., & Green, R. (1988). An experiment in the use of data envelopment analysis for evaluating the efficiency of UK university departments of accounting. Financial Accountability & Management, 4, 147-164.

Torgersen, A. M., Forsund, F. A., & Kittelsen, S. A. C. (1996). Slack-adjusted efficiency measures and ranking of efficient units. Journal of Productivity Analysis, 7, 379-398.

Tremblay, K., Lalancette, D. and Roseveare, D. (2012). Assessment of Higher Education Learning Outcomes Feasibility Study Report--Volume 1--OECD.

Tyagi, P., Yadav, S. P., & Singh, S. P. (2009). Relative performance of academic departments using DEA with sensitivity analysis. Evaluation and Program Planning, 32, 168-177.

Worthington, A., & Lee, B. L. (2008). Efficiency, technology and productivity change in Australian universities, 1998-2003. Economics of Education Review, 27, 285-298.

(1) See Buzzigoli et al. (2010) for a brief review of the university system in Italy.

(2) For an evaluation of HEIs using parametric methods on Italian data, see Laureti (2008).

(3) See Agasisti (2011) and Kocher et al. (2006) for an attempt to measure the efficiency of higher education institutions at a country level.

(4) The reform was approved by the Law 240/2010, even though it was actually implemented by the University under analysis only at the end of 2013.

(5) In DEA, the organization under study is called the DMU.

(6) However, if these assumptions were too weak, efficiency levels would be systematically underestimated in small samples, generating inconsistent estimates.

(7) We search for outliers in the dataset using super-efficiency (Andersen and Petersen 1993) and rho-Torgersen (Torgersen et al. 1996) measures. Super-efficiency captures the maximum radial change such that an observation remains efficient, whereas rho-Torgersen measures the share of potential efficiency associated with actual observations. We find no difference in the efficiency estimates with and without outliers, so we report efficiency scores for all DMUs (in our case, it is very relevant to evaluate the efficiency of every DMU under investigation).

(8) Technical efficiency refers to the capacity of DMUs, given the technology used, to produce the highest level of output from a given combination of inputs, or to use the least possible amount of inputs for a given output. Specifically, given that the focus is on the higher education system, technical efficiency means, according to Abbot and Doucouliagos (2003, p. 91), that "the technically efficient university is not able to deliver more teaching plus research output (without reducing quality) given its existing labor, capital and other inputs."

(9) As described by university guidelines, each department comprises professors and researchers who, owing to similar research approaches and objectives, belong to the same scientific disciplinary sector and are grouped around a broad scientific and cultural project, consistent with the teaching and training activities to which the department contributes. They promote and manage research, organise doctoral programmes, and carry out research and consultancy work, under specific agreements and contracts, at the request of external organisations. The department is run by the department council and the director.

(10) Through the faculties, universities organise their action in various subject areas. Faculties coordinate subject courses and arrange them within different degree programmes. They appoint academic staff and decide, always respecting the principle of teaching freedom, how to distribute roles and workload among university teachers and researchers. The faculty is run by the faculty council and the dean.

(11) In order to classify into two groups, university guidelines were used. The HSS sector has 18 departments and six faculties while the ST sector has 10 departments and three faculties.

(12) We also consider non-academic staff in order to take into account the administrative staff who support the academic staff and the students.

(13) They did not consider the administrative staff in the aggregate measure. They divided the academic staff into four categories so that the distance between two ranks is 1/4=0.25.

(14) The weights have been chosen so that the distance between two ranks is 1/5=0.2.

(15) The inclusion of this variable would be important if drop-out rates varied between faculties. Unfortunately, it could not be used for departments because the university statistical office only counts students by faculties.

(16) We did not include the number of citations due to the lack of available data (see Harris 1988 on the debate about the use of citations as a measure of research quality).

(17) According to Carrington et al. (2005), Worthington and Lee (2008) and Tyagi et al. (2009), "weighted publications" are the most suitable measure of research.

(18) The weights have been chosen so that the distance between two ranks is 1/4 = 0.25.

(19) This measures the financial resources that departments receive from the central government in recognition of their scientific production; it represents a good signal of research productivity.

(20) Johnes and Johnes (1993) argued that the amount of money received as grants for research will be spent not only on research but also on other facilities which are inputs into the production process. Thus, grants do not purely reflect academic research but also include income for other research activities.

(21) According to them, "research grants represent an output variable," as an indicator of a department's research capability.

(22) The first index is the weighted sum of publications in international journals, the number of patents and the total number of academic staff (for ST sector departments), or the weighted sum of publications in national and international books and monographs and the total number of academic staff (for HSS sector departments). The second index is calculated as the ratio between the total amount of money obtained for research and the total number of academic staff. Finally, the third index is the number of research products per 10,000 [euro] of academic staff costs.

(23) For a robustness check, we also use just the number of graduates without weighting by their degree classification, and the results are similar.

(24) The use of this measure is still debated in the literature (Kao and Hung 2008; Abbot and Doucouliagos 2003).

(25) We are aware that this measure might represent a potential limitation of our analysis. Indeed, according to Kao and Hung (2008, p. 655), "student evaluation for teachers may be biased by the nature of courses and does not have a common base for comparison if the students have not been taught by all teachers." Similarly, student satisfaction is a subjective measure and seems to be dominated by course difficulty and average grades. On the other hand, student satisfaction is an important qualitative indicator for higher education institutions. According to Elliott (2002), because of the positive relationship between student satisfaction and institutional characteristics such as student retention and graduation rates, many universities have incorporated some measure of satisfaction into their marketing campaigns, recruitment initiatives, and planning processes. According to Elliott and Shin (2002), the assessment of student opinions and attitudes is a modern-day necessity, as institutions of higher education are challenged by a climate of decreased funding, demands for public accountability, and increased competition for student enrollments. Thus, keeping the mentioned concerns in mind, we believe that a lesson can still be learned from the use of both satisfaction indices.

(26) ER does not include any labour input (i.e., researchers), thus there is no double counting.

(27) ET does not cover any staffing costs, thus we can exclude any double counting.

(28) Moreover, as underlined by Johnes and Johnes (1995, p. 305), "a technically inefficient DMU could apparently become efficient merely by producing (however wastefully) an unusual type of output, or by forgoing the use of one type of input employed by all other DMUs." Being aware of this, we carefully select inputs and outputs, also from the quality point of view, taking into account what Kao and Hung (2008) considered as the two main difficulties to deal with, namely the data availability and the difficulty in measuring performance quality. See Johnes (2004) for a discussion of the problems of defining and measuring the inputs and outputs of the higher education production process.

(29) Mean values are calculated over the period 2005-2009. Descriptive statistics related to each year are not presented in the paper and are available on request.

(30) We also estimate the efficiency scores averaged over the 2005-2009 period for each department and faculty. For the sake of brevity, the results are not presented in the paper and are available on request. Overall, 28 departments (DEP), named D1, D2 up to D28, and 9 faculties (FAC), named F1, F2 up to F9, have been considered. For the sake of anonymity, numbers have been assigned to the DMUs randomly.

(31) It is also interesting to note that Model 4a (where the index is used as an output) is not as efficient as the others. This could be due to the nature of the capacity-of-attracting-resources index, which is calculated as the ratio between the total amount of money obtained for research and the total number of academic staff.

(32) We find the same evidence even when the number of graduates is not weighed by their degree classification.

(33) A potential limitation of these results is represented by the decision to assign weights to the input EP and to the output NP. Therefore, we also test how alternative weights given to those variables would change the results. We did not find any statistically significant difference in the results either for departments or faculties. Results are available upon request.

(34) Through the Malmquist analysis, we provide four efficiency/productivity indices for each DMU and a measure of technical progress over time. These are: a) E, the change in technical efficiency under a CRS technology without a convexity constraint. It represents how DMUs get closer to, or further away from, the efficiency frontier and is also called the "catching-up" effect; b) TC, which measures the change in technology, i.e., shifts in the efficiency frontier. In other words, it measures whether the production frontier is moving outwards over time. It is also called the "frontier shift" effect. Technical progress (regress) has occurred if TC is greater (less) than one; c) PEFC, the change in pure technical efficiency under a variable returns-to-scale technology with a convexity constraint; d) SC, which is obtained by dividing the technical efficiency change under constant returns-to-scale without a convexity constraint (E) by the pure efficiency change under variable returns-to-scale with a convexity constraint (PEFC). It measures the changes in efficiency due to movement toward, or away from, the point of optimal scale; in other words, the degree to which a unit gets closer to its most productive scale size over the periods under examination; e) TFPC, measuring the change in total output relative to the change in the usage of all inputs. It indicates the degree of productivity change: when TFPC > 1, productivity gains occur, whilst when TFPC < 1, productivity losses occur. It can be decomposed into two components: E and TC.
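The decomposition identities stated verbally in this footnote (and in footnotes 36-37) can be restated compactly, using the footnote's own symbols:

```latex
TFPC = E \times TC, \qquad E = PEFC \times SC
\;\Longrightarrow\; TFPC = PEFC \times SC \times TC .
```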

(35) We also calculate the Malmquist index for each department and faculty. The results, for the sake of brevity, are not presented in the paper and are available on request.

(36) If E > TC, productivity gains are driven by improvements in efficiency, while if E < TC, productivity gains are instead driven by technological progress.

(37) More specifically, E is the product of pure efficiency change (PEFC) and scale efficiency change (SC), so that E = PEFC*SC. If PEFC > SC, then the main source of efficiency change is PEFC, while if PEFC < SC, the major source of efficiency change is instead due to changes in SC.

(38) Even though there is still a productivity gain (TFPC>1).

(39) E decreases from 1.0108 to 1.0048 while TC increases from 0.9716 to 1.0179.

(40) E increases from 1.2356 to 1.3031 and TC decreases from 0.9464 to 0.9235 in Model 2b; E increases from 1.2369 to 1.6218, while TC decreases from 0.8802 to 0.6221 in Model 3b.

(41) TC increases from 0.9634 to 1.0743 and E decreases from 1.057 to 1.0161.

(42) E increases from 0.8852 to 0.9939 and TC decreases from 1.1339 to 0.9494 in Model 1b; E increases from 0.9093 to 1.2281, while TC decreases from 1.3236 to 0.8758 in Model 3b.

(43) In order to obtain confidence intervals for efficiency scores, the confidence level ([alpha]) is fixed at 5% and 10% over 2000 replications, in an output-oriented framework. Since the results are almost the same, we only report the estimates (Appendix Tables 4 and 5) associated with [alpha] = 0.05.
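The mechanics behind footnote 43 can be illustrated with a simplified sketch. The paper applies the Simar and Wilson (1998) smoothed bootstrap; the naive resampling below is not that procedure (it omits the kernel-smoothing step, which is what makes the Simar-Wilson bootstrap consistent for DEA), but it shows how percentile confidence intervals for output-oriented CRS efficiency scores are assembled. All function names here are ours, not the authors'.

```python
# Sketch only: output-oriented CRS DEA via linear programming, plus a
# naive (non-smoothed) bootstrap for percentile confidence intervals.
import numpy as np
from scipy.optimize import linprog

def dea_score(x0, y0, X, Y):
    """Output-oriented CRS efficiency of DMU (x0, y0) against reference
    set X (n x m inputs), Y (n x s outputs).

    Solves: max eta  s.t.  eta*y0 <= Y'lam,  X'lam <= x0,  lam >= 0.
    Decision variables z = [eta, lam_1 .. lam_n]; eta >= 1 means the DMU
    could radially expand outputs by factor eta."""
    n = X.shape[0]
    c = np.concatenate(([-1.0], np.zeros(n)))           # maximise eta
    A_out = np.hstack((y0.reshape(-1, 1), -Y.T))        # eta*y0 - Y'lam <= 0
    A_in = np.hstack((np.zeros((X.shape[1], 1)), X.T))  # X'lam <= x0
    res = linprog(c, A_ub=np.vstack((A_out, A_in)),
                  b_ub=np.concatenate((np.zeros(Y.shape[1]), x0)),
                  bounds=(0, None))
    return -res.fun

def bootstrap_ci(X, Y, B=200, alpha=0.05, seed=0):
    """Naive percentile CIs: resample DMUs with replacement to rebuild the
    frontier, then re-score each original DMU against the resampled
    reference set (scores may fall below 1 when the DMU itself is not
    drawn into the reference)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    draws = np.empty((B, n))
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        for j in range(n):
            draws[b, j] = dea_score(X[j], Y[j], X[idx], Y[idx])
    lo = np.percentile(draws, 100 * alpha / 2, axis=0)
    hi = np.percentile(draws, 100 * (1 - alpha / 2), axis=0)
    return lo, hi
```

With alpha = 0.05 and B = 2000 this mirrors the setup described in the footnote, though a faithful replication would substitute the smoothed resampling of Simar and Wilson (1998) for the plain resampling above.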

Cristian Barra [1] * Roberto Zotti [1]

[mail] Roberto Zotti

rzotti@unisa.it

Cristian Barra

cbarra@unisa.it

[1] Department of Economics and Statistics, University of Salerno, Via Giovanni Paolo II, 132-84084 Fisciano, SA, Italy
COPYRIGHT 2016 Atlantic Economic Society

Article Details
Author:Barra, Cristian; Zotti, Roberto
Publication:International Advances in Economic Research
Article Type:Report
Date:Feb 1, 2016
Words:11902