
Data envelopment analysis and commercial bank performance: a primer with applications to Missouri banks.

COMMERCIAL BANKS PLAY a vital role in the economy for two reasons: they provide a major source of financial intermediation and their checkable deposit liabilities represent the bulk of the nation's money stock. Evaluating their overall performance and monitoring their financial condition is important to depositors, owners, potential investors, managers and, of course, regulators.

Currently, financial ratios are often used to measure the overall financial soundness of a bank and the quality of its management. Bank regulators, for example, use financial ratios to help evaluate a bank's performance as part of the CAMEL system.(1) Evaluating the economic performance of banks, however, is a complicated process. Often a number of criteria such as profits, liquidity, asset quality, attitude toward risk, and management strategies must be considered. The changing nature of the banking industry has made such evaluations even more difficult, increasing the need for more flexible alternative forms of financial analysis.

This paper describes a particular methodology called Data Envelopment Analysis (DEA) that has been used previously to analyze the relative efficiencies of industrial firms, universities, hospitals, military operations, baseball players and, more recently, commercial banks.(2) The use of DEA is demonstrated by evaluating the management of 60 Missouri commercial banks for the period from 1984 to 1990.(3)

DATA ENVELOPMENT ANALYSIS:

SOME BASICS

DEA represents a mathematical programming methodology that can be applied to assess the efficiency of a variety of institutions using a variety of data. This section provides an intuitive explanation of the DEA approach. A formal mathematical presentation of DEA is described in appendix A; a slightly different nonparametric approach is described in appendix B.

The DEA Standard for Efficiency

DEA is based on a concept of efficiency that is widely used in engineering and the natural sciences. Engineering efficiency is defined as the ratio of the amount of work performed by a machine to the amount of energy consumed in the process. Since machines must be operated according to the law of conservation of energy, their efficiency ratios are always less than or equal to unity.

This concept of engineering efficiency is not immediately applicable to economic production because the value of output is expected to exceed the value of inputs due to the "value added" in production. Nevertheless, under certain circumstances, an economic efficiency standard, similar to the engineering standard, can be defined and used to compare the relative efficiencies of economic entities. For example, a firm can be said to be efficient relative to another if it produces either the same level of output with fewer inputs or more output with the same or fewer inputs. A single firm is considered "technically efficient" if it cannot increase any output or reduce any input without reducing other outputs or increasing other inputs.(4) Consequently, this concept of technical efficiency is similar to the engineering concept. The somewhat broader concept of "economic efficiency," on the other hand, is achieved when firms find the combination of inputs that enables them to produce the desired level of output at minimum cost.(5)

DEA and Technical Efficiency

The discussion of the DEA approach will be undertaken in the context of technical efficiency in the microeconomic theory of production. In microeconomics the production possibility set consists of the feasible input and output combinations that arise from available production technology. The production function (or production transformation as it is called in the case of multiple outputs) is a mathematical expression for a process that transforms inputs into output. In so doing, it defines the frontier of the production possibility set. For example, consider the well-known Cobb-Douglas production function:

(1) Y = AK^a L^(1-a),

where Y is the maximum output for given quantities of two inputs: capital (K) and labor (L). Even if all firms produce the same good (Y) with the same technology defined by equation 1, they may still use different combinations of labor and capital to produce different levels of output. Nonetheless, all firms whose input-output combinations lie on the surface (frontier) of the production relationship defined by equation 1 are said to be technologically efficient. Similarly, firms with input-output combinations located inside the frontier are technologically inefficient.
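To make the frontier idea concrete, the following sketch evaluates firms against a Cobb-Douglas frontier of the form in equation 1. The values of A and a are purely illustrative assumptions, not estimates from this paper:

```python
# Hypothetical check of technical efficiency against the Cobb-Douglas
# frontier of equation 1, Y = A * K**a * L**(1 - a).
# A and a are illustrative values, not estimates from the paper.
A, a = 2.0, 0.3

def frontier_output(K, L):
    """Maximum output attainable from inputs (K, L) under equation 1."""
    return A * K**a * L**(1 - a)

# A firm producing y < frontier_output(K, L) lies inside the frontier
# and is technologically inefficient; y == frontier_output(K, L) puts
# it on the frontier.
y_max = frontier_output(1.0, 1.0)   # equals A when K = L = 1
```

Two firms using different (K, L) mixes can both be technologically efficient, as long as each produces its own frontier output.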

DEA provides a similar notion of efficiency. The principal difference is that the DEA production frontier is not determined by some specific equation like that shown in equation 1; instead, it is generated from the actual data for the evaluated firms (which in DEA terminology are typically called decision-making units or DMUs).(6) Consequently, the DEA efficiency score for a specific firm is not defined by an absolute standard like equation 1. Rather, it is defined relative to the other firms under consideration. And, similar to engineering efficiency measures, DEA establishes a "benchmark" efficiency score of unity that no individual firm's score can exceed. Consequently, efficient firms receive efficiency scores of unity, while inefficient firms receive DEA scores of less than unity.

In microeconomic analysis, efficient production is defined by technological relationships with the assumption that firms are operated efficiently. Whether or not firms have access to the same technology, it is assumed that they operate on the frontier of their relevant production possibilities set; hence, they are technically efficient by definition. As a result, much of microeconomic theory ignores issues concerning technological inefficiencies.

DEA assumes that all firms face the same unspecified technology which defines their production possibilities set. The objective of DEA is to determine which firms operate on their efficiency frontier and which firms do not. That is, DEA partitions the inputs and outputs of all firms into efficient and inefficient combinations. The efficient input-output combinations yield an implicit production frontier against which each firm's input and output combination is evaluated. If the firm's input-output combination lies on the DEA frontier, the firm might be considered efficient; if the firm's input-output combination lies inside the DEA frontier, the firm is considered inefficient.

An advantage of DEA is that it uses actual sample data to derive the efficiency frontier against which each firm in the sample can be evaluated.(7) As a result, no explicit functional form for the production function has to be specified in advance. Instead, the production frontier is generated by a mathematical programming algorithm which also calculates the optimal DEA efficiency score for each firm.
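As a sketch of how such an algorithm works, the input-oriented CCR envelopment problem (minimize theta subject to sum_j lambda_j * x_j <= theta * x_0 and sum_j lambda_j * y_j >= y_0, lambda >= 0) can be handed to an off-the-shelf linear programming routine. The six single-input, single-output pairs below are hypothetical, chosen only to echo the spirit of figure 1; they are not data from the paper:

```python
# Sketch of the input-oriented CCR envelopment LP, solved with scipy.
# The six hypothetical input-output pairs loosely mirror figure 1.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [4.0], [4.0], [6.0], [7.0], [9.0]])  # one column per input
Y = np.array([[2.0], [3.0], [5.0], [5.0], [7.0], [7.0]])  # one column per output

def ccr_score(k, X, Y):
    """Min theta s.t. sum_j lam_j x_j <= theta*x_k and sum_j lam_j y_j >= y_k."""
    n, m = X.shape            # n DMUs, m inputs
    s = Y.shape[1]            # s outputs
    c = np.zeros(1 + n)
    c[0] = 1.0                # decision vars: [theta, lam_1, ..., lam_n]
    # input rows:  -theta*x_ik + sum_j lam_j x_ij <= 0
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
    b_in = np.zeros(m)
    # output rows: -sum_j lam_j y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.hstack([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n))
    return float(res.fun)     # optimal theta = DEA efficiency score

scores = [ccr_score(k, X, Y) for k in range(len(X))]
```

Efficient DMUs come back with a score of unity; inefficient ones score below unity, exactly the benchmark behavior described above. With real bank data, X and Y would simply have one column per input and output measure.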

To illustrate the relationship between DEA and economic production in its simplest form, consider the example shown in figure 1, in which firms use a single input to produce a single output. In this example, there are six firms whose inputs are denoted as [x.sub.i] and whose outputs are denoted as [y.sub.i](i = 1,2,...,6); their input-output combinations are labeled by [F.sub.s](s = 1,2,...,6). While the production frontier is generated by the input-output combinations for the firms labeled [F.sub.1], [F.sub.3], [F.sub.5] and [F.sub.6], the efficient portion of the production frontier is shown by the connected line segments. [F.sub.2] and [F.sub.4] are clearly DEA inefficient because they lie inside the frontier; [F.sub.6] is DEA inefficient because the same output can be produced with less input.

The Importance of Facets in DEA

"Facets" are an important concept used to evaluate a firm's efficiency in DEA. The efficiency measure in DEA is concerned with whether a firm can increase its output using the same inputs or produce the same output with fewer inputs. Consequently, only part of the entire efficiency frontier is relevant when evaluating the efficiency of a specific firm. The relevant portion of the efficiency frontier is called a facet. For example, in figure 1, only the facet from [F.sub.1] to [F.sub.3] is relevant for evaluating the efficiency of the firm designated by [F.sub.2]. Similarly, only the facet [F.sub.3] to [F.sub.5] is used to evaluate the firm denoted by [F.sub.4].(8)

The use of facets with DEA enables analysts to identify inefficient firms and, through comparison with efficient firms on relevant facets, to suggest ways in which the inefficient firms might improve their performance. As illustrated in figure 1, [F.sub.2] can become efficient by moving to some point on the [F.sub.1]-[F.sub.3] facet. In particular, it could move to A by simply using less input, to B by producing more output, or to C by both reducing input and increasing output. Of course, in this example, the analysis is obvious and the recommendation trivial. In more complicated, multiple input-multiple output cases, however, the appropriate efficiency recommendations would be much more difficult to discover without the DEA methodology.(9)

Scale Efficiency

In addition to measuring technological efficiency, DEA also provides information about scale efficiencies in production. Because the measure of scale efficiency in DEA analysis varies from model to model, care must be exercised. The scale efficiency measured for the DEA model used in this study, however, corresponds fairly closely to the microeconomic definition of economies of scale in the classical theory of production.(10)

To illustrate, consider the [F.sub.1]-[F.sub.3] facet in figure 2. Firms located on this facet exhibit increasing returns to scale because a proportionate rise in their input and output places them inside the production frontier. A proportionate decrease in their input and output is impossible because it would move them outside of the frontier. This is illustrated by a ray from the origin that passes through the [F.sub.1]-[F.sub.3] facet at [F'.sub.2].

Firms located on the [F.sub.3]-[F.sub.5] facet exhibit decreasing returns to scale because a proportionate decrease in their input and output places them inside the production frontier. A proportionate increase in their input and output is impossible because it would move them outside of the frontier.

Constant returns to scale occur if all proportionate increases or decreases in inputs and outputs move the firm either along or above the production frontier. In figure 2, for example, [F.sub.3] exhibits constant returns to scale because proportionate increases or decreases would place it outside the production frontier.

Since the facets are generated by efficient firms, the scale efficiency of these firms is determined by the properties of their particular facet. Scale efficiencies for inefficient firms are determined by their respective reference facets as well. Thus, [F.sub.2] and [F.sub.4] in figure 1 exhibit increasing and decreasing returns to scale, respectively.
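The returns-to-scale logic can be checked numerically on a piecewise-linear frontier. The vertex coordinates below are hypothetical, in the spirit of figure 2: average product y/x rises with scale along an increasing-returns facet and falls along a decreasing-returns facet.

```python
# Hypothetical frontier vertices echoing figure 2: F1 -> F3 -> F5,
# each point given as (input x, output y). Values are illustrative.
F1, F3, F5 = (2.0, 2.0), (4.0, 5.0), (7.0, 7.0)

def avg_product(point):
    x, y = point
    return y / x

# Along F1-F3, average product rises with scale: increasing returns.
increasing_rts = avg_product(F3) > avg_product(F1)
# Along F3-F5, average product falls with scale: decreasing returns.
decreasing_rts = avg_product(F5) < avg_product(F3)
```

At F3 itself, average product is maximized, which is the sense in which the vertex between the two facets exhibits constant returns to scale.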

DEA and Economic Efficiency

While the discussion of DEA in the context of technological efficiency of production is useful for illustrative purposes, it is far too narrow and limiting. DEA is frequently applied to questions and data that transcend the narrow focus of technical efficiency in production. For example, DEA is frequently applied to financial data when addressing questions of economic efficiency. In this regard, its application is somewhat more problematic. For example, when firms face different marginal costs of production due to regional or local wage differentials, one firm may appear inefficient relative to another. Given the potential differences in relative costs that a firm may face, however, it might be equally efficient. Alternatively, differences that appear to be due to economic inefficiencies may in fact be due to cost differences directly attributable to the non-homogeneity of products. Because of problems like these, DEA must be applied judiciously.

DEA Window Analysis

To this point, the discussion of DEA has been concerned with evaluating the relative efficiency of different firms at the same point in time. Those who use DEA, however, frequently employ a type of sensitivity analysis called "window analysis." The performance of a firm or its reference firms may be particularly "good" or "bad" at a given time because of factors that are external to the firm's relative efficiency. In addition, the number of firms that can be analyzed using the DEA model is virtually unlimited. Therefore, data on firms in different periods can be incorporated into the analysis simply by treating them as if they represented different firms. In this way, a given firm's performance at a given time can be compared with its own performance at other times and with the performance of other firms at the same and at different times. Through a sequence of such "windows," the sensitivity of a firm's efficiency score for a particular year can be derived under changing conditions and a changing set of reference firms.(11) A firm that is DEA efficient in a given year, regardless of the window, is likely to be truly efficient relative to other firms. Conversely, a firm that is DEA efficient only in a particular window may be efficient solely because of extraneous circumstances.

In addition, window analysis provides some evidence of the short-run evolution of efficiency for a firm over time. Of course, comparisons of DEA efficiency scores over extended periods may be misleading (or worse) because of significant changes in technology and the underlying economic structure.

APPLYING DEA TO BANKING:

AN EVALUATION OF 60 MISSOURI

COMMERCIAL BANKS

To demonstrate DEA's use, it is applied to evaluate relative efficiency in banking. Financial data for 60 of the largest Missouri commercial banks for 1984 (determined by their total assets in 1990) are used. Initially, the relative efficiency of these banks is examined using two alternative DEA models: the CCR model and the additive DEA model. A discussion of these alternative DEA models appears in appendix A. In extending the discussion and analysis, however, we focus solely on the CCR model.

Measuring Inputs and Outputs

Perhaps the most important step in using DEA to examine the relative efficiency of any type of firm is the selection of appropriate inputs and outputs. This is particularly true for banks because there is considerable disagreement over the appropriate inputs and outputs for banks. Previous applications of DEA to banks generally have adopted one of two approaches to justify their choice of inputs and outputs.(12)

The first "intermediary approach" views banks as financial intermediaries whose primary business is to borrow funds from depositors and lend those funds to others for profit. In these studies, the banks' outputs are loans (measured in dollars) and their inputs are the various costs of these funds (including interest expense, labor, capital and operating costs).

A second approach views banks as institutions that use capital and labor to produce loans and deposit account services. In these studies, the banks' outputs are their accounts and transactions, while their inputs are their labor, capital and operating costs; the banks' interest expenses are excluded in these studies.

Our analysis of 60 Missouri banks uses a variant of the intermediary approach. The banks' outputs are interest income (IC), non-interest income (NIC) and total loans (TL). Interest income includes interest and fee income on loans, income from lease-financing receivables, interest and dividend income on securities, and other interest income. Non-interest income includes service charges on deposit accounts, income from fiduciary activities and other non-interest income. Total loans consist of loans and leases, net of unearned income. These outputs represent the banks' revenues and major business activities.

The banks' inputs are interest expenses (IE), non-interest expenses (NIE), transaction deposits (TD), and non-transaction deposits (NTD). Interest expenses include interest on federal funds purchased and securities sold, and interest on demand notes and other borrowed money. Non-interest expenses include salaries, expenses associated with premises and fixed assets, taxes and other expenses. Bank deposits are disaggregated into transaction and non-transaction deposits because they have different turnover and cost structures. These inputs represent measures for the banks' labor, capital and operating costs. Deposits and funds purchased (measured by their interest expenses) are the source of loanable funds to be invested in assets.(13)

Evaluation of Missouri Bank

Management Performance in 1984

The DEA scores and returns to scale measures resulting from applying the CCR and additive DEA models are presented in table 1.(14) Although the overall results are similar across the two models, there are minor differences in the individual efficiency scores that may provide information about the relative efficiency of these banks.

The two models differ fundamentally in their definition of the efficiency frontier. In particular, the CCR model assumes constant returns to scale, while the additive model allows for the possibility of constant (C), increasing (I) or decreasing (D) returns. Because of this, banks that are efficient in the CCR model must also be efficient in the additive model. As table 1 illustrates for our Missouri banks, however, the converse is not true.

The overall efficiency score is composed of "pure" technical and "scale" efficiencies. In the CCR model, a firm that is technologically efficient also uses the most efficient scale of operation. In the additive model, however, the score represents only "pure" technical efficiency. By comparing the results of the CCR and additive models, we can see that, although five of our Missouri banks were technologically efficient, they were not operating at the most efficient scale of operation. The reader is cautioned, however, that this analysis excludes a number of factors (such as demographic characteristics of the markets in which they operate) that may be important in determining the most economically efficient scale of operation.

Since the efficiency scores are defined differently in the CCR and the additive DEA models, it is not possible to generate a measure of scale inefficiency using the results in table 1. Nevertheless, the fact that the efficiency scores from the two models are quite similar suggests that scale inefficiency is not a major source of overall inefficiency for these banks. It appears that the inefficient banks simply used too many inputs or produced too few outputs rather than choosing the incorrect scale for production.(15)

A Further Analysis of the CCR Model

An illustration of the use of DEA can be obtained by considering the data for the bank with the lowest efficiency score, bank 59. The results for this bank are summarized in table 2. The reference banks making up the facet to which bank 59 is compared and "lambda," a measure of the relative importance of each reference bank in the facet, are given. The table shows that three reference banks compose the facet for bank 59. Banks 51 and 39 play the major roles, while bank 27 is relatively unimportant.

(1) For more details, see Booker (1983), Korobow (1983) and Putnam (1983).

(2) The name DEA is attributed to Charnes, Cooper and Rhodes (1978). For the development of DEA, see Charnes, et al. (1985) and Charnes, et al. (1978); for some applications of DEA, see Banker, et al. (1984), Charnes, et al. (1990) and Sherman and Gold (1985).

(3) Although there is a vast literature analyzing competition and performance in the U.S. banking industry (e.g., Gilbert (1984), Ehlen (1983), Korobow (1983), Putnam (1983), Wall (1983) and Watro (1989)), actual banking efficiency has received limited attention. Recently, a few publications have used DEA or a similar approach to study the technical and scale efficiencies of commercial banks (e.g., Sherman and Gold (1985), Charnes et al. (1990), Rangan et al. (1988), Aly et al. (1990), and Elyasiani and Mehdian (1990)).

(4) See Koopmans (1951).

(5) This is also called "allocative efficiency" because a profit-maximizing firm must allocate its resources such that the technical rate of substitution is equal to the ratio of the prices of the resources. Theoretical considerations of allocative efficiency can be found in the articles by Banker (1984) and Banker and Maindiratta (1988).

(6) It is common to estimate production functions using regression analysis. When cross-section data are used, the estimated production function represents the average behavior of firms in the sample. Hence, the estimated production function depends upon the data for both efficient and inefficient firms. By imposing suitable constraints, these statistical procedures can be modified to orient the estimates toward frontiers. In this manner, the frontier of the production set can be estimated econometrically.

(7) DEA has two theoretical properties that are especially useful for its implementation. One is that the DEA model is mathematically related to a multi-objective optimization problem in which all inputs and outputs are defined as multiple objectives such that all inputs are minimized and all outputs are maximized simultaneously under the technology constraints. Thus, DEA-efficient DMUs represent Pareto optimal solutions to the multi-objective optimization problem, while a Pareto optimal solution does not necessarily imply DEA efficiency. Another important property is that DEA efficiency scores are independent of the units in which inputs and outputs are measured, as long as these are the same for all DMUs. These characteristics make the DEA methodology highly flexible. The only constraint set originally in the CCR model is that the values of inputs and outputs must be strictly positive. This constraint, however, has been abandoned in the new additive DEA formulation. As a consequence, the additive DEA model is used to compute reservation prices for new and disappearing commodities in the construction of price indexes by Lovell and Zieschang (1990).

(8) In a multiple-dimensional space, the efficiency frontier forms a polyhedron. In geometry, a portion of the surface of a polyhedron is called a facet; this is why the same term is used in DEA. These facets have important implications in empirical studies, such as the identification of competitors and strategic groups in an industry. See Day, Lewin, Salazar and Li (1989).

(9) For alternative measures of efficiency, see appendix B.

(10) See Fare, Grosskopf and Lovell (1985). Different DEA models employ different measures of scale efficiency. See appendixes A and B for details.

(11) This is called "panel data analysis" in econometrics.

(12) Some studies have adopted the simple rule that if it produces revenue, it is an output; if it requires a net expenditure, it is an input. For example, see Hancock (1989).

(13) This is controversial, however. Some researchers specify deposits as outputs, arguing that treating deposits as inputs makes banks that depend on purchased money look artificially efficient (see Berg et al., 1990).

(14) The results from solving the DEA model also include information about DEA scale efficiencies, the efficient projection on the efficiency frontier, slack variables [S.sup.+.sub.r] and [S.sup.-.sub.i] and the dual variables [mu.sub.r] and [v.sub.i]. The "dual" variables represent "shadow prices" for each input and output. That is, they represent the marginal effects of the input and output variables on the bank's DEA efficiency score. See appendix A for details.

(15) Similar results of insignificant scale inefficiency of U.S. banks have been reported by Aly et al. (1990).

The value measure in the first column in the lower half of the table gives the value of the outputs and the inputs for bank 59 in 1984. The second column gives the values that bank 59 would have to achieve in order to be DEA efficient. The difference between these numbers is presented in the third column.(16) Bank 59 should increase its total loans by 143 percent and its non-interest income by 6 percent. It should also reduce its interest expenses by 26.6 percent and each of its other three inputs by 24 percent.
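The percentage adjustments quoted above follow directly from the "value" and "difference" columns of table 2; a quick arithmetic check:

```python
# Figures copied from table 2 for bank 59 (outputs NIC, TL; inputs IE,
# NIE, TD, NTD). Percent adjustment = 100 * difference / current value.
value = {"TL": 22442.0, "NIC": 350.0, "IE": 7887.0,
         "NIE": 2182.0, "TD": 19915.0, "NTD": 77005.0}
diff = {"TL": 32157.8, "NIC": 21.9, "IE": 2102.7,
        "NIE": 523.6, "TD": 4779.0, "NTD": 18478.9}
pct = {k: 100.0 * diff[k] / value[k] for k in value}
# Total loans must rise about 143 percent and non-interest income about
# 6 percent; interest expenses fall about 26.6 percent and each of the
# other three inputs about 24 percent.
```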

[TABULAR DATA OMITTED]

Table 2 also presents a measure for bank 59 denoted as the "dual." This measure is important because the ratio of the duals for outputs and inputs shows the tradeoffs among increments or decrements in inputs and outputs in moving toward DEA efficiency, assuming the bank is free to vary all of its inputs and outputs. The fact that the dual for NIE is large relative to the others suggests that the biggest efficiency gains for bank 59 will come from decreasing non-interest expenses. A similar analysis can be conducted for each inefficient bank to determine its reference banks and the way in which it can become DEA efficient.
Table 2
Detailed Results for Bank 59

Efficiency score = .7600

Facet     51      39      27
Lambda    .315    .188    .037

                Value      Value if
             measures     efficient    Difference         Dual
Outputs
  IC          9,627.0       9,627.0            .0    .7895E-04
  NIC           350.0         371.9          21.9    .1000E-08
  TL         22,442.0      54,599.8      32,157.8    .3704E-10
Inputs
  IE          7,887.0       5,784.3       2,102.7    .4762E-09
  NIE         2,182.0       1,658.4         523.6    .2277E-03
  TD         19,915.0      15,136.0       4,779.0    .2780E-05
  NTD        77,005.0      58,526.1      18,478.9    .5815E-05


A Window Analysis

The available data cover a seven-year span from 1984 through 1990. A three-year window was chosen, allowing five windows. The windows and the periods they cover are as follows:

window 1 1984 1985 1986
window 2 1985 1986 1987
window 3 1986 1987 1988
window 4 1987 1988 1989
window 5 1988 1989 1990


In each window, the number of DMUs is tripled because each bank in each year is treated as an independent firm. Repeating the procedure discussed above for each window yields information about the evolution of each bank's DEA efficiency during the seven-year period. Table 3 lists the DEA scores of three banks by year in each window. The average of the 15 DEA efficiency scores is presented in the column denoted "mean." The column labeled GD indicates the greatest difference in a bank's DEA scores in the same year but in different windows. The column labeled TGD denotes the greatest difference in a bank's DEA scores over the entire period.
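The window construction itself is mechanical and easy to sketch; the bank count and window width below follow the text, while the code is purely illustrative:

```python
# Build three-year windows over 1984-1990, treating each bank-year in a
# window as a separate DMU, as described in the text.
years = list(range(1984, 1991))          # 1984 ... 1990
width = 3
windows = [years[i:i + width] for i in range(len(years) - width + 1)]

n_banks = 60
dmus_per_window = n_banks * width        # pooled DMUs evaluated per window
scores_per_bank = len(windows) * width   # 15 scores averaged in the "mean" column
```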

[TABULAR DATA OMITTED]

A bank can receive a different DEA efficiency score for the same year in different windows. This variation in the DEA scores of each bank reflects both the performance of that bank over time as well as that of other banks. The distribution of banks by their average efficiency over the five windows is presented in table 4.
Table 4
Distribution of Average DEA Scores
(1984-1990)

          Five-window average    Number
Model     DEA score              of banks
CCR       1.00                       1
          0.98 - 0.99                8
          0.96 - 0.97                4
          0.93 - 0.95               13
          0.91 - 0.92                7
          0.90                       3
          0.88 - 0.89                4
          0.86 - 0.87               10
          0.83 - 0.85                5
          0.80 - 0.82                3
          0.79                       1
          0.68                       1


Bank 48 was the only one that was efficient for every year in every window over the 1984-90 period. Its average efficiency score of 1.00 indicates that bank 48 was the strongest performer in the sample under the DEA evaluation.

Bank 41, on the other hand, began in the first window with scores of 0.84 in 1984, 0.85 in 1985 and 0.89 in 1986. In the second window, bank 41 had scores of 0.86 in 1985, 0.90 in 1986 and 0.94 in 1987. Although all of its efficiency scores fluctuated slightly in the other three windows, they tended to increase. With a gradual improvement in its DEA efficiency over the seven years, bank 41 was almost fully efficient in the last year, with a DEA score of 0.98. However, its average-efficiency score of 0.92 does not put it among the top 13 banks for the period.

In contrast to the banks previously discussed, bank 59 displayed relatively erratic and inefficient behavior over the entire seven-year period. Its average DEA score of 0.68 was the lowest of the 60 Missouri banks analyzed.

The window analysis enables us to identify the best and the worst banks in a relative sense, as well as the most stable and most variable banks in terms of their seven-year average DEA scores.

CONCLUDING REMARKS

The DEA methodology discussed in this article has the potential to provide crucial information about banks' financial conditions and management performance for the benefit of bank regulators, managers and bank stock investors. The DEA framework is extremely general, permitting multiple criteria for evaluation purposes. Moreover, DEA requires only data on the quantity of inputs and outputs; no price data are necessary. This is especially appealing in the analysis of banking because of the difficulties inherent in defining and measuring the prices of banks' inputs and outputs.

In addition, the DEA method is highly flexible. In particular, the selection of inputs and outputs has considerably fewer limitations than alternative econometric approaches. Nevertheless, if the analysis is to be useful, care must be exercised in the selection of inputs and outputs.

(16) In the case of outputs, this difference is a measure of "slack." In the case of inputs, however, the slack variable is more complicated.

REFERENCES

Ahn, T., A. Charnes, and W. W. Cooper, "Some Statistical and DEA Evaluations of Relative Efficiencies of Public and Private Institutions of Higher Learning," Socio-Economic Planning Sciences, Vol. 22, No. 6, 1988, pp. 259-69.

______. "Efficiency Characterizations in Different DEA Models," Socio-Economic Planning Sciences, Vol. 22, No. 6, 1988, pp. 253-57.

Aly, Hassan Y., Richard Grabowski, Carl Pasurka, and Nanda Rangan. "Technical, Scale, and Allocative Efficiencies in U.S. Banking: An Empirical Investigation," Review of Economics and Statistics (May 1990), pp. 211-18.

Amel, D., and L. Froeb. "Do Firms Differ Much?" Finance and Economics Discussion Series No. 87, Federal Reserve Board (August 1989).

Banker, Rajiv D. "Estimating Most Productive Scale Size Using Data Envelopment Analysis," European Journal of Operational Research Vol. 17 (1984), pp. 35-44.

Banker, Rajiv D., A. Charnes and W. W. Cooper. "Models for Estimating Technical and Scale Efficiencies," Management Science, Vol. 30, (1984), pp. 1078-92.

Banker, Rajiv D., R. F. Conrad and R. P. Strauss. "A Comparative Application of DEA and Translog Methods: An Illustrative Study of Hospital Production," Management Science Vol. 36 (1986), pp. 30-34.

Banker, Rajiv D., and Ajay Maindiratta. "Nonparametric Analysis of Technical and Allocative Efficiencies in Production," Econometrica (November 1988), pp. 1315-32.

Berg, S. A., F. R. Forsund, and E. S. Jansen. "Deregulation and Productivity Growth in Norwegian Banking 1980-1988: A Non-parametric Frontier Approach," (Bank of Norway, 1990).

Booker, Irene O. "Tracking Banks from Afar: A Risk Monitoring System," Federal Reserve Bank of Atlanta Economic Review (November 1983), pp. 36-41.

Bovenzi, John F., James A. Marino, and Frank E. McFadden. "Commercial Bank Failure Prediction Models," Federal Reserve Bank of Atlanta Economic Review (November 1983), pp. 14-26.

Charnes, A., W. W. Cooper, B. Golany, L. Seiford and J. Stutz. "Foundations of Data Envelopment Analysis for Pareto-Koopmans Efficient Empirical Production Functions," Journal of Econometrics (November 1985), pp. 91-107.

Charnes, A., W. W. Cooper, Z. M. Huang and D. B. Sun. "Polyhedral Cone-Ratio DEA Models with an Illustrative Application to Large Commercial Banks," Journal of Econometrics (October/November 1990), pp. 73-91.

Charnes, A., W. W. Cooper and E. Rhodes. "Measuring the Efficiency of Decision Making Units," European Journal of Operational Research Vol. 2 (1978), pp. 429-44.

Day, D. L., A. Y. Lewin, R. J. Salazar, and H. Li. "Strategic Leaders in the U.S. Brewing Industry: A Longitudinal Analysis of Outliers," presented at the conference on New Uses of DEA, Austin, Texas, September 27-29, 1989.

Ehlen, James G. Jr. "A Review of Bank Capital and its Adequacy," Federal Reserve Bank of Atlanta Economic Review (November 1983), pp. 54-60.

Elyasiani, Elyas, and Seyed M. Mehdian. "A Nonparametric Approach to Measurement of Efficiency and Technological Change: The Case of Large U.S. Commercial Banks," Journal of Financial Services Research (July 1990), pp. 157-68.

Fare, Rolf, Shawna Grosskopf, and C. A. K. Lovell. The Measurement of Efficiency of Production (Kluwer-Nijhoff, 1985).

Fare, Rolf, and W. Hunsaker. "Notions of Efficiency and Their Reference Sets," Management Science, Vol. 32 (February 1986), pp. 237-43.

Gilbert, R. Alton. "Bank Market Structure and Competition, A Survey," Journal of Money, Credit, and Banking (November 1984, Part 2), pp. 617-45.

Grosskopf, Shawna. "The Role of the Reference Technology in Measuring Productive Efficiency," The Economic Journal (June 1986), pp. 499-513.

Hancock, Diana. "Bank Profitability, Deregulation, and the Production of Financial Services," Research Working Paper 89-16, Federal Reserve Bank of Kansas City (December 1989).

Koopmans, T.C. "An Analysis of Production as an Efficient Combination of Activities," in T.C. Koopmans, ed., Activity Analysis of Production and Allocation, Cowles Commission for Research in Economics, Monograph No. 13 (John Wiley and Sons, Inc., 1951).

Korobow, Leon, and David P. Stuhr. "The Relevance of Peer Groups in Early Warning Analysis," Federal Reserve Bank of Atlanta Economic Review (November 1983), pp. 27-34.

Lovell, C. A. K., and K. D. Zieschang. "A DEA Approach to the Problem of New and Disappearing Commodities in the Construction of Price Indexes," presented in the Sixth World Congress of the Econometric Society, Barcelona, Spain, August 21-28, 1990.

Noonan, John H., and Susan Kay Fetner. "Capital and Capital Standards," Federal Reserve Bank of Atlanta Economic Review (November 1983), pp. 50-53.

Putnam, Barron H. "Concepts of Financial Monitoring," Federal Reserve Bank of Atlanta Economic Review (November 1983), pp. 6-13.

Rangan, Nanda, Richard Grabowski, Hassan Y. Aly, and Carl Pasurka. "The Technical Efficiency of U.S. Banks." Economics Letters Vol. 28, No. 2 (1988), pp. 169-75.

Sherman, H. David, and Franklin Gold. "Bank Branch Operating Efficiency: Evaluation with Data Envelopment Analysis," Journal of Banking and Finance (June 1985), pp. 297-315.

Thrall, R. M. "Overview and Recent Development in DEA: The Mathematical Programming Approach," paper presented at the IC² Institute, Conference Proceedings, University of Texas at Austin, October 1989.

Wall, L. "Why Are Some Banks More Profitable Than Others?" Working Paper Series No. 12, Federal Reserve Bank of Atlanta (November 1983).

Watro, Paul R. "Have the Characteristics of High-Earning Banks Changed? Evidence From Ohio," Economic Commentary, Federal Reserve Bank of Cleveland (September 1, 1989).

Whalen, Gary. "Concentration and Profitability in Non-MSA Banking Markets," Federal Reserve Bank of Cleveland Economic Review (1:1987), pp. 2-9.

Zukhovitskiy, S. I., and L. I. Avdeyeva. Linear and Convex Programming (W.B. Saunders Company, 1966).

(16) In the case of outputs, this difference is a measure of "slack." In the case of inputs, however, the slack variable is more complicated.

Appendix A

A Comparison of the CCR and Additive Models

The CCR Ratio Model

The most important characteristics of the DEA methodology can be presented with the CCR Ratio Model. Consider a general situation in which n decision making units, DMUs, convert the same m inputs into the same s outputs; the quantities of these inputs and outputs can differ across DMUs. In more precise notation, the j-th DMU uses an m-dimensional input vector, $x_{ij}$ $(i = 1, 2, \ldots, m)$, to produce an s-dimensional output vector, $y_{rj}$ $(r = 1, 2, \ldots, s)$. The particular DMU being evaluated is identified by subscript 0; all others are denoted by subscript j. The following optimization problem is formed for each DMU:

$$\max h_0 = \frac{\sum_{r=1}^{s} u_r y_{r0}}{\sum_{i=1}^{m} v_i x_{i0}}$$

subject to the constraints:

$$\frac{\sum_{r=1}^{s} u_r y_{rj}}{\sum_{i=1}^{m} v_i x_{ij}} \leq 1, \qquad u_r \geq 0, \quad v_i \geq 0,$$

for $i = 1, 2, \ldots, m$; $r = 1, 2, \ldots, s$; $j = 1, 2, \ldots, n$,

where the output weights $u_r$ $(r = 1, 2, \ldots, s)$ and the input weights $v_i$ $(i = 1, 2, \ldots, m)$ are required to be non-negative (i.e., $u_r, v_i \geq 0$ for $r = 1, 2, \ldots, s$; $i = 1, 2, \ldots, m$).

The "virtual output" is the sum $\sum_{r=1}^{s} u_r y_{rj}$ and the "virtual input" is the sum $\sum_{i=1}^{m} v_i x_{ij}$. The objective function, $h_0$, is the ratio of virtual output to virtual input. The solution is a set of optimal input and output weights, and the maximum of the objective function is the DEA efficiency score assigned to $DMU_0$. The first set of inequality constraints guarantees that the efficiency ratios of the other DMUs (computed with the same weights $u_r$ and $v_i$) do not exceed unity; the remaining constraints simply require all input and output weights to be non-negative. Since every DMU can serve as $DMU_0$, this optimization problem is well defined for every DMU. Because the weights $(v_i, u_r)$ and the observed inputs and outputs $(x_{ij}, y_{rj})$ are all non-negative and the constraints must be satisfied by $DMU_0$ itself, the maximum value of $h_0$ can only be a positive number less than or equal to unity. If the efficiency score $h_0 = 1$, $DMU_0$ satisfies the necessary condition to be DEA efficient; otherwise, it is DEA inefficient.

The above problem cannot be solved as stated because of difficulties associated with nonlinear (fractional) mathematical programming. Charnes and Cooper, however, have developed a mathematical transformation (the so-called "CC transformation") which converts the above nonlinear programming problem into a linear one. Existing duality theory and simplex algorithms in linear programming are used to solve the transformed problem.(1)
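The transformation is brief enough to state here (the notation $t$, $\mu_r$, $\nu_i$ anticipates problem 2 below; the scalar $t$ itself is not introduced in the text). Normalizing by the virtual input,

$$t = \left( \sum_{i=1}^{m} v_i x_{i0} \right)^{-1}, \qquad \mu_r = t u_r, \qquad \nu_i = t v_i,$$

turns the objective into the linear form $\max \sum_{r=1}^{s} \mu_r y_{r0}$ with the normalization $\sum_{i=1}^{m} \nu_i x_{i0} = 1$, while each ratio constraint becomes the linear constraint $\sum_{r=1}^{s} \mu_r y_{rj} - \sum_{i=1}^{m} \nu_i x_{ij} \leq 0$.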

For a linear programming problem, there exists a pair of problems that are "dual" to each other. The CCR ratio model is formed by problems 1 and 2 below:

Problem 1:

$$\min h_0 = \theta_0 - \varepsilon \left( \sum_{i=1}^{m} s_i^- + \sum_{r=1}^{s} s_r^+ \right)$$

subject to

$$\theta_0 x_{i0} - \sum_{j=1}^{n} x_{ij} \lambda_j - s_i^- = 0, \qquad \sum_{j=1}^{n} y_{rj} \lambda_j - s_r^+ = y_{r0},$$

$$\lambda_j \geq 0, \quad s_i^- \geq 0, \quad s_r^+ \geq 0,$$

for $i = 1, \ldots, m$; $r = 1, \ldots, s$; $j = 1, \ldots, n$.

Problem 2:

$$\max y_0 = \sum_{r=1}^{s} \mu_r y_{r0}$$

subject to

$$\sum_{i=1}^{m} \nu_i x_{i0} = 1, \qquad \sum_{r=1}^{s} \mu_r y_{rj} - \sum_{i=1}^{m} \nu_i x_{ij} \leq 0, \qquad \mu_r \geq \varepsilon, \quad \nu_i \geq \varepsilon,$$

for $i = 1, \ldots, m$; $r = 1, \ldots, s$; $j = 1, \ldots, n$.

As before, the subscript 0 represents the DMU being evaluated, $x_{ij}$ denotes input $i$ and $y_{rj}$ denotes output $r$ of $DMU_j$, and $\mu_r$ and $\nu_i$ represent the weights for outputs and inputs, respectively. An arbitrarily small positive number, $\varepsilon$, is introduced to ensure that all of the observed inputs and outputs have positive shadow prices and that the optimal value $h_0$ is not affected by the values assigned to the so-called "slack variables" ($s_r^+$ or $s_i^-$).(2)
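To make these linear programs concrete, the following is a minimal sketch (not from the article) that solves the multiplier form of problem 2 with SciPy's `linprog` for five hypothetical DMUs with two inputs and one output. The data, the value of $\varepsilon$ and the function name `ccr_score` are illustrative assumptions.

```python
# CCR multiplier model (problem 2), solved once per DMU.
# Hypothetical data: 5 DMUs, 2 inputs, 1 output (all outputs equal).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0],   # inputs x_ij, one row per DMU
              [3.0, 2.0],
              [4.0, 6.0],
              [5.0, 3.0],
              [6.0, 5.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])   # outputs y_rj
n, m = X.shape
s = Y.shape[1]
eps = 1e-6                  # the non-Archimedean epsilon of the text

def ccr_score(j0):
    """DEA efficiency score for DMU j0 via the multiplier form."""
    # decision vector z = (mu_1..mu_s, nu_1..nu_m)
    c = np.concatenate([-Y[j0], np.zeros(m)])             # maximize mu'y_0
    A_ub = np.hstack([Y, -X])                             # mu'y_j - nu'x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None, :]  # nu'x_0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (s + m))
    return -res.fun

scores = [round(ccr_score(j), 3) for j in range(n)]
print(scores)   # → [1.0, 1.0, 0.571, 0.667, 0.471]
```

The first two DMUs span the frontier and score unity; the others are scored by how far their virtual-output/virtual-input ratio falls short of the best attainable with their own optimal weights.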

The main conclusions from the CCR model are summarized as follows:

1. The optimal values of $s_r^+$, $s_i^-$ and $\lambda_j$ from problem 1 must be non-negative. The following inequalities are then satisfied:

$$y_{r0} \leq \sum_{j=1}^{n} y_{rj} \lambda_j \quad \text{and} \quad \theta_0 x_{i0} \geq \sum_{j=1}^{n} x_{ij} \lambda_j,$$

for $r = 1, \ldots, s$; $i = 1, \ldots, m$.

2. Technical efficiency will be achieved if, and only if, all of the following conditions are satisfied:

$$\theta_0 = 1 \quad \text{and} \quad s_r^+ = 0, \; s_i^- = 0, \qquad \text{for } i = 1, \ldots, m; \; r = 1, \ldots, s.$$

The condition $\theta_0 = 1$ ensures that $DMU_0$ is located on the production frontier; the conditions $s_r^+ = 0$ and $s_i^- = 0$ exclude situations such as $F_6$ in figure 1 of the text.

3. Constant returns to scale hold for $DMU_0$ if $\sum_{j=1}^{n} \lambda_j = 1$; otherwise, $\sum_{j=1}^{n} \lambda_j > 1$ implies decreasing returns to scale and $\sum_{j=1}^{n} \lambda_j < 1$ implies increasing returns to scale.

4. An adjustment can be made to move (or project) an inefficient $DMU_0$ onto the efficiency frontier. The projection $(x^*, y^*)$ in the CCR model is formed by the following formulas:

$$x_{i0}^* = \theta_0 x_{i0} - s_i^-, \quad i = 1, \ldots, m; \qquad y_{r0}^* = y_{r0} + s_r^+, \quad r = 1, \ldots, s.$$

The differences $(x_{i0} - x_{i0}^*)$, $i = 1, \ldots, m$, represent the amounts by which inputs must be reduced, and $(y_{r0}^* - y_{r0})$, $r = 1, \ldots, s$, the amounts by which outputs must be increased, to move $DMU_0$ onto the efficiency frontier. Hence, these differences provide diagnostic information about the inefficiency of $DMU_0$.

5. Problem 1 is defined as the "primal" problem, while problem 2 is the "dual." The dual variables have the economic interpretation of "shadow prices": the value of $\nu_i$ indicates the marginal effect of input $x_{i0}$ on the DEA efficiency score, and the value of $\mu_r$ indicates the marginal effect of output $y_{r0}$ on the DEA efficiency score. A comparison of these dual variables provides information on the relative importance of inputs and outputs in the DEA evaluation.

6. In the CCR model, problem 1 (or problem 2) is solved for each DMU. Theoretically, there is no limit on the number of DMUs that can enter the DEA model; hence, the DEA model can perform an efficiency diagnosis for many DMUs.
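The envelopment form of problem 1 can be sketched the same way. The self-contained block below uses hypothetical data (5 DMUs, 2 inputs, 1 output) to recover $\theta_0$, the $\lambda_j$ and the slacks for one inefficient DMU, then applies the projection formulas of conclusion 4; the data and function name are illustrative assumptions.

```python
# Envelopment form (problem 1): min theta - eps*(sum of slacks),
# then project the evaluated DMU onto the frontier (conclusion 4).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 6.0], [5.0, 3.0], [6.0, 5.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
n, m = X.shape
s = Y.shape[1]
eps = 1e-6

def envelopment(j0):
    # decision vector z = (theta, lambda_1..n, s^-_1..m, s^+_1..s)
    c = np.zeros(1 + n + m + s)
    c[0] = 1.0                      # minimize theta ...
    c[1 + n:] = -eps                # ... minus eps times the slack sum
    A_eq = np.zeros((m + s, 1 + n + m + s))
    b_eq = np.zeros(m + s)
    for i in range(m):              # theta*x_i0 - sum_j x_ij*lam_j - s_i^- = 0
        A_eq[i, 0] = X[j0, i]
        A_eq[i, 1:1 + n] = -X[:, i]
        A_eq[i, 1 + n + i] = -1.0
    for r in range(s):              # sum_j y_rj*lam_j - s_r^+ = y_r0
        A_eq[m + r, 1:1 + n] = Y[:, r]
        A_eq[m + r, 1 + n + m + r] = -1.0
        b_eq[m + r] = Y[j0, r]
    bounds = [(None, None)] + [(0.0, None)] * (n + m + s)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    theta, lam = res.x[0], res.x[1:1 + n]
    s_minus, s_plus = res.x[1 + n:1 + n + m], res.x[1 + n + m:]
    x_star = theta * X[j0] - s_minus    # projected inputs
    y_star = Y[j0] + s_plus             # projected outputs
    return theta, lam, x_star, y_star

theta, lam, x_star, y_star = envelopment(3)   # DMU 3, inputs (5, 3)
print(round(theta, 3))                        # → 0.667
```

For this DMU the projection lands exactly on an efficient peer, so the $\lambda_j$ identify its reference set as well as the required input reductions.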

Why is this approach referred to as data envelopment analysis? The main inequalities in conclusion 1,

$$y_{r0} \leq \sum_{j=1}^{n} y_{rj} \lambda_j \quad \text{and} \quad \theta_0 x_{i0} \geq \sum_{j=1}^{n} x_{ij} \lambda_j,$$

for $r = 1, \ldots, s$; $i = 1, \ldots, m$,

are constraints to be satisfied at the optimal solution. The first inequality implies that the output of $DMU_0$ cannot exceed the linear combination of all observed outputs $y_{rj}$; thus, the optimal solution creates a hyperplane that envelops the output of $DMU_0$ from above. Similarly, the second constraint implies that the optimal solution creates another hyperplane that envelops the input of $DMU_0$ from below. Since both the outputs and the inputs of the DMU being evaluated are enveloped from above and below, the name "data envelopment analysis" matches the geometric interpretation of the procedure.

To see how this works, assume there is a group of DMUs that produce the same outputs using the same inputs, but in varying amounts. In ranking the efficiencies of these DMUs, DEA assigns weights to the outputs and inputs of each DMU. These weights are neither predetermined nor based on prior information or the preferences of decision makers; instead, each DMU receives a set of "optimal" weights determined by solving the mathematical programming problem above. This procedure generates a DEA efficiency score for the DMU being evaluated based on the solution values for the input and output weights. A set of constraints guarantees that no DMU, including the one being evaluated, can obtain an efficiency score exceeding unity. In this way, DEA derives a measure of relative efficiency for each DMU in the case of multiple inputs and outputs.

The Additive Model

Among DEA models, the additive model has been important in applications. The additive model can be formalized as the following two problems, which are dual to each other.(3)

Problem 3:

$$\max \sum_{i=1}^{m} \frac{s_i^-}{|x_{i0}|} + \sum_{r=1}^{s} \frac{s_r^+}{|y_{r0}|}$$

subject to

$$x_{i0} - \sum_{j=1}^{n} x_{ij} \lambda_j - s_i^- = 0, \qquad \sum_{j=1}^{n} y_{rj} \lambda_j - s_r^+ = y_{r0}, \qquad \sum_{j=1}^{n} \lambda_j = 1,$$

$$\lambda_j \geq 0, \quad s_i^- \geq 0, \quad s_r^+ \geq 0,$$

for $i = 1, \ldots, m$; $r = 1, \ldots, s$; $j = 1, \ldots, n$.

Problem 4:

$$\min \sum_{r=1}^{s} \mu_r y_{r0} + \sum_{i=1}^{m} \nu_i x_{i0} + u_0$$

subject to:

$$\sum_{r=1}^{s} \mu_r y_{rj} + \sum_{i=1}^{m} \nu_i x_{ij} + u_0 \geq 0, \qquad \nu_i \geq \frac{1}{|x_{i0}|}, \qquad \mu_r \leq -\frac{1}{|y_{r0}|},$$

for $i = 1, \ldots, m$; $r = 1, \ldots, s$; $j = 1, \ldots, n$.

Compared with the CCR model, the additive model introduces another constraint, $\sum_{j=1}^{n} \lambda_j = 1$, and a new variable, $u_0$. The new constraint in problem 3 ensures that the efficiency frontier is constructed from convex combinations of the original data points rather than from a convex cone as in the CCR model. The new variable $u_0$ in problem 4 is used to identify returns to scale. The other variables in the additive model have interpretations similar to those in the CCR model.
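A sketch of the primal additive model (problem 3) makes the role of the convexity constraint visible. The block is self-contained, with hypothetical data (5 DMUs, 2 inputs, 1 output); the data and the function name `additive` are illustrative assumptions.

```python
# Additive model (problem 3): maximize the sum of normalized slacks
# subject to the convexity constraint sum_j lambda_j = 1 (VRS frontier).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 6.0], [5.0, 3.0], [6.0, 5.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
n, m = X.shape
s = Y.shape[1]

def additive(j0):
    # decision vector z = (lambda_1..n, s^-_1..m, s^+_1..s)
    c = np.zeros(n + m + s)
    c[n:n + m] = -1.0 / np.abs(X[j0])    # maximize s^-_i / |x_i0| ...
    c[n + m:] = -1.0 / np.abs(Y[j0])     # ... plus s^+_r / |y_r0|
    A_eq = np.zeros((m + s + 1, n + m + s))
    b_eq = np.zeros(m + s + 1)
    for i in range(m):                   # sum_j x_ij*lam_j + s_i^- = x_i0
        A_eq[i, :n] = X[:, i]
        A_eq[i, n + i] = 1.0
        b_eq[i] = X[j0, i]
    for r in range(s):                   # sum_j y_rj*lam_j - s_r^+ = y_r0
        A_eq[m + r, :n] = Y[:, r]
        A_eq[m + r, n + m + r] = -1.0
        b_eq[m + r] = Y[j0, r]
    A_eq[m + s, :n] = 1.0                # convexity: sum_j lam_j = 1
    b_eq[m + s] = 1.0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, None)] * (n + m + s))
    return res.x[:n], res.x[n:n + m], res.x[n + m:]

lam, s_minus, s_plus = additive(3)       # DMU 3, inputs (5, 3)
print(s_minus)                           # input slacks toward the VRS frontier
```

Because the slacks are maximized rather than a radial contraction minimized, the reference point chosen here need not coincide with the radial projection of the CCR model, which is exactly the distinction figure A.1 illustrates.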

In addition, the additive model and the CCR ratio model differ in the way they locate the efficient reference point on the facet. In figure A.1, an output isoquant consists of the input combinations of five firms ($F_1$, $F_2$, $F_3$, $F_4$ and $F_5$) in the case of one output ($y$) and two inputs ($x_1$ and $x_2$). Point $F_5$ represents an inefficient DMU that uses more of $x_1$ and $x_2$ to produce the same amount of output as its efficient reference DMUs, $F_2$ and $F_3$. In the CCR ratio model, the efficiency score is determined by the value $h_0$, which can be interpreted in terms of the ray from the origin to $F_5$: $h_0$ equals the length of the segment from the origin to the intersection point B divided by the length of the segment from the origin to $F_5$. In the additive model, however, the efficient reference point on the facet $F_2$-$F_3$ is denoted by A, which is determined by maximizing the sum of the slacks, $s_1 + s_2$. Geometrically, the slack variables are represented by the horizontal line starting from $F_5$ and the vertical line extending to the facet $F_2$-$F_3$; point A is selected so that the sum of the lengths of these two lines is maximized. The DEA efficiency score in the additive model that we used is computed by the following formula:

[Mathematical Expression Omitted]

where $x_{i0}^*$ and $y_{r0}^*$ are the corresponding inputs and outputs of the efficient reference point, such as point A.

(1) This transformation also opens the way for many different DEA models that are more refined, more flexible or more convenient for computation. These DEA models (the BCC model, the additive DEA model, the cone-ratio DEA model and the CCW model) and their mathematical characteristics are beyond the scope of this paper.

(2) For the $\varepsilon$-method, see Zukhovitskiy and Avdeyeva (1966), pp. 46-51.

(3) See Charnes et al. (1985).

The DEA scale efficiency in the additive model is identified by the variable $u_0$ in problem 4 according to the following criteria:

If $u_0 = 0$, $DMU_0$ exhibits constant returns to scale; otherwise, $u_0 > 0$ implies decreasing returns to scale and $u_0 < 0$ implies increasing returns to scale.

The value of the variable $u_0$ is part of an optimal solution of the additive model and is reported by the computer code as the facet rate, $-u_0$.

Appendix B

Data Envelopment Analysis: An Alternative Approach

In measuring and evaluating technical and scale efficiencies there are two basic approaches: the DEA technique developed by Charnes, Cooper and others in operations research, and the approach developed by Farrell, Fare and Grosskopf, among others, in economics.(1) The latter approach is based upon a set of axioms on production technology that define the concept of efficiency. Connections between the two approaches have been investigated by Banker, Charnes and Cooper (1984) and by Fare and Hunsaker (1986).

Both approaches share the characteristic that there is no need to specify a production or cost function or to estimate its parameters. They are therefore nonparametric, nonstochastic techniques that can be used to construct a multiproduct frontier relative to which the efficiency measures of the entities in the sample are calculated. Because the frontier in these approaches is generated by the data and all observations are enveloped by the frontier, both approaches can be viewed as data envelopment analysis. In this appendix, some of the differences and similarities between the CCR and additive models, on the one hand, and the Farrell and Russell models, on the other, are discussed.

The choice of the efficient reference point on the relevant frontier is a major difference among these DEA models. In the Farrell or Russell models, three measures of technical efficiency can be defined: input, output and graph efficiency measures.

Using the input efficiency measure, the observed output vector is fixed and the search for the efficient reference point is constrained to proportional reductions of inputs until the efficient frontier is reached. The "ratio of contraction," as it is called, is the ratio of the efficient input level to the current input level (in the Farrell input model).

Using the output efficiency measure, the observed input vector is fixed and outputs are proportionally expanded until the efficient frontier is reached. The "stretch ratio" of the output, as it is called, is the ratio of efficient output to the current level of output (in the Farrell output model).

For the graph efficiency measure, both input and output vectors are varied. Inputs are reduced and outputs are expanded, both proportionally, with the input ratio reciprocal to the output ratio.
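In the axiomatic notation of this literature (the technology set $T$ of feasible input-output pairs is an assumption of this sketch; it is not defined in the text), the three radial measures can be written as

$$F_{in}(x, y) = \min \{ \theta : (\theta x, y) \in T \}, \qquad F_{out}(x, y) = \max \{ \phi : (x, \phi y) \in T \}, \qquad F_{gr}(x, y) = \min \{ \delta : (\delta x, \delta^{-1} y) \in T \}.$$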

In the case of figure 1 in the text, A is the reference point for the input efficiency measure, B is the reference point for the output efficiency measure and C might be the reference point for the graph efficiency measure. These three efficiency measures can be classified as radial because proportional changes of inputs and/or outputs are used in defining them.

To illustrate the input efficiency measure, ray $OF_3$ in figure 3 of the text is used to represent the optimal scale that would be generated by long-run competitive equilibrium. The overall input efficiency measure is defined with respect to the ray $OF_3$, while input pure technical efficiency is defined with respect to the line segments connecting $F_1$, $F_3$ and $F_5$. The measure of overall input technical efficiency, $KD/KF_2$, can be decomposed into the measure of pure technical input efficiency, given by the ratio $KA/KF_2$, and the measure of input scale efficiency, given by the ratio $KD/KA$. When the scale efficiency equals unity, constant returns to scale hold; otherwise, non-increasing or varying returns to scale hold.

It is clear from these examples that, in general, these radial efficiency measures will differ. Moreover, there is nothing to guarantee that a firm that is output efficient by this measure is also input efficient, or vice versa. For example, the firm denoted by $F_6$ in figure 1 of the text is output efficient by the output efficiency measure but is not input efficient (see Fare, Grosskopf and Lovell (1985)). However, the Farrell input efficiency measure is the reciprocal of the Farrell output efficiency measure if, and only if, the technology is homogeneous of degree one. Because this condition is satisfied by constant returns to scale technology, the Farrell input and output efficiency measures are "identical" in this case. For models with other technologies, simple relationships between input and output efficiency measures do not hold.

An improvement offered by the Russell model over the radial measures is its use of non-radial efficiency measures: the requirement of proportional changes of inputs and/or outputs in searching for the efficient reference point is abandoned.

Moreover, different piecewise linear technologies can be accommodated in both the Farrell and Russell models to meet the needs of various users. For example, to measure scale efficiency, we can use constant returns to scale, non-increasing returns to scale or varying returns to scale technologies. These technology constraints are easily imposed through corresponding restrictions on the "intensity parameters" of the Farrell or Russell models.

In the CCR or additive DEA model discussed in appendix A, however, only one efficiency measure is defined: the CCR model uses the radial measure of efficiency while the additive model uses the non-radial measure.

Geometrically, the efficiency frontier with constant returns to scale technology is a convex cone, but it is a convex hull in cases of both non-increasing and varying returns to scale. In general, these constraints on technology form a chain such that one efficiency frontier is enveloped by another. Consequently, the associated efficiency measures are compatible and nested.(2)

As presented in appendix A, the CCR model has a convex-cone efficiency frontier, which implies a technology with constant returns to scale. The additive model uses a convex hull as its efficiency frontier, which is associated with varying returns to scale. Even though the efficiency frontier of the additive model is enveloped by the efficiency frontier of the CCR model, the efficiency scores given by the two models are not comparable because one uses a radial measure while the other uses a non-radial measure. The efficiency ratio of the CCR model is identical to the Farrell input efficiency measure (or the reciprocal of the output efficiency measure) with constant returns to scale technology. Although both the additive and Russell models define non-radial efficiency measures, the definitions are not identical; hence, the efficiency measures given by these models are not comparable either.

Using our 1984 data on 60 Missouri commercial banks, we applied the Farrell model with input and output efficiency measures and different technology constraints. The overall technical efficiencies and scale efficiencies are presented in table B.1. The reported results are based upon the input measure of efficiency.

[TABULAR DATA OMITTED]

Comparing table B.1 with table 1 in the text, we can see that the CCR model and the Farrell input model give identical technical efficiency measures and the same classification of returns to scale. The Farrell input scale efficiency measures in table B.1 indicate that scale inefficiency was not a major source of technical inefficiency in this group of banks. For a few of the banks in the sample, however, scale inefficiency might be a problem.

(1) See Fare and Hunsaker (1986); Fare, Grosskopf and Lovell (1985).

(2) See Grosskopf (1986).
COPYRIGHT 1992 Federal Reserve Bank of St. Louis

Author: Yue, Piyu
Publication: Federal Reserve Bank of St. Louis Review
Date: Jan 1, 1992