
Lessons lost in linearity: a critical assessment of the general usefulness of LEN models in compensation research.

Abstract: As the Linear-Exponential-Normal (LEN) principal-agent model has gained in popularity, it has also increasingly been applied to problems inherently inconsistent with the assumptions underpinning this fragile framework. Concerns regarding LEN analysis in such cases are generally countered by appealing to the results' intuitive nature. When consistent with economic intuition, it is argued, the constraints the LEN framework imposes on such problems are likely inconsequential. This paper demonstrates in two simple cases that even seemingly intuitive results of non-optimal LEN analysis can be fundamentally different from the results that obtain from analysis of the unrestricted programs. Accordingly, the paper makes a case for limiting LEN-type analysis to settings that are actually compatible with this framework.

INTRODUCTION

Over little more than a decade now, the linear principal-agent framework initially developed by Holmstrom and Milgrom (1987, 1991) has evolved to become an ever more popular vehicle for studying issues pertaining to the provision of incentives. It is hardly surprising that the framework has gained such a strong following. It is tractable to the point of allowing closed-form solutions for the variables of interest even in highly complex settings. As such, I have actually found it quite useful in the past. The model's simple elegance truly does allow one to attempt to address institutional and economic issues rather than technical problems. (1) This certainly has not been lost on people who are interested in the relative importance of accounting information and equity prices in optimal incentive arrangements.

Unfortunately, the tractability of the model has led some researchers to apply the linear principal-agent model in settings that are clearly incompatible with its very foundations. (2) Probably the first and certainly the most influential paper to promote this departure from fundamental information economic norms and principles in accounting research is Feltham and Xie (1994). Their study focused on the notion of congruity in a setting where many actions control a given performance measure and where the noise terms in the available performance measures are correlated. It is immediately clear, however, that their linear model cannot be legitimized by (reference to) Holmstrom and Milgrom (1987). (3)

Thus lacking a theoretical justification, papers following Feltham and Xie (1994) instead appeal to the Linear-Exponential-Normal (LEN) model's tractability as well as the intuitive nature of its results to justify this fundamental break with classical agency theory. (4) As long as the general insights produced by LEN models are likely to correspond to those that would follow from the analysis of the correct program, the argument goes, the tractability of the LEN framework and its ability to generate tangible results justify its use even in cases where, as in Feltham and Xie (1994), linear contracts are not optimal. Although this argument may sound reasonable, the problem is that it is impossible to know whether the results produced by the LEN framework are actually consistent with the analysis of optimal contracts unless one carries out the latter analysis as well. But since the LEN approach is often used precisely where the analysis of optimal contracts seems too complicated, such verification is rarely if ever provided. Moreover, if the general analysis could actually be done, there would be no reason to restrict attention to linear contracts in the first place.

Given this paradox, the test that researchers often apply in defense of knowingly suboptimal but tractable LEN analysis is one of "reasonableness": If the results of the analysis correspond to economic intuition, then the LEN assumptions are likely to be benign. Although this appears reasonable on the surface, it represents the key fallacy of less-than-second-best LEN principal-agent analysis. The basic trade-off present in the LEN framework fails to capture many (most?) of the economic subtleties at the heart of principal-agent theory. Accordingly, when applied to standard accounting questions LEN rarely (if ever) produces any results that are substantially at odds with common wisdom or that represent a challenge to basic intuition. This is, of course, in stark contrast to the original and important role agency theory has played in accounting research--to challenge straightforward intuition and expose the shortcomings of relying on common wisdom.

To illustrate the significance of this problem, this paper demonstrates, in two specific settings where the LEN approach is not an appropriate tool, that while the LEN analysis produces seemingly intuitive results, these results are actually fundamentally different from the results that obtain from the study of the unrestricted model and thus are not supported by agency theory. As a basis for this exercise, I first provide an introduction to the development of the LEN framework from the fundamentals pioneered by Holmstrom and Milgrom (1987). The purpose of doing so is twofold. The first is to identify the main assumptions and their role in arriving at a structure that can be safely analyzed following the LEN approach. The second is to introduce the dynamic approach to the principal-agent problem that, when analyzed directly, provides the appropriate benchmarks by which the validity of LEN results can be evaluated.

I then turn to the example of a seemingly straightforward yet inappropriate application of the LEN framework. Specifically, I use a LEN model to identify desirable properties of an interim (accounting) report. The LEN approach yields the familiar and seemingly reasonable "insight" that noise in an interim report is never good and can actually be bad. To evaluate these conclusions, I conduct a validity check by repeating the analysis using the dynamic approach that the LEN model is supposed to approximate. Analysis of the optimal contract in the dynamic setting reveals a different and much more subtle answer--that noise in the interim report may actually be desirable. What the comparison demonstrates is that the LEN approach yields the wrong answer here because the benefit of noise in an interim report can only be captured if the contract is allowed to be nonlinear. Accordingly, in this setting the seemingly harmless linearity assumption is directly responsible for eliminating the one result that is not directly commensurate with common intuition. (5)

Finally, I provide a similar analysis of the role of correlated performance measures in relative performance evaluation (RPE). The point I illustrate here is that even when the inappropriate application of LEN gets the general insight right (in this case, that having correlated observables available is generally good), the empirical predictions generated may still be wrong. In the case of RPE, the LEN analysis suggests that the relation between compensation and a correlated variable, say a market index, should have the opposite sign of the covariance. Moreover, the use of RPE implies that the strength of incentives always increases. Analysis of the basic program, however, makes clear that the optimal use of a correlated variable is to condition the strength of the pay-to-performance relation on the realized value of the correlated measure and that, therefore, there is no basis for expecting the LEN model to yield correct empirical predictions regarding RPE. Accordingly, this part of the analysis reveals that testing for RPE based on the LEN prediction is unlikely to yield significant and/or consistent results even if RPE is practiced optimally.

The paper makes no general statements about the validity of results derived from LEN models, nor is it intended to do so. Rather, the paper simply uses a couple of very specific but quite central (to accounting research) examples to illustrate that restricting attention to linear contracts in settings where optimal contracts (even those that obtain in the dynamic setting that is the foundation of the development of the LEN model) are inherently nonlinear is not just done at the expense of generality. It may well lead to results that, while intuitively appealing, are fundamentally at odds with the results that obtain from analysis of optimal contracts.

The paper proceeds as follows. The second section presents the dynamic model and the corresponding LEN model in the simplest forms possible. The third section contains the analysis of the desirability of noise in interim reports to demonstrate that restricting attention to the LEN model indeed yields intuitive yet incorrect conclusions. The fourth section then demonstrates how the LEN model fails to identify the optimal implementation of RPE even when it does correctly identify that RPE (ceteris paribus) is good. The fifth section provides a brief conclusion and a call for a return to fundamentals in accounting research.

THE DYNAMIC APPROACH

The introduction of the main components of LEN (linear contracts; negative exponential, multiplicatively separable preferences; and normal distributions with an exogenous second moment) into economics and, perhaps in particular, into accounting research was chiefly due to Holmstrom and Milgrom (1987, 1991). Prior to their 1987 paper, analysis of normally distributed performance measures whose mean alone is controlled by the agent, in conjunction with unbounded utility functions, had been rendered unacceptable by one of the key insights provided by Mirrlees (1974).

Mirrlees (1974) demonstrated two key problems with this type of setting from the agency theory perspective. First, the first-order approach is not a valid technique for characterizing an agent's response to incentives. This precludes characterizing in an operational way the constraint set facing the principal when he chooses the properties of the contract he offers to the agent. Second, but no less significant, despite the technical problems, the principal can achieve allocations arbitrarily close to the first-best by using a very simple nonlinear incentive scheme. This particular incentive scheme excels by not being responsive to performance except in extremely bad outcomes, in which case it imposes large penalties (in utility terms). What is intriguing about this scheme is that while the threat of the penalty looms large enough to keep the agent from shirking, the penalty is imposed so rarely when the agent does not shirk that the risk premium it introduces is negligible.

The significance of Holmstrom and Milgrom (1987) is that they managed to reintroduce (roughly) the same model components that Mirrlees (1974) dismissed while at the same time being able to legitimately restrict attention to linear contracts. However, the (approximate) resurrection of these model components (as well as the accompanying justification for focusing on linear contracts) is as fragile as it is clever. To appreciate when the Holmstrom and Milgrom (1987) approach cannot be used to justify using the LEN model, it is critical to obtain an understanding of when it can be used and why. The remainder of this section provides such an understanding using the simplest possible representation of their model.

The fundamental difference between the LEN approach, as it is currently applied, and the approach taken by Holmstrom and Milgrom (1987) is that they do not assume normality and linearity in a traditional "one-shot" principal-agent model. Instead, they derive these features as equilibrium properties of a dynamic principal-agent relation in which the agent acts repeatedly over the contract horizon and, significantly, learns about his previous successes and failures before choosing his action in each subperiod. More specifically, suppose the agent chooses his effort level N times over the contracting horizon. In each subperiod the effect of the agent's effort is to increase the likelihood of a success that generates the value $\Pi > 0$. A failure, which in the simplest version of the dynamic model is the only alternative to a success, is assumed here to result in zero output. The probability of these two events depends on effort in exactly the same way in all of the mutually independent subperiods, and the agent in this model therefore effectively commands the evolution of a binomial random walk one independent trial at a time. The feedback at the end of period n = 1, ..., N can be represented by the simple indicator variable $y_n \in \{1, 0\}$, where 1 corresponds to a success.

The final piece that ensures that actions in every subperiod are indeed identical in equilibrium (together with the assumption that the agent always observes how things went (1 or 0) in a given period before choosing his effort for the subsequent period) is the structure of the parties' preferences. The agent here exhibits constant absolute risk-aversion per the utility function $U(s(\mathbf{y}), v(\mathbf{a})) = -e^{-r[s(\mathbf{y}) - v(\mathbf{a})]}$, where $s(\mathbf{y})$ is the agent's total compensation as a function of the vector $\mathbf{y}$ of all realizations of $y$ over the N subperiods, $v(\mathbf{a})$ is the agent's personal cost of effort as a function of the effort exerted in each subperiod as summarized by the vector $\mathbf{a}$, and r is the agent's coefficient of absolute risk-aversion. The principal can conveniently be assumed risk-neutral with preferences defined entirely over the expected magnitude of his residual claim.

Given this structure, the optimal contract pays the agent more for a success than for a failure in every subperiod. Moreover, given the assumptions about preferences and the stationarity and, importantly, the time-independence of the production, along with the assumption that the agent learns the result in each period before taking his next action, the optimal values of these payments, say $B_1$ and $B_0$, are the same in each subperiod. Therefore, all that matters in the end is how many successes occurred in total--not when they occurred. This, in turn, implies that compensating the agent based on the aggregated value of all N realizations of $y_n$, say $Y \equiv \sum_n y_n$, is without loss of generality. It also implies that the agent will actually choose the exact same level of effort in every period.

The first of these two implications is closely related to the optimality of a linear contract in the dynamic setting. With $B_1 > B_0$ we can always write the agent's compensation as the following function of the aggregate performance measure: $s(Y) = k + \beta_Y Y$, where $k = N B_0$ and $\beta_Y \equiv B_1 - B_0$. The second implication--that the agent always chooses the same level of effort in every subperiod--ensures that the binomial random walk that generates the aggregate measure Y is stationary. Then, given enough subperiods, the distribution of Y will converge to a normal distribution with easy-to-identify properties. Moreover, with the linear contract in place, it makes no difference whether the agent chooses (and commits to) the level of effort he wants to exert in each period at the start of the horizon or revisits his choice period by period. However, it is crucially important that the agent is free to change behavior in response to past results, as it is this potential threat that forces the principal to offer the same bonus period by period and, thus, a linear incentive scheme.

The final step in Holmstrom and Milgrom (1987) then is to show that the optimal solution to a model where the agent commits to a specific level of effort ex ante to control a normally distributed measure, Y, corresponds to the optimal solution to the underlying dynamic problem used to motivate linearity and normality in the first place. In the case of the discrete (binomial random walk) model, they show that the solution to the one-shot normal/linear model provides a close approximation to the optimal solution to the dynamic problem. Moreover, for a Brownian motion where the agent controls the instantaneous drift rate (taken to be the limiting case of the discrete random walk when subperiods get very short), the solution is shown to be exact.
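As a rough illustration of this construction, the following sketch simulates the N-subperiod binomial walk under a constant per-period bonus scheme; the parameter values (N, the induced success probability, and the per-period payments) are arbitrary assumptions chosen purely for illustration. Total pay aggregates to a function that is linear in the total number of successes Y, and the distribution of Y across simulated histories is approximately normal for large N.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 200, 0.6            # subperiods and the induced per-period success probability (assumed values)
B0, B1 = 1.0, 3.0          # per-period payments for a failure and a success (assumed values)
trials = 10_000

y = rng.binomial(1, p, size=(trials, N))            # y_n in {0, 1} for each subperiod of each simulated history
total_pay = np.where(y == 1, B1, B0).sum(axis=1)    # the constant bonus scheme applied period by period
Y = y.sum(axis=1)                                   # aggregate number of successes

# Constant per-period bonuses aggregate to a contract that is linear in Y: s(Y) = N*B0 + (B1 - B0)*Y
assert np.allclose(total_pay, N * B0 + (B1 - B0) * Y)
# For large N, Y is approximately N(Np, Np(1 - p))
print(Y.mean(), N * p, Y.var(), N * p * (1 - p))
```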

The resulting characterization of the (approximately) optimal contract becomes particularly simple. Suppose that the agent's control of Y is given as $Y = \gamma a + \epsilon_Y$, $\epsilon_Y \sim N(0, \sigma_Y^2)$, where $\gamma$ represents the marginal effect of the agent's effort on performance. Then, with the other LEN assumptions, the agent's expected utility can be calculated as:

E[U] = E\left[-e^{-r\left(k + \beta_Y Y - v(a)\right)}\right], \qquad Y \sim N\!\left(\gamma a, \sigma_Y^2\right),

which, after some algebraic manipulation, simplifies to:

(1) \quad E[U] = -e^{-r\left[k + \beta_Y \gamma a - v(a) - \frac{r}{2}\beta_Y^2 \sigma_Y^2\right]}

Since this expression is strictly increasing in the term in brackets, a term representing the agent's certainty equivalent ($CE_A$), his utility-maximizing choice of a corresponds to the effort that maximizes $CE_A$. Formally, the agent then chooses his effort to solve the problem:

\max_a \; CE_A = k + \beta_Y \gamma a - v(a) - \frac{r}{2}\beta_Y^2 \sigma_Y^2,

which has the first-order condition:

(2) \quad \frac{v'(a)}{\gamma} = \beta_Y

where the prime signifies "derivative of."

A risk-neutral principal then solves the standard residual maximization problem:

\max_{k,\,\beta_Y} \; \Pi \gamma a - \left(k + \beta_Y \gamma a\right) \quad \text{subject to (IR): } CE_A \geq \underline{CE}_A \text{ and (IC): Equation (2).}

Rewriting the agent's minimum utility requirement in terms of the minimum certainty equivalent that will satisfy the (IR) constraint ($\underline{CE}_A$), rearranging and substituting for $k + \beta_Y \gamma a$ in the principal's objective function, and further using Equation (2) to substitute for $\beta_Y$, yields the following program:

\max_a \; \Pi \gamma a - v(a) - \frac{r}{2}\,\sigma_Y^2 \left(\frac{v'(a)}{\gamma}\right)^2 - \underline{CE}_A.

The first-order condition characterizes the optimal amount of effort the principal chooses to induce as:

\frac{\Pi \gamma}{1 + r \sigma_Y^2\, v''(a)/\gamma^2} = v'(a),

which now can be substituted back into Equation (2) to characterize the incentive component (the slope coefficient) of this linear contract as:

(3) \quad \beta_Y = \frac{\Pi}{1 + r \sigma_Y^2\, v''(a)/\gamma^2}.

The "contract intercept" k can be found by substituting [[beta].sub.Y] into the minimum CEA condition, but since k is determined completely independent of [[beta].sub.Y] in this framework and only plays the role of satisfying the (IR) constraint, I will skip this last part here.

The model can now be (and almost always is) extended to include multiple performance measures. From a theoretical perspective this can be done in (at least) two different ways. First, there is the approach inherent in the Holmstrom and Milgrom (1987) multinomial analysis. When the agent controls a multinomial distribution (of which the binomial is a special case) with more than two outcomes each period, by setting up an "account" for each of the possible outcomes and crediting the account by a time-invariant amount every time the particular outcome is realized, the account balances themselves become normally distributed performance measures. Alternatively, as in Holmstrom and Milgrom (1991), one could allow the agent to take multiple actions, each of which controls a binomial random walk like the one introduced above. In either case, under the appropriate assumptions the optimal contract takes the form:

(4) \quad s(X) = k + \sum_{j=1}^{m} \beta_j x_j

where $x_j \in X$, which is a vector of m (approximately) normally distributed performance measures.

Before proceeding with two (troublesome but quite standard) applications of this LEN framework, it is worthwhile to reiterate two of the key assumptions used in the derivation of optimal contracts of the form in Equation (4). First, the agent must be able to observe output realizations from past periods and to change his effort appropriately in every period in response. (6) Second, when introducing multiple normally distributed performance measures as the limiting case of multiple binomial random walks, contracts of the form summarized by Equation (4) are only optimal when the performance measures are statistically independent. The reason why the two applications of the LEN model considered here lead to inappropriate conclusions is that each setting is at odds with one of these two fundamental assumptions.

NOISE AND LINEARITY

Perhaps the most familiar prediction of the LEN model developed above involves the detrimental effect of noise in measurements. The basic result stems largely from inspection of Equation (3), which reveals that $\beta_Y$ is strictly decreasing in $\sigma_Y^2$. Equation (2) then reveals that a is also decreasing in $\sigma_Y^2$, and so is the value of the principal's expected utility. As intuitive as this "noise is bad" result may be, Hemmer (2003) shows it is actually a by-product of the approximation described in "The Dynamic Approach" section. This is because in the case of the discrete multinomial random walk that is the centerpiece of Holmstrom and Milgrom (1987), higher moments of the outcome distribution are mechanically connected through the agent's choice of effort. Accordingly, $\sigma_Y^2$ is determined endogenously as a function of a, and no general prediction about the equilibrium relation between $\beta_Y$ and $\sigma_Y^2$ is therefore possible except in the highly specialized limiting Brownian motion case. (7)

Since this point has already been made, the point of this section is not to challenge the LEN prediction that, ceteris paribus, the principal would prefer more precise aggregate measures to contract on. Rather, it is to demonstrate that regardless of whether this claim is true, the "noise is bad" notion fails miserably when it comes to identifying desirable properties of the much more common interim measures. A couple of examples that relate this problem of the model to standard accounting issues are the relation between annual financial reports and the aggregate dividends over the entire cradle-to-grave history of the firm, and the relation between, say, quarterly financial statements and annual financial reports. While presumably all imprecisions wash out in the aggregate, it is clearly a key issue facing accountants, practitioners, regulators, and scholars alike to understand the costs and benefits of increasing the precision of such interim measures.

For specificity, I concentrate on the latter of the two examples above. Thus, suppose that we tentatively accept the idea that it would be desirable to have annual reports with a random component that is as small as possible. Does that then translate into a conclusion that it would be better to have more "precise" quarterly or biannual reports as well? The question is clearly relevant here, as some parties are certainly lobbying to have interim reports audited with the explicit goal of improving their information content. The LEN-style analysis in the case of biannual reports would go something like the following.

Let the relation between the agent's subperiod 1 and 2 efforts and the biannual reports be given as:

y_i = \gamma a_i + \epsilon_i - (i - j)\eta, \qquad i, j \in \{1, 2\},\; i \neq j,

where $y_i$ is the "true" accounting report for subperiod i, $\epsilon_i \sim N(0, \sigma_i^2)$ is the random component in subperiod i, and $\eta \sim N(0, \sigma_\eta^2)$ is the measurement error in the interim reports, which is assumed to reverse so that the noise in the annual report is not affected by noise introduced in the interim reports. Allowing for the possibility that the process of aggregation could introduce a separate error itself, the annual report becomes $Y = y_1 + y_2 + \delta$, where $\delta \sim N(0, \sigma_\delta^2)$ is the error introduced by the aggregation process.

With this setup, along with the additional, standard simplifying assumption that $v(a_i) = a_i^2/2$, the agent's LEN problem can, following the approach outlined in the previous section, be expressed as: (8)

\max_{a_1,\,a_2} \; CE_A = k + \sum_{i=1,2}\left(\beta_i + \beta_Y\right)\gamma a_i - \sum_{i=1,2}\frac{a_i^2}{2} - \frac{r}{2}\,\mathrm{Var}\!\left(\beta_1 y_1 + \beta_2 y_2 + \beta_Y Y\right),

which has the first-order conditions:

(5) \quad a_i = \gamma\left(\beta_i + \beta_Y\right).

Again using the LEN approach from "The Dynamic Approach" section, the principal's objective function here becomes:

\max_{\beta_1,\,\beta_2,\,\beta_Y} \; \Pi\gamma\left(a_1 + a_2\right) - \sum_{i=1,2}\frac{a_i^2}{2} - \frac{r}{2}\left[\left(\beta_1 + \beta_Y\right)^2\sigma_1^2 + \left(\beta_2 + \beta_Y\right)^2\sigma_2^2 + \left(\beta_1^2 + \beta_2^2\right)\sigma_\eta^2 + \beta_Y^2\sigma_\delta^2\right] - \underline{CE}_A.

Then, using Equation (5) to substitute for [a.sub.i], the first-order conditions become:

(6) \quad \Pi\gamma^2 - \gamma^2\left(\beta_Y + \beta_i\right) - r\left[\beta_i\left(\sigma_i^2 + \sigma_\eta^2\right) + \beta_Y \sigma_i^2\right] = 0

and

(7) \quad 2\Pi\gamma^2 - \gamma^2\left(\beta_Y + \beta_1\right) - \gamma^2\left(\beta_Y + \beta_2\right) - r\left[\left(\beta_Y + \beta_1\right)\sigma_1^2 + \left(\beta_Y + \beta_2\right)\sigma_2^2 + \beta_Y \sigma_\delta^2\right] = 0.

With the simplifying assumption implied by the dynamic model that $\sigma_1^2 = \sigma_2^2 \equiv \sigma_y^2$, Equation (6) implies that:

(8) \quad \beta_y \equiv \beta_1 = \beta_2 = \frac{\Pi - \beta_Y\left[1 + r\sigma_y^2/\gamma^2\right]}{1 + r\left(\sigma_y^2 + \sigma_\eta^2\right)/\gamma^2}

and Equation (7) implies that:

(9) \quad \beta_Y = \frac{\Pi - \beta_y\left[1 + r\sigma_y^2/\gamma^2\right]}{1 + r\left(\sigma_y^2 + \sigma_\delta^2/2\right)/\gamma^2}.

Rather than doing the formal comparative static analysis here (this is left for the reader's enjoyment), inspection of Equations (8) and (9) provides the necessary insight for inferring what the answer to the original question will be. First, suppose that $\sigma_y^2$ and $\sigma_\eta^2$ are both positive (and finite) while $\sigma_\delta^2$ equals zero; that is, no separate noise is introduced as a result of aggregation. Substituting into Equation (9) and substituting the result into Equation (8) verifies that $\beta_y$ then equals zero, so that the interim reports are never used for evaluating the agent's performance. If, however, $\sigma_\delta^2$ is also positive (and finite), then it is equally easy to verify that $\beta_y$ is always greater than zero, so that the interim reports always matter. Given that the derivative of $\beta_Y$ with respect to $\sigma_\eta^2$ is monotone in $\sigma_\eta^2$, to identify its sign simply set $\sigma_\eta^2 = 0$, substitute into Equation (8), and then substitute the resulting expression into Equation (9) to verify that in this limiting case $\beta_Y$ is zero and the principal relies exclusively on the interim reports. With $\beta_Y$ thus increasing in $\sigma_\eta^2$, it then follows from Equation (8) that $\beta_y$ becomes more and more important (valuable) here as the noise in the interim reports goes down, which is fully in line with basic intuition.
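The comparative statics described above can also be traced numerically. The sketch below solves the linear system implied by Equations (8) and (9) for a grid of values of $\sigma_\eta^2$; all parameter values are arbitrary assumptions for illustration only.

```python
import numpy as np

Pi, gamma, r = 1.0, 1.0, 2.0     # assumed parameter values
s_y, s_delta = 1.0, 0.5          # sigma_y^2 and sigma_delta^2 (assumed)

def weights(s_eta):
    # Equations (8) and (9) written as a linear system in (beta_y, beta_Y):
    #   beta_y*(g2 + r*(s_y + s_eta)) + beta_Y*(g2 + r*s_y)                 = Pi*g2
    #   beta_y*(g2 + r*s_y)           + beta_Y*(g2 + r*s_y + r*s_delta/2)   = Pi*g2
    g2 = gamma**2
    A = np.array([[g2 + r * (s_y + s_eta), g2 + r * s_y],
                  [g2 + r * s_y,           g2 + r * s_y + r * s_delta / 2]])
    return np.linalg.solve(A, np.array([Pi * g2, Pi * g2]))   # (beta_y, beta_Y)

for s_eta in [0.0, 0.5, 1.0, 2.0, 10.0]:
    b_interim, b_aggregate = weights(s_eta)
    print(f"sigma_eta^2 = {s_eta:5.1f}   beta_y = {b_interim:6.3f}   beta_Y = {b_aggregate:6.3f}")
# As sigma_eta^2 grows, weight shifts from the interim reports (beta_y) to the aggregate report (beta_Y),
# and with sigma_eta^2 = 0 the aggregate report is ignored entirely, as argued in the text.
```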

To see why this conclusion is dubious, we need to go back to the dynamic model used to justify the LEN framework. As discussed in "The Dynamic Approach" section, a key assumption in the development of the LEN model is that every time the agent acts, the consequences of his most recent action are summarized by an interim report. For the basic case in which the aggregate performance measure is simply the sum of the interim reports, so that no noise is added or removed through the process of aggregation (corresponding to setting $\sigma_\eta^2$ and $\sigma_\delta^2$ both equal to zero in the analysis above), contracting on the interim reports or on the aggregate measure is equivalent. While this was not explored directly above, it can also be verified that the same property is present in the model above. The consequences of adding noise to (or removing it from) the interim reports, however, are not the same in the two frameworks. In fact, the dynamic model and the LEN model yield exactly the opposite answers to the question: "Is noise in the interim reports good or bad?"

To identify the reason why the LEN model cannot be trusted here, it suffices to focus on a two-period version (or, alternatively, the first two periods) of the binomial dynamic model. We thus have just one interim measure, $y_1$, and one aggregate measure, $Y = y_1 + y_2$. (9) Consider then the case that corresponds to $\sigma_\delta^2 = 0$ and $\sigma_\eta^2 = \infty$ above. (10) In this case $y_1$ and $y_2$ are individually completely uninformative, but when aggregated, the noise exactly cancels out. In the LEN model this is clearly not helpful, as with these properties the interim reports are rendered worthless. In the dynamic setting, in contrast, a Pareto improvement can be achieved when the interim performance measure is noisy relative to the case where it is perfect ($\sigma_\eta^2 = 0$).

While Y is now the only information available, payments to the agent can still be conditioned on the number of successes to which the realization of Y corresponds, i.e., payments $B_2$, $B_1$, and $B_0$ for two, one, or no successes over the course of the two periods in question. For concreteness, assume that $\Pr(y_i = 1 \mid a_i) = a_i$ and that $v(a_i) = a_i^2/2$. Using the same basic notation as in the previous section and the fact that, without information arriving in the interim, the agent will choose the same level of effort in both subperiods (which allows me to drop the subscript on a), the agent's problem becomes:

(10) \quad \max_a \; -a^2 e^{-r\left(B_2 - a^2\right)} - 2a(1-a)\, e^{-r\left(B_1 - a^2\right)} - (1-a)^2 e^{-r\left(B_0 - a^2\right)},

while the principal's problem becomes:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

where the first constraint is the standard (IR) constraint and the second is the (IC) constraint, obtained as the first-order condition to the agent's problem as summarized by Equation (10).

Differentiating the Lagrangian to this program with respect to $B_2$, $B_1$, and $B_0$, respectively, then yields, after taking advantage of the specific utility function and some rearranging, the following first-order conditions:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.],

where $\lambda$ is the Lagrange multiplier on the (IR) constraint and $\mu$ the multiplier on the (IC) constraint.

These conditions now permit checking for linearity of the optimal contract using:

(11) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

and

(12) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

The contract will, of course, be linear in Y if and only if:

B_2 - B_1 - \left(B_1 - B_0\right) = 0

which, using Expressions (11) and (12), implies that:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

For simplicity, let the denominator of the above fraction be $D_2$. A bit of algebraic manipulation then reveals that the numerator can be written as $D_2 - \mu^2$. It then follows from $D_2$ and $\mu^2$ both being strictly positive that the denominator is indeed always strictly greater than the numerator. In other words, with no information available in the interim, the optimal contract is no longer linear in aggregated output. Instead, as the above result indicates, it is concave. Suspicion about the validity of the result produced by the LEN analysis therefore should creep in.
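The concavity claim can also be checked numerically. The sketch below solves the two-period program directly for one set of assumed parameter values (the values of $\Pi$, r, and the reservation utility are mine); the global optimizer and the nested best-response calculation are simply one convenient way to handle the (IR) and (IC) constraints, not the paper's method.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize_scalar

r, Pi, Ubar = 1.0, 1.0, -1.0          # assumed parameters; Ubar = U(0) is the reservation utility

def eu(a, B):
    """Agent's expected utility with equal effort a in both periods and payments B = (B0, B1, B2)."""
    B0, B1, B2 = B
    u = lambda s: -np.exp(-r * (s - a**2))     # total effort cost a^2/2 + a^2/2 = a^2
    return a**2 * u(B2) + 2 * a * (1 - a) * u(B1) + (1 - a)**2 * u(B0)

def best_response(B):
    return minimize_scalar(lambda a: -eu(a, B), bounds=(0.0, 1.0), method='bounded').x

def neg_profit(B):
    a = best_response(B)
    if eu(a, B) < Ubar:                        # (IR) handled as a penalty
        return 1e3
    B0, B1, B2 = B
    return -(a**2 * (2 * Pi - B2) + 2 * a * (1 - a) * (Pi - B1) + (1 - a)**2 * (-B0))

res = differential_evolution(neg_profit, bounds=[(-2.0, 3.0)] * 3, seed=0, maxiter=300, tol=1e-7)
B0, B1, B2 = res.x
print("B1 - B0 =", round(B1 - B0, 3), "   B2 - B1 =", round(B2 - B1, 3))
# The argument in the text implies B2 - B1 < B1 - B0 at the optimum, i.e., a contract concave in Y;
# the printout allows that to be checked for these assumed parameter values.
```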

To verify that the LEN analysis provides an incorrect conclusion, I need to establish that the nonlinearity observed in the optimal contract actually corresponds to a Pareto improvement and is not simply an indication of a loss due to the reduced information available here relative to the benchmark setting. To see the Pareto improvement, note that the same aggregate measure can form the basis for the optimal contract independently of the noise in the interim measure, $y_1$. Accordingly, the linear contract that is optimal when $y_1$ is informative is still a feasible contract here, where $y_1$ is useless. The nonlinearity that results when the noise is introduced must therefore represent an improvement, thus leading to the conclusion that "noise is valuable."

The result reflects the fact that information in multiplayer games can cut both ways. As documented in the linear part of this analysis, an interim performance measure is worthless to the principal here absent noise in the aggregate report. What does not show up in the LEN analysis, however, is that the interim performance measure is actually helpful to the agent, thus making the principal's control problem more severe. Removing information from the interim performance measure allows the principal to deviate from what is in fact a very poor contractual specification--linearity--without allowing the agent to take advantage of this. Since this part of the problem is clearly lost in LEN, relying on the LEN framework to generate insights about desirable properties of interim performance measures (i.e., most accounting measures) should only be done with great caution.

COVARIANCE AND RELATIVE PERFORMANCE EVALUATION (RPE)

The point illustrated in the previous section is that the apparently self-evident truth that "noise is bad" actually is a direct product of the very restrictions imposed by the LEN framework. A related issue that appears less subject to the above type of problem, however, is achieving noise reduction by adding a measure that exhibits statistical correlation of the error terms but is not under the control of the agent. Absent the agent's ability to control this additional measure, the main problem that led to the discrepancy between the LEN and the optimal solution in the previous section--namely, that more precise information makes the agent's strategic choice of effort harder to control--ought to be less of a concern. In particular, I focus on the case where this additional measure only becomes available at the end of the horizon. As this section will show, the answer to whether such a measure helps is both yes and no.

Within the LEN framework we can quickly address the consequences and the desirability of adding such a measure. In the LEN model, adding a measure of this sort means only that the variance part of the risk premium paid by the principal must be recalculated. Specifically, it must be recalculated to reflect the presence of additional noise from the extra performance measure along with the implications for overall risk of the covariance between the new measure and the original measure that the agent controls. Suppose that this additional measure is $z \sim N(0, \sigma_z^2)$ and that the covariance between z and the original measure of interest, y, is $\sigma_{zy}$. Modifying the principal's problem from "The Dynamic Approach" section then results in the following expression:

\max_{\beta_y,\,\beta_z} \; \Pi\gamma a - \frac{a^2}{2} - \frac{r}{2}\left[\beta_y^2\sigma_y^2 + \beta_z^2\sigma_z^2 + 2\beta_y\beta_z\sigma_{zy}\right] - \underline{CE}_A, \qquad \text{with } a = \beta_y\gamma,

where the assumption regarding the agent's cost of effort is carried over from the prior section.

Using the first-order condition for the agent's effort choice, which is fundamentally unchanged from the previous sections and thus still given by Equation (2), after some algebraic manipulation the first-order conditions for a maximum in the principal's problem become:

(13) \quad \beta_z = -\beta_y\,\frac{\sigma_{zy}}{\sigma_z^2}

and

\beta_y = \frac{\Pi}{1 + \frac{r}{\gamma^2}\left[\sigma_y^2 - \frac{\sigma_{zy}^2}{\sigma_z^2}\right]}.

Inspection of this last expression reveals that $\beta_y$ is decreasing in both $\sigma_y^2$ and $\sigma_z^2$ but increasing in the squared covariance, implying that "covariance is indeed good." If one can get one's hands on a measure beyond the control of the agent that exhibits any amount of covariation, positive or negative, with the measures the agent does control (try to think of measures that do not fit this bill), then things get better.
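Both Equation (13) and the expression for $\beta_y$ can be reproduced symbolically from the reduced program (again under the quadratic cost assumption; the sympy formulation below is mine, not the paper's):

```python
import sympy as sp

by, bz, gamma, r, Pi = sp.symbols('beta_y beta_z gamma r Pi', positive=True)
sy2, sz2 = sp.symbols('sigma_y2 sigma_z2', positive=True)
szy = sp.symbols('sigma_zy', real=True)

# Principal's reduced objective with the (IC) condition a = beta_y*gamma substituted in
obj = Pi * gamma**2 * by - (gamma * by)**2 / 2 \
      - sp.Rational(1, 2) * r * (by**2 * sy2 + bz**2 * sz2 + 2 * by * bz * szy)

sol = sp.solve([sp.diff(obj, by), sp.diff(obj, bz)], [by, bz], dict=True)[0]
print(sp.simplify(sol[bz] / sol[by]))   # -sigma_zy/sigma_z2, i.e., Equation (13)
print(sp.simplify(sol[by]))             # equivalent to Pi/(1 + (r/gamma**2)*(sigma_y2 - sigma_zy**2/sigma_z2))
```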

I now return to the "dynamic" model. To demonstrate the point of this section, where a separate measure (z) exhibits correlation with the already available performance measure (y), it is sufficient to focus on a simple one-period version of that model. (11) Moreover, it will suffice to consider a simple case where the agent chooses either high ($a^h$) or low ($a^l$) effort and where the principal uses the binary measures, y and z, to make the agent choose $a^h$. With these simplifications, the principal's problem then becomes:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Using the first-order condition to characterize the optimal contract and thus differentiating the Lagrangian to the above program with respect to s(y, z) yields:

(14) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

which, using $1/U'(s(y,z)) = e^{s(y,z)}$, becomes:

(15) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

where $\delta \equiv e^{v(l)}/e^{v(h)}$ and is thus a strictly positive number less than 1.

Several properties of Expressions (14) and (15) are noteworthy in the context of the LEN result regarding relative performance evaluation summarized by Equation (13) above. First, comparing Equations (13) and (15) reveals the direct mechanical error that the LEN approach introduces here. The LEN approach suggests that optimal use of RPE implies that the realization of the correlated variable affects the level of pay in the opposite direction of the sign of the covariance. At the same time, the pay-performance relation, the relation between the controlled signal and compensation, is unaffected by the realization of z. In contrast, the solution to the underlying dynamic problem shows that the key role of z in optimal RPE is that of calibrating the strength of the incentive, i.e., to adjust the pay-performance relation. (12)

Second, the effect the correlated variable (z) has on the level of pay is determined not by the sign of the covariance but rather by the implications of the correlated variable for the usefulness of y for making inference about a. As a result, under quite plausible assumptions about the production function, positive covariance between y and z may well result in an optimal contract under which the agent's expected pay is increasing in z.

To see this, first take expectations of Expression (14) with respect to y to get:

(16) \quad E\left[U^{-1}(s(y,z))\right] = \mu(\delta - 1) - \lambda \quad \forall z

Thus, under optimal RPE the expected inverse utility of the agent is constant in z. While perhaps not particularly interesting in its own right, this is still significant for the purpose at hand. Recall that the main role of z in the optimal contract here is to change the strength of the incentives and thus, somewhat casually speaking, the agent's risk exposure. But with the agent's expected inverse utility held constant in equilibrium across the realizations of z, the expected compensation is likely not constant. Indeed, since $U^{-1}(s(y, z)) = -e^{s(y,z)}$, it follows that for realizations of z that lead to more variation in the agent's utility, Equation (16) implies that E[s(y, z)] will optimally be lower. (13)
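To see the mechanics of this last step in the simplest possible way, consider a two-point sketch (my own construction, not taken from the paper): conditional on some realization of z, let the agent's pay equal $m + d$ or $m - d$ with equal probability. Then

E\left[-e^{s} \mid z\right] = -\tfrac{1}{2}e^{m+d} - \tfrac{1}{2}e^{m-d} = -e^{m}\cosh d.

Holding this conditional expectation fixed at some constant $-C < 0$, as Equation (16) requires, gives $E[s \mid z] = m = \ln C - \ln\cosh d$, which is strictly decreasing in the spread d: the realization of z at which more pay risk is imposed carries lower expected pay.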

The main point here is that under optimal RPE the expected compensation may just as well be positively as negatively related to the value of a positively correlated variable. To see this, consider the probability structure in Table 1. Clearly y and z are positively correlated here--both in and out of equilibrium. However, the first and third rows of Table 1 reveal the real reason why knowing z is valuable here: It is only when $z = z^L$ that the agent's effort is important; what happens to y when $z = z^H$ is entirely a matter of luck.

While this example is intentionally a bit extreme, a production function with characteristics similar to those in Table 1 actually seems quite plausible. Consider as examples the problems of evaluating a fisherman or a farmer. With poor fishing or growing conditions, something that would be reflected in peer performance, making an extra effort could be of crucial importance for the final outcome. If, however, there are plenty of fish or the soil can accommodate many crops, then features such as capacity constraints could make the agent's effort choice much less important. The relation between (positively correlated) peer performance and the compensation of a given agent in this type of environment, however, is in direct conflict with the LEN prediction.

To establish this formally, use Equation (15) along with Table 1 and the fact that $\mu$ is a positive constant to obtain the following pay-performance relation in the unrestricted model:

s(y^L, z^L) < s(y^L, z^H) = s(y^H, z^H) < s(y^H, z^L).

Accordingly, the agent is exposed to risk only when $z = z^L$. From Equation (16), $E_y\left[U^{-1}\right]$ under the optimal contract is independent of the realization of z. Then, per the properties of the negative exponential utility function discussed following Equation (16), the agent's expected (or "average") compensation conditional on $z = z^H$ must be strictly higher than his expected compensation conditional on $z = z^L$. Again, this is in direct contrast to what is predicted by the LEN analysis. This highlights the danger of using such a model in an environment (here, one with correlation between performance measures) where the model cannot be derived from the dynamic framework. Although the predictions appear eminently intuitive and are easily operationalized as an OLS regression, they may also simply be false.
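A small numerical sketch makes the point concrete. The conditional probabilities below are specific numbers of my own choosing that satisfy the Table 1 restrictions, and the reservation utility and effort costs are likewise assumed; the program simply minimizes expected pay subject to (IR) and (IC) for inducing $a^h$.

```python
import numpy as np
from scipy.optimize import minimize

# States ordered (yL,zL), (yH,zL), (yL,zH), (yH,zH); z is outside the agent's control
p_z = {'zL': 0.5, 'zH': 0.5}                        # assumed marginal distribution of z
p_yH = {('zL', 'h'): 0.4, ('zL', 'l'): 0.1,         # effort matters only when z = zL
        ('zH', 'h'): 0.8, ('zH', 'l'): 0.8}         # y is pure luck when z = zH

def joint(a):
    return np.array([p_z['zL'] * (1 - p_yH[('zL', a)]), p_z['zL'] * p_yH[('zL', a)],
                     p_z['zH'] * (1 - p_yH[('zH', a)]), p_z['zH'] * p_yH[('zH', a)]])

v = {'h': 0.3, 'l': 0.0}                            # assumed effort costs; U = -exp(-(s - v))
Ubar = -1.0                                         # assumed reservation utility

def EU(s, a):
    return float(joint(a) @ (-np.exp(-(np.asarray(s) - v[a]))))

res = minimize(lambda s: float(joint('h') @ s),      # expected pay when inducing a = h
               x0=np.array([0.3, 0.6, 0.4, 0.4]),
               constraints=[{'type': 'ineq', 'fun': lambda s: EU(s, 'h') - Ubar},        # (IR)
                            {'type': 'ineq', 'fun': lambda s: EU(s, 'h') - EU(s, 'l')}], # (IC)
               method='SLSQP')
sLL, sHL, sLH, sHH = res.x
print("s(yL,zL)=%.3f  s(yL,zH)=%.3f  s(yH,zH)=%.3f  s(yH,zL)=%.3f" % (sLL, sLH, sHH, sHL))
pL, pH = joint('h')[:2], joint('h')[2:]
print("E[s|zL] =", round(float(pL @ res.x[:2] / pL.sum()), 3),
      "  E[s|zH] =", round(float(pH @ res.x[2:] / pH.sum()), 3))
# Per the text, the solution should display s(yL,zL) < s(yL,zH) ~= s(yH,zH) < s(yH,zL)
# and E[s|zH] > E[s|zL], the opposite of the LEN prediction for positively correlated measures.
```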

It is important to notice that the above example does not imply that optimal RPE necessarily introduces a positive relation between average pay and the value of a positively correlated variable such as a market index. Although the example illustrates the plausibility of such a positive relation, there are many other equally plausible settings in which a negative relation would indeed be predicted by this model. If we were to modify the problem facing the farmer or the fisherman slightly, then this could all look very different. If "bad" conditions mean that there are no fish or that growing conditions are very poor, then trying may be futile. In contrast, "good" conditions may simply be challenging but not impossible, so that extra effort, at least on average, makes a positive difference.

In this second environment, (strong) incentives are provided only when peer performance indicates that conditions are "good." Following the same logic as used above, this results in lower expected pay when things are "good," producing an inverse relation between expected pay and peer performance in this case. The main message here is therefore not that the LEN prediction is always wrong, but rather that it is just as likely to be right as wrong. Moreover, without detailed knowledge of the production function in question, it is impossible to sign the relation in question.

The above problem may actually help shed some light on a puzzle in the empirical literature on compensation and incentives: the lack of (strong) support for the RPE hypothesis. Dating back to Antle and Smith (1986), a variety of empirical studies have attempted to find convincing evidence of what has been argued to be one of the stronger predictions of agency theory, namely that of RPE. While not inconsistent with RPE, the evidence provided by Antle and Smith (1986) at most provides only weak support for RPE. Moreover, the study by Gibbons and Murphy (1990) provides arguably inconsistent conclusions depending on the unit of analysis, while subsequent studies such as Barro and Barro (1990) and Janakiraman et al. (1992) seem to be in direct contradiction to the RPE hypothesis. More recently, a study by Aggarwal and Samwick (1999) also failed to find evidence of RPE.

Of the studies referenced here, only Aggarwal and Samwick (1999) rely directly on the LEN framework for their predictions, so it may appear that the problems with the LEN predictions identified above apply only to their design. It is, however, not clear that the problem does not apply more broadly. Antle and Smith (1986), for example, base their study on Theorems 7 through 9 in Holmstrom (1982). While these results provide support for a practice of relating the pay of an agent to his own performance and to an aggregate (such as a weighted average) measure of peer performance, his analysis provides no insight into how these measures should be utilized in the optimal contract. (14) Because this part of Holmstrom (1982) does rely on normally distributed error terms, the Mirrlees (1974) nonexistence problem always lingers in the background. Therefore, relying on the dynamic approach of Holmstrom and Milgrom (1987) to gain insights into the optimal use of these two measures would also seem eminently reasonable in this case.

CONCLUSION

This paper has provided two illustrations of the dangers of the current practice of using LEN to analyze accounting problems in settings where the LEN framework is not grounded in a dynamic model as in Holmstrom and Milgrom (1987, 1991). First, I demonstrated that when asked to identify the implications of noise in an interim performance measure, the LEN model provides the wrong answer. Second, while the LEN framework does get the general answer right regarding relative performance evaluation, it also provides misleading empirical predictions. Since the LEN results in both instances appear perfectly reasonable and intuitive, the analysis in this paper underscores why apparent reasonableness of the results and the tractability of the analysis cannot be used to justify applying LEN to problems that are inconsistent with the basic assumptions needed to support it.

Highlighting these problems should ideally affect both the supply of and the demand for LEN research in accounting. First, by highlighting that seemingly innocuous statements along the lines of "while linear contracts are not optimal in our setting, we rely on the LEN model because of its tractability" are not innocuous at all, ideally the LEN model will be used much more sparingly. Second, by demonstrating that one of the attractions of LEN when used appropriately, its direct OLS interpretation, is a key casualty when the LEN framework is used inappropriately, empiricists will be more cautious in relying on predictions from papers containing statements akin to the one above. And last but not least, editors and referees will ask authors who do continue this practice to provide credible evidence that their results bear any relation to those that would obtain from a more rigorous (and correct) analysis.
TABLE 1
Example of Conditional Probability Structure Where z and y Are Positively Correlated but Effort Is Only Productive When $z^L$ Is Realized

p(y^H \mid z^H, a^h) = p(y^H \mid z^H, a^l) > 1/2
p(y^H \mid z^H, a^h) > 1/2 > p(y^H \mid z^L, a^h)
p(y^H \mid z^L, a^l) < p(y^H \mid z^L, a^h) < 1/2
p(y^H \mid z^L, a^h) < 1/2 > p(y^L \mid z^H, a^h)


I am most grateful to Qi Chen, John Core (Forum Associate Editor), Frank Gigler, Eva Labro, and an anonymous referee for helpful comments, suggestions, and discussions. The responsibility for remaining errors and shortcomings is exclusively mine.

(1) The Holmstrom and Milgrom (1991) paper is an excellent example of the kind of insights this model may help provide when it is applied in a way consistent with the assumptions that facilitated its development. Specifically, they confine attention to settings where there are as many independent and uncorrelated performance measures available as there are independent actions.

(2) For a summary of recent work in accounting that uses the LEN model, see Lambert (2001).

(3) If for no other reason, this is stated explicitly in Feltham and Xie (1994, footnote 8).

(4) Restricting attention to linear contracts, (multiplicatively separable negative) exponential preferences, and normally distributed disturbance terms in settings where linearity is not (likely to be) optimal is often referred to as applying the LEN model.

(5) This, in my view, is probably the most straightforward illustration of why the inappropriate application of LEN is not a good way of getting rid of bath water. That is, if one actually cares about the babies.

(6) Although, in equilibrium, his effort will stay constant.

(7) See Hemmer (2003) for the full argument regarding the equilibrium nature of noise in principal-agent relations, including the linear model.

(8) Given the simple nature of this setting, absent arbitrary assumptions to impose additional constraints on the principal's ability to contract, the best he can do is to commit to a two-period contract ex ante as implied by this formulation of the maximization problem.

(9) A two-period version of the model is sufficient here since the Holmstrom and Milgrom (1987) linearity result requires that the optimal contract is history-independent. Establishing history-dependence and thus the suboptimality of linear contracts as defined above only requires two periods here.

(10) While convenient for tractability reasons, this also makes it easy to verify that the binomial random walk will converge to a normal distribution with a known mean and variance: with $\sigma_\eta^2 = \infty$, the agent cannot condition on past realizations and will therefore take the same action in each period. The only remaining question is therefore whether the contract will still be linear in the normally distributed aggregate output.

(11) Recall that the time-independent production function along with the preference structure in Holmstrom and Milgrom (1987) implies that, in equilibrium, every period is identical. If the optimal contract in a one-period version is not of the form given by Equation (4), then neither will it be in a multiperiod version.

(12) Stated differently, this comparison reveals that the contract $s(y, z) = \beta_y y + \beta_z z$ used in the LEN analysis can only be appropriate when y and z are independent.

(13) This is a result of the specific properties of the risk-preferences imbedded in this particular utility function. The optimal relation between risk and expected pay is indeed different for other utility functions.

(14) While Janakiraman et al. (1992) claim that "Holmstrom [1979; 1982] and Banker and Datar [1989] show that these assumptions imply that the optimal contract is based on a linear combination of the two performance measures," in fact, it is made clear in the discussion following Theorem 8 (Holmstrom (1982, 337)) that his results do not imply such a relation.

Thomas Hemmer

London School of Economics