
A Solvable Time-Inconsistent Principal-Agent Problem.

1. Introduction

The principal-agent problem is a classic optimal-contract problem and is widely applied in finance and economics. The principal and the agent, the two parties to the contract, interact with each other: under the terms of the contract, the agent generates profits for the principal, and the principal pays the agent a salary as an incentive. In this paper, we introduce an optimal contract in which both the principal and the agent are time-inconsistent, in order to solve the principal-agent problem under moral hazard in a dynamic environment. Throughout, the agent is taken to be risk-neutral and the principal risk-averse.

In solving the optimal contract for the time-inconsistent principal-agent problem, we face three difficulties. The first is the solution of the principal-agent problem in continuous time. A continuous-time model in which the agent controls the drift rate of a Brownian motion over the time interval is studied by [1]. Later, [2, 3] use martingale methods to develop the first-order approach to principal-agent problems under moral hazard with exponential utility. Reference [4] shows that the first-best sharing rule is also linear in output in the continuous-time principal-agent model with exponential utility. References [5, 6] use the stochastic maximum principle to extend Holmstrom's model and discuss the optimal solution for an agent with private information in a continuous-time model. References [7, 8] use forward-backward stochastic differential equations to study the optimal contract under moral hazard. Reference [9] systematically develops the continuous-time principal-agent problem. However, the above methods solve the principal-agent problem over a continuous period of time under time-consistent preferences, which is simplistic relative to actual behavior. It is therefore natural to incorporate time inconsistency into the principal-agent model.

The second difficulty is how to find the optimal strategy when time preferences are inconsistent. Reference [10] proposes optimal contracts for a principal who contracts with dynamically inconsistent agents in a discrete-time setting. Their study includes exploitative contracts designed for naive agents, which better explain observed contractual arrangements. Building on that, [11, 12] take the sophisticated agent as a benchmark to study whether the principal can manipulate a naive agent. The result shows that the agent's naivety does not benefit the principal: the principal's maximal utility is the same whether facing a sophisticated agent or a naive one. The distinction between naive and sophisticated agents was first introduced in [13]. Reference [14] takes quasi-hyperbolic $\beta\delta$ discounting as the agent's discount function and then discusses the optimal contract and the principal's profit level when the agent is sophisticated or naive. The above work mainly deals with time-inconsistent agents in discrete time. Less attention has been paid to continuous time, because it is complicated to obtain closed-loop solutions under nonconstant discounting. Reference [15] proposes optimal contract models based on the Pontryagin maximum principle for forward-backward stochastic differential equations to study a general continuous-time principal-agent problem in which the utility functions are time-inconsistent.

The third difficulty is how to find an exact solution of the Hamilton-Jacobi-Bellman (HJB) equation. Solving the HJB equation by stochastic control is a complex mathematical process, especially as the number of control variables grows, which leads to increasingly complex nonlinear partial differential equations. By the Legendre transform, the problem can be converted into a dual problem that is more convenient to analyze, which makes some otherwise intractable models solvable. Reference [16] studies the portfolio problem under a general utility function and proves the effectiveness of the Legendre dual method for solving the HJB equation. Reference [17] uses Legendre transform-dual theory to solve the optimal investment problem under hyperbolic absolute risk aversion (HARA) preferences in a constant elasticity of variance model. References [18, 19] study investment-consumption problems with HARA utility by the Legendre method.

In this paper, we study the optimal incentive contract under moral hazard in the framework of a continuous-time principal-agent problem with time inconsistency. We assume both the principal and the agent are time-inconsistent, where the principal is risk-averse with an exponential utility function and the agent is risk-neutral with a linear utility function. To describe the participants' time inconsistency, we assume that their discount rates are functions of time (not constants) while the discounting itself remains exponential in form, since the principal's utility function is exponential. Using the properties of the exponential function, we can divide the discount function into two parts: one part is the traditional discount function (with constant discount rate) and the other part is uncertain. Under moral hazard, the principal can observe the output process but cannot observe the agent's effort or the random perturbations. The principal therefore treats part of the discount function as an unknown factor affecting output. Reference [20] shows that the principal can continually learn and update his belief about an unknown factor (here, the uncertain part of the discount function) from current and historical information generated in the production process. In this way, we transform the time-inconsistent principal into a time-consistent principal with a learning process. For the time-inconsistent agent, we employ the Markov subgame perfect Nash equilibrium method [21] to obtain his time-consistent strategy.

Under the above assumptions, we solve the optimal contract in two cases, with a time-consistent and a time-inconsistent principal. When the principal is time-consistent, we use stochastic optimal control to derive the nonlinear partial differential equation (HJB equation) for the principal's optimal value function. This partial differential equation is hard to solve for an exact closed-loop solution; however, in some cases the original problem can be transformed into a dual problem by the Legendre transform. To obtain the exact (closed-loop) solution of the optimal contract, we use Legendre transform-dual theory to derive explicit expressions for the optimal solution and the optimal contract. When the principal is time-inconsistent, we obtain a value function that takes time, the agent's private information, and the agent's utility as variables, and we derive the corresponding three-dimensional nonlinear second-order HJB equation. In this case, we solve the HJB equation by guessing the form of its solution.

The paper is organized as follows. Section 2 presents the model. The incentive compatibility conditions and their proof are provided in Section 3. Section 4 studies the optimal contract with a time-consistent and a time-inconsistent principal. Section 5 provides a numerical simulation of the optimal strategy. Finally, Section 6 concludes.

2. The Model

2.1. The Agent. Suppose a principal signs a contract with a time-inconsistent agent to manage a production process (or invest in a risky project), with the initial time of the contracting period recorded as 0. Consider an infinite-horizon stochastic environment; let $\{W_t^0\}$ be a standard Brownian motion on the probability space $(\Omega, \mathcal{F}, P^0)$. The risky process pays a cumulative amount $Y_t$ that evolves on the period $[0, T]$ as follows:

$dY_t = (e_t - c_t)\,dt + \sigma\, dW_t^0$ (1)

where we assume that $A \subset \mathbb{R}$ is a compact set and $e_t \in A$ is the agent's effort choice, $c_t \in \mathbb{R}$ is the agent's salary (or his consumption), and $\sigma > 0$ is the project's (constant) volatility. The path of $Y_t$ is observable by both the principal and the agent, but the path of $\{W_t^0\}$ is observable only by the agent, and the effort choice $e_t$ is unobservable by the principal.
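
For intuition, here is a minimal Euler-Maruyama simulation of the output process (1), with effort and salary frozen at illustrative constants; in the model both are chosen under the contract.

```python
import numpy as np

# Euler-Maruyama simulation of the output process (1):
#   dY_t = (e_t - c_t) dt + sigma dW_t
# Effort e and salary c are held constant purely for illustration.
rng = np.random.default_rng(42)
T, n, sigma = 10.0, 10_000, 0.2
dt = T / n
e, c = 1.0, 0.6          # illustrative constant effort and salary

Y = np.empty(n + 1)
Y[0] = 0.0
dW = np.sqrt(dt) * rng.standard_normal(n)
for k in range(n):
    Y[k + 1] = Y[k] + (e - c) * dt + sigma * dW[k]

print(f"Y_T = {Y[-1]:.4f}, E[Y_T] = {(e - c) * T:.4f}")
```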

At the initial time 0, the principal provides the agent with a contract (and pays the salary according to the contract). Assume that the salary consists of two parts, a continuous payment $c_t$ and a terminal payment $C_T$. Moreover, we assume that the agent is risk-neutral; $u$ and $v$ (explicit functional forms for $u$ and $v$ are given in (26) and (27)) are utility functions, concave and twice continuously differentiable. The agent has a general discount function $h(t) = e^{-\int_0^t \rho(s)\,ds}$ (in this paper, for convenience, $h(t)$ is sometimes written as $h_t$); see [22]. The agent is time-inconsistent if $\rho(s)$ is time-dependent.

The agent's preferences as of time 0 read

$J_A(0; e) = E\left[\int_0^T h(t)\,u(c_t, e_t)\,dt + h(T)\,v(C_T)\right]$ (2)

2.2. The Principal. In this paper, we assume that the principal is of the partially naive type ([23], in a discrete-time time-inconsistent model, classifies participants into sophisticated, naive, and partially naive types based on their awareness of their own future preferences). This means the principal knows he is time-inconsistent (his discount function is time-varying), but his current perception of the future discount rate is biased relative to its true value: at time $t$ he cannot be sure of the value of the discount rate $\delta(\tau)$ for $0 \le t < \tau \le T$ (we can further assume that $\rho(t), \delta(t): [0, T] \to [0, 1]$). The principal therefore continuously updates his belief about the future discount rate based on past information. The detailed analysis is as follows.

The principal's preferences as of time 0 will be

$J_P(0; e, c) = E\left[\int_0^T k(t)\,U(e_t - c_t)\,dt + k(T)\,L(-C_T)\right]$ (3)

where $U$ and $L$ are the principal's utility functions and $k(t) = e^{-\int_0^t \delta(s)\,ds}$ is his discount function. Assume that the flow utility over output net of salary is exponential, $U(e_t - c_t) = -e^{-\lambda(e_t - c_t)}$ (as used in Section 4.2), where $\lambda$ is an absolute risk-aversion coefficient. Hence, we rewrite (3) as follows:

$J_P(0; e, c) = E\left[\int_0^T e^{-\bar\delta t}\, U(e_t - c_t + K_t)\, dt + k(T)\, L(-C_T)\right]$ (4)

where $K_t = \left(\int_0^t \delta(s)\,ds - \bar\delta t\right)/\lambda$ for $t \in [0, T]$.

From (4), under the exponential utility function we can split the principal's discounting into two parts: a constant discount rate $\bar\delta$ and the process $K_t$. The purpose of this operation is that the principal estimates a suitable constant discount rate $\bar\delta$ in place of the time-varying discount rate $\delta(t)$, without knowing the exact value of this constant. The principal therefore constantly updates his assessment of $\bar\delta$ based on past information. While $c_t$ is a subjective choice of the principal, $K_t$ reflects an objective reality, namely, the principal's type (time-inconsistent or time-consistent), and does not depend on the principal's subjective choices. We can therefore treat $K_t$ as part of the output (investment) process. In this way, we turn the principal's time-inconsistency problem into a problem with an unknown constant discount rate. If the principal is time-consistent, i.e., his discount rate is a constant $\delta_0$, he can choose $\delta_0$ as the constant discount rate $\bar\delta$, so $\delta_0 = \bar\delta$ (or $K_t = 0$). Under the principal's probability measure $P$, we regard $K_t$ as an intrinsic feature of the risky project that is not subject to the principal's control but must be taken into account. Hence the risk process (1) becomes
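
The split rests on the exponential form of the utility; a one-line verification, using $U(x) = -e^{-\lambda x}$ and the definition of $K_t$ above:

$$e^{-\int_0^t \delta(s)\,ds}\, U(e_t - c_t) = e^{-\bar\delta t}\, e^{-\left(\int_0^t \delta(s)\,ds - \bar\delta t\right)} \left(-e^{-\lambda(e_t - c_t)}\right) = e^{-\bar\delta t} \left(-e^{-\lambda(e_t - c_t + K_t)}\right) = e^{-\bar\delta t}\, U(e_t - c_t + K_t).$$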

$dY_t = (e_t - c_t + K_t)\,dt + \sigma\, dW_t^P$ (5)

As discussed above, the process $Y_t$ and the path of $W_t^0$ are observable by the agent; therefore the agent's measure is $P^0$, which means the agent does not learn secretly, so the agent's beliefs will not be a hidden-state variable ([24]) (this does not mean that the agent cannot mislead the principal by his choice of effort, only that such actions do not create persistent hidden states under the agent's beliefs). From (1) we have

[mathematical expression not reproducible] (6)

Equation (5) expresses the principal's beliefs about the project and (6) expresses the agent's beliefs. The disagreement between the principal and the agent is caused by the principal's nonexponential discounting.

At time $t$, the principal knows the exact value of $\int_0^t \delta(s)\,ds$, because $\delta(s)$ is his own discount rate, but he does not know the exact value of $\bar\delta$; we therefore use a one-sided Bayesian learning model after the contract is signed, and we assume that the prior about $K$ at time 0 is normally distributed with mean 0 and variance $\theta_0$. The agent does not update his beliefs because he has perfect information. If the agent follows the recommended effort choice $e$, the principal's posterior beliefs about $K_t$ depend on $Y_t$ and on cumulative effort $E_t = \int_0^t e_s\,ds$.

By the Kalman-Bucy filter (see [25]), the conditional expectation $\hat K_t = E[K_t \mid Y_t, E_t] = \sigma^{-2}(Y_t - E_t)/\theta_t$ and the filtering precision $\theta_t = \left(E[(K_t - \hat K_t)^2]\right)^{-1}$ satisfy the system of equations

[mathematical expression not reproducible] (7)

where $\hat K_0 = 0$ and $\theta_0 = 0$, and $Z_t$ is a standard Brownian motion under the measure induced by the effort sequence $\{e_s : 0 \le s \le t\}$, given by

[mathematical expression not reproducible] (8)
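
Since equations (7)-(8) did not reproduce, here is a minimal discretized sketch of the kind of learning they describe: a scalar Kalman-Bucy filter estimating a constant hidden drift $K$ from observations $dY = K\,dt + \sigma\,dW$. The gain and variance recursions below are the textbook equations for this toy case, not a transcription of the paper's filter, and all parameter values are illustrative.

```python
import numpy as np

# Scalar Kalman-Bucy filter for a constant hidden drift K, discretized:
#   dK_hat = (Sigma / sigma^2) (dY - K_hat dt),  dSigma/dt = -Sigma^2 / sigma^2
rng = np.random.default_rng(0)
sigma, T, n = 0.2, 10.0, 10_000
dt = T / n
K_true = 0.05                 # hidden constant the filter must learn
Sigma = 1.0                   # prior variance of K
K_hat = 0.0                   # prior mean of K

for _ in range(n):
    dY = K_true * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    innovation = dY - K_hat * dt          # surprise relative to the forecast
    gain = Sigma / sigma**2               # Kalman gain for a constant state
    K_hat += gain * innovation
    Sigma += -(Sigma**2 / sigma**2) * dt  # Riccati equation: variance shrinks

print(f"posterior mean {K_hat:.4f} (true {K_true}), variance {Sigma:.2e}")
```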

3. Incentive Compatible Conditions

In this section, we focus on the agent's problem. The agent's objective function depends on the consumption process $\{c_t\}$, which in turn depends on the history of the whole output path, so it is non-Markovian (for the proof see [6]). Thus, the agent's optimization problem cannot be solved by the standard dynamic programming principle. We instead employ a stochastic maximum principle for the weak formulation of the agent's problem. The main idea is to apply stochastic variational methods; [8, 20, 26] use a similar approach.

Define the agent's continuation value (promised utility) $q_t$ as the expected discounted utility from remaining in the contract from date $t$ forward:

[mathematical expression not reproducible] (9)

where $\bar Y_s \triangleq \{Y_t : 0 \le t \le s\}$ is the output history. We use $\Gamma$ to define the expectation operator $E_e[\cdot]$ under the measure $Q_e$. (Because the agent's objective function depends on the consumption process $c_t$, which is non-Markovian since it depends on the whole output path $\bar Y_t$, the optimization problem (9) cannot be analyzed with standard methods, so we use a martingale approach. Given a contract $(c, e^*)$, the agent controls the distribution of his salary through his choice of effort. For the technical details see Appendix A.) The agent's objective function can be recast as

[mathematical expression not reproducible] (10)

where $\bar c = c(s, \bar Y)$ represents the salary paid by the principal based on the output history.

After the change of measure, the time-inconsistent agent's problem has only one control. We apply a stochastic maximum principle to characterize the agent's optimality condition, and we use the dynamic programming equation to derive a stochastic maximum principle under general time inconsistency.

The agent's problem is to find an admissible control maximizing the expected reward $J_A(0; e, \bar c)$. In other words, the agent needs to solve the problem

[mathematical expression not reproducible] (11)

given $\bar c$, for all $0 \le t \le T$, subject to

$d\Gamma_t = \Gamma_t\,\sigma^{-1}\left[e_t - c(t, \bar Y) + \hat K_t\right] dZ_t^0$ (12)

Next we define the optimal effort for the time-inconsistent agent. Let $\varepsilon > 0$ and let $E_\varepsilon \subseteq [0, T]$ be a measurable set with Lebesgue measure $|E_\varepsilon| = \varepsilon$. Let $e_t \in A$ be an arbitrary effort choice. We define the following:

[mathematical expression not reproducible]. (13)

with $E_\varepsilon = [0, \varepsilon]$, and it is clear that $e_t^\varepsilon \in A$. We refer to $e_t^\varepsilon$ as a needle variation of the effort choice $e_t^*$. Then we have the following definition.

Definition 1. An effort choice $e_t^*$ is an optimal effort choice for the time-inconsistent agent, for $t \in (0, T]$, if

[mathematical expression not reproducible] (14)

The optimal density process [mathematical expression not reproducible] is a solution of the stochastic differential equation.

[mathematical expression not reproducible] (15)

Through the above technical treatment, we convert the time-inconsistent agent's problem into a time-consistent optimal strategy.

Next, we analyze the conditions for implementing the incentive contract. According to the previous analysis, the agent controls the distribution of his salary by choosing his effort. The idea of using the distribution of salary as the control in principal-agent problems goes back to [27] and is extended by [5, 20]. The principal's learning process complicates our problem, as past effort affects not only the current salary but also the future expectations of both the agent and the principal. We therefore have to handle a principal-agent problem with both time inconsistency and learning. In Appendix B, we show how this difficulty can be handled through an extension of the proofs in [8, 20]. The conclusion is the following.

Proposition 2. The agent's continuation value can be uniquely represented by the following differential form:

$dq_t = \left[\rho(t)\,q_t - u(c_t, e_t^*)\right] dt + \gamma_t\,\sigma\, dZ_t, \qquad q_T = v(C_T)$ (16)

where $\gamma_t$ is a square-integrable predictable process.

The necessary and sufficient conditions for $e_t^*$ to be the optimal effort choice read as follows.

(i) If $e_t^*$ is the optimal effort choice, then for every $t \in [0, T]$ there exists a solution $\{q_t, \gamma_t\}_{t \in [0,T]}$ of (16) which satisfies (in this paper, $\partial_e u$ and $\partial_{ee} u$ denote the first- and second-order partial derivatives of $u$ with respect to $e$)

$\gamma_t + \dfrac{\sigma^{-2}}{\theta_t}\, p_t + \partial_e u(c_t, e_t^*) = 0$ (17)

where

[mathematical expression not reproducible] (18)

(ii) For almost all t, if the following inequality holds

$-2\, h(t)\, \partial_{ee} u(c_t, e_t) \ge \sigma^2\, \xi_t\, \theta_t$ (19)

where $\xi$ is the predictable process defined by

[mathematical expression not reproducible] (20)

then $e_t^*$ is the optimal effort choice.

According to (18), $p_t$ is a stochastic process capturing the value of private information, and we obtain the solution

[mathematical expression not reproducible] (21)

for all $s \in [t, T]$.

In the following, at any time with $e_s^* > 0$, the process for $p$ reads

$dp_t = \left[\rho_t\, p_t - \partial_e u(c_t, e_t)\right] dt + [??]\,\sigma\, dZ_t, \qquad p_T = 0.$ (22)

where the coefficient [??] is chosen by the principal to maximize his expected utility (the proof is given in [20]).

According to (19), the process $\xi_t$ is the random fluctuation in the discounted sum of marginal utilities evaluated from time 0. Based on the stochastic differential equation for $p_t$, we can obtain that [mathematical expression not reproducible]. Besides, $c_t$, $e_t$, $\gamma_t$, and $\xi_t$ are endogenous, which implies that we need to find a contract satisfying the necessary conditions and then prove that $\xi_t$ also meets the sufficient condition. If the contract has no explicit solution, it is difficult to prove that it also satisfies the sufficient condition. In this paper, the principal's utility function is exponential and the agent's utility function is linear. In the next section we employ the exponential utility function to obtain the closed-loop solution of the contract.
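
To see the mechanics of the representation in Proposition 2, here is a minimal Euler-Maruyama sketch of (16) under frozen, illustrative policies; in the model $c_t$, $e_t$, and $\gamma_t$ are determined endogenously by the contract, and the flow utility below anticipates the specification in (26).

```python
import numpy as np

# Forward Euler-Maruyama simulation of the continuation-value dynamics (16):
#   dq_t = [rho(t) q_t - u(c_t, e_t)] dt + gamma_t * sigma * dZ_t
rng = np.random.default_rng(1)
T, n, sigma = 10.0, 5_000, 0.2
dt = T / n
rho = lambda t: 0.1                    # constant discount rate (illustrative)
u = lambda c, e: c - e**2 / 2          # agent's flow utility, as in (26)
c, e, gamma = 0.6, 1.0, 1.0            # frozen policies, purely for illustration

q = 1.0                                # promised utility at time 0 (assumed)
t = 0.0
for _ in range(n):
    dZ = np.sqrt(dt) * rng.standard_normal()
    q += (rho(t) * q - u(c, e)) * dt + gamma * sigma * dZ
    t += dt
print(f"q_T = {q:.4f}")                # in the contract this must equal v(C_T)
```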

4. The Optimal Contract

This section explains in detail how to solve the principal's problem and derive the optimal contract in closed form when the principal's utility is exponential.

Eliminating $\hat K$ from the list of states. For a given contract $(c, e^*)$, the principal's expected utility from date $t$ forward reads

[mathematical expression not reproducible] (23)

and define

[mathematical expression not reproducible] (24)

and we have the following result.

Proposition 3.

[mathematical expression not reproducible] (25)

Proof in Appendix C.

Assume that Propositions 2 and 3 hold, so that the necessary condition is also sufficient. The principal's problem consists of solving for $J_P(t, c, e)$ subject to the two promise-keeping constraints (9) and (18) and to the incentive constraint (17). Since the posterior mean $\hat K$ does not enter directly into any of the constraints, it can be dispensed with as a state, leaving only the precision as a belief state. Furthermore, since $\theta_t$ is deterministic, we may index the precision by $t$. The fact that the expected value of $K_t$ is immaterial to the principal's objective illustrates that incentives are designed to reward effort, not ability.

The Agent's utility function. To obtain the optimal contract in closed form, recall our assumption in Section 2.1: the agent is risk-neutral, with flow utility linear in consumption and a quadratic effort cost, i.e.,

$u(c, e) = c - \dfrac{e^2}{2};$ (26)

moreover, we make a particular assumption about the terminal utility for the agent, setting

$v(C_T) = a + \ln C_T$ (27)

where $a$ is a constant. This assumption describes a situation in which an infinitely lived agent retires at the termination date $T$ of the contract and, after retirement, consumes a permanent annuity derived from $C_T$. We concentrate on problems where the contracting horizon goes to infinity, $T \to \infty$, so this particular choice of $v$ is not critical.

Incentive-providing contracts. We restate the principal's optimization problem as

[mathematical expression not reproducible] (28)

subject to

[mathematical expression not reproducible] (29)

[mathematical expression not reproducible] (30)

$\gamma_t = -\partial_e u(c_t, e_t) - \dfrac{\sigma^{-2}}{\theta_t}\, p_t$ (31)

Since the state variables $q_t$ and $p_t$ are Markovian, we can use the HJB equation to analyze the principal's optimal control problem. Let $V(t, q, p)$ be the principal's value function; it satisfies the HJB equation for $0 < t < T$:

[mathematical expression not reproducible] (32)

4.1. Second-Best Contracts for the Time-Consistent Principal. In this section, we consider the model with a time-consistent principal under hidden action: the principal can observe the process $Y_t$ but knows neither the agent's type nor the agent's effort $e_t$. For incentive contracts $(e, c)$ with $e_t > 0$ for all $t \ge 0$, the necessary condition for incentive compatibility (17) becomes $\gamma_t = -\partial_e u(c_t, e_t)$. When the principal is time-consistent, $\delta_t = \bar\delta$ for all $t \in [0, T]$; hence $K = 0$. There is no need to infer $K$ and there is no belief manipulation, so the information value $p$ equals zero.

4.1.1. The HJB Equation. As the horizon tends to infinity ($T \to \infty$), the agent's continuation value $q$ is the only state variable, and the principal's HJB equation reads

[mathematical expression not reproducible] (33)

with the terminal condition $V(T, q_T) = L(-C_T)$, where $q_T = v(C_T)$. Taking the first-order conditions for $(e, c)$, we have

[mathematical expression not reproducible] (34)

Under full information, the principal can observe the agent's effort and consumption, and there is no private information. Hence, the principal can freely choose $\gamma_t$ as part of the contract; i.e., $\gamma$ is independent of $c$ and $e$, and we have the following proposition.

Proposition 4. Under full information, the optimal effort for the principal is the constant $e^* = 1$. We call $e^* = 1$ the first-best effort level.
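
Heuristically, ignoring discounting and the principal's risk aversion, the first-best effort maximizes the instantaneous total surplus of the two parties; with the quadratic effort cost from (26),

$$\max_{e}\left[(e - c) + \left(c - \tfrac{e^2}{2}\right)\right] = \max_{e}\left[e - \tfrac{e^2}{2}\right] \;\Longrightarrow\; 1 - e^* = 0 \;\Longrightarrow\; e^* = 1.$$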

In the hidden-action case, the optimal effort and consumption derived from (34) are

[mathematical expression not reproducible] (35)

Substituting (35) into (33), the HJB equation for the value function $V(t, q)$ becomes

[mathematical expression not reproducible] (36)

Since the principal has an exponential utility function, this HJB equation is a complex nonlinear partial differential equation, and the classic separation-of-variables method does not solve it directly. In what follows, we employ the Legendre transform to convert the problem into a dual problem, and by solving the dual problem we obtain the optimal solution of the original one.

4.1.2. Legendre Transform. The dual function of V is defined by

$\hat V(t, z) = \sup_{q}\left\{V(t, q) - z q\right\}$ (37)

where $z < 0$ is the dual variable of $q$. The function $g(t, z)$ is closely related to $\hat V(t, z)$; in this paper, $g(t, z)$ is defined as the dual function of $V(t, q)$ and satisfies

$g(t, z) = -\partial_z \hat V(t, z)$ (38)

According to the definition of the dual function, we have $z = \partial_q V(t, q)$ and

$g(t, z) = q, \qquad \hat V(t, z) = V(t, q) - z\, g(t, z)$ (39)

Based on the conclusion in [28], the following transformation rules are obtained:

$\partial_t V = \partial_t \hat V, \qquad \partial_q V = z, \qquad \partial_{qq} V = -\dfrac{1}{\partial_{zz} \hat V}$ (40)
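
Rules (40) can be checked numerically; here is a sketch on a toy concave function, $V(q) = -e^q$, chosen only so that the dual variable $z = \partial_q V$ is negative as above (it is not the paper's value function). Its dual works out to $\hat V(z) = z - z\ln(-z)$ with maximizer $q = \ln(-z)$.

```python
import numpy as np

# Finite-difference check of the Legendre transformation rules (40)
# on the toy pair V(q) = -e^q and V_hat(z) = z - z ln(-z), z < 0.
V     = lambda q: -np.exp(q)
V_hat = lambda z: z - z * np.log(-z)

z, h = -2.0, 1e-4
q = np.log(-z)                                   # the maximizer paired with z

d_q_V   = (V(q + h) - V(q - h)) / (2 * h)
d_qq_V  = (V(q + h) - 2 * V(q) + V(q - h)) / h**2
d_zz_Vh = (V_hat(z + h) - 2 * V_hat(z) + V_hat(z - h)) / h**2

print(f"d_q V  = {d_q_V:.5f}  vs z              = {z}")       # rule: d_q V = z
print(f"d_qq V = {d_qq_V:.5f} vs -1/d_zz V_hat  = {-1 / d_zz_Vh:.5f}")
```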

Define the dual function of the utility function as

[mathematical expression not reproducible] (41)

Following the analysis in [29], the functions $U(x)$ and $\hat U(z)$ are linked by the Legendre conversion

[mathematical expression not reproducible] (42)

The relationship between the optimal values $x^*$ and $z^*$ is

[mathematical expression not reproducible], (43)

According to equation (39) and rules (40), the HJB equation for the dual problem is

[mathematical expression not reproducible] (44)

Taking the derivative with respect to $z$ and combining with (35), we have

[mathematical expression not reproducible] (45)

4.1.3. Solution of the HJB Equation. According to the form of the principal's utility function, we have

$G(T, z) = \dfrac{1}{\lambda} \ln\left(-\dfrac{z}{\lambda}\right)$ (46)

We guess that HJB equation (36) has a solution of the following form:

[mathematical expression not reproducible] (47)

with $\varphi(T) = h(T)$ and $\phi(T) = 0$.

[mathematical expression not reproducible] (48)

Plugging into (45) and separating the variables, we obtain

[mathematical expression not reproducible] (49)

Thus, the following two ordinary differential equations are established:

$\partial_t \varphi + h = 0, \qquad \varphi(T) = h(T)$ (50)

[mathematical expression not reproducible] (51)
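
The first of these, (50), integrates immediately: since $\partial_t \varphi = -h$ with $\varphi(T) = h(T)$,

$$\varphi(t) = \varphi(T) + \int_t^T h(s)\,ds = h(T) + \int_t^T h(s)\,ds.$$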

Proposition 5. Assume that (i) the principal is time-consistent and the agent is time-inconsistent, (ii) $u(c, e)$ and $v(C)$ are as defined in (26) and (27), and (iii) $e_t^* > 0$ for all $t$, so that the incentive constraint (17) binds for almost all $t \in [0, T]$. Then the recommended effort and the agent's consumption are given by

[mathematical expression not reproducible] (52)

where

[mathematical expression not reproducible] (53)

4.2. Second-Best Contracts for the Time-Inconsistent Principal. In this section, we discuss the case where the principal is time-inconsistent, i.e., the discount rate $\delta$ is not constant. The principal still cannot observe the agent's effort and consumption (moral hazard), so the value of private information is not zero. As described in Section 3, the HJB equation is as follows:

[mathematical expression not reproducible] (54)

We solve the above equation by guessing the form of its solution. From the first-order conditions for $(e, c, [??])$, we have

[mathematical expression not reproducible] (55)

Substituting (55) into (54) and denoting [mathematical expression not reproducible], we have

[mathematical expression not reproducible] (56)

where [mathematical expression not reproducible].

In particular, we suppose the value function has the following form:

$V(t, p, q) = -e^{\lambda[A(t) q + g(t, p)]}$ (57)

with A(T) = 1 and g(T, p) = 0.

Hence, for some functions A(t) and g(t, p), the expressions of optimal effort and consumption are

[mathematical expression not reproducible] (58)

Substituting the optimal effort and consumption into (54), we deduce

[mathematical expression not reproducible] (59)

where [mathematical expression not reproducible].

The following two differential equations can be obtained by eliminating the dependence on q:

$\partial_t A + A\,\rho(t) - A^2 = 0, \qquad A(T) = 1$ (60)

[mathematical expression not reproducible], (61)

According to (60), we can obtain

[mathematical expression not reproducible] (62)
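
For reference, (60) is a Riccati equation that the substitution $B = 1/A$ turns into the linear equation $\partial_t B = \rho(t)\, B - 1$ with $B(T) = 1$; solving with the integrating factor $e^{-\int_0^t \rho(s)\,ds}$ gives a closed form that (62) should match up to notation:

$$A(t) = \left[ e^{\int_0^t \rho(s)\,ds} \left( e^{-\int_0^T \rho(s)\,ds} + \int_t^T e^{-\int_0^s \rho(\tau)\,d\tau}\, ds \right) \right]^{-1}.$$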

From the above analysis, we need the specific expression of $g(t, p)$ to obtain explicit expressions for the effort $e(t)$ and the consumption $c(t)$.

Let us expand $g(t, p)$ around $p = 0$:

[mathematical expression not reproducible] (63)

Here we consider a simple situation. Given the principal's utility function $U = -e^{-\lambda(e - c)}$ and $dp(t)/dt = \rho(t)\,p(t) + e(t)$, suppose the solution of equation (61) has the structure [mathematical expression not reproducible] with the terminal condition [mathematical expression not reproducible].

Lemma 6. Suppose the solution of (61) has the structure [mathematical expression not reproducible], with the terminal condition [mathematical expression not reproducible]; then [mathematical expression not reproducible] are, respectively, the solutions of the following differential equations:

[mathematical expression not reproducible] (64)

where [mathematical expression not reproducible].

The proof is as follows.

Substituting [mathematical expression not reproducible] into (61) and computing, we obtain

[mathematical expression not reproducible] (65)

The following two differential equations can be obtained by eliminating the dependence on p:

[mathematical expression not reproducible] (66)

The conclusion of Lemma 6 follows by solving the above two differential equations.

Proposition 7. When the principal is time-inconsistent, the expressions of optimal effort and consumption for the agent are

[mathematical expression not reproducible] (67)

where A(t), [??](t), and [??](t) are given by (62) and (64), respectively.

The proposition shows that the second-best optimal effort is less than the first-best effort. The optimal consumption is a linear function of the agent's promised value and private information.

5. Numerical Simulation

In this section, we provide numerical simulations to characterize the dynamic behavior of the optimal strategy derived in the previous section. First, we simulate the optimal effort when the principal is time-consistent.

As shown in Figure 1, the agent's discount rate is taken to be constant, $\rho(t) = \bar\rho$. The optimal effort decreases as the volatility increases, showing that the greater the uncertainty, the lower the agent's effort. In addition, all three curves decline over time, indicating that effort is a decreasing function of time.

If we instead take the agent's discount rate to satisfy $d\rho(t)/dt = \bar\rho - \rho(t)$, i.e., $\rho(t) = \bar\rho + e^{-t}$, with $\bar\rho = 0.1$ and $\sigma = 0.1, 0.2, 0.5$, respectively, the resulting effort curves are drawn in Figure 2.

By analogy with Figure 1, although the discount function differs, we reach a similar conclusion: the greater the uncertainty, the less effort the agent provides, whether he is time-consistent or time-inconsistent. The reason is that under moral hazard the principal cannot distinguish the effects of the agent's effort and of uncertainty on the risky project's return.

Next, we simulate the optimal consumption (salary) under specific parameters. Let $\rho(t) = \bar\rho = \bar\delta$, $\lambda = \varphi/h\rho$, and $\sigma^2 = 0.25$. According to the expression for $\varphi$, we have

$\varphi(t) = \dfrac{1 - e^{\rho(t - T)}}{2(\rho^2 + \sigma^2)}$ (68)
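
A short script can reproduce two ingredients of these simulations: the time-varying discount rate behind Figure 2 and $\varphi(t)$ from (68). The horizon $T = 10$ is an assumed value; the effort and consumption paths themselves depend on expressions (52) and (69), which are not reproduced here.

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot rho(t) = rho_bar + exp(-t) (used for Figure 2) and phi(t) from (68)
# under the constant-rate case rho(t) = rho_bar, for several sigma values.
rho_bar, T = 0.1, 10.0          # rho_bar from the text; T is an assumed horizon
t = np.linspace(0.0, T, 400)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))
ax1.plot(t, rho_bar + np.exp(-t))
ax1.set(title=r"$\rho(t)=\bar\rho+e^{-t}$", xlabel="t", ylabel=r"$\rho(t)$")

for sigma in (0.1, 0.2, 0.5):
    phi = (1.0 - np.exp(rho_bar * (t - T))) / (2.0 * (rho_bar**2 + sigma**2))
    ax2.plot(t, phi, label=rf"$\sigma={sigma}$")
ax2.set(title=r"$\varphi(t)$ from (68)", xlabel="t")
ax2.legend()
plt.tight_layout()
plt.show()
```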

Since q is a stochastic process, the mathematical expectation of q can be expressed as

[mathematical expression not reproducible] (69)

where $q(T) = v(c(T)) \triangleq a + \ln c(T)$ for a constant $a$. Substituting the expressions for $e$, $q$, and $\varphi$ into the above expression, the relationship between $c(t)$ and time $t$ can be simulated with the terminal condition $c(T) = 1$.

Consumption initially declines because effort decreases monotonically over time. After falling to a certain level, consumption bottoms out and rises again, a trend explained by the terminal condition imposed on consumption. The overall consumption path is shown in Figure 3.

As Figure 3 shows, in the period just after the contract starts (for example, $t \in [0, 5]$), the greater the volatility, the lower the consumption, because greater volatility leads to lower effort. In the latter part of the contract, the situation is reversed.

Finally, we simulate the optimal effort when the principal is time-inconsistent. Assuming the agent's discount rate is constant, the two effort curves are horizontal lines, as Figure 4 shows: under a given discount rate, the optimal effort does not change over time. This is because we hypothesize that the principal's value function is exponential under private information. Moreover, effort is a decreasing function of the agent's discount rate: the more the agent values the present (immediate enjoyment), the higher his discount rate and the less effort he provides.

6. Conclusion

In this paper, we studied a time-inconsistent principal-agent problem under full information and under moral hazard. In particular, the optimal contracts we discussed in detail assume that the principal is risk-averse and the agent is risk-neutral. The paper makes two main contributions. First, we gave technical treatments of the time-inconsistent principal and agent, respectively: we transformed the principal into a time-consistent one by splitting the time-varying discount rate, and we used the Markov subgame perfect Nash equilibrium method to obtain the time-consistent strategy of the time-inconsistent agent. Second, we used Legendre transform-dual theory to convert the HJB equation into its dual equation; solving the dual yields the solution of the original HJB equation and hence explicit expressions for the optimal effort and optimal consumption. Under moral hazard, we also obtained the exact solution of the original HJB equation by guessing its form. We found that the agent's optimal consumption is a linear function of the promised value $q$ and the private information $p$, and that the optimal effort is a function of the agent's discount rate. Overall, we considered the contractual relationship between the principal and the agent in a special setting; more general time-inconsistent contracts are left for future research.

https://doi.org/10.1155/2018/8512608

Appendix

A. Details of the Change of Measure

Consider a Brownian motion $Z^0$ on a probability space with probability measure $Q$, and let

$dY_t = \sigma\, dZ_t^0$ (A.1)

so that $Y_t$ is a (scaled) Brownian motion under $Q$. Given a contract $c(t, \bar Y_t)$, we define the drift of output as

$f(t, \bar Y, e_t) = e_t - c(t, \bar Y) + \hat K_t$ (A.2)

Since the expected output is linear in cumulative output, we define an $\mathcal{F}_t$-predictable process associated with an effort $e \in A$:

[mathematical expression not reproducible] (A.3)

for $0 \le t \le T$; $\Gamma_t$ is an $\mathcal{F}_t$-martingale with $E(\Gamma_T(e)) = 1$ for all $e \in A$. By the Girsanov theorem, the new measure $Q_e$ is defined by

$\dfrac{dQ_e}{dQ} = \Gamma_T(e)$ (A.4)

and the process $Z_t^e$ defined by

[mathematical expression not reproducible] (A.5)

is a Brownian motion under $Q_e$, and the triple $(Y, Z^e, Q_e)$ is a weak solution of the SDE

$dY_t = f(t, \bar Y, e_t)\,dt + \sigma\, dZ_t^e$ (A.6)

Hence each effort choice $e$ induces a different Brownian motion. The process $\Gamma$ defined above satisfies $\Gamma_t = E(\Gamma_T \mid \mathcal{F}_t)$, which is the density process for the change of measure.
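
A Monte Carlo check of the martingale property $E[\Gamma_T(e)] = 1$ for the stochastic exponential in (A.3)-(A.4); the constant theta below stands in for $\sigma^{-1} f(t, \bar Y, e_t)$, and its value is illustrative only.

```python
import numpy as np

# Verify E[Gamma_T] = 1 for the stochastic exponential with constant
# drift-to-volatility ratio theta, simulating log Gamma by Ito's formula:
#   d(log Gamma) = theta dZ - 0.5 theta^2 dt
rng = np.random.default_rng(2)
T, n, paths = 1.0, 1_000, 100_000
dt = T / n
theta = 0.8                            # illustrative sigma^{-1} f

log_gamma = np.zeros(paths)
for _ in range(n):
    dZ = np.sqrt(dt) * rng.standard_normal(paths)
    log_gamma += theta * dZ - 0.5 * theta**2 * dt

print(f"E[Gamma_T] = {np.exp(log_gamma).mean():.4f} (should be close to 1)")
```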

B. The Proof of Proposition 2

Consider the agent's problem and suppose $c(t, \bar Y_t)$ is given; to ease the presentation we set $c = 0$. Let [mathematical expression not reproducible] be the density process corresponding to [mathematical expression not reproducible], i.e.,

[mathematical expression not reproducible] (B.1)

Processes [mathematical expression not reproducible] are defined by the SDE

[mathematical expression not reproducible] (B.2)

and

[mathematical expression not reproducible] (B.3)

can be regarded as the first-order and the second-order variation of the density process [{[[GAMMA].sup.[epsilon].sub.t]}.sub.t[member of][0,T]].

By Theorem 4.4 in Chapter 3 of [30], the following expansion holds:

[mathematical expression not reproducible] (B.4)

Define the adjoint variables $\{M(t), N(t)\}_{t \in [0,T]}$ as follows:

[mathematical expression not reproducible] (B.5)

Integrating by parts with the adjoint processes $\{M(t), N(t)\}_{t \in [0,T]}$, we can remove [mathematical expression not reproducible] from the above equation.

Now we introduce the Hamiltonian function H by

[mathematical expression not reproducible] (B.6)

Since the control variable enters the volatility of the density process, we have to introduce a second pair of adjoint processes $\{P_t, [??]_t\}_{t \in [0,T]}$ by

[mathematical expression not reproducible] (B.7)

By Lemmas 4.5 and 4.6 in Chapter 3 of [30], we have

[mathematical expression not reproducible] (B.8)

We claim that the process $\{P_t\}_{t \in [0,T]}$ is nonpositive; indeed [H.sub.IT] = 0, hence

[mathematical expression not reproducible] (B.9)

where

[mathematical expression not reproducible] (B.10)

Therefore, a sufficient condition for $e_t^*$ to be an optimal effort strategy is that

[mathematical expression not reproducible] (B.11)

Let [mathematical expression not reproducible]; we have

[mathematical expression not reproducible] (B.12)

Hence, (16) is satisfied.

There is no drift in $\Gamma$; in addition, since $\Gamma_t > 0$, we can state the optimality conditions in terms of the Hamiltonian $H$ itself. Hence, the first-order condition for $e$ is

[mathematical expression not reproducible] (B.13)

Finally, the necessary condition for $e_t^*$ to be an optimal effort choice is $\partial_e H = 0$, namely,

[mathematical expression not reproducible] (B.14)

On the other hand, from (9) the expected utility from $e^*$ is given by $J_A(0; e^*) = q_0^*$,

[mathematical expression not reproducible] (B.15)

Then, for an arbitrary effort choice $e \in A$, we define $\omega_t = e_t - e_t^*$ and $\Delta_t = \int_0^t \omega_s\, ds = E_t - E_t^*$; the following holds:

[mathematical expression not reproducible] (B.16)

The second term on the right-hand side, [mathematical expression not reproducible], uses the fact that the stochastic integral is a martingale. The last term on the right-hand side can be written as

[mathematical expression not reproducible] (B.17)

where the last equality follows from the definitions of $p$ and $\xi$. Hence,

[mathematical expression not reproducible] (B.18)

Similar to [20], we define a new function

[mathematical expression not reproducible] (B.19)

which says that $G(x)$ is a generalized Hamiltonian for the agent's problem under full information, that is, without private information (i.e., $\xi = 0$); in that case $h(t)\, G(t, x) = H(t, x)$. Taking the first-order approximation of $G(x)$ around $E^*$ yields

[mathematical expression not reproducible] (B.20)

so that

[mathematical expression not reproducible] (B.21)

is negative when $G(\cdot)$ is concave. Concavity is verified via the Hessian matrix of $G(\cdot)$,

[mathematical expression not reproducible] (B.22)

which is negative semidefinite when $-2 h(t) (\sigma^{-2}/\theta_t)\, \partial_{ee} u(c_t, e_t) \ge \xi_t$. Since the agent seeks to maximize expected utility, $e^*$ is then the optimal effort choice for the agent. This completes the proof of Proposition 2.

C. The Proof of Proposition 3

First, we state a useful lemma.

Lemma C.1 (Jensen's inequality). Let $f$ be a continuous convex function and let $X$ be an integrable random variable; then, for a $\sigma$-algebra $\mathcal{F}$, the conditional expectation satisfies

$f(E[X \mid \mathcal{F}]) \le E[f(X) \mid \mathcal{F}]$ (C.1)

We now prove Proposition 3. The belief $\hat K_t$ follows a martingale and is $\mathcal{F}_t^Y$-predictable,

[mathematical expression not reproducible] (C.2)

Divide $\Pi_t(c, e^*)$ into two parts. For the first part,

[mathematical expression not reproducible] (C.3)

and on the other hand,

[mathematical expression not reproducible] (C.4)

Combining the above two inequalities, we obtain

[mathematical expression not reproducible] (C.5)

For the second part,

[mathematical expression not reproducible] (C.6)

And on the other hand,

[mathematical expression not reproducible] (C.7)

From the above two inequalities, we have

[mathematical expression not reproducible] (C.8)

Finally,

[mathematical expression not reproducible] (C.9)

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

[1] B. Holmstrom and P. Milgrom, "Aggregation and linearity in the provision of intertemporal incentives," Econometrica, vol. 55, no. 2, pp. 303-328, 1987.

[2] H. Schättler and J. Sung, "The first-order approach to the continuous-time principal-agent problem with exponential utility," Journal of Economic Theory, vol. 61, no. 2, pp. 331-371, 1993.

[3] H. Schättler and J. Sung, "On optimal sharing rules in discrete- and continuous-time principal-agent problems with exponential utility," Journal of Economic Dynamics and Control, vol. 21, no. 2-3, pp. 551-574, 1997.

[4] H. M. Muller, "The first-best sharing rule in the continuous-time principal-agent problem with exponential utility," Journal of Economic Theory, vol. 79, no. 2, pp. 276-280, 1998.

[5] N. Williams, "Persistent private information," Econometrica, vol. 79, no. 4, pp. 1233-1275, 2011.

[6] N. Williams, "A solvable continuous time dynamic principal-agent model," Journal of Economic Theory, vol. 159, part B, pp. 989-1015, 2015.

[7] J. Cvitanić, X. Wan, and J. Zhang, "Optimal contracts in continuous-time models," Journal of Applied Mathematics and Stochastic Analysis, Article ID 95203, 27 pages, 2006.

[8] J. Cvitanić, X. Wan, and J. Zhang, "Optimal compensation with hidden action and lump-sum payment in a continuous-time model," Applied Mathematics & Optimization, vol. 59, no. 1, pp. 99-146, 2009.

[9] J. Cvitanić and J. Zhang, Contract Theory in Continuous-Time Models, Springer-Verlag, New York, NY, USA, 2012.

[10] K. Eliaz and R. Spiegler, "Contracting with diversely naive agents," Review of Economic Studies, vol. 73, no. 3, pp. 689-714, 2006.

[11] M. Yilmaz, "Repeated moral hazard with a time-inconsistent agent," Journal of Economic Behavior & Organization, vol. 95, pp. 70-89, 2013.

[12] M. Yilmaz, "Contracting with a naive time-inconsistent agent: To exploit or not to exploit?" Mathematical Social Sciences, vol. 77, pp. 46-51, 2015.

[13] D. Laibson, "Golden eggs and hyperbolic discounting," The Quarterly Journal of Economics, vol. 112, no. 2, pp. 443-478, 1997.

[14] Z. Zou, S. Chen, Y. Yan, and Z. Honghao, "The optimization and decision-making of principal-agent problem based on time-inconsistency preference," Chinese Journal of Management Science, vol. 21, no. 4, pp. 27-34, 2013.

[15] B. Djehiche and P. Helgesson, "The principal-agent problem with time inconsistent utility functions," arXiv preprint arXiv:1503.05416, 2015.

[16] B. Bian and H. Zheng, "Turnpike property and convergence rate for an investment model with general utility functions," Journal of Economic Dynamics & Control, vol. 51, pp. 28-49, 2015.

[17] E. J. Jung and J. H. Kim, "Optimal investment strategies for the HARA utility under the constant elasticity of variance model," Insurance: Mathematics and Economics, vol. 51, no. 3, pp. 667-673, 2012.

[18] H. Chang and X.-M. Rong, "Legendre transform-dual solution for a class of investment and consumption problems with HARA utility," Mathematical Problems in Engineering, vol. 2014, pp. 1-7, 2014.

[19] H. Chang and K. Chang, "Optimal consumption investment strategy under the Vasicek model: HARA utility and Legendre transform," Insurance: Mathematics & Economics, vol. 72, pp. 215-227, 2017.

[20] J. Prat and B. Jovanovic, "Dynamic contracts when the agent's quality is unknown," Theoretical Economics, vol. 9, no. 3, pp. 865-914, 2014.

[21] I. Ekeland and A. Lazrak, "Being serious about non-commitment: subgame perfect equilibrium in continuous time," arXiv preprint math/0604264v1, 2006.

[22] J. Yong, "Deterministic time-inconsistent optimal control problems--an essentially cooperative approach," Acta Mathematicae Applicatae Sinica, vol. 28, no. 1, pp. 1-30, 2012.

[23] T. O'Donoghue and M. Rabin, "Choice and procrastination," The Quarterly Journal of Economics, vol. 116, no. 1, pp. 121-160, 2001.

[24] T. Adrian and M. M. Westerfield, "Disagreement and learning in a dynamic contracting model," Review of Financial Studies, vol. 22, no. 10, pp. 3873-3906, 2009.

[25] R. S. Liptser and A. N. Shiryaev, Statistics of Random Processes II: Applications, no. 6 in Applications of Mathematics / Stochastic Modelling and Applied Probability, Springer.

[26] Z. He, "Dynamic compensation contracts with private savings," Review of Financial Studies , vol. 25, no. 5, pp. 1494-1549, 2012.

[27] J. Mirrlees, "Notes on welfare economics, information, and uncertainty," Essays on economic behavior under uncertainty, pp. 243-261, 1974.

[28] J. W. Gao, "Stochastic optimal control of DC pension funds," Insurance: Mathematics and Economics, vol. 42, no. 3, pp. 1159-1164, 2008.

[29] D. Kramkov and W. Schachermayer, "Necessary and sufficient conditions in the problem of optimal investment in incomplete markets," The Annals of Applied Probability, vol. 13, no. 4, pp. 1504-1516, 2003.

[30] J. Yong and X. Y. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations, vol. 43 of Applications of Mathematics, Springer, New York, NY, USA, 1999.

Chao Li and Zhijian Qiu

School of Economic Mathematics, Southwestern University of Finance and Economics, Chengdu, China

Correspondence should be addressed to Chao Li; 1160202z1004@2016.swufe.edu.cn

Received 8 March 2018; Accepted 10 June 2018; Published 1 August 2018

Academic Editor: Jorge E. Macias-Diaz

Caption: Figure 1: The effort under a constant discount rate for different uncertainty levels $\sigma$.

Caption: Figure 2: The effort under a variable discount rate for different uncertainty levels $\sigma$.

Caption: Figure 3: The consumption under a constant discount rate for different uncertainty levels $\sigma$.

Caption: Figure 4: The effort with a time-inconsistent principal for different values of the agent's discount rate $\rho$.
