
THE AGENCY OF POLITICS AND SCIENCE.

I. INTRODUCTION

Reaching scientific certainty before implementing policies is typically an illusion. The ubiquitous ambiguity leaves room for speculation that political interests may determine in which form scientific insights are publicized or even produced. As such, a natural question arises: Why do governments commission research at all? One reason suggested by The LSE GV314 Group (2014) is that politicians look for endorsement of their preferred policy. Consequently, they select researchers who are more likely to deliver the desired results and have the potential to make them "look good." In line with this explanation, Richard Tol (1) claims that, as a result of governmental agencies following a political agenda, "certain researchers are promoted at the expense of more qualified colleagues" (The Daily Caller 2014).

In this paper, we build a model of delegation of research that shows the mechanisms through which politicians can obtain scientific support for their preferred policies. The model is able to reproduce the above claim by Richard Tol and to uncover the consequences of political pressure for the type of researchers chosen for governmental research. Our model is motivated by anecdotal evidence on politics-science interactions, a prime example being the concerns about the involvement of politics in the drafting of scientific results voiced by scientists participating in the work of the Intergovernmental Panel on Climate Change (IPCC).

On his "An Economics View of the Environment" blog, Robert Stavins, a Co-Coordinating Lead Author of the IPCC Fifth Assessment Report, expressed his frustration and disappointment over the intervention of the governments' representatives in making recommendations and changes in the text of the Summary for Policymakers (SPM) "on purely political, as opposed to scientific bases" (Stavins 2014). Stavins points to the fact that the process followed by the IPCC in the approval of the SPM seeks to build political credibility by sacrificing scientific integrity. Similarly, in a column of The Daily Caller from May 29, 2014, Richard Tol argues that it is "rare that a government agency with a purely scientific agenda takes the lead on IPCC matters" {The Daily Caller 2014). Consistent with such concerns, the Union of Concerned Scientist published a report detailing political interference at the U.S. Environmental Protection Agency (Union of Concerned Scientists 2008). More recently, in the same context of environmental and climate policy, Dave Levitan at Gizmodo criticizes an "emerging Trump Scientific Method that places predetermined conclusions front-and-center" (Levitan 2017).

The above examples suggest a well-grounded concern about politically motivated interference in the shaping of environmental policy. Arguably, climate policies are a long-term endeavor that addresses a state of the world revealed only in the far future, when today's decision makers are no longer in office and, therefore, cannot be held accountable. However, policies for adaptation to extreme weather events often reveal their true effectiveness within the term of office of the relevant decision maker, especially in regions of high vulnerability. Similarly, the effect of evidence-based health care policies becomes apparent after short periods of time, during which the policy maker is still in the political arena and seeks voters' support. In this respect, Brownson, Chriqui, and Stamatakis (2009) emphasize the interference of factors such as ideology, interest groups, or personal experience in the adoption of laws and regulations in the public health domain (e.g., vaccination, smoking regulations, trans fats regulations).

In fact, a group of researchers from the London School of Economics surveyed 204 academics who had conducted government-commissioned research in various fields and found that political pressure occurs at all stages of the research process, from the commissioning to the drafting of the results (The LSE GV314 Group 2014). The authors conclude that, nonetheless, there are persistent disincentives for researchers to compromise their scientific integrity for the sake of governmental contracts, at least for the British researchers included in their study. In this sense, in a comment in Nature, Geden (2015) warns that scientific advisers who shy away from reporting the scientific facts related to climate change "squander their scientific reputations and public trust in climate research." Hence, depending on the extent of their concern for academic reputation, one would expect that some researchers resist the political pressure, while others comply with the conditions of a governmental contract that requires the delivery of favorable results. By the same token, some politicians may be more interested in political ammunition while others are more concerned with the reliability of the scientific evidence itself.

The above anecdotal evidence not only suggests a widespread involvement of politics in the production of policy research, but it also points to the channels through which this involvement materializes. First, political pressure is exercised at the level of the released message, that is, in the policy recommendation, even though the politicians may know the actual research results. This pressure is a function of the strength of political ideology, the influence of interest groups or, simply, the politicians' personal biases. Second, the success of the political pressure crucially depends on the degree of reputational concern exhibited by the researchers. Similarly, politicians differ in their preferences for the quality of the scientific advice.

Guided by the above observations, we model a principal-agent relationship between a researcher (the agent, "she") and a politician (the principal, "he") who uses the researcher's remuneration to induce her to bias the scientific report toward the politician's ideology. When choosing a policy, the politician faces a conflict between three objectives. First, he would like the policy to be in line with his ideal policy that we assume to be exogenous and independent of the true state of the world. Second, the politician is interested in pleasing the voters by adopting a policy which is in line with the true state of the world. Third, we assume that the politician can be punished by voters for implementing a policy that deviates from the scientific advice. The researcher, on the other hand, derives utility from the research grant paid by the politician, and is interested in preserving her academic reputation. Researchers may differ with respect to their concern for reputation as well as with respect to their ability to conduct quality research. We consider both symmetric and asymmetric information concerning the researcher's type at the time of contracting.

In our model, the politician offers a contract to a researcher to get information about the state of the world which corresponds to the optimal policy from the point of view of the voters. At the time of contracting the state of the world is unknown to both the politician and the researcher, as well as to the voters. The contract consists of a one-time transfer, which is paid after the researcher delivers the results of the research, but before the state of the world is realized. After the research report is made public, the politician implements a policy. Finally, the state of the world is revealed and the payoffs are realized.

Our main results are the following. For the case of symmetric information, that is, where the politician knows the researcher's type, the politician induces the researcher to release a biased report which is a weighted average of the politician's ideal policy and the voters' most preferred policy which we call the "optimal" policy. If researchers do not differ with respect to their ability, the politician contracts with the researcher who cares the least about her reputation as biasing her report is least costly. Conversely, if researchers do not differ with respect to their reputational concerns, the politician hires the researcher with the highest ability. As a consequence, the interests of the politicians and the voters are aligned regarding the researcher's ability, but differ regarding the researcher's reputational concerns. Voters prefer the researcher with the highest concern for reputation as she can be bribed to a lesser extent into misreporting scientific evidence. Finally, we discuss how the politician trades off ability and reputation of researchers when deciding on whom to hire. We demonstrate that politicians who strongly favor specific (biased) policies tend to contract with low-ability researchers.

We also discuss the politician's contract offers when he is not fully informed about the type of the researcher. First, we consider asymmetric information with respect to the researcher's concern for reputation. We show that asymmetric information changes the contracted report of the researcher with high reputational concerns such that the resulting policy is closer to the optimal policy from the perspective of the voters. As a consequence, voters prefer the politician to be uninformed about the researcher's concern for reputation. Second, we consider asymmetry with respect to the researcher's ability. Despite the asymmetric information, the politician can find transfers that implement the same reports that would be implemented under symmetric information, for each type of researcher.

The outline of the paper is the following. We discuss the related literature in Section II. In Section III we present our model. In Section IV we derive the contracts under symmetric information, that is, when the politician knows the researcher's ability and concern for reputation. The case of asymmetric information concerning the researcher's type is treated in Section V. Finally, Section VI concludes. All proofs are in the Appendix.

II. RELATED LITERATURE

Our paper relates to the literature on the economics of expert advice with reputational concerns (Inderst and Ottaviani 2012; Morris 2001; Ottaviani and Sorensen 2006) and strategic transmission of information (Szalay 2009). Morris (2001) builds a repeated cheap-talk model of information transmission from an adviser to a decision-maker who believes only with some probability that the two parties have identical preferences. That is, the decision-maker has imperfect knowledge about the adviser's preference for one policy or another. Because the adviser has concerns about her reputation with the decision-maker and does not want to appear biased, she has an incentive to lie, resulting in a decrease of social welfare. This contrasts with our model, in which the reputational concern disciplines the researcher to report less biased results. The reason for this is that we model reputation with an outside player, for instance the academic community, rather than with the decision-maker. We also differ from Morris (2001) in that the decision-maker can observe the signal received by the adviser. While we focus on the mechanism through which the decision-maker incentivizes the adviser to support his preferred policy via her message, the decision-maker in Morris (2001) is a benevolent one who is interested in obtaining reliable information from the adviser.

Ottaviani and Sorensen (2006) develop a model of strategic information transmission from an expert to an evaluator. In their model, the expert privately observes a signal about the state of the world. The informativeness of the signal depends on the expert's ability, which is exogenously given and unobserved by both parties. The evaluator instead confronts the expert's message with the ex post realization of the state of the world and forms a posterior belief about the ability of the expert. As this belief determines her payoff, the expert wants to appear well informed. Similarly to Ottaviani and Sorensen (2006), our expert (the researcher) is concerned with her reputation. However, while in both models the experts bias their reports, the reasons for this and the directions of the biases are different. In Ottaviani and Sorensen (2006) communication is cheap but the expert is ignorant about her own type. Therefore, she biases her report toward the prior belief about the state of the world in order to maximize her expected payoff. By contrast, in our model the information transmission is compensated via a transfer contingent on the message and, therefore, our expert biases her report toward the ideal policy of the politician. This follows from the fact that the expert's concern for reputation in our model can be regarded as an intrinsic characteristic that is not rewarded by the receiver (the politician).

The context studied in Inderst and Ottaviani (2012) also bears some analogy to our model. In their model, the adviser (the researcher in our model) is an intermediary between a supplier and a customer. As in our model, the supplier offers a commission (kickbacks) to the intermediary for giving biased advice to the customer. The advice issued by the expert to the customer resembles the public message of the researcher in our model. The policy variable in their model is the product price set by the supplier, which takes into account the advice of the expert. Moreover, the incentives of the expert are similar to those of the researcher in our model: on the one hand, she is interested in the transfer received from the supplier; on the other hand, she cares for her reputation with the customers. Inderst and Ottaviani (2012) introduce competition between suppliers, which increases efficiency irrespective of the adviser's concern for reputation. In fact, the focus of their paper is on the welfare effects of the competition between the suppliers and of the disclosure of the commissions to the customers. (2)

When studying information acquisition by (ideological) policymakers through contracting with scientists, we focus on the impact of ideology by politicians and reputational preference by scientists on the implemented contract. With this, we abstract from potential competition among politicians which may limit the potential of misrepresenting information, (3) and focus on a principal-agent relationship between a single politician and a single researcher.

Szalay (2009) studies information acquisition and reporting in a principal-agent framework. More precisely, he analyzes the procurement problem in which neither the buyer (the principal) nor the seller (the agent) know the production cost, that is, the agent's type. Hence, the difference to our model is that when the principal commits to the menu of contracts, no party is informed about the agent's type. However, after the principal commits to the contract, but before the signing of the contract, the agent receives a noisy and private signal about her type. The quality of this signal depends on the agent's choice of costly effort, neither of which is observed by the principal. By contrast, in our model the quality of the signal received by the agent depends on her exogenously given ability. The focus in Szalay (2009) is on the incentive for information acquisition, which, as it turns out, has a strictly positive value to the principal. Therefore, he always implements a positive level of effort.

From a different angle, our paper also relates to Prendergast (1993) and Ewerhart and Schmitz (2000) who study the phenomenon of "yes men" behavior. The most important common element with our model is the incentive of the agent to conform with the principal. However, the reason for conformity in these papers is different from ours. In Prendergast (1993) and Ewerhart and Schmitz (2000) the payoff to the agent is determined by the comparison between the agent's message and the principal's belief about the state of the world, of which the agent only has noisy information. As Prendergast (1993) shows, this induces the agent to bias her report conditional on any effort level exercised for finding information about the true state of the world. Following up on this result, Ewerhart and Schmitz (2000) show that contracts can be designed such that the first-best is achieved, that is, the agent truthfully reports her signal. Different from our setting, in the "yes men" papers the principal is only interested in accurate information about the state of the world. Therefore, he would prefer that the agent does not conform, that is, that she has integrity. By contrast, in our model the incentive for conformity is due to a monetary transfer from the principal to the agent. In particular, the principal pays the agent to support his ideal policy which is common knowledge.

III. MODEL

We consider an adverse selection model in which a politician (the principal) contracts with a researcher (the agent) in order to acquire information about the state of the world before implementing a policy. After the contract is signed, the researcher conducts research which produces a signal about the state of the world. In the next stage, the researcher releases a report on the results of her research (the message), which is observed both by the public (the voters) and by the politician. However, we assume that the true result of the research (the signal) is observed only by the researcher and by the politician. This models a situation where the politician acquires an exclusive right to access and use the scientific data collected by the researcher on his behalf, so that he can verify the researcher's signal. For simplicity we assume that the verification cost is zero.

After observing the researcher's message and before the revelation of the true state of the world, the politician chooses a policy. Finally, after the state of the world is revealed, the payoffs are realized.

In the following we present the details of our model. There are two states of the world, $\theta^L$ and $\theta^H$, where L and H stand for low and high, respectively, and $\theta^L < \theta^H$. Both the politician and the researcher have the same prior $p = P(\theta^H)$ that the state is high, where $0 < p < 1$. For further reference, let $\bar\theta = p\,\theta^H + (1-p)\,\theta^L$ denote the prior expectation about the state of the world.

The result of the research is a signal $s \in \{s^L, s^H\}$. (4) The precision of the signal depends on the ability of the researcher, that is, the quality of her research output: For $i = L, H$, let

(1) $P(s^i \mid \theta^i) = q$

be the probability of signal $s^i$ conditional on state $\theta^i$, where the quality is $q \in [1/2, 1]$. That is, the least able researcher delivers an uninformative signal ($q = 1/2$), while a maximally able researcher generates a fully informative signal ($q = 1$). We use the ability of the researcher synonymously with the quality of the generated signal. Ability is assumed to be a given personal trait of the researcher. Moreover, we assume that research is costless for the researcher. Note that our results would not change if there were a fixed cost of research, since this would merely increase the researcher's reservation utility.

Both the politician and the researcher are expected utility maximizers. The politician's utility $U^P$ depends on his ideal policy (ideology) $\gamma \in \mathbb{R}$, the chosen policy $y \in \mathbb{R}$, the researcher's message $m \in \mathbb{R}$, the transfer $T \in \mathbb{R}$ he pays to the researcher, and the true state of the world $\theta \in \{\theta^H, \theta^L\}$. More precisely, we assume that

(2)

$U^P(y, m, T \mid \theta) = -\alpha_\gamma(\gamma - y)^2 - \alpha_m(m - y)^2 - \alpha_\theta(\theta - y)^2 - T,$

where $\alpha_\gamma > 0$, $\alpha_m > 0$, and $\alpha_\theta > 0$ are the weights the politician assigns to the policy being close to his ideology, the policy being consistent with the public message, and the policy being close to the true state of the world, respectively.

Hence, the objective of the politician is to implement a policy which is close to his ideal policy $\gamma$, but at the same time takes voters' preferences into account. While we do not explicitly consider voters as active players in our model, their preferences are given by a desire to have a policy close to the true state of the world, that is, their utility is given by $-(\theta - y)^2$.

Having chosen a policy close to the actual state of the world may, for example, increase the politician's reelection probability, although we do not explicitly model an electoral stage in this paper. If voters were fully rational, they would obviously judge the politician based on the part of his achievements he is actually accountable for. In our context, this would imply that voters punish the politician if his policy deviates from the ex interim optimal policy (after the realization of the signal but before the revelation of the true state) rather than punishing the politician for deviations from the ex post optimal policy (after the revelation of the true state). However, there is ample evidence that voting behavior is strongly influenced by events that are beyond the incumbent's control (e.g., Campello and Zucco Jr. 2016), which suggests that voters do not reward or punish politicians for their intentions but for their achievements, no matter whether these are due to merit or luck. With this modeling choice, we make the implicit assumption that voters are ignorant about the politician's and the researcher's preferences and hence unable to infer the signal from the message publicized by the researcher. While voters could know the intensity of the punishment if the policy deviates from the researcher's message (the $\alpha_m$ parameter), it is not obvious that they are informed about the politician's preference for conforming with his personal bias (the $\alpha_\gamma$ parameter) or the bias itself. (5) And even with this information at hand there is evidence that voters may lack the necessary sophistication to process information beyond its face value. (6) Most importantly, as will become clear in Section IV, in order to update their beliefs about the signal (after observing the message), voters would also have to be informed about the researcher's preference for reputation. However, there is no good reason to believe this could be the case.

Next, we assume that the politician's utility is decreasing in $(m - y)^2$, that is, that ideally the politician would like the researcher's public message to support his policy choice. This term captures what is often referred to as evidence-based policy, which may contrast with intuition-led, ideology-led, or even theory-based policy making (Banks 2009). In a country where evidence-based policy making is the norm, (7) the politician suffers a cost if he wants to implement a policy that deviates from the scientific advice ($m$). For example, in order to implement a policy, the politician may need the approval of the parliament, which is more difficult to obtain if the policy is not evidence based. This constraint potentially introduces a conflict with the politician's ideal policy $\gamma$.

The researcher, on the other hand, is interested in sending a public report (the message) which does not undermine her reputation while she is indifferent regarding the chosen policy. The utility [U.sup.R] of the researcher is given by

(3) $U^R(m, T \mid \theta) = T - \beta(\theta - m)^2,$

where $\beta > 0$ is the weight she assigns to her reputation. The researcher's reputation is measured as the distance between the true state of the world and the forecast she provides in the public report. It should be stressed that in our model the notion of reputation is rather different from the one in Ottaviani and Sorensen (2006), though in both models the ability of the expert is judged by her actions. The main difference is that in our model the researcher is not concerned about her reputation with the other contracting party (the politician) but rather with an exogenous party (e.g., the academic community) that, for simplicity, we do not model as a strategic player. The researcher's reputation in our model is judged through the output of the research results, that is, the message itself, rather than through the ability of the researcher as inferred by an evaluator, as in Ottaviani and Sorensen (2006). In our model the exogenous party, that is, the academic community, takes the ability of the researcher as equivalent to the quality of her publication record. This does not necessarily imply that the exogenous party (the academic community) is naive. Even if the academic community knows the researcher's contract and can deduce the researcher's ability from the message, it may want to punish a researcher who is willing to bias her report, thereby selling her integrity. Hence, the second term in Equation (3) should be interpreted as foregone opportunities (such as future academic jobs or research grants) within the academic world. (8)
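To fix notation, the following minimal sketch translates the payoff functions in Equations (2) and (3), together with the voters' implicit loss, into code; the function names and the numerical values in the example are ours and purely illustrative, not taken from the paper.

```python
# Minimal sketch of the payoffs in Equations (2) and (3); illustrative values only.

def politician_utility(y, m, T, theta, a_gamma, a_m, a_theta, gamma):
    """U^P: losses from deviating from the ideology, from the public message,
    and from the true state, minus the transfer paid to the researcher."""
    return (-a_gamma * (gamma - y) ** 2
            - a_m * (m - y) ** 2
            - a_theta * (theta - y) ** 2
            - T)

def voter_utility(y, theta):
    """Voters only care about the policy matching the true state."""
    return -(theta - y) ** 2

def researcher_utility(m, T, theta, beta):
    """U^R: transfer received minus the reputation loss from a report
    that turns out to be far from the realized state."""
    return T - beta * (theta - m) ** 2

if __name__ == "__main__":
    # Example: the state turns out high; the politician implemented a policy
    # between his ideology and the researcher's report.
    print(politician_utility(y=0.6, m=0.7, T=1.0, theta=1.0,
                             a_gamma=1.0, a_m=0.5, a_theta=0.8, gamma=0.2))
    print(voter_utility(y=0.6, theta=1.0))
    print(researcher_utility(m=0.7, T=1.0, theta=1.0, beta=2.0))
```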

The order of moves is the following (see Figure 1). First, the politician offers a contract to the researcher which is given by $(m^i, T^i)_{i=H,L}$, where $m^i$ is the message demanded from the researcher and $T^i$ is the transfer paid to the researcher if the signal is $s^i$, $i = L, H$. The researcher then decides whether to accept or reject the contract. In the latter case she receives some reservation utility $U_0$. For simplicity, in the following we will assume that $U_0$ is independent of the researcher's ability $q$. However, most of our results continue to hold if $U_0$ is increasing in $q$ or $\beta$, which is the most plausible assumption. Any required modifications will be pointed out during the analysis. If the researcher accepts the contract, she produces a signal about the state of the world. Upon receiving the signal, the researcher sends her message. Next, the politician rationally updates his beliefs about the state of the world and chooses the policy $y^i$, $i = L, H$. Finally, nature reveals the true state of the world and the payoffs are realized.

We assume that the contract is enforceable, that is, both the politician and the researcher are committed to fulfill their part of the contract that was accepted by the researcher. (9)

IV. SYMMETRIC INFORMATION

As a benchmark we first study the case in which the politician knows the researcher's type, that is, her ability $q$ and her concern for reputation $\beta$, and designs a contract that ensures the researcher's participation. We will refer to this contract as the first-best contract, in line with the terminology used in contract theory, but we point out that it is first-best only from the point of view of the politician. Let $p^H$ be the unconditional probability that the researcher receives signal $s^H$, that is,

(4)

$p^H = P(s^H \mid q) = P(s^H \mid \theta^H)P(\theta^H) + P(s^H \mid \theta^L)P(\theta^L) = pq + (1-q)(1-p) = 1 - p + (2p-1)q.$

By $\sigma^H$ ($\sigma^L$) we denote the posterior probability of the high state of the world after receiving a high (low) signal, that is,

(5)

$\sigma^H = P(\theta^H \mid s^H) = \dfrac{P(s^H \mid \theta^H)P(\theta^H)}{P(s^H \mid \theta^H)P(\theta^H) + P(s^H \mid \theta^L)P(\theta^L)} = \dfrac{pq}{1 - p + (2p-1)q}$

and

(6)

$\sigma^L = P(\theta^H \mid s^L) = \dfrac{P(s^L \mid \theta^H)P(\theta^H)}{P(s^L \mid \theta^H)P(\theta^H) + P(s^L \mid \theta^L)P(\theta^L)} = \dfrac{p(1-q)}{p - (2p-1)q}.$

Finally, let us denote by $\bar\theta^i = \sigma^i\theta^H + (1 - \sigma^i)\theta^L$ the posterior expected state of the world given signal $s^i$, $i = H, L$.
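The updating formulas in Equations (4) through (6) can be checked with a few lines of code. The sketch below, with illustrative parameter values of our choosing, confirms that $q = 1/2$ leaves the posterior at the prior while $q = 1$ fully reveals the state.

```python
# Sketch of the information structure in Equations (4)-(6); illustrative values.

def signal_prob_high(p, q):
    """p^H = P(s^H) = p*q + (1-p)*(1-q) = 1 - p + (2p-1)q."""
    return p * q + (1 - p) * (1 - q)

def posterior_high(p, q, signal):
    """sigma^i = P(theta^H | s^i) by Bayes' rule."""
    if signal == "H":
        return p * q / (p * q + (1 - p) * (1 - q))
    return p * (1 - q) / (p * (1 - q) + (1 - p) * q)

def posterior_mean(p, q, signal, theta_L, theta_H):
    """bar-theta^i = sigma^i * theta^H + (1 - sigma^i) * theta^L."""
    sigma = posterior_high(p, q, signal)
    return sigma * theta_H + (1 - sigma) * theta_L

if __name__ == "__main__":
    p, theta_L, theta_H = 0.4, 0.0, 1.0
    for q in (0.5, 0.75, 1.0):        # uninformative, noisy, fully informative
        print(q, signal_prob_high(p, q),
              posterior_mean(p, q, "H", theta_L, theta_H),
              posterior_mean(p, q, "L", theta_L, theta_H))
    # q = 0.5 leaves the posterior at the prior; q = 1 reveals the state.
```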

The politician observes the type $(q, \beta)$ of the researcher and can condition the transfer on the public message sent by the researcher. The politician then solves the following optimization problem

(7) $\displaystyle\max_{(m^i, T^i, y^i)_{i=H,L}} \; \mathbb{E}\left[U^P\right] = \sum_{i=H,L} p^i \Big[ -\alpha_\gamma(\gamma - y^i)^2 - \alpha_m(m^i - y^i)^2 - \alpha_\theta\, \mathbb{E}\big[(\theta - y^i)^2 \mid s^i\big] - T^i \Big], \quad p^L = 1 - p^H,$

under the researcher's participation constraint

(8) $\mathbb{E}\left[U^R\right] \geq U_0.$

It is straightforward to see that the researcher's participation constraint (Equation (8)) is binding in the first-best contract which is derived in Appendix A1. In the first-best contract the messages and policies are

(9)

$m^i = \dfrac{\left[\beta(\alpha_\gamma + \alpha_\theta + \alpha_m) + \alpha_m\alpha_\theta\right]\bar\theta^i + \alpha_m\alpha_\gamma\gamma}{\beta(\alpha_\gamma + \alpha_\theta + \alpha_m) + \alpha_m(\alpha_\theta + \alpha_\gamma)}, \qquad i = L, H,$

and

(10) $y^i = \dfrac{\alpha_\theta\bar\theta^i + \alpha_m m^i + \alpha_\gamma\gamma}{\alpha_\theta + \alpha_m + \alpha_\gamma}, \qquad i = L, H,$

and the transfers $T^H$ and $T^L$ are such that the researcher's participation constraint is binding. Observe that only expected transfers are determined in the first-best contract.
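As a sanity check on Equations (9) and (10), the following sketch computes the first-best message and policy for an illustrative parameterization; the function names and the chosen numbers are ours, not the paper's.

```python
# Numerical sketch of the first-best message and policy, Equations (9)-(10).

def first_best_message(theta_bar, gamma, beta, a_gamma, a_m, a_theta):
    """Equation (9): weighted average of the posterior expected state
    and the politician's ideal policy."""
    num = (beta * (a_gamma + a_theta + a_m) + a_m * a_theta) * theta_bar \
          + a_m * a_gamma * gamma
    den = beta * (a_gamma + a_theta + a_m) + a_m * (a_theta + a_gamma)
    return num / den

def first_best_policy(theta_bar, m, gamma, a_gamma, a_m, a_theta):
    """Equation (10): policy trading off the state, the report, and the
    ideology with the weights of the politician's utility."""
    return (a_theta * theta_bar + a_m * m + a_gamma * gamma) \
           / (a_theta + a_m + a_gamma)

if __name__ == "__main__":
    a_gamma, a_m, a_theta, gamma = 1.0, 0.5, 0.8, 2.0
    theta_bar_H = 0.8          # illustrative posterior expectation after a high signal
    for beta in (0.1, 1.0, 10.0):
        m = first_best_message(theta_bar_H, gamma, beta, a_gamma, a_m, a_theta)
        y = first_best_policy(theta_bar_H, m, gamma, a_gamma, a_m, a_theta)
        print(beta, round(m, 3), round(y, 3))
    # As beta grows, the report m moves toward theta_bar_H and the policy
    # moves away from the ideology gamma, as Proposition 1 states.
```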

A. Properties of the First-Best Contract and Policy

Equation (9) implies that the politician induces the researcher to send a biased public report which is a weighted average of the ex post expectation about the state of the world, $\bar\theta^i$, and the politician's ideal policy, $\gamma$. Similarly, the policy choice in Equation (10) is a weighted average of the ex post expected state of the world, the public report, and the politician's ideal policy, with the weights exactly matching the corresponding weights in the politician's utility function. Substituting Equation (9) into Equation (10) yields

(11) $y^i = \lambda\bar\theta^i + (1 - \lambda)\gamma, \qquad i = L, H,$

where

(12) $\lambda = \dfrac{\beta(\alpha_\theta + \alpha_m) + \alpha_m\alpha_\theta}{\beta(\alpha_\gamma + \alpha_\theta + \alpha_m) + \alpha_m(\alpha_\theta + \alpha_\gamma)}.$

We use this characterization of the first-best contract to obtain comparative statics results.

First, the policy is affected by the ideology of the politician: if the politician's concern for his ideal policy is very large compared to his concern for punishment by the voters ($\alpha_\gamma \to \infty$), then he implements exactly this ideal policy. In this case the researcher publishes a biased report which is a weighted average of the ex post expected state of the world and the politician's ideal policy, with the weights given by $\beta/(\beta + \alpha_m)$ and $\alpha_m/(\beta + \alpha_m)$, respectively. Conversely, if the politician is not concerned with choosing the optimal policy ($\alpha_\theta \to 0$), then both the message and the policy are biased.

Second, we turn to the role of messages in the politics-science interaction: the researcher gains reputation from the message being close to the actual state of nature (parameter $\beta$), while the politician is interested in backing his policy by scientific evidence (parameter $\alpha_m$). From Equations (9), (11), and (12) it is easy to verify that the distance of both the message and the policy to the optimal policy $\bar\theta^i$ is decreasing in $\beta$ and in $\alpha_m$. We state this result in the following proposition.

PROPOSITION 1. The first-best policy is closer to the expected state of the world (1) the higher the researcher's concern for reputation $\beta$ and (2) the larger the politician's preference for evidence-based policy $\alpha_m$.

The intuition for this result is simple: The more the researcher cares for her reputation (the higher $\beta$), the higher the weight on the expected state in her report and, consequently, in the policy induced by this report. If the researcher disregards her reputation ($\beta \to 0$), then the report coincides with the policy and they are both equal to a weighted average of the ex post expected state of the world and the politician's ideal policy. Conversely, if the researcher is very concerned with her reputation ($\beta \to \infty$), then she publishes an unbiased report, that is, $m^i = \bar\theta^i$, while the politician still chooses a policy which is a weighted average of the ex post state of the world and his ideal policy.

The researcher reports truthfully if the politician is not punished for deviating from the public report ($\alpha_m \to 0$), if the politician disregards his ideal policy ($\alpha_\gamma \to 0$), or if he strongly cares for the policy correctly addressing the state of the world ($\alpha_\theta \to \infty$). However, only in the last two cases would the politician be forced to also implement the optimal policy $y^i = \bar\theta^i$, while in the first case he implements a policy which is a weighted average of the ex post expected state and his ideal policy. Conversely, if the politician gives a larger weight to the policy being close to the message (larger $\alpha_m$), then not only does the message move closer to the ex post expected state, but the implemented policy moves closer to both the researcher's report and the ex post expected state of the world.
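The limiting cases discussed above can be verified numerically. The sketch below evaluates the decomposition in Equations (11) and (12) for illustrative parameter values of our choosing and approximates the limits with very small or very large arguments.

```python
# Quick numerical check of Equations (11)-(12) and of the limiting cases above.

def lam(beta, a_gamma, a_m, a_theta):
    """Equation (12): weight on the posterior expected state in the policy."""
    return (beta * (a_theta + a_m) + a_m * a_theta) \
           / (beta * (a_gamma + a_theta + a_m) + a_m * (a_theta + a_gamma))

def policy(theta_bar, gamma, beta, a_gamma, a_m, a_theta):
    """Equation (11): y = lambda * theta_bar + (1 - lambda) * gamma."""
    l = lam(beta, a_gamma, a_m, a_theta)
    return l * theta_bar + (1 - l) * gamma

if __name__ == "__main__":
    theta_bar, gamma = 0.8, 2.0
    a_gamma, a_m, a_theta = 1.0, 0.5, 0.8
    # Researcher indifferent to reputation vs. extremely reputation-minded:
    print(policy(theta_bar, gamma, 1e-9, a_gamma, a_m, a_theta))  # biased policy
    print(policy(theta_bar, gamma, 1e9, a_gamma, a_m, a_theta))   # closer to theta_bar
    # Politician not punished for ignoring the report (a_m -> 0): weighted average.
    print(policy(theta_bar, gamma, 1.0, a_gamma, 1e-9, a_theta))
    # Politician with no ideological stake (a_gamma -> 0): optimal policy theta_bar.
    print(policy(theta_bar, gamma, 1.0, 1e-9, a_m, a_theta))
```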

Proposition 1 thereby shows how the politics-science interaction can be improved from the perspective of voters: making the scientific reports more salient can be beneficial through two different channels. First, it may make the politician more accountable for implementing policies based on scientific advice (increasing $\alpha_m$). Second, improved visibility of the messages sent by researchers may discipline the researchers to report scientific evidence more truthfully (increasing $\beta$). Given the current trend toward evidence-based policy, both channels could be catered to by demanding a public release of the evidence underlying policy choices. In parallel, researchers may make their research results more accessible to the general public, both in terms of exposition and outlets, and relate them to the policy choices made by politicians.

B. Choosing the Researcher

We now explore with what type of researcher a politician would choose to contract. Let us first consider the case in which researchers only differ with respect to their reputational concerns:

PROPOSITION 2. If researchers do not differ with respect to the quality of their research, the politician chooses the researcher with the least concern for reputation.

The proof of Proposition 2 is given in Appendix A2. Note that Proposition 2 continues to hold if $U_0$ is increasing in the researcher's concern for reputation. Since Proposition 1 implies that voters, who are interested in the adoption of the optimal policy, would prefer a researcher with a high concern for reputation, this result implies that the interests of the voters conflict with those of the politician: unlike the voters, the politician prefers to contract with the researcher with the lowest concern for reputation.

For researchers who only differ in their ability, we obtain the following proposition (proof in Appendix A3):

PROPOSITION 3. If researchers do not differ with respect to their reputational concerns, the politician chooses the researcher with the highest ability.

Concerning ability, the voters' and the politician's interests are therefore aligned: both prefer to hire the researcher with the highest ability. For the voters this follows from the fact that the precision of the signal increases in the researcher's ability, so that the expected state and hence the chosen policy (see Equation (11)) move closer to the true state of the world. However, observe that Proposition 3 may no longer hold if the reservation utility increases in the researcher's ability. In this case, the politician may prefer to contract with a less-able researcher, which conflicts with the interests of the voters.

More generally than the cases captured by Propositions 2 and 3, the politician may typically choose from a whole spectrum of researchers who differ both in ability and reputational concern. It seems natural to assume that ability and reputational concern are correlated. The following proposition then establishes a general result concerning the politician's choice of a researcher:

PROPOSITION 4. If the politician can choose from any convex combination of researcher types $(q^1, \beta^1)$ and $(q^2, \beta^2)$, he will always contract with either $(q^1, \beta^1)$ or $(q^2, \beta^2)$, but not with intermediate types.

Proposition 4 establishes a preference of the politician for extreme, rather than intermediate, types of researchers. The proof is given in Appendix A4 and relies on showing that the politician's utility over the relevant characteristics $(q, \beta)$ of the researcher is a strictly quasi-convex function. This is due to the fact that under expected utility the politician's and the researcher's utility functions are both linear in $q$, since the posterior probabilities for the high/low state are linear in $q$. Moreover, the researcher's utility is linear in $\beta$. Hence, the politician is maximizing over a family of linear functions, which results in a convex and even strictly quasi-convex utility function over $(q, \beta)$. (10)
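The following sketch illustrates Proposition 4 numerically. It assumes, consistent with the first-best analysis above, that the politician's value of a researcher of type $(q, \beta)$ is his expected payoff under the first-best contract with a binding participation constraint and $U_0$ normalized to zero; the endpoint types and parameter values are illustrative choices of ours.

```python
# Numerical illustration of Proposition 4: the politician's value of a researcher
# along a segment of types (q, beta) is largest at an endpoint.

def value_of_researcher(q, beta, p, theta_L, theta_H, gamma,
                        a_gamma, a_m, a_theta, U0=0.0):
    """Politician's expected payoff from hiring a type-(q, beta) researcher
    under the first-best contract with a binding participation constraint."""
    pH = p * q + (1 - p) * (1 - q)
    value = 0.0
    for prob, sigma in ((pH, p * q / pH), (1 - pH, p * (1 - q) / (1 - pH))):
        theta_bar = sigma * theta_H + (1 - sigma) * theta_L
        # first-best message and policy, Equations (9)-(10)
        den = beta * (a_gamma + a_theta + a_m) + a_m * (a_theta + a_gamma)
        m = ((beta * (a_gamma + a_theta + a_m) + a_m * a_theta) * theta_bar
             + a_m * a_gamma * gamma) / den
        y = (a_theta * theta_bar + a_m * m + a_gamma * gamma) / (a_theta + a_m + a_gamma)
        # expected quadratic losses conditional on the signal
        loss_y = sigma * (theta_H - y) ** 2 + (1 - sigma) * (theta_L - y) ** 2
        loss_m = sigma * (theta_H - m) ** 2 + (1 - sigma) * (theta_L - m) ** 2
        # politician's losses plus the expected transfer that exactly compensates
        # the researcher's reputation loss (binding participation constraint)
        value += prob * (-a_gamma * (gamma - y) ** 2 - a_m * (m - y) ** 2
                         - a_theta * loss_y - U0 - beta * loss_m)
    return value

if __name__ == "__main__":
    pars = dict(p=0.4, theta_L=0.0, theta_H=1.0, gamma=2.0,
                a_gamma=1.0, a_m=0.5, a_theta=0.8)
    low, high = (0.55, 0.2), (0.95, 2.0)      # endpoint types (q, beta)
    for t in [i / 10 for i in range(11)]:
        q = (1 - t) * low[0] + t * high[0]
        beta = (1 - t) * low[1] + t * high[1]
        print(round(t, 1), round(value_of_researcher(q, beta, **pars), 4))
    # In line with Proposition 4, the largest value appears at t = 0 or t = 1,
    # never at an interior point of the segment.
```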

In light of Propositions 2 and 3, it is clear that if $q$ and $\beta$ are negatively correlated, the politician will prefer the researcher characterized by high ability and low reputational concern. However, if $q$ and $\beta$ are positively correlated, then a trade-off arises between informative research results, offered by a high-ability researcher, and cheap manipulation of the research report, offered by a researcher with a low concern for reputation. Proposition 4 implies that, when ability and reputational concern are perfectly positively correlated or, more generally, when the latter is a concave function of the former, the politician will either hire the least able researcher, who can also be more easily manipulated to deliver a biased report, or, on the contrary, the most able researcher, who also has the highest concern for academic reputation. One of these extreme types is always preferred to researchers with more intermediate ability and reputational concerns. It is, therefore, important to understand the conditions under which such a $q$-$\beta$ trade-off leads the politician to hire one or the other type of researcher. The following proposition demonstrates that, depending on the politician's preferences, indeed both types of researchers can be chosen (proof in Appendix A5):

PROPOSITION 5. Given the choice between researchers of types $(q^l, \beta^l)$ and $(q^h, \beta^h)$ with low versus high reputational concerns and ability ($q^l < q^h$, $\beta^l < \beta^h$), the politician prefers the less-able researcher if $\gamma$ is sufficiently far away from the prior $\bar\theta$ of the state of the world. Moreover, an increase in $\alpha_m$ may lead the politician to switch from contracting with the high type $(q^h, \beta^h)$ to contracting with the low type $(q^l, \beta^l)$: if for a given $\alpha_m$ the politician is indifferent between contracting with the high or the low type, then he strictly prefers the high type for all $\tilde\alpha_m < \alpha_m$, but prefers the low type for all $\tilde\alpha_m > \alpha_m$.

The intuition for this result is straightforward: the lower the policy bias of the politician, the more he prefers to learn the true state of the world and, thus, to hire the more able researcher. (11) Conversely, with a large bias, being able to manipulate the researcher's messages toward the desired report outweighs increasing the quality of information about the state of the world. We note, however, that even if his $\gamma$ is close to the ex ante expected state of the world $\bar\theta$, a politician may still prefer the least able researcher, as this person also has the lowest concern for reputation and is therefore less concerned with sending a message which deviates from the finally realized state of the world. Conversely, a perfectly able researcher ($q^h = 1$) will definitely be chosen by a politician with no prior stakes in any policy ($\alpha_\gamma \to 0$): in this case the message becomes undistorted and coincides with the realized state of the world ($m^i = \bar\theta^i = \theta^i$), so that differences in reputational concerns do not matter anymore.

Importantly, when it becomes more important to back policy with scientific reports (increasing $\alpha_m$), a politician may switch toward the less-able and less reputation-seeking researcher. The reason is that it is easier to manipulate this researcher's reports toward the desired policy. While Proposition 1 showed that a larger weight on evidence-based policy, that is, a larger $\alpha_m$, is unambiguously positive for voters as it moves policy closer to the posteriors, Proposition 5 demonstrates an important caveat: if politicians have the choice between different types of researchers, the increased importance of matching policy to scientific advice may lead the politician to contract with less-able researchers. Of course, this negative impact only occurs when the politician is at the margin between contracting with either type.

In general, the fact that only extreme rather than intermediate types of researchers are chosen immediately implies that small differences in politicians' preferences may already lead politicians to choose from opposite ends of the spectrum of researchers when facing a trade-off between a researcher's ability and reputational concern and the latter is a concave function of the former. This suggests that only two types of researchers may coexist who rely on funding by politicians.

V. ASYMMETRIC INFORMATION

In this section we turn to the case in which the researcher's characteristics are her private information. Although for completeness we consider asymmetric information both with respect to ability and concern for reputation, we believe that the latter is the most (and maybe only) relevant case. Unlike the concern for reputation, ability is a characteristic that can be readily inferred from the researcher's scientific record, which is public information.

A. Asymmetric Information with Respect to the Researcher's Concern for Reputation

We now analyze the case in which the researcher has private information about her concern for reputation $\beta$, but her ability is known to the politician. We also maintain the assumption that the politician can observe the signal received by the researcher. For simplicity we restrict attention to the case in which there are only two types of researchers: a researcher with a high concern for reputation, characterized by $\beta^h$, and a researcher with a low concern for reputation, characterized by $\beta^l$, where $\beta^l < \beta^h$. While the politician does not know which type of researcher he faces, he knows the probability that a researcher has a high concern for reputation. We denote this probability by $\pi$:

(13) $P(\beta^h) = \pi, \qquad 0 < \pi < 1.$

Under asymmetric information concerning the researcher's reputational concern, the politician offers a menu of contracts such that the high- and the low-reputation researchers select themselves into the appropriate contract. Let the menu of contracts be $(C^h, C^l)$, where $C^j = (T^{ij}, m^{ij})_{i=H,L}$ is the contract for type $j$, $j = h, l$, and $i = H, L$ refers to the signal, $s^H$ or $s^L$, received by the researcher. The politician chooses the policies $(y^{ij})_{i=H,L,\,j=h,l}$ and the contracts $C^h, C^l$ to maximize his expected utility subject to the participation and incentive compatibility constraints of the two types of researchers. By $\mathbb{E}[U^R(C) \mid \beta]$ we denote the expected utility of a researcher under contract $C$ given that her reputational concern is $\beta$. The politician then solves the following optimization problem:

(14) $\displaystyle\max_{(C^h, C^l),\,(y^{ij})_{i=H,L,\,j=h,l}} \; \pi \sum_{i=H,L} p^i \Big[ -\alpha_\gamma(\gamma - y^{ih})^2 - \alpha_m(m^{ih} - y^{ih})^2 - \alpha_\theta\, \mathbb{E}\big[(\theta - y^{ih})^2 \mid s^i\big] - T^{ih} \Big] + (1 - \pi) \sum_{i=H,L} p^i \Big[ -\alpha_\gamma(\gamma - y^{il})^2 - \alpha_m(m^{il} - y^{il})^2 - \alpha_\theta\, \mathbb{E}\big[(\theta - y^{il})^2 \mid s^i\big] - T^{il} \Big]$

subject to

(15) $(PC^\beta_h): \quad \mathbb{E}\big[U^R(C^h) \mid \beta^h\big] \geq U_0,$

(16) $(PC^\beta_l): \quad \mathbb{E}\big[U^R(C^l) \mid \beta^l\big] \geq U_0,$

(17)

$(IC^\beta_h): \quad \mathbb{E}\big[U^R(C^h) \mid \beta^h\big] \geq \mathbb{E}\big[U^R(C^l) \mid \beta^h\big],$

(18)

$(IC^\beta_l): \quad \mathbb{E}\big[U^R(C^l) \mid \beta^l\big] \geq \mathbb{E}\big[U^R(C^h) \mid \beta^l\big].$

Note that the signal distribution is independent of the researcher's concern for reputation. Hence, any contract which is acceptable for the high type $h$, that is, which satisfies $(PC^\beta_h)$, is strictly acceptable for the low type $l$, that is, satisfies $(PC^\beta_l)$ with a strict inequality. This implies that the first-best contracts cannot be implemented under asymmetric information because they violate the low type's incentive compatibility constraint $(IC^\beta_l)$. Following the usual argument, one can then verify that $(PC^\beta_h)$ and $(IC^\beta_l)$ bind.

The proofs are in Appendices A6 and A7, respectively. Note that from the binding incentive constraint for the low type, $(IC^\beta_l)$, the fact that $\beta^h > \beta^l$, and $(PC^\beta_h)$, it follows that $(PC^\beta_l)$ always holds. In Appendix A8 we verify that $(IC^\beta_h)$ is satisfied if we maximize the politician's expected utility subject to the binding constraints $(PC^\beta_h)$ and $(IC^\beta_l)$.
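A short calculation illustrates why the first-best pair fails under asymmetric information about $\beta$: at the high type's first-best contract, expected transfers exactly compensate the high type's expected reputation loss, so a low-$\beta$ researcher who mimics that contract earns strictly more than $U_0$. The numbers below are illustrative placeholders of ours, not values derived in the paper.

```python
# Why the first-best pair violates (IC^beta_l): the low type gains from mimicking.

def expected_reputation_loss(messages, posteriors, signal_probs, theta_L, theta_H):
    """E[(theta - m)^2], taken over signals, before the state is revealed."""
    total = 0.0
    for m, sigma, prob in zip(messages, posteriors, signal_probs):
        total += prob * (sigma * (theta_H - m) ** 2 + (1 - sigma) * (theta_L - m) ** 2)
    return total

if __name__ == "__main__":
    theta_L, theta_H, U0 = 0.0, 1.0, 1.0
    beta_l, beta_h = 0.5, 2.0
    msgs_h = (0.9, 0.3)     # illustrative first-best messages of the high type
    sigmas = (0.8, 0.2)     # posteriors after s^H and s^L
    probs = (0.5, 0.5)      # signal probabilities
    loss_h = expected_reputation_loss(msgs_h, sigmas, probs, theta_L, theta_H)
    ET_h = U0 + beta_h * loss_h          # binding participation constraint of type h
    mimic = ET_h - beta_l * loss_h       # low type's payoff from the high type's contract
    print(mimic, ">", U0, "->", mimic > U0)   # strictly above U_0
```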

The second-best contract menu is as follows (see Appendix A8). The messages demanded from the high and low type are

(19)

[mathematical expression not reproducible]

and

(20)

$m^{il} = \dfrac{\left[\beta^l(\alpha_\gamma + \alpha_\theta + \alpha_m) + \alpha_m\alpha_\theta\right]\bar\theta^i + \alpha_m\alpha_\gamma\gamma}{\beta^l(\alpha_\gamma + \alpha_\theta + \alpha_m) + \alpha_m(\alpha_\theta + \alpha_\gamma)}, \qquad i = L, H.$

The policies are given by

(21)

$y^{ij} = \dfrac{\alpha_\theta\bar\theta^i + \alpha_m m^{ij} + \alpha_\gamma\gamma}{\alpha_\theta + \alpha_m + \alpha_\gamma}, \qquad i = L, H, \; j = h, l.$

Again only the expected transfers are determined in the second-best contract menu.

Comparing Equation (20) with Equation (9) one can easily see that the low-reputation researcher sends the same message as in the first-best contract, but the transfer is now such that she gets a positive surplus. However, the message for the high-reputation researcher is distorted as compared to the first-best message, but her surplus is kept at zero. This distortion results in the high-reputation researcher reporting closer to the true expected state. The intuition for this is simple. Because the low-reputation researcher would pretend to be of high type, the politician has to distort the contract for the high-reputation researcher and pay an information rent to the low-reputation researcher in order to induce her to reveal her true type. Consequently, the high-reputation researcher is not compensated enough to bias her report as in the first-best case.

We also note that the second-best contract is still given by Equations (19), (20), and (21) if the reservation utility is increasing in $\beta$.

From Equations (19) and (21) on the one hand and Equations (9) and (10) on the other hand, it follows that a researcher with a high concern for reputation ($j = h$) sends a message which induces a policy closer to the optimal policy $\bar\theta^i$ when the politician cannot observe her reputational concern (asymmetric information about $\beta$) than when the politician can identify the type of the researcher. Hence, the following result is immediate.

PROPOSITION 6. For any given level of ability, the voters prefer the politician to be uninformed about the researcher's concern for reputation.

Similar to Proposition 2, Proposition 6 again highlights differences between the interests of the voters and those of the politician: under full information voters prefer researchers with high reputational concerns, while the politician likes to contract with researchers with little concern for reputation. It is clear that the politician benefits from overcoming the asymmetric information, that is, he values the information about the researcher's type. The voters, on the other hand, are better off under asymmetric information because it makes the employment of the high-reputation researcher more likely and her report is closer to the true expected state than under symmetric information. Assuming that ability can be inferred from publication records, a potential policy recommendation would be to demand open calls for policy advice in which the selection of advisers is based exclusively on ability. However, given the repeated nature of many policy-science interactions, maintaining the asymmetric information with respect to reputational concerns might be somewhat unrealistic.

B. Asymmetric Information with Respect to the Researcher's Ability

We now assume that the politician can observe the researcher's concern for reputation, $\beta$, as well as the researcher's signal, but that he cannot observe her research ability, that is, the precision $q$ of the generated signal. Again, for simplicity, let there be two types of researchers in the economy: a low-ability type with $q = q^l$ and a high-ability type with $q = q^h$, where $q^l < q^h$. For short, we write $p^{ij} = P(s^i \mid q^j)$ for the probability that a researcher of type $q^j$ receives signal $s^i$ and $\sigma^{ij} = P(\theta^H \mid s^i, q^j)$ for the probability that the true state of the world is high, given signal $s^i$ and the researcher's ability $q^j$, for $i = H, L$ and $j = h, l$.

As in the case of asymmetric information with respect to $\beta$, the politician designs a contract menu $(C^h, C^l)$, where $C^j = (T^{ij}, m^{ij})_{i=H,L}$ is the contract for type $j$, $j = h, l$, and $i = H, L$ refers to the signal, $s^H$ or $s^L$, received by the researcher. The politician chooses the policies $(y^{ij})_{i=H,L,\,j=h,l}$ and the contracts $C^h, C^l$ to maximize his expected utility subject to the participation and incentive compatibility constraints of the two types of researchers. By $\mathbb{E}[U^R(C) \mid q]$ we denote the expected utility of a researcher under contract $C$ given that her ability is $q$. The politician then solves the following optimization problem:

(22) [mathematical expression not reproducible]

subject to

(23) $(PC^q_h): \quad \mathbb{E}\big[U^R(C^h) \mid q^h\big] \geq U_0,$

(24) $(PC^q_l): \quad \mathbb{E}\big[U^R(C^l) \mid q^l\big] \geq U_0,$

(25) $(IC^q_h): \quad \mathbb{E}\big[U^R(C^h) \mid q^h\big] \geq \mathbb{E}\big[U^R(C^l) \mid q^h\big],$

(26)

$(IC^q_l): \quad \mathbb{E}\big[U^R(C^l) \mid q^l\big] \geq \mathbb{E}\big[U^R(C^h) \mid q^l\big].$

Observe that, different from the case of asymmetric information with respect to the researcher's concern for reputation, the signal distribution may now depend on the researcher's type. This is the case whenever the prior probability of the high state, $p$, is different from .5. If $p \neq .5$, then it is not true that any contract which is acceptable for the high-ability type, that is, which satisfies $(PC^q_h)$, is also acceptable for the low-ability type, that is, satisfies $(PC^q_l)$, or vice versa. Hence, the single-crossing property is violated (Edlin and Shannon 1998; Milgrom and Shannon 1994), which opens up the possibility that the first-best contracts can be implemented under asymmetric information. (12) To see that this is indeed true, note that the first-best messages and policies given by Equations (9) and (10), respectively, maximize Equation (22) under the binding participation constraints Equations (23) and (24). These contracts are given by

(27)

$m^{ij} = \dfrac{\left[\beta(\alpha_\gamma + \alpha_\theta + \alpha_m) + \alpha_m\alpha_\theta\right]\bar\theta^{ij} + \alpha_m\alpha_\gamma\gamma}{\beta(\alpha_\gamma + \alpha_\theta + \alpha_m) + \alpha_m(\alpha_\theta + \alpha_\gamma)}, \qquad i = L, H, \; j = h, l,$

and

(28)

$y^{ij}(m^{ij}) = \dfrac{\alpha_\theta\bar\theta^{ij} + \alpha_m m^{ij} + \alpha_\gamma\gamma}{\alpha_\theta + \alpha_m + \alpha_\gamma}, \qquad i = L, H, \; j = h, l,$

where $\bar\theta^{ij} = \sigma^{ij}\theta^H + (1 - \sigma^{ij})\theta^L$. As we have noted before, only expected transfers are determined by the first-best contracts, and expected transfers are such that $(PC^q_h)$, respectively $(PC^q_l)$, bind. It then remains to verify that there exist transfers such that the first-best contracts also satisfy the incentive compatibility constraints $(IC^q_h)$ and $(IC^q_l)$. This amounts to showing that there exists a solution to the linear equation system in which $(PC^q_h)$, $(PC^q_l)$, $(IC^q_h)$, and $(IC^q_l)$ bind at the first-best contracts, which we prove in Appendix A9. We can then state the following result:

PROPOSITION 7. Let $p \neq .5$. If the politician does not observe the researcher's ability, then there are transfers such that the politician can implement the first-best messages given by Equation (27).

As can be readily inferred from the proof, Proposition 7 still holds if the reservation utility $U_0$ is increasing in $q$.

Proposition 7 and the discussion in the previous subsection imply that the qualitative impact of asymmetric information crucially depends on which dimension of the type of the researcher is not known to the politician: while asymmetric information with respect to the ability of the researcher, that is, the quality of the research, does not preclude reaching the first-best scenario for the politician, the first-best cannot be implemented if the politician does not know the reputational concerns of the researcher.
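The argument behind Proposition 7 can be illustrated numerically: stacking the two participation constraints and the two incentive constraints as equalities at the first-best messages gives a linear system in the four transfers, which is solvable when $p \neq .5$. The sketch below follows our reading of the binding-constraint system described above (Appendix A9 itself is not reproduced here); all parameter values are illustrative.

```python
# Sketch of the transfer system behind Proposition 7 (illustrative values).
import numpy as np

p, q_h, q_l = 0.4, 0.9, 0.6                      # prior and the two ability types
theta_L, theta_H, U0, beta = 0.0, 1.0, 1.0, 1.0
gamma, a_gamma, a_m, a_theta = 2.0, 1.0, 0.5, 0.8

def stats(q):
    """Signal probabilities and posteriors of the high state for ability q."""
    pH = p * q + (1 - p) * (1 - q)
    return (pH, 1 - pH), (p * q / pH, p * (1 - q) / (1 - pH))

def fb_message(theta_bar):
    """First-best message, Equation (27), for posterior mean theta_bar."""
    den = beta * (a_gamma + a_theta + a_m) + a_m * (a_theta + a_gamma)
    return ((beta * (a_gamma + a_theta + a_m) + a_m * a_theta) * theta_bar
            + a_m * a_gamma * gamma) / den

def exp_sq_error(msgs, q):
    """Expected squared distance between the state and the required messages
    for a researcher of ability q."""
    (pH, pL), (sH, sL) = stats(q)
    return sum(prob * (s * (theta_H - m) ** 2 + (1 - s) * (theta_L - m) ** 2)
               for m, s, prob in zip(msgs, (sH, sL), (pH, pL)))

# first-best messages demanded from each ability type
msgs = {}
for label, qj in (("h", q_h), ("l", q_l)):
    _, (sH, sL) = stats(qj)
    msgs[label] = tuple(fb_message(s * theta_H + (1 - s) * theta_L) for s in (sH, sL))

(pHh, pLh), _ = stats(q_h)
(pHl, pLl), _ = stats(q_l)
# unknown transfers: T^{Hh}, T^{Lh}, T^{Hl}, T^{Ll}
A = np.array([[pHh, pLh, 0.0, 0.0],      # (PC^q_h) binds
              [0.0, 0.0, pHl, pLl],      # (PC^q_l) binds
              [pHh, pLh, -pHh, -pLh],    # (IC^q_h) binds
              [-pHl, -pLl, pHl, pLl]])   # (IC^q_l) binds
b = np.array([U0 + beta * exp_sq_error(msgs["h"], q_h),
              U0 + beta * exp_sq_error(msgs["l"], q_l),
              beta * (exp_sq_error(msgs["h"], q_h) - exp_sq_error(msgs["l"], q_h)),
              beta * (exp_sq_error(msgs["l"], q_l) - exp_sq_error(msgs["h"], q_l))])
print(np.linalg.solve(A, b))             # a solution exists because p != .5
```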

VI. CONCLUSION

In this paper we presented a model which captures some stylized facts regarding the involvement of politics in the production of policy research. We explored the outcome of contracting between a politician and a researcher whom he hires to get information about the state of the world. In our model the politician has a preference for a certain policy which is independent of the state. However, he is punished by the voters if the adopted policy is not in line with the public research report. Finally, the politician is also interested in correctly addressing the state of the world (e.g., in order to increase the probability of reelection). Researchers, on the other hand, may differ with respect to their ability for research or their concern for academic reputation. We considered both the case of symmetric and asymmetric information with respect to the researcher's ability and concern for reputation, respectively, but we assumed that the politician can always observe the research results.

In the first-best contract all researchers bias their research reports toward the politician's preferred policy, and for a given ability the politician prefers to contract with the researcher who has the lowest concern for reputation. This is in contrast with the voters' preferences, as a researcher with a high concern for reputation induces a policy closer to the optimal policy. However, the politician's and the voters' preferences are aligned with respect to the researcher's ability: A high ability researcher is preferred by both parties. When the researchers' abilities and reputational concerns are positively correlated, so that the politician faces a trade-off between the two characteristics, and when reputational concern is concave in ability, then the politician prefers to contract with extreme types of researchers: either with the researcher with lowest ability and least reputational concern or with the researcher with highest ability and greatest reputational concern, but never with any intermediate researcher types. The choice is particularly governed by the politician's ideal policy: if this is far from the prior about the state of the world, then a researcher with low reputational concern (and low ability) is preferred. Conversely, if the politician is not strongly biased toward any policy, then he has little need to manipulate the research results. Therefore, the politician prefers to learn quality information about the state of the world and, thus, prefers to hire a researcher who provides maximal quality.

When the politician is not informed about the researcher's concern for reputation, the low-reputation researcher reports as in the first-best contract, but the report of the high-reputation researcher is distorted toward the true expected state. Consequently, the induced policy is closer to the optimal policy. Importantly, this implies that voters may prefer the politician to be uninformed about the type of the researcher, so that asymmetric information can be beneficial. However, this result hinges on the researchers having similar ability. By contrast, it is preferable that the politician be informed about the researchers' types if ability correlates with reputational concerns. In particular, a politician would then prefer a high-ability researcher with strong reputational concerns if voters can punish the politician for not adopting a policy that correctly addresses the state of the world.

A few implications for the process of commissioning policy research follow from our results: making scientific reports more salient may improve the politics-science interaction to the benefit of the voters. One channel through which this may occur is an increase in the reputational concerns ($\beta$) of researchers. Another is making the politician more accountable for actually following scientific advice when formulating policy (an increase in $\alpha_m$). Nonetheless, we also showed that such demands for evidence-based policy may backfire, as they make it more likely that the politician prefers low-integrity over more able researchers. As such, our paper demonstrates that fears that less-qualified researchers may be promoted at the expense of more qualified ones in government-commissioned projects are not unwarranted.

Our model is general and refers to any type of research meant to guide the formulation of policy actions. Climate change research is just one prominent application, where doctrines, ideologies, and beliefs outside scientific evidence often appear to guide the policy agenda. With our model, we investigated a single politician contracting with a single researcher. It appears worthwhile to extend this model by explicitly allowing several politicians to interact and potentially independently contract with different researchers. We leave this for further research.

APPENDIX

A1. The First-Best Contract under Symmetric Information

It is straightforward to see that the researcher's participation constraint is binding in the first-best contract. (13)

The politician therefore solves the following optimization problem:

(A1)

[mathematical expression not reproducible]

Maximizing over $m^H$, $m^L$, $y^H$, and $y^L$ yields the following messages

(A2)

$$m^i = \frac{\left[\beta(\alpha_\gamma + \alpha_\theta + \alpha_m) + \alpha_m\alpha_\theta\right]\bar{\theta}^i + \alpha_m\alpha_\gamma\gamma}{\beta(\alpha_\gamma + \alpha_\theta + \alpha_m) + \alpha_m(\alpha_\theta + \alpha_\gamma)}, \qquad i = L, H,$$

and policies

(A3)

$$y^i = \frac{\alpha_\theta\bar{\theta}^i + \alpha_m m^i + \alpha_\gamma\gamma}{\alpha_\theta + \alpha_m + \alpha_\gamma}, \qquad i = L, H.$$

Combining Equations (A2) and (A3) yields

(A4)

$$y^i = \frac{\left[\alpha_\theta\beta + \alpha_m(\alpha_\theta + \beta)\right]\bar{\theta}^i + \alpha_\gamma(\alpha_m + \beta)\gamma}{\alpha_\theta\beta + \alpha_m(\alpha_\theta + \beta) + \alpha_\gamma(\alpha_m + \beta)}, \qquad i = L, H.$$
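As a quick check of Equation (A4) (a verification sketch rather than part of the original derivation): substituting $m^i$ from Equation (A2) into Equation (A3) and collecting the coefficients of $\bar{\theta}^i$ and $\gamma$ gives

$$y^i = \frac{\left[\alpha_\theta\beta + \alpha_m(\alpha_\theta + \beta)\right]\bar{\theta}^i + \alpha_\gamma(\alpha_m + \beta)\gamma}{\alpha_\theta\beta + \alpha_m(\alpha_\theta + \beta) + \alpha_\gamma(\alpha_m + \beta)},$$

after canceling the common factor $(\alpha_\gamma + \alpha_\theta + \alpha_m)$ from numerator and denominator. Since the denominator equals the sum of the two numerator coefficients, both the first-best message (A2) and the first-best policy (A4) are weighted averages of $\bar{\theta}^i$ and the politician's ideal policy $\gamma$, and (assuming all weights $\alpha$ are strictly positive) the weight on $\gamma$ is strictly decreasing in the reputational concern $\beta$.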

A2. Proof of Proposition 2

Let $\hat{U}^P(q, \beta)$ denote the politician's expected utility at the first-best contract given $(q, \beta)$. By the envelope theorem the partial derivative of $\hat{U}^P(q, \beta)$ with respect to $\beta$ equals the derivative of the objective function in Equation (A1) with respect to $\beta$, evaluated at the first-best contract, that is,

[mathematical expression not reproducible]

where $m^L$, $m^H$, $y^L$, and $y^H$ are the first-best messages and policies.

A3. Proof of Proposition 3

The proof relies on Blackwell's theorem (e.g., Blackwell 1951, 1953; Marschak and Miyasawa 1968), which gives a condition under which one information structure is more informative than another, that is, creates a larger expected value. Let the information structures $Q$ and $\tilde{Q}$ be given by [mathematical expression not reproducible], where $Q_{ij}$ and $\tilde{Q}_{ij}$ are the respective probabilities that in state $i$ signal $j$ is received. Blackwell's theorem states that $Q$ is more informative than $\tilde{Q}$ if and only if there exists a Markov matrix $M$ (with positive entries and the entries of each column summing up to one) such that $QM = \tilde{Q}$. Applied to our setting,

[mathematical expression not reproducible]

and it is straightforward to see that

[mathematical expression not reproducible]

satisfies $QM = \tilde{Q}$ and that $M > 0$ as $q > \tilde{q} > \tfrac{1}{2}$. Therefore, Blackwell's theorem implies that $\partial\hat{U}^P(q, \beta)/\partial q > 0$, where $\hat{U}^P(q, \beta)$ is the politician's expected utility at the first-best contract given $(q, \beta)$.
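The factorization can be made explicit under the binary signal structure suggested by the model; the matrices below are an illustration based on the assumption that a researcher of ability $q$ observes the correct state with probability $q$:

$$Q = \begin{pmatrix} q & 1-q \\ 1-q & q \end{pmatrix}, \qquad \tilde{Q} = \begin{pmatrix} \tilde{q} & 1-\tilde{q} \\ 1-\tilde{q} & \tilde{q} \end{pmatrix}, \qquad M = \begin{pmatrix} \mu & 1-\mu \\ 1-\mu & \mu \end{pmatrix}, \quad \mu = \frac{q + \tilde{q} - 1}{2q - 1}.$$

A direct multiplication confirms $QM = \tilde{Q}$, and $q > \tilde{q} > \tfrac{1}{2}$ gives $0 < \mu < 1$, so $M$ is indeed a Markov matrix with strictly positive entries whose columns sum to one.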

A4. Proof of Proposition 4

Note that the expected utility of the politician given by Equation (A1) is linear in both $q$ and $\beta$ since [mathematical expression not reproducible]. To prove the proposition we rely on the following more general lemma:

LEMMA 1. Let $x \in \mathbb{R}^n$, $z \in \mathbb{R}^m$, and let $f(x, z)$ be a real-valued function that is linear in $z$. For $z \in \mathbb{R}^m$ let $x(z)$ be the unique solution to

(A5) $$\max_{x \in \mathbb{R}^n} f(x, z).$$

Let $F(z) = f(x(z), z)$. Then $F$ is convex. Moreover, if for all $z_1, z_2$ with $z_1 \neq z_2$ it is true that either

(i) for all $\mu \in (0, 1)$, $x(\mu z_1 + (1 - \mu)z_2) \neq x(z_i)$ for some $i \in \{1, 2\}$,

or

(ii) $F(z_1) \neq F(z_2)$,

then $F$ is strictly quasiconvex, that is,

$$F(\mu z_1 + (1 - \mu)z_2) < \max\{F(z_1), F(z_2)\}$$

for all $z_1, z_2$ with $z_1 \neq z_2$ and for all $\mu \in (0, 1)$.

Proof of Lemma 1

Let $z_1, z_2 \in \mathbb{R}^m$ and $\mu \in [0, 1]$. Then,

$$\begin{aligned} F(\mu z_1 + (1 - \mu)z_2) &= f\bigl(x(\mu z_1 + (1 - \mu)z_2),\, \mu z_1 + (1 - \mu)z_2\bigr) \\ &= \mu f\bigl(x(\mu z_1 + (1 - \mu)z_2),\, z_1\bigr) + (1 - \mu) f\bigl(x(\mu z_1 + (1 - \mu)z_2),\, z_2\bigr) \\ &\le \mu F(z_1) + (1 - \mu) F(z_2) \\ &\le \max\{F(z_1), F(z_2)\}. \end{aligned}$$

This proves that $F$ is convex (and quasiconvex). Observe that the first inequality is strict if $x(\mu z_1 + (1 - \mu)z_2) \neq x(z_i)$ for some $i \in \{1, 2\}$, and the second inequality is strict if $F(z_1) \neq F(z_2)$ and $0 < \mu < 1$. This proves that $F$ is strictly quasiconvex if for all $z_1, z_2$ with $z_1 \neq z_2$ either condition (i) or (ii) is satisfied.
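A simple example, not from the model, illustrates the lemma: let $f(x, z) = zx - x^2$ with $x, z \in \mathbb{R}$, which is linear in $z$ for every fixed $x$. The unique maximizer is $x(z) = z/2$, so $F(z) = f(z/2, z) = z^2/4$. Since $x(\mu z_1 + (1 - \mu)z_2) \neq x(z_i)$ whenever $z_1 \neq z_2$ and $\mu \in (0, 1)$, condition (i) holds, and $F$ is strictly quasiconvex (here even strictly convex).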

Let $\hat{U}^P(q, \beta)$ denote the politician's expected utility at the first-best contract given $(q, \beta)$. Using Lemma 1, the linearity of the politician's objective function in $q$ and $\beta$ immediately implies that $\hat{U}^P(q, \beta)$ is convex. To prove that $\hat{U}^P(q, \beta)$ is strictly quasiconvex, consider two types of researchers, $(q^1, \beta^1)$ and $(q^2, \beta^2)$, with $(q^1, \beta^1) \neq (q^2, \beta^2)$. If for some $i$ and $j \neq i$, $q^i \ge q^j$ and $\beta^i \le \beta^j$ with at least one strict inequality, then Propositions 2 and 3 imply that $\hat{U}^P(q^i, \beta^i) > \hat{U}^P(q^j, \beta^j)$. Hence, condition (ii) in Lemma 1 is satisfied. It remains to consider the case where $q^i > q^j$ and $\beta^i > \beta^j$ for some $i$ and $j \neq i$. For $\mu \in [0, 1]$ let $q(\mu) = \mu q^i + (1 - \mu)q^j$ and $\beta(\mu) = \mu\beta^i + (1 - \mu)\beta^j$. Then both $q(\mu)$ and $\beta(\mu)$ are increasing in $\mu$. Hence, $\bar{\theta}^H$ is increasing in $\mu$, $\bar{\theta}^L$ is decreasing in $\mu$, and $\lambda$ is increasing in $\mu$, where $\lambda$ is given in Equation (12). Consider the first-best policy $y^i = \lambda\bar{\theta}^i + (1 - \lambda)\gamma$ (see Equation (11)). If $\bar{\theta}^H \ge \gamma$ for $\mu = 0$ (i.e., for $q^j$), then $y^H$ is increasing in $\mu$, and if $\bar{\theta}^H < \gamma$ for $\mu = 0$, then $\bar{\theta}^L < \gamma$ for all $\mu$, which implies that $y^L$ is decreasing in $\mu$. Hence, in either case the first-best contract for $(q(\mu), \beta(\mu))$ differs from the first-best contract for both $(q^1, \beta^1)$ and $(q^2, \beta^2)$ for all $\mu \in (0, 1)$, that is, condition (i) in Lemma 1 is satisfied. This proves that $\hat{U}^P(q, \beta)$ is strictly quasiconvex, such that we immediately obtain:

[mathematical expression not reproducible]

We conclude that the politician will always contract with either $(q^l, \beta(q^l))$ or $(q^h, \beta(q^h))$, which proves the proposition.

A5. Proof of Proposition 5

Let $\hat{U}^P(q, \beta)$ denote the politician's expected utility at the first-best contract given $(q, \beta)$. Let $(q^l, \beta^l)$ and $(q^h, \beta^h)$ be given with $q^l < q^h$ and $\beta^l < \beta^h$. Then, the politician prefers the less able researcher, that is, type $(q^l, \beta^l)$, if $\Delta = \hat{U}^P(q^h, \beta^h) - \hat{U}^P(q^l, \beta^l) < 0$. Using the envelope theorem, differentiating Equation (A1) with respect to $\gamma$ immediately gives

(A6) [mathematical expression not reproducible]

where $Z$ is increasing in $\beta$. Thus, $d\Delta/d\gamma = (Z(\beta^h) - Z(\beta^l))(\bar{\theta} - \gamma)$. We note that $\Delta$ is concave in $\gamma$ and reaches its maximum at $\gamma = \bar{\theta}$. Moreover, $\Delta < 0$ if the distance between $\gamma$ and $\bar{\theta}$ is sufficiently large. Similarly,

[mathematical expression not reproducible]

where [??] is increasing in $\beta$ and $E[(\bar{\theta}^i - \bar{\theta})^2]$ is increasing in $q$. Thus,

(A7) $$\frac{d\Delta}{d\alpha_m} = \frac{d\hat{U}^P(q^h, \beta^h)}{d\alpha_m} - \frac{d\hat{U}^P(q^l, \beta^l)}{d\alpha_m} < 0,$$

such that an increase in $\alpha_m$ makes it relatively more attractive to contract with $(q^l, \beta^l)$ rather than with $(q^h, \beta^h)$. If $\Delta = 0$ at $\alpha_m$, it immediately follows that $\Delta > 0$ at $\tilde{\alpha}_m < \alpha_m$ and $\Delta < 0$ at $\tilde{\alpha}_m > \alpha_m$.

This proves the proposition.
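The first step of this proof can be made more explicit. If $Z$ does not itself depend on $\gamma$ (an assumption we impose only for this sketch), integrating $d\Delta/d\gamma = (Z(\beta^h) - Z(\beta^l))(\bar{\theta} - \gamma)$ gives

$$\Delta(\gamma) = \Delta(\bar{\theta}) - \frac{Z(\beta^h) - Z(\beta^l)}{2}(\gamma - \bar{\theta})^2,$$

a concave parabola in $\gamma$ peaking at $\gamma = \bar{\theta}$. With $Z(\beta^h) > Z(\beta^l)$, the quadratic term eventually dominates, so $\Delta < 0$ once the distance between $\gamma$ and $\bar{\theta}$ is sufficiently large, as used above.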

A6. Equation (15) Binds in the Second-Best Contract Menu

Suppose it does not, that is, it is slack. This means that:

(A8)

[mathematical expression not reproducible]

Then, using $(IC^\beta_l)$ and the fact that $\beta^h > \beta^l$ we have the following:

(A9) [mathematical expression not reproducible]

This means that the politician can increase his expected utility by decreasing each of $T^{Hl}$, $T^{Ll}$, $T^{Hh}$, and $T^{Lh}$ by a small $\varepsilon > 0$ without violating any of the participation constraints. Again, this contradicts the fact that these were the second-best transfers. Therefore, Equation (15) binds.

A7. Equation (18) Binds in the Second-Best Contract Menu

Suppose it does not, that is, it is slack. This means that:

(A10) [mathematical expression not reproducible]

where the last inequality follows from $\beta^h > \beta^l$. Note that the last term in Equation (A10) is greater than or equal to $U_0$ by $(PC^\beta_h)$. Therefore, it follows that:

(A11) [mathematical expression not reproducible]

that is, $(PC^\beta_l)$ is slack. This means that the politician can increase his expected utility by decreasing $T^{Hl}$ and $T^{Ll}$ without violating any constraints. This contradicts the fact that $T^{Hl}$ and $T^{Ll}$ were second-best payments. Therefore, it must be that Equation (18) binds.

A8. The Second-Best Contract Menu under Asymmetric Information with Respect to $\beta$

The politician solves

(A12) [mathematical expression not reproducible]

subject to

(A13)

[mathematical expression not reproducible]

(A14)

[mathematical expression not reproducible]

(A15)

[mathematical expression not reproducible]

(A16)

[mathematical expression not reproducible]

In Appendices A6 and A7 we have shown that Equations (A13) and (A16) are binding. Together with $\beta^h > \beta^l$ this implies that Equation (A14) is satisfied. We ignore Equation (A15) for the moment and later verify that it holds in the contract menu we derive. From Equations (A13) and (A16) we can solve for the expected transfers $T^{Lh}(1 - p^H) + p^H T^{Hh}$ and $T^{Ll}(1 - p^H) + p^H T^{Hl}$ and substitute them into Equation (A12). Maximizing over $m^{ij}$, $y^{ij}$, $i = H, L$, $j = l, h$, yields the following messages and policies:

(A17)

$$m^{ih} = \frac{\left[(\beta^h - (1 - \pi)\beta^l)(\alpha_\gamma + \alpha_\theta + \alpha_m) + \pi\alpha_m\alpha_\theta\right]\bar{\theta}^i + \pi\alpha_m\alpha_\gamma\gamma}{(\beta^h - (1 - \pi)\beta^l)(\alpha_\gamma + \alpha_\theta + \alpha_m) + \pi\alpha_m(\alpha_\theta + \alpha_\gamma)}, \qquad i = L, H,$$

(A18)

$$m^{il} = \frac{\left[\beta^l(\alpha_\gamma + \alpha_\theta + \alpha_m) + \alpha_m\alpha_\theta\right]\bar{\theta}^i + \alpha_m\alpha_\gamma\gamma}{\beta^l(\alpha_\gamma + \alpha_\theta + \alpha_m) + \alpha_m(\alpha_\theta + \alpha_\gamma)}, \qquad i = L, H,$$

and

(A19) $$y^{ij} = \frac{\alpha_\theta\bar{\theta}^i + \alpha_m m^{ij} + \alpha_\gamma\gamma}{\alpha_\theta + \alpha_m + \alpha_\gamma}, \qquad i = L, H, \; j = h, l.$$
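Provided (A17) indeed has the weighted-average form displayed above, it admits a useful reading: dividing its numerator and denominator by $\pi$ shows that the high-reputation type's message has the same form as the first-best message (A2), but with $\beta$ replaced by the "virtual" reputational concern

$$\tilde{\beta}^h = \frac{\beta^h - (1 - \pi)\beta^l}{\pi} = \beta^h + \frac{1 - \pi}{\pi}(\beta^h - \beta^l) > \beta^h,$$

whereas (A18) is exactly the first-best message at $\beta = \beta^l$. Since the weight on $\bar{\theta}^i$ in (A2) is increasing in $\beta$, the high-reputation type's report is pushed closer to the true expected state than in the first-best, while the low-reputation type's report is undistorted.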

Finally, we verify that $(IC^\beta_h)$ holds. Substituting the expected transfers $T^{Lh}(1 - p^H) + p^H T^{Hh}$ and $T^{Ll}(1 - p^H) + p^H T^{Hl}$ from the binding Equations (A13) and (A16) into $(IC^\beta_h)$, the latter is equivalent to:

$$p^H(m^{Hh} - m^{Hl})(m^{Hh} + m^{Hl} - 2\bar{\theta}^H) + (1 - p^H)(m^{Lh} - m^{Ll})(m^{Lh} + m^{Ll} - 2\bar{\theta}^L) \le 0.$$

Next, using the expressions for $m^{ij}$ it can be shown that

[mathematical expression not reproducible]

for i = H, L, which completes the proof.
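A way to fill in this last step that is consistent with the expressions for $m^{ih}$ and $m^{il}$ given above: both are weighted averages of $\bar{\theta}^i$ and $\gamma$, and $m^{ih}$ places weakly more weight on $\bar{\theta}^i$, so $|m^{ih} - \bar{\theta}^i| \le |m^{il} - \bar{\theta}^i|$, which is equivalent to

$$(m^{ih} - m^{il})(m^{ih} + m^{il} - 2\bar{\theta}^i) \le 0, \qquad i = H, L,$$

so that each of the two terms in the displayed inequality is non-positive.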

A9. Proof of Proposition 7

It suffices to show that for the messages and policies given by Equations (27) and (28) we can find transfers $T^{Hh}$, $T^{Hl}$, $T^{Lh}$, and $T^{Ll}$ such that Equations (23), (24), (25), and (26) are satisfied with equality (i.e., both the participation and the incentive compatibility constraints are binding). This amounts to solving the linear equation system

[mathematical expression not reproducible]

for appropriately chosen constants $x_1$, $x_2$, $x_3$, and $x_4$. This linear equation system has a (unique) solution $(T^{Hh}, T^{Lh}, T^{Hl}, T^{Ll})$, which can easily be checked by computing the determinant of the corresponding matrix

[mathematical expression not reproducible]

Note that $\det(A) = (p^{Hh} - p^{Hl})^2 \neq 0$ because $p \neq .5$ and $q^l < q^h$. Thus, there exist transfers which support the first-best contracts under asymmetric information.
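For concreteness, a minimal sketch of this determinant computation under one possible ordering (the matrix itself is not reproduced above, so the exact arrangement is an assumption): with the binding constraints ordered as $(PC^q_h), (PC^q_l), (IC^q_h), (IC^q_l)$, the transfers ordered as $(T^{Hh}, T^{Lh}, T^{Hl}, T^{Ll})$, and $p^{Hj}$ denoting the probability that a researcher of ability $q^j$ receives the high signal, the coefficient matrix of the expected-transfer terms is

$$A = \begin{pmatrix} p^{Hh} & 1-p^{Hh} & 0 & 0 \\ 0 & 0 & p^{Hl} & 1-p^{Hl} \\ p^{Hh} & 1-p^{Hh} & -p^{Hh} & -(1-p^{Hh}) \\ -p^{Hl} & -(1-p^{Hl}) & p^{Hl} & 1-p^{Hl} \end{pmatrix}.$$

Expanding the determinant yields $(p^{Hh} - p^{Hl})^2$ up to sign (the sign depends on the chosen ordering), which is nonzero exactly when $p^{Hh} \neq p^{Hl}$, that is, when $p \neq .5$ and $q^l < q^h$.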

ABBREVIATIONS

IPCC: Intergovernmental Panel on Climate Change

SPM: Summary for Policymakers

REFERENCES

Banks, G. "Evidence-Based Policy Making: What Is It? How Do We Get It?" Paper presented by ANZSOG, in ANU Public Lecture Series. Canberra: Productivity Commission, February 4, 2009.

Blackwell, D. "Comparison of Experiments." Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, 1951, 93-102.

--. "Equivalent Comparisons of Experiments." The Annals of Mathematical Statistics, 24(2), 1953, 265-72.

Blair, T., and J. Cunningham. "Modernising Government." London, UK. 1999. Accessed December 21, 2015. https://www.wbginvestmentclimate.org/uploads/modgov.pdf

Brocas, I., J. D. Carrillo, and T. R. Palfrey. "Information Gatekeepers: Theory and Experimental Evidence." Economic Theory, 51(3), 2012, 649-76.

Brownson, R. C., J. F. Chriqui, and K. A. Stamatakis. "Understanding Evidence-Based Public Health Policy." American Journal of Public Health, 99(9), 2009, 1576-83.

Callander, S., and B. Harstad. "Experimentation in Federal Systems." Quarterly Journal of Economics, 130(2), 2015, 951-1002.

Campello, D., and C. Zucco Jr. "Presidential Success and the World Economy." Journal of Politics, 78, 2016, 569-602.

Cowen, T., and D. Sutter. "Why Only Nixon Could Go to China." Public Choice, 97(4), 1998, 605-15.

Cukierman, A., and M. Tommasi. "When Does It Take a Nixon to Go to China?" American Economic Review, 88(1), 1998, 180-97.

Delli-Carpini, M. X., and S. Keeter. "Measuring Political Knowledge: Putting First Things First." American Journal of Political Science, 37(4), 1993, 1179-206.

Dewatripont, M., and J. Tirole. "Advocates." Journal of Political Economy, 107(1), 1999, 1-39.

Edlin, A. S., and C. Shannon. "Strict Single Crossing and the Strict Spence-Mirrlees Condition: A Comment on Monotone Comparative Statics." Econometrica, 66, 1998, 1417-25.

Ewerhart, C., and P. W. Schmitz. "'Yes Men', Integrity, and the Optimal Design of Incentive Contracts." Journal of Economic Behavior & Organization, 43, 2000, 115-25.

Geden, O. "Comment: Climate Advisers Must Maintain Integrity." Nature, 521, 2015, 27-8.

Gul, F., and W. Pesendorfer. "The War of Information." Review of Economic Studies, 79(2), 2012, 707-34.

Inderst, R., and M. Ottaviani. "Competition through Commissions and Kickbacks." American Economic Review, 102(2), 2012, 780-809.

Levitan, D. "This Is Not How Science Works." 2017. Accessed January 22, 2018. http://gizmodo.com/thisis-not-how-science-works-1797232302

Luskin, R. C. "Measuring Political Sophistication." American Journal of Political Science, 31(4), 1987, 856-99.

Marschak, J., and K. Miyasawa. "Economic Comparability of Information Systems." International Economic Review, 9(2), 1968, 137-74.

Milgrom, P., and C. Shannon. "Monotone Comparative Statics." Econometrica, 62, 1994, 157-80.

Morris, S. "Political Correctness." Journal of Political Economy, 109(2), 2001, 231-65.

Ottaviani, M., and P. N. Sorensen. "Professional Advice." Journal of Economic Theory, 126, 2006, 120-42.

Prendergast, C. "A Theory of 'Yes Men'." American Economic Review, 83(4), 1993, 757-70.

Smith, A. F. M. "Mad Cows and Ecstasy: Chance and Choice in an Evidence-Based Society." Journal of the Royal Statistical Society. Series A (Statistics in Society), 159(3), 1996, 367-83.

Stavins, R. "Is the IPCC Government Approval Process Broken?" 2014. Accessed January 8, 2015. http://www.robertstavinsblog.org/2014/04/25/is-the-ipccgovernment-approval-process-broken-2/

Szalay, D. "Contracts with Endogenous Information." Games and Economic Behavior, 65, 2009, 586-625.

The Daily Caller. "Scientists Say IPCC Puts Politics Before Science, Needs Reform." 2014. Accessed January 12, 2015. http://dailycaller.com/2014/05/29/scientists-sayipcc-puts-politics-before-science-needs-reform/

The LSE GV314 Group. "Evaluation under Contract: Government Pressure and the Production of Policy Research." Public Administration, 92(1), 2014, 224-39.

Union of Concerned Scientists. Interference at the EPA. Science and Politics at the U.S. Environmental Protection Agency. Cambridge, MA: UCS Publications, 2008.

ANKE GERBER, CORINA HAITA-FALAH and ANDREAS LANGE *

* The authors thank two anonymous referees for valuable comments. Financial support by the Cluster of Excellence "CliSAP" (EXC177), Universitat Hamburg, funded through the German Science Foundation (DFG), is gratefully acknowledged.

Gerber: Department of Economics, University of Hamburg, Hamburg 20146, Germany. Phone +49 40 42838 2076. E-mail anke.gerber@wiso.uni-hamburg.de

Haita-Falah: Department of Economics, University of Kassel 34109, Kassel, Germany. Phone +49 561 804 7078, E-mail haita-falah@uni-kassel.de

Lange: Department of Economics, University of Hamburg, Hamburg 20146, Germany. Phone +49 40 42838 4035, E-mail andreas.lange@wiso.uni-hamburg.de

doi: 10.1111/ecin.12562

(1.) Richard Tol is a climate economist who withdrew from the writing team of the Intergovernmental Panel on Climate Change Fifth Assessment Report.

(2.) The issue of competition on information acquisition has also been studied in a series of related papers. Considering the role of advocacy, Dewatripont and Tirole (1999) show that increased competition either lowers the costs of information or increases its quality. One mechanism is related to our model: competition lowers the chance of misrepresentation or manipulation of information by agents that disserve their cause. However, Dewatripont and Tirole (1999) do not explicitly consider the role of ideology-biased principals in shaping information. In our setting the manipulation of evidence by agents arises solely upon incentivized requests by the principal and not due to preferences of the agents themselves. Brocas, Carrillo, and Palfrey (2012) consider the reverse setting where multiple adversaries can spend resources to acquire information to influence a decision-maker to change the decision in the adversary's direction. Brocas, Carrillo, and Palfrey (2012) do not consider the role played by the contracts offered by the principals. Furthermore, they assume that information becomes publicly available such that the messages sent cannot be distorted. Gul and Pesendorfer (2012) consider a similar setting where two parties with opposing interests try to inform the voters. Again, no distortion of information is considered. Instead the parties base their decision to continue to gather information (which is relevant for the voters' decision as they face uncertainty about the benefits of the proposed policy) on the probability of influencing the decision of a median voter.

(3.) Callander and Harstad (2015) point out that policymakers may gather information where decision-makers can learn from experimentation in their own district, but may also benefit from informational spillovers from other districts. Their paper points toward an interesting extension of our work: in a federal system, the scope for biasing policies by politicians in districts might be limited through the observation of other districts (with politicians with other preferences) choosing other policies.

(4.) Most of our results are unaffected by the assumption of binary signals and states of the world. In particular, a continuous signal and state of the world yield the same conclusions in the symmetric information case.

(5.) For example, in a survey from 1990/1991 for the United States, Delli-Carpini and Keeter (1993) found that only 57% of voters could correctly identify relative ideological positions of the Republican and Democratic parties on a left/right spectrum and only 45% of voters could correctly identify the parties' relative position on federal spending.

(6.) Indeed, Luskin (1987) concludes that American voters are extremely unsophisticated and that this feature extends to foreign electorates as well. Note that Luskin's (1987) definition of electoral sophistication does not mean rationality, and its opposite is pure ignorance rather than naivete. In fact, ignorance can be a rational choice if voters economize on informational costs. For the case where voters lack information on the state of the world, Cowen and Sutter (1998) and Cukierman and Tommasi (1998) show that an incumbent's options to transmit information are limited, such that public support for a specific policy may be larger if a policy proposal is rather "atypical."

(7.) This is, for example, the case in the United Kingdom where, in his presidential address to the Royal Statistical Society in 1996, Adrian F. M. Smith urged an evidence-based approach to policy making (Smith 1996). The same principle was later stressed in the Blair Government white paper, which called for producing "policies that really deal with problems; that are forward-looking and shaped by the evidence rather than a response to short-term pressures" (Blair and Cunningham 1999).

(8.) It is the academic community rather than the general public (the voters) that follows the publications of peers. For example, Thomson Reuters collects data on academic reputation by surveying academic faculty and researchers around the world and asking their opinion on universities and research institutions in their respective disciplines. In our model, we take a more objective approach and let the research output be compared to the actual realization of the state of the world.

(9.) We interpret the enforceability of the contract as being driven by a long-term relationship between the politician and the researcher. While such a contract may not be enforced by a court, it is self-enforced via the negative reputation the politician would generate if he does not execute his contractual obligations. Given that the politician needs scientific advice to formulate policy (see the "evidence-based policy" term in the politician's utility function), a negative reputation in the community would lead to the lack of available scientific advisers. Similarly, the researcher is concerned about her reputation with the politician in view of future contractual opportunities and research funding.

(10.) Note that Proposition 4 continues to hold if the reservation utility [U.sub.0] is concave in q: the politician still weakly (but not necessarily strictly) prefers either extreme over any convex combination of these types.

(11.) Again observe that Proposition 5 still holds if the two types of researchers have different reservation utilities.

(12.) If p = .5 the signal distribution is independent of the researcher's ability and any contract that is acceptable for the low-ability type is also acceptable for the high-ability type. In this knife-edge case the single-crossing property is satisfied and the contract of the low-ability type will be distorted compared to the first-best.

(13.) Note that only the expected transfer is determined while the individual transfers are undetermined.

Caption: FIGURE 1 Timeline