Article

The Optimality of Team Contracts

1 FASS, Sabancı University, Tuzla, Istanbul 34956, Turkey
2 Department of Economics, TOBB University of Economics and Technology, Söğütözü Cad. No. 43, Söğütözü, Ankara 06560, Turkey
* Author to whom correspondence should be addressed.
Games 2013, 4(4), 670-689; https://doi.org/10.3390/g4040670
Submission received: 11 April 2013 / Revised: 31 October 2013 / Accepted: 11 November 2013 / Published: 18 November 2013
(This article belongs to the Special Issue Contract Theory)

Abstract:
This paper analyzes optimal contracts in a linear hidden-action model with normally distributed returns whose two moments are governed jointly by two agents with negative exponential utilities. The agents can observe and verify each other's effort levels and draft enforceable side-contracts on effort levels and realized returns. Standard constraints, which result in incentive contracts, fail to ensure implementability, so we also examine centralized collusion-proof contracts and decentralized team contracts. We prove that the principal may restrict attention to team contracts whenever returns from the project satisfy a mild monotonicity condition.

1. Introduction

The design of contracts in situations involving strategic interaction among employees has taught us that teams (decentralization) outperform standard incentive contracts when employees' performances do not affect the riskiness of returns and are separable, with either no or only weak interactions, or when employees are restricted to being identical [1,2]. In an agency environment where these restrictions are weakened and the riskiness is determined endogenously, we show that team arrangements may be surpassed by incentive contracts. However, adopting particular formulations of team and collusion contracts, we prove that team contracts beat all other implementable methods of contracting whenever returns from the project satisfy a monotonicity requirement. Moreover, it is shown that whenever teams are outperformed by incentive contracts, optimal incentive contracts are prone to a strong form of collusion.
A novel feature of our setting is the endogenous determination of the riskiness of the endeavor. In general, the principal’s (i.e., shareholders’ or owners’) desired attitude towards risk needs to be reflected in the executive contracts. Then, the following aspects arise: (1) whether or not agents (executives) can employ state-contingent binding side-contracts among themselves in order to alleviate the riskiness of their contracts; (2) whether or not they are better informed than the principal about the managerial details. The first of these holds, because in any free society, agents cannot be prevented from access to the judicial system. Additionally, the second point holds in general, as the executives are often provided with the capacity and responsibility to exploit information technology to collect information about other employees’ actions. Then, they may employ enforceable side-contracts based on their joint choices. This enables them to coordinate their actions and to insure one another. Therefore, this paper analyzes the construction of optimal incentives when the agents can write enforceable side-contracts based on effort levels and realized outcomes with the additional feature that the riskiness of returns is endogenously determined.
In our linear hidden-action framework, the returns of the organization are governed by a normal distribution with a mean and variance that depend on the effort profile the two agents choose (see footnote 1). All parties have negative exponential utility functions with given coefficients of absolute risk aversion (CARA, henceforth), and the principal may be risk neutral. Agents can observe and verify each other's effort choices and exploit all feasible collusion opportunities via enforceable side-contracts contingent on effort levels and realized outcomes.
First, we establish that incentive contracts, contracts that are individually rational, incentive compatible and involve efficient risk sharing, are not necessarily implementable: agents may jointly deviate to another effort profile and a feasible redistribution of their compensations and obtain strictly higher payoffs, while making the principal strictly worse off. Therefore, we concentrate on optimal contracts free from collusion (see footnote 2). A contract is collusion-proof if it is in the core of the strategic interaction it induces, i.e., the principal's proposed effort profile and state-contingent compensation scheme must be such that there is no non-empty set of agents and another feasible and participatory side-contract making each of these agents strictly better off. After dealing with these two types of centralized contracts, we examine decentralized team contracts, which can be interpreted as outsourcing or subcontracting, as well. Indeed, agents' side-contracting abilities may be beneficial for the principal. This is because, with teamwork, the agents allocate the total share and coordinate their choices so as to maximize the sum of their expected utilities; hence, their voluntary coordination in effort choices and efficient risk allocation is ensured without the need for incentive compatibility constraints (see footnote 3).
We prove that team contracts provide the principal with the highest returns among implementable contracts whenever returns are monotone and implementing the best effort profile is optimal with incentive contracts. Returns are said to be monotone if the mean is increasing and the variance decreasing separately in the effort levels of both agents. The best effort profile is the one that induces the highest mean and the lowest variance, irrespective of costs; it exists whenever returns are monotone. Under these conditions, the principal may restrict attention to decentralization, even when riskiness is determined endogenously by employees who may exploit all collusion opportunities. Moreover, whenever team contracts are strictly outperformed by incentive contracts (e.g., our example in Appendix E), we show that the associated optimal incentive contracts are not immune to strong collusion, a stronger notion of collusion under which strongly collusion-proof contracts are contained in the set of incentive contracts, and every strongly collusion-proof contract is both a team contract and a collusion-proof contract. In order to reach these conclusions, we employ our finding providing a full characterization of situations under which incentive contracts are strongly collusion-proof: incentive contracts are strongly collusion-proof provided that returns are monotone and implementing the best effort profile is optimal with incentive contracts; and these conditions are minimal, as examples in which the optimal incentive contract is not immune to strong collusion can be identified whenever any one of them fails to hold.
Earlier literature shows that the sufficient conditions under which the principal prefers team contracts to incentive contracts involve separable production functions with either no or weak interactions, or the restriction to identical agents when a richer set of interactions is allowed (see footnote 4). The current study, using a monotonicity condition, extends the optimality of team contracts, as this result holds in environments in which agents have different abilities and attitudes towards risk and jointly determine both the mean and the riskiness in unrestricted ways.
Section 2 presents the model. Section 3 shows that standard constraints fail to eliminate collusion and presents two formulations of collusion constraints. In Section 4, we formulate the principal’s problem with teamwork, and in the subsequent section, we present our results. Section 6 concludes.

2. The Model

Ours is a linear two-agent single-task hidden-action model. The principal possesses an asset that delivers observable and verifiable returns drawn from a normal distribution, whose mean and variance are determined by employees’ effort choices that the principal cannot observe. Hence, contracts cannot depend on agents’ efforts. The return-contingent contracts that the principal offers are all observable and verifiable by the agents employed. We assume that each agent can observe and verify others’ efforts. Finally, everyone is assumed to have access to capital markets and not to be wealth-constrained.
We assume that E_i, the set of effort levels of agent i = 1, 2, is a finite and ordered set (see footnote 5). The asset's effort-contingent returns x are distributed according to F(x | e), such that for all e = (e_1, e_2) ∈ E ≡ E_1 × E_2, F(· | e) is the normal cumulative distribution with mean μ(e) and variance σ²(e). All parties have exponential utilities with the following CARA coefficients: R for the principal and r_i for agent i = 1, 2, where R ≥ 0 and r_1, r_2 > 0. Indeed, our formulation includes cases with a risk-neutral principal. Agent i's cost of effort is in terms of returns and denoted by c_i(e_i), e_i ∈ E_i. Each agent has an outside employment opportunity resulting in a reserve certainty equivalent, W_i, i = 1, 2. Given a contract (S_i(x))_{i=1,2}, x ∈ ℝ (S_i(x) being the amount of returns given to agent i in state x under S_i), which makes both agents exert the effort profile e ∈ E, the expected utilities of the principal and agent i = 1, 2 are:
\[
E u_p(S_1, S_2 \mid e) = \int -\exp\{-R\,(x - \bar S(x))\}\, dF(x \mid e), \qquad
E u_i(S_i \mid e_i, e_{-i}) = \int -\exp\{-r_i\,(S_i(x) - c_i(e_i))\}\, dF(x \mid e_i, e_{-i}),
\]
where S̄(x) = S_1(x) + S_2(x). Attention is restricted to feasible linear contracts of the form S_i(x) = γ_i x + ρ_i, where γ_i ∈ ℝ_+ are such that Σ_{i=1,2} γ_i ∈ [0, 1], ρ_i ∈ ℝ and e_i ∈ E_i, i = 1, 2 (see footnote 6). These restrictions entail feasibility by making sure that the principal's asset cannot be inflated and that the agents' efforts are feasible. The certainty equivalent (CE, henceforth) of agent i = 1, 2 is:
\[
CE_i(\gamma_i, \rho_i \mid e) = \gamma_i \mu(e) + \rho_i - \frac{r_i}{2}\gamma_i^2 \sigma^2(e) - c_i(e_i)
\]
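For completeness, this mean-variance form follows from a standard computation with the normal moment-generating function (a sketch, using only the CARA utility, the linearity of S_i and the normality of returns assumed above):
\[
E u_i(S_i \mid e) = -\int \exp\{-r_i(\gamma_i x + \rho_i - c_i(e_i))\}\, dF(x \mid e)
= -\exp\Big\{-r_i\Big(\gamma_i\mu(e) + \rho_i - \frac{r_i}{2}\gamma_i^2\sigma^2(e) - c_i(e_i)\Big)\Big\},
\]
since E[exp(tZ)] = exp(t²σ²/2) for Z ∼ N(0, σ²) with t = -r_i γ_i; the expression inside the inner parentheses is exactly CE_i(γ_i, ρ_i | e).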
Similarly, the CE of the principal is:
\[
CE_p(\gamma, \rho \mid e) = \Big(1 - \sum_{i=1}^{2}\gamma_i\Big)\mu(e) - \frac{R}{2}\Big(1 - \sum_{i=1}^{2}\gamma_i\Big)^{2}\sigma^2(e) - \sum_{i=1}^{2}\rho_i
\]
In this setting, the individual rationality constraint of agent i = 1 , 2 having a reserve CE of W i is:
\[
\gamma_i \mu(e) + \rho_i - \frac{r_i \gamma_i^2}{2}\,\sigma^2(e) - c_i(e_i) \;\ge\; W_i \qquad (IR_i)
\]
Similarly, the incentive compatibility constraint of agent i = 1 , 2 is:
\[
\gamma_i\big(\mu(e) - \mu(e_i', e_{-i})\big) - \frac{r_i \gamma_i^2}{2}\big(\sigma^2(e) - \sigma^2(e_i', e_{-i})\big) \;\ge\; c_i(e_i) - c_i(e_i'), \quad \text{for all } e_i' \in E_i \qquad (IC_i)
\]
The principal's use of centralized contracts involves dealing separately with each agent, and this brings about an interesting strategic interaction among the agents. We refer to this as the labor union's problem, and the particular timing employed in its formulation is as follows: first, the principal offers a contract to each agent separately, while letting them know of the effort profile she proposes; then, agents accept or reject separately; and, finally, the labor union is formed by the accepting agents (see footnote 7). Inside the labor union, agents' contracts are revealed to one another, and agents are involved in a utilitarian bargaining with some predetermined bargaining powers that are known by the agents, but not necessarily by the principal. The consideration of voluntary participation in the labor union entails the natural requirement that arrangements emerging in the labor union cannot hurt any agent when compared with the principal's offer (the participation constraint). This is different from individual rationality, because it makes sure that an agent who has already accepted the principal's offer voluntarily participates in the arrangements made in the labor union.
Since returns from the asset and the contracts the principal offers are all observable and verifiable, it is natural to consider efficient risk sharing in the labor union, resulting in contracts with better consumption smoothing (see footnote 8). Formulating efficient risk sharing among the agents via a utilitarian bargaining game delivers the following: for any proposed effort profile, e ∈ E, and compensations S_i: ℝ → ℝ, given bargaining weights θ_i ∈ (0, 1) with θ_1 + θ_2 = 1 for i = 1, 2, the labor union solves:
\[
\max_{y_1, y_2} \; \sum_{i=1,2} \theta_i\, E u_i(y_i \mid e)
\]
subject to (1) the feasibility of the side-contracting scheme, y_i: ℝ → ℝ, i.e., Σ_{i=1,2} y_i(x) = S̄(x) for all x (where y_i(x) denotes the amount of returns given to agent i in state x under side-contract y_i); and (2) the participation constraint, i.e., E u_i(y_i | e) ≥ E u_i(S_i | e) for i = 1, 2. Letting the objective function of the labor union's problem be denoted by Φ_{S,e,θ}, one can show that, in an interior solution, the marginal rates of substitution of the agents between any two states must be equal to each other. That is, the side-contracts y_i: ℝ → ℝ, i = 1, 2, have to be such that
\[
\frac{\partial \Phi_{S,e,\theta}/\partial y_1(x)}{\partial \Phi_{S,e,\theta}/\partial y_1(x')} = \frac{\partial \Phi_{S,e,\theta}/\partial y_2(x)}{\partial \Phi_{S,e,\theta}/\partial y_2(x')}
\]
for any two states, x, x' (see footnote 9). Consequently, agents' efficient risk sharing is captured by the substitution compatibility constraint, which takes the following simple form, due to the use of CARA utilities, the normal distribution and linear contracts:
\[
\frac{\gamma_1}{\gamma_2} = \frac{r_2}{r_1} \qquad (SC)
\]
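This ratio can be obtained in one line (a sketch of the standard argument under the stated CARA and linearity assumptions): equating weighted marginal utilities across states requires
\[
\theta_1 r_1 \exp\{-r_1(\gamma_1 x + \rho_1)\} \;=\; \theta_2 r_2 \exp\{-r_2(\gamma_2 x + \rho_2)\} \quad \text{for all } x,
\]
and taking logarithms and differentiating with respect to x yields r_1 γ_1 = r_2 γ_2, i.e., γ_1/γ_2 = r_2/r_1 (the effort costs are constants and drop out of the derivative).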
Definition 1. A profile of centralized feasible contracts (S_i, e_i)_{i=1,2} = (γ_i, ρ_i, e_i)_{i=1,2} constitutes a vector of incentive contracts, if and only if (IR_i), (IC_i) and (SC), for all i = 1, 2, are satisfied.
The principal's problem in finding optimal incentive contracts is to maximize Equation (1) by choosing (S_i, e_i)_{i=1,2} subject to (IR_i), (IC_i) and (SC), for all i = 1, 2.
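The objects above translate directly into a few lines of code that can be used to verify the numerical examples below. The following is a minimal sketch (our own illustration, not part of the original analysis); the dictionary-based data structures and the function names are our choices, under the assumption of two agents and linear contracts as in the text.

# Minimal sketch of the certainty equivalents and the (IR_i), (IC_i), (SC) constraints.
# mu, var: dicts mapping effort profiles (2-tuples) to the mean / variance of returns;
# cost[i]: dict mapping agent i's effort level to its cost; r[i], W[i]: CARA and reserve CE.

def ce_agent(i, gamma, rho, e, mu, var, cost, r):
    """CE_i(gamma_i, rho_i | e) = gamma_i mu(e) + rho_i - (r_i/2) gamma_i^2 var(e) - c_i(e_i)."""
    return gamma[i] * mu[e] + rho[i] - 0.5 * r[i] * gamma[i] ** 2 * var[e] - cost[i][e[i]]

def ce_principal(gamma, rho, e, mu, var, R):
    """CE_p = (1 - sum gamma) mu(e) - (R/2) (1 - sum gamma)^2 var(e) - sum rho."""
    g = 1.0 - sum(gamma)
    return g * mu[e] - 0.5 * R * g ** 2 * var[e] - sum(rho)

def ir_holds(i, gamma, rho, e, mu, var, cost, r, W, tol=1e-9):
    """(IR_i): agent i's certainty equivalent is at least the reserve level W_i."""
    return ce_agent(i, gamma, rho, e, mu, var, cost, r) >= W[i] - tol

def ic_holds(i, gamma, e, mu, var, cost, r, E_i, tol=1e-9):
    """(IC_i): no unilateral deviation e_i' of agent i is strictly profitable."""
    for ei_dev in E_i:
        e_dev = tuple(ei_dev if j == i else e[j] for j in range(2))
        lhs = gamma[i] * (mu[e] - mu[e_dev]) \
              - 0.5 * r[i] * gamma[i] ** 2 * (var[e] - var[e_dev])
        if lhs < cost[i][e[i]] - cost[i][ei_dev] - tol:
            return False
    return True

def sc_holds(gamma, r, tol=1e-9):
    """(SC): gamma_1/gamma_2 = r_2/r_1, i.e., r_1 gamma_1 = r_2 gamma_2."""
    return abs(r[0] * gamma[0] - r[1] * gamma[1]) < tol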
It should be emphasized that a relevant concern about efficient risk sharing being contingent on the effort profiles does not arise in our setting. With more general utility functions and distributions of returns, efficient risk sharing arrangements may have to be contingent on effort profiles. Each possible deviation from the principal’s proposed effort profile would possibly induce a different labor union’s problem and, hence, different efficient risk sharing conditions. Then, the principal’s problem while finding the optimal contracts would have to consider these intriguing aspects. However, we do not encounter such problems with CARA utilities (with separable effort costs) and linear contracts, and normally distributed returns as an efficient risk sharing condition is simplified to a ratio of CARA coefficients. The other point is that agents engage in side trading after effort decisions have been made. Thus, even if they are fully insured, they cannot shirk from the previously made effort choices.

3. Collusion

Before moving to the formal treatment, we present a numerical example displaying that an optimal incentive contract may be susceptible to collusion.

3.1. Example 1

Let E_i = {e_L, e_H}, and let returns x be normally distributed with mean μ(e) and variance σ²(e) as given in Table 1. The private costs of the agents' effort choices are c_1(e_H) = 1, c_1(e_L) = c_2(e_L) = 0 and c_2(e_H) = 0.35. Let r_i = 10, i = 1, 2, and let the principal be risk-neutral, i.e., R = 0. The reserve CE figures of the agents are W_1 = 1 and W_2 = 0.5. In this example, the effort choices of the first agent affect only the mean and those of the second agent only the variance. Another interesting feature is that the principal would desire high effort from the second agent only to decrease the risk faced by the first agent.
Table 1. The mean and variance figures of Example 1.

e         (e_H, e_H)   (e_H, e_L)   (e_L, e_H)   (e_L, e_L)
μ(e)      10           10           5            5
σ²(e)     1            2            1            2
When an optimal incentive contract is considered, the ( I C i ) and ( S C ) constraints are:
\[
(IC_1):\ \gamma_1\big(\mu(e_1, e_2) - \mu(e_1', e_2)\big) \;\ge\; c_1(e_1) - c_1(e_1'), \quad \text{for all } e_1' \in E_1
\]
\[
(IC_2):\ -\frac{r_2 \gamma_2^2}{2}\big(\sigma^2(e_1, e_2) - \sigma^2(e_1, e_2')\big) \;\ge\; c_2(e_2) - c_2(e_2'), \quad \text{for all } e_2' \in E_2
\]
\[
(SC):\ \gamma_1 = \gamma_2
\]
Consequently, Table 2 presents optimal incentive contracts and the corresponding CE figures to the principal when a given effort profile, e ∈ E, is to be implemented with (IR_i), (IC_i) and (SC), i = 1, 2.
Table 2. Optimal incentive contracts of Example 1.

(e_1, e_2)    γ_1(e)      γ_2(e)      ρ_1(e)       ρ_2(e)      CE_p(e)
(e_H, e_H)    0.264575    0.264575    -0.295751    -1.44575    6.45
(e_H, e_L)    0.20        0.20        0.40         -1.10       6.70
(e_L, e_H)    --          --          --           --          --
(e_L, e_L)    0           0           1            0.50        3.50
Thus, the optimal incentive contract, (S_i*)_{i=1,2}, involving the implementation of the effort profile e* = (e_H, e_L), is given by (γ_1*, γ_2*; ρ_1*, ρ_2*) = (0.20, 0.20; 0.40, -1.10) and delivers the principal a return of 6.70 (see footnote 10). It should be mentioned that when the effort profile (e_L, e_H) is to be implemented, the incentive compatibility constraints of the first and second agents result in an empty constraint set (see footnote 11).
However, S* is not immune to collusion; hence, it is not implementable. There is a feasible side-contract contingent on the agents' effort choices, making both strictly better off. Consider S′, involving (e_H, e_H) and (γ_1′, γ_2′; ρ_1′, ρ_2′) = (0.20, 0.20; 0.205, -0.905). It is feasible, because γ_1* + γ_2* = γ_1′ + γ_2′, γ_1′, γ_2′ ≥ 0 and ρ_1′ + ρ_2′ = ρ_1* + ρ_2*. Moreover, (SC) holds, since γ_i′ = γ_i*, i = 1, 2. The resulting CE figures are CE_1(S′) = 1.005 > 1 = CE_1(S*), CE_2(S′) = 0.545 > 0.50 = CE_2(S*) and CE_p(S′) = 6.70 = CE_p(S*). Agent 1 identifies a collusion opportunity (a coordinated deviation) by agreeing to make a side transfer to agent 2 in return for her high effort choice, resulting in a lower variance of output; this, in turn, mitigates the amount of risk to which the principal desires the agents to be exposed. Therefore, side-contracting agents are able to sustain (e_H, e_H), even though the risk-neutral principal finds it costly to make the second agent exert high effort. Such side-contracting on effort levels enables non-incentive-compatible (yet participatory) arrangements to be beneficial for the agents (see footnote 12).
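As a check on the figures above, the following short script (our own, using only the primitives of Example 1) reproduces the CE numbers for S* and the joint deviation S′:

# Example 1: verify that the joint deviation S' makes both agents strictly better off
# while leaving the risk-neutral principal's CE unchanged.
mu  = {('H','H'): 10, ('H','L'): 10, ('L','H'): 5, ('L','L'): 5}
var = {('H','H'): 1,  ('H','L'): 2,  ('L','H'): 1, ('L','L'): 2}
cost = [{'H': 1.0, 'L': 0.0}, {'H': 0.35, 'L': 0.0}]
r = [10.0, 10.0]

def ce_agent(i, gamma, rho, e):
    return gamma[i]*mu[e] + rho[i] - 0.5*r[i]*gamma[i]**2*var[e] - cost[i][e[i]]

def ce_principal(gamma, rho, e, R=0.0):
    g = 1.0 - sum(gamma)
    return g*mu[e] - 0.5*R*g**2*var[e] - sum(rho)

# Optimal incentive contract S*: implements (e_H, e_L).
g_star, p_star, e_star = (0.20, 0.20), (0.40, -1.10), ('H', 'L')
# Joint deviation S': same shares, a side transfer in the constants, effort (e_H, e_H).
g_dev, p_dev, e_dev = (0.20, 0.20), (0.205, -0.905), ('H', 'H')

for i in range(2):
    print(f"agent {i+1}: CE under S* = {ce_agent(i, g_star, p_star, e_star):.3f}, "
          f"under S' = {ce_agent(i, g_dev, p_dev, e_dev):.3f}")
print(f"principal: CE under S* = {ce_principal(g_star, p_star, e_star):.2f}, "
      f"under S' = {ce_principal(g_dev, p_dev, e_dev):.2f}")
# Agent 1: 1.000 vs. 1.005; agent 2: 0.500 vs. 0.545; principal: 6.70 in both cases.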

3.2. Collusion Constraints

The strategic interaction emerging among the agents is nurtured by their ability to share risk efficiently and to observe and verify each other's effort choices. Indeed, [3,5] point out that there can be many Bayesian–Nash equilibria when this interaction is modeled with a Bayesian non-cooperative game and the question of implementation is analyzed by the construction of a non-cooperative message game à la Maskin. It turns out that these equilibria are not payoff-equivalent for the principal. The authors of [6] attack this problem by constructing a specific non-cooperative game-theoretic structure featuring the use of “whistle blowers”, agents who are rewarded if they provide verification about the deviation of other agents. Hence, they propose “a solution” in which the principal does not incur any additional costs.
Our formulation, resulting in “another solution”, involves the use of cooperative game-theoretic tools in the analysis of the interaction among the agents. Indeed, our solution method is based on the core: with two agents, it simplifies and requires the principal to restrict himself to offering individually rational contracts from which both agents cannot strictly benefit by deviating jointly to a feasible side-contract and effort profile (via the use of binding side-contracts utilizing their ability to observe and verify each other's effort choices). Because the agents can write binding contracts based on effort levels, as well as side trades (for efficient risk sharing), the solution of the labor union's problem must be a strong Nash equilibrium of the corresponding two-player strategic interaction. Due to this formulation, we do not need to consider enriched classes of principal's offers or particular (static or dynamic) non-cooperative interaction structures to eliminate collusion or unreasonable (Bayesian–Nash) equilibria. This, in turn, strengthens our method, since it allows us to avoid specific structural and informational assumptions (see footnote 13).
Collusion-proofness with two agents restricts the principal as follows: his offer (S_i, e_i)_{i=1,2} is collusion-proof if there is no feasible side-contracting arrangement (S_i′, e_i′)_{i=1,2} (i.e., Σ_{i=1,2} S_i′(x) ≤ S̄(x) for all x and e′ ∈ E), such that the expected utility of every agent under (S_i′, e_i′)_{i=1,2} strictly exceeds that under (S_i, e_i)_{i=1,2} (i.e., E u_i(S_i′ | e′) > E u_i(S_i | e) for all i = 1, 2). The critical point is that the individually rational contract the principal needs to offer must be one that cannot be improved upon in the labor union by joint effort choices and more efficient risk sharing. Therefore, it must be on the efficiency frontier of the agents' utilitarian bargaining game (see footnote 14). Because u_i, i = 1, 2, is twice differentiable and satisfies Assumptions 1–3 of [27] (resulting in the interiority of the solution), the use of linear contracts, normally distributed returns and CARA utilities implies that (S_i, e_i)_{i=1,2}, a centralized feasible and linear contract, satisfies the collusion-proofness constraint, if and only if: (1) Σ_{i=1,2} [CE_i(S_i | e) - CE_i(S_i | ẽ)] ≥ 0 for all ẽ ∈ E; and (2) there exists a bargaining weight vector θ ∈ {θ : θ_i ∈ (0, 1) for all i, and Σ_i θ_i = 1}, such that θ_1 u_1′(S_1(x)) = θ_2 u_2′(S_2(x)) for all x. The latter requirement is nothing but the general form of substitution compatibility evaluated at the effort profile of the principal's offer.
Definition 2. A profile of centralized feasible contracts (S_i, e_i)_{i=1,2} = (γ_i, ρ_i, e_i)_{i=1,2} is said to be collusion-proof, if and only if the following conditions hold: (1) (IR_i) for all i = 1, 2; (2) (SC); and (3) Σ_{i=1,2} [CE_i(S_i | e) - CE_i(S_i | ẽ)] ≥ 0, for all ẽ ∈ E.
Instead of directly dealing with the complex comparison of incentive contracts with collusion-proof contracts and team contracts, clear-cut answers emerge with a stronger notion of collusion: the principal's offer must be such that no agent should start “contemplating” joint effort deviations.
Definition 3. A profile of centralized feasible contracts (S_i, e_i)_{i=1,2} = (γ_i, ρ_i, e_i)_{i=1,2} is strongly collusion-proof, if and only if the following conditions hold: (1) (IR_i) for all i = 1, 2; (2) (SC); and (3) (CC_i) for all i = 1, 2, where (CC_i) is defined by:
\[
\gamma_i\big(\mu(e) - \mu(e')\big) - \frac{r_i \gamma_i^2}{2}\big(\sigma^2(e) - \sigma^2(e')\big) \;\ge\; c_i(e_i) - c_i(e_i'), \quad \text{for all } e' \in E \qquad (CC_i)
\]
We refer to the set of constraints involved in the definition of strong collusion-proofness as ( C C ) ; and, the principal’s problem under strong collusion is to maximize Equation (1) subject to ( C C ) .
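The bite of (CC) can also be checked mechanically. The sketch below (our own illustration, reusing the primitives of Example 1 from Section 3.1) tests (CC_i) against every joint deviation e′ ∈ E and shows that the optimal incentive contract S* fails (CC_1), in particular at e′ = (e_H, e_H), the deviation exploited in Section 3.1:

from itertools import product

# Example 1 primitives.
mu  = {('H','H'): 10, ('H','L'): 10, ('L','H'): 5, ('L','L'): 5}
var = {('H','H'): 1,  ('H','L'): 2,  ('L','H'): 1, ('L','L'): 2}
cost = [{'H': 1.0, 'L': 0.0}, {'H': 0.35, 'L': 0.0}]
r = [10.0, 10.0]
E = list(product(['H', 'L'], repeat=2))   # all effort profiles

def cc_violations(gamma, e):
    """Return the list of (agent, e') pairs at which (CC_i) fails."""
    fails = []
    for i in range(2):
        for e_dev in E:
            lhs = gamma[i]*(mu[e] - mu[e_dev]) - 0.5*r[i]*gamma[i]**2*(var[e] - var[e_dev])
            rhs = cost[i][e[i]] - cost[i][e_dev[i]]
            if lhs < rhs - 1e-9:
                fails.append((i + 1, e_dev))
    return fails

# Optimal incentive contract S* of Example 1: gamma = (0.20, 0.20), effort (e_H, e_L).
print(cc_violations((0.20, 0.20), ('H', 'L')))
# [(1, ('H', 'H')), (1, ('L', 'H'))]: (CC_1) fails at these joint deviations.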

3.2.1. Example 1 under Strong Collusion

Table 3 presents optimal strongly collusion-proof contracts and the associated CE figures to the risk-neutral principal when a given effort profile, e ∈ E, is to be implemented (see footnote 15).
Table 3. Optimal strongly collusion-proof contracts of Example 1.

(e_1, e_2)    γ_1(e)      γ_2(e)      ρ_1(e)       ρ_2(e)      CE_p(e)
(e_H, e_H)    0.264575    0.264575    -0.295751    -1.44575    6.45
(e_H, e_L)    --          --          --           --          --
(e_L, e_H)    --          --          --           --          --
(e_L, e_L)    0           0           1            0.5         3.5
The optimal strongly collusion-proof contract (S**, e**) involves e** = (e_H, e_H) and (γ_1**, γ_2**; ρ_1**, ρ_2**) = (0.264575, 0.264575; -0.295751, -1.44575) and delivers the risk-neutral principal a return of 6.45. Recall that the optimal incentive contract, (S*, e*), is given by e* = (e_H, e_L) and (γ_1*, γ_2*; ρ_1*, ρ_2*) = (0.20, 0.20; 0.40, -1.10), with an associated return of 6.70 to the risk-neutral principal (see footnote 16). The principal is strictly worse off under optimal strongly collusion-proof contracts compared to incentive contracts. This is because the principal is obliged to go for (e_H, e_H), as e* = (e_H, e_L) cannot be implemented.

4. Teamwork

In Example 1, the CE of a risk-averse principal with R = 1/2 under the optimal incentive contract, S*, is 6.52, strictly lower than 6.61, the CE associated with the side-contract, S′, employed in the agents' joint deviation from S* (see footnote 10 for the details). Thus, in general, the principal may benefit from side-contracting. This leads to the analysis of situations where the principal uses decentralized contracts by hiring the agents as a team, offering a total share of the returns and letting the agents coordinate effort choices and allocations themselves. The timing is as follows: first, the principal makes an offer to the agents as a team; then, before the accept/reject decisions, the team identifies its ideal within-team arrangement given the principal's offer; and finally, the team accepts or rejects, where acceptance emerges only if both agents agree to accept. Thus, the principal does not need to deal with incentive constraints, because within-team arrangements, while not necessarily incentive-compatible, are binding.
A decentralized feasible contract, (T, e^T), consists of T: ℝ → ℝ, where T(x) = γ^T x + ρ^T, γ^T ∈ [0, 1] and ρ^T ∈ ℝ; and e^T ∈ E. Given a decentralized feasible contract, (T, e^T), a feasible within-team allocation (T_i, e_i)_{i=1,2} consists of T_i: ℝ → ℝ with T_1(x) + T_2(x) ≤ T(x) and T_i(x) = γ_i^T x + ρ_i^T for all x (i.e., γ_1^T + γ_2^T = γ̄^T, γ_i^T ∈ [0, 1], i = 1, 2; ρ_1^T + ρ_2^T = ρ̄^T); and (e_1, e_2) ∈ E. We say that, given a decentralized feasible contract, (T, e^T), a feasible within-team allocation (T_i, e_i)_{i=1,2} solves the team's problem if: (1) it satisfies (IR_i), for all i = 1, 2; and (2) there is no other feasible within-team allocation (T̂_i, ê_i)_{i=1,2} satisfying (IR_i) for all i = 1, 2 and providing both agents strictly higher CE figures. That is, given a decentralized feasible contract, (T, e^T), a feasible within-team allocation (γ_i^T, ρ_i^T, ẽ_i^T)_{i=1,2} solves the team's problem, if and only if: (1) (IR_i) holds for all i = 1, 2; (2) (SC) holds; and (3) the team constraint defined below (denoted by (TC)) holds:
\[
\big(\gamma_1^T + \gamma_2^T\big)\big(\mu(\tilde e^T) - \mu(e_1, e_2)\big) - \frac{(\gamma_1^T)^2 r_1 + (\gamma_2^T)^2 r_2}{2}\big(\sigma^2(\tilde e^T) - \sigma^2(e_1, e_2)\big) \;\ge\; \sum_{i=1,2}\big(c_i(\tilde e_i^T) - c_i(e_i)\big), \quad \text{for all } (e_1, e_2) \in E \qquad (TC)
\]
Definition 4. A decentralized feasible contract, (T, e^T), is said to be a team contract if there exists a feasible within-team allocation (γ_i^T, ρ_i^T, ẽ_i^T)_{i=1,2} that solves the team's problem and ẽ^T = e^T.
The principal’s problem with team contracts is:
\[
\max_{\gamma^T, \rho^T, e^T} \; \big(1 - \gamma^T\big)\mu(e^T) - \frac{R}{2}\big(1 - \gamma^T\big)^2 \sigma^2(e^T) - \rho^T
\]
subject to ( T , e T ) being a team contract.

4.1. Example 1 with Teamwork

Revisiting Example 1 of Section 3.1, Table 4 presents optimal team contracts and the associated CE figures to the principal when a given effort profile, e ∈ E, is to be implemented.
Table 4. Optimal team contracts of Example 1.

(e_1, e_2)    γ_1(e)      γ_2(e)      ρ_1(e)       ρ_2(e)       CE_p(e)
(e_H, e_H)    0.187083    0.187083    0.304171     -0.845829    6.80
(e_H, e_L)    0.10        0.10        1.10         -0.40        7.30
(e_L, e_H)    --          --          --           --           --
(e_L, e_L)    0           0           1            0.50         3.50
The optimal team contract, (S_i***)_{i=1,2}, involving e*** = (e_H, e_L), is given by (γ_1***, γ_2***; ρ_1***, ρ_2***) = (0.10, 0.10; 1.10, -0.40) and delivers the principal a CE of 7.30. Recall that the principal obtains a CE of 6.45 at the optimal strongly collusion-proof contract, (S**, e**), involving e** = (e_H, e_H) and (γ_1**, γ_2**; ρ_1**, ρ_2**) = (0.264575, 0.264575; -0.295751, -1.44575). The optimal incentive contract, S*, delivers a CE of 6.70 to the risk-neutral principal and involves the same effort profile e* = (e_H, e_L), but a different compensation scheme given by (γ_1*, γ_2*; ρ_1*, ρ_2*) = (0.20, 0.20; 0.40, -1.10). Thus, in this particular example, the principal strictly prefers team contracts when compared with incentive contracts (see footnote 17).
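The (e_H, e_L) entry of Table 4 can be verified directly. The following sketch (our own check, using only the primitives of Example 1) confirms that the reported within-team allocation satisfies binding individual rationality and the team constraint (TC), and recomputes the principal's CE of 7.30:

from itertools import product

# Example 1 primitives.
mu  = {('H','H'): 10, ('H','L'): 10, ('L','H'): 5, ('L','L'): 5}
var = {('H','H'): 1,  ('H','L'): 2,  ('L','H'): 1, ('L','L'): 2}
cost = [{'H': 1.0, 'L': 0.0}, {'H': 0.35, 'L': 0.0}]
r, W = [10.0, 10.0], [1.0, 0.5]
E = list(product(['H', 'L'], repeat=2))

# Optimal team contract of Table 4: within-team shares and constants, effort (e_H, e_L).
gamma, rho, e = (0.10, 0.10), (1.10, -0.40), ('H', 'L')

# (IR_i): both certainty equivalents equal the reserve levels.
for i in range(2):
    ce_i = gamma[i]*mu[e] + rho[i] - 0.5*r[i]*gamma[i]**2*var[e] - cost[i][e[i]]
    print(f"CE_{i+1} = {ce_i:.3f} (reserve {W[i]})")

# (TC): no joint effort deviation raises the team's total certainty equivalent.
def tc_holds(gamma, e):
    g_sum = sum(gamma)
    risk = 0.5 * sum(r[i]*gamma[i]**2 for i in range(2))
    for e_dev in E:
        lhs = g_sum*(mu[e] - mu[e_dev]) - risk*(var[e] - var[e_dev])
        rhs = sum(cost[i][e[i]] - cost[i][e_dev[i]] for i in range(2))
        if lhs < rhs - 1e-9:
            return False
    return True

print("TC holds:", tc_holds(gamma, e))
# Risk-neutral principal's CE, with gamma^T and rho^T the sums of the within-team terms.
print("CE_p =", round((1 - sum(gamma))*mu[e] - sum(rho), 2))   # 7.3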

5. The Optimality of Team Contracts

This section presents our findings concerning the comparison of the principal’s welfare under team contracts and centralized contracts.
The key difference between (centralized) collusion-proof contracts and (decentralized) team contracts lies in the way individual rationality is treated. To see this, consider an optimal collusion-proof contract in which individual rationality does not bind, implying that the agents are strictly better off in comparison to their reserve levels. Now, in the labor union, agents would veto any arrangement that provides strictly less returns than the principal's offer, even if that arrangement were individually rational. (We then say that such an arrangement is not “participatory”.) That is, the disagreement point of the labor union's bargaining is given by the utilities from the principal's offer. Because team arrangements are decentralized, the associated endeavor only considers individual rationality, as the principal's offer is not a “sunk” offer. That is why every collusion-proof contract is a team contract, as well. On the other hand, the use of CARA utilities brings about the dismissal of income effects in agents' decisions. Hence, it should not be surprising to observe that, in the current setting, the individual rationality constraints have to bind (by adjusting the constant terms, ρ_i, for i = 1, 2). Thus, the participation constraint coincides with individual rationality; therefore, we obtain the following result:
Lemma 1.
The set of team contracts is equal to the set of collusion-proof contracts.
The comparison of team contracts with strongly collusion-proof contracts reveals the following: (CC_i), i = 1, 2, imply (TC), as (TC) is obtained by summing (CC_1) and (CC_2). Hence, every strongly collusion-proof contract must be a team contract. Yet, there are team contracts that are not strongly collusion-proof (see footnote 18). This delivers the following:
Lemma 2.
Strongly collusion-proof contracts constitute a strict subset of the set of team contracts.
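For concreteness, the containment in Lemma 2 can be seen by adding the two (CC_i) constraints: whenever (CC_1) and (CC_2) hold, then for any e′ ∈ E,
\[
\sum_{i=1,2}\Big[\gamma_i\big(\mu(e)-\mu(e')\big) - \frac{r_i\gamma_i^2}{2}\big(\sigma^2(e)-\sigma^2(e')\big)\Big]
= (\gamma_1+\gamma_2)\big(\mu(e)-\mu(e')\big) - \frac{r_1\gamma_1^2 + r_2\gamma_2^2}{2}\big(\sigma^2(e)-\sigma^2(e')\big)
\;\ge\; \sum_{i=1,2}\big(c_i(e_i)-c_i(e_i')\big),
\]
which is exactly (TC) for the within-team shares (γ_1, γ_2). The converse need not hold, since (TC) restricts only the sum of the two agent-level expressions.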
Therefore, optimal team contracts provide the principal higher CE levels than those obtained with optimal strongly collusion-proof contracts. However, such a conclusion cannot be made in the comparison between incentive contracts and team contracts: while Example 1 portrays a setting in which team contracts deliver the principal strictly higher returns than incentive contracts, Example 5 provides a situation in which team contracts are outperformed by incentive contracts. Indeed, in the latter example, each agent’s effort choice prescribed with the optimal team contract is different from the one associated with the optimal incentive contract, and the principal obtains strictly lower CE levels with the optimal team contract. However, we show that when this happens, the optimal incentive contracts are not necessarily immune to strong collusion, i.e., they may not be implementable.
In that regard, we provide a full characterization of situations in which the principal can ignore strong collusion. We need the following definition in the statement of our results.
Definition 5. The asset of the principal is said to have monotone returns if μ(e_1, e_2) is weakly increasing and σ²(e_1, e_2) is weakly decreasing separately in both e_1 and e_2. Moreover, define the best effort profile, e ∈ E, by μ(e) ≥ μ(e′) and σ²(e) ≤ σ²(e′) for all e′ ∈ E.
The monotonicity of returns covers the interesting case in which the first agent governs only the mean, i.e., μ(e_1, e_2) = μ(e_1), which is weakly increasing in e_1, and the second only the variance, i.e., σ²(e_1, e_2) = σ²(e_2), which is weakly decreasing in e_2. On the other hand, in general, the best effort profile may not exist. However, because the set of effort levels is finite, a best effort profile exists whenever returns are monotone. Furthermore, with monotone returns, agents' effort levels can be ordered with respect to their effects on the mean and variance; hence, the best effort profile (not necessarily the most costly) corresponds to (max_{e_i ∈ E_i} e_i)_{i=1,2}, where the maximization is taken with respect to this order on E_i, i = 1, 2.
Proposition 1.
Suppose that returns are monotone and that the optimal incentive contract involves the best effort profile. Then, any incentive contract is strongly collusion-proof. Furthermore, for situations in which any of these conditions are violated, optimal incentive contracts are not necessarily immune to strong collusion, which may make the principal strictly worse off.
Proof.
By hypothesis, there exists a best effort profile in E, and it is denoted by e. Moreover, the principal finding it optimal to implement e with incentive contracts implies that (CC_i), i = 1, 2, hold, because for i ≠ j:
\[
(CC_i):\ \gamma_i\big(\mu(e) - \mu(e_1', e_2')\big) - \frac{r_i \gamma_i^2}{2}\big(\sigma^2(e) - \sigma^2(e_1', e_2')\big) \;\ge\; c_i(e_i) - c_i(e_i'), \quad \text{for all } e' \in E
\]
\[
(IC_i):\ \gamma_i\big(\mu(e) - \mu(e_i', e_j)\big) - \frac{r_i \gamma_i^2}{2}\big(\sigma^2(e) - \sigma^2(e_i', e_j)\big) \;\ge\; c_i(e_i) - c_i(e_i'), \quad \text{for all } e_i' \in E_i
\]
and, by the monotonicity of returns, the left-hand side of (IC_i) is no greater than the left-hand side of (CC_i); we conclude that any solution satisfying (IC_i), i = 1, 2, satisfies (CC_i), i = 1, 2.
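In more detail, for any e′ = (e_i′, e_j′) ∈ E,
\[
\gamma_i\big(\mu(e)-\mu(e_i',e_j')\big) - \frac{r_i\gamma_i^2}{2}\big(\sigma^2(e)-\sigma^2(e_i',e_j')\big)
\;\ge\;
\gamma_i\big(\mu(e)-\mu(e_i',e_j)\big) - \frac{r_i\gamma_i^2}{2}\big(\sigma^2(e)-\sigma^2(e_i',e_j)\big)
\;\ge\; c_i(e_i) - c_i(e_i'),
\]
where the first inequality uses γ_i ≥ 0 together with the monotonicity of returns (μ(e_i′, e_j) ≥ μ(e_i′, e_j′) and σ²(e_i′, e_j) ≤ σ²(e_i′, e_j′), since e_j is agent j's highest effort level), and the second inequality is (IC_i).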
The four examples presented in the Appendix consider cases in which the hypothesis of Proposition 1 is not satisfied and conclude the proof. Their features are: (1) returns are monotone, but the optimal incentive contract does not involve the best effort profile (Appendix A); (2) returns are not monotone, but there exists a best effort profile that the principal finds optimal to implement with incentive contracts (Appendix B); (3) returns are not monotone, but there is a best effort profile, yet implementing that effort profile with incentive contracts is not optimal (Appendix C); (4) returns are not monotone, and there is no best effort profile (Appendix D).
Proposition 1 identifies conditions under which incentive contracts are guaranteed to be implementable, as the set of incentive contracts then equals the set of strongly collusion-proof contracts. This is because the set of incentive contracts always contains the set of strongly collusion-proof contracts and, under these conditions, is also a subset of it. Since strongly collusion-proof contracts are included in the set of team contracts (Lemma 2), incentive contracts are then contained in the set of team contracts. Thus, under the conditions identified in Proposition 1, the optimal team contracts provide the principal with higher CE compared to the optimal incentive contracts. Additionally, these conditions are minimal: any violation enables us to construct an example in which optimal incentive contracts are not strongly collusion-proof (see footnote 19).
Therefore, we conclude that team contracts provide the principal with the maximum implementable certainty equivalent under the condition that returns are monotone and the best effort profile is chosen in the optimal incentive contract. Moreover, any violation of these conditions may trigger joint deviations from the optimal incentive contract and prevent its implementability.

6. Concluding Remarks

Our first concluding remark concerns the relation of the current model to the standard framework of Holmstrom and Milgrom [1] (HM, henceforth), the details of which can be found in footnote 4. The two models are essentially different. This is because the general version of ours is a single-task agency model in which agents control both the mean and variance of returns. Additionally, obtaining a single-task version from the model of HM with correlated error terms and interactions in the production functions using an aggregation measure (e.g., addition) of the two returns corresponds to a setting that cannot be obtained using our model. This is because the addition of two correlated normal distributions is not necessarily normal. With stochastically-independent error terms (possibly with interactions in the production functions), the single-task version of HM can be associated with the special case of our model in which agents control only the mean. While HM does not provide an optimality of the teamwork result in that setting, the monotonicity condition of Proposition 1 suffices in this regard for our model. Additionally, the single-task version of HM delivers a similar result with the additional feature of technologically-independent production (Proposition 4).
The second concluding remark concerns the role of substitution compatibility. It is clearly not needed in Lemma 2 and does not play a major part in Proposition 1. On the other hand, our examples establishing the minimality of the hypothesis of Proposition 1 utilize the substitution compatibility constraint.
The final concluding remark is about the efficiency properties of the optimal team contracts. Because team contracts, making use of binding side-contracts within the team, can circumvent the individual incentive compatibility constraints to some extent, whether or not the first-best outcome can be achieved is a relevant concern. In our model, the timing of team contracts implies the restriction of (TC), which is nothing but a “team” incentive compatibility constraint. That is precisely why optimal team contracts are not necessarily first-best. To see this, consider the case in which agents are identical in every regard. Then, the team would be acting as a single person, but still, the “team” incentive compatibility would matter. In fact, Example 5 of Appendix E depicts a situation in which team contracts are outperformed by incentive contracts; hence, in general, they are not even second-best. On the other hand, using a particular protocol and structure in the formation of the team with “snitches”, incentivizing agents to report the other's deviations, can deliver the first best, as in [6].

Acknowledgments

Earlier versions of the current paper were titled “Team Beats Collusion”. We thank Beth Allen, Guilherme Carmona, Kim Sau Chung, Alpay Filiztekin, Ioanna Grypari, Özgür Kıbrıs, Gina Pieters and Jan Werner for helpful comments. All remaining errors are ours.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Holmstrom, B.; Milgrom, P. Regulating the trade among agents. J. Inst. Theor. Econ. 1990, 146, 85–105. [Google Scholar]
  2. Itoh, H. Coalitions, incentives and risk sharing. J. Econ. Theory 1993, 60, 410–427. [Google Scholar] [CrossRef]
  3. Demski, J.; Sappington, D. Optimal incentive contracts with multiple agents. J. Econ. Theory 1984, 33, 152–171. [Google Scholar] [CrossRef]
  4. Demski, J.; Sappington, D.; Spiller, P. Incentive schemes with multiple agents and bankruptcy constraints. J. Econ. Theory 1988, 44, 156–167. [Google Scholar] [CrossRef]
  5. Mookherjee, D. Optimal incentive schemes with many agents. Rev. Econ. Stud. 1984, 51, 433–446. [Google Scholar] [CrossRef]
  6. Ma, C.; Moore, J.; Turnbull, S. Stopping agents from “cheating”. J. Econ. Theory 1988, 46, 355–372. [Google Scholar] [CrossRef]
  7. Tirole, J. Hierarchies and bureaucracies: On the role of collusion in organizations. J. Law Econ. Organ. 1986, 2, 181–214. [Google Scholar]
  8. Laffont, J.-J.; Rochet, J.-C. Collusion in organizations. Scand. J. Econ. 1997, 99, 485–495. [Google Scholar] [CrossRef]
  9. Brusco, S. Implementing action profiles when agents collude. J. Econ. Theory 1997, 73, 395–424. [Google Scholar] [CrossRef]
  10. Laffont, J.-J.; Martimort, D. Mechanism design with collusion and correlation. Econometrica 2000, 68, 309–342. [Google Scholar] [CrossRef]
  11. Barlo, M. Essays in Game Theory. Ph.D. Thesis, University of Minnesota, Minneapolis, MN, USA, 2003. [Google Scholar]
  12. Felli, L.; Hortala-Vallve, R. Preventing Collusion through Discretion; CEPR Discussion Paper No. DP8302; 2011; Available online: http://ssrn.com/abstract=1794892 (accessed on 14 November 2013).
  13. Holmstrom, B. Moral hazard in teams. Bell J. Econ. 1982, 13, 324–340. [Google Scholar] [CrossRef]
  14. Itoh, H. Incentives to help in multi-agent situations. Econometrica 1991, 59, 611–636. [Google Scholar] [CrossRef]
  15. Ramakrishnan, R.T.S.; Thakor, A.V. Cooperation versus competition in agency. J. Law Econ. Organ. 1991, 7, 248–283. [Google Scholar]
  16. Varian, H.R. Monitoring agents with other agents. J. Inst. Theor. Econ. 1990, 146, 153–174. [Google Scholar]
  17. Itoh, H. Job design, delegation and cooperation: A principal-agent analysis. Eur. Econ. Rev. 1994, 38, 691–700. [Google Scholar] [CrossRef]
  18. Baliga, S.; Sjostrom, T. Decentralization and collusion. J. Econ. Theory 1998, 83, 196–232. [Google Scholar] [CrossRef]
  19. Macho-Stadler, I.; Perez-Castrillo, J.D. Centralized and decentralized contracts in a moral hazard environment. J. Ind. Econ. 1998, 46, 1–20. [Google Scholar] [CrossRef]
  20. Jelovac, I.; Macho-Stadler, I. Comparing organizational structures in health services. J. Econ. Behav. Organ. 2002, 49, 501–522. [Google Scholar] [CrossRef]
  21. Hortala-Vallve, R.; Sanchez Villalba, M. Internalizing team production externalities through delegation: The British passenger rail sector as an example. Economica 2010, 77, 785–792. [Google Scholar] [CrossRef]
  22. Holmstrom, B.; Milgrom, P. Aggregation and linearity in the provision of intertemporal incentives. Econometrica 1987, 55, 303–328. [Google Scholar] [CrossRef]
  23. Schättler, H.; Sung, J. The first-order approach to the continuous-time principal-agent problem with exponential utility. J. Econ. Theory 1993, 61, 331–371. [Google Scholar] [CrossRef]
  24. Hellwig, M.; Schmidt, K.M. Discrete-time approximations of the Holmstrom-Milgrom Brownian-motion model of intertemporal incentive provision. Econometrica 2002, 70, 2225–2264. [Google Scholar] [CrossRef]
  25. Sung, J. Linearity with project selection and controllable diffusion rate in continuous-time principal-agent problems. Rand J. Econ. 1995, 26, 720–743. [Google Scholar] [CrossRef]
  26. Barlo, M.; Ozdogan, A. Optimality of Linearity with Collusion and Renegotiation; Sabancı University Working Paper: ID SU-FASS-2011/0008; Sabancı University: Istanbul, Turkey, 2011. [Google Scholar]
  27. Grossman, S.; Hart, O. An analysis of the principal-agent problem. Econometrica 1983, 51, 7–45. [Google Scholar] [CrossRef]
  28. Fudenberg, D.; Levine, D.K.; Maskin, E. The folk theorem with imperfect public information. Econometrica 1994, 62, 997–1039. [Google Scholar] [CrossRef]

Appendix

A. Example 1

This is given in Section 3.1 and displays that strong collusion cannot be ignored even when the monotonicity of returns holds (thus, there is a best effort profile), but the best effort profile, (e_H, e_H), is not chosen in the optimal incentive contract. The optimal strongly collusion-proof contract, discussed in Section 3.2.1, involves e** = (e_H, e_H) and (γ_1**, γ_2**; ρ_1**, ρ_2**) = (0.264575, 0.264575; -0.295751, -1.44575), delivering a risk-neutral principal a CE of 6.45. On the other hand, the optimal incentive contract, (S_i*)_{i=1,2}, involving e* = (e_H, e_L), is given by (γ_1*, γ_2*; ρ_1*, ρ_2*) = (0.20, 0.20; 0.40, -1.10), and delivers the risk-neutral principal a return of 6.70. As displayed in footnote 16, our conclusions do not change if the principal were risk averse with a CARA of 1/2: the contract is the same, and the principal's CE becomes 6.39458 under strong collusion and 6.52 with the optimal incentive contract.

B. Example 2

This example demonstrates that strong collusion cannot be ignored when returns are not monotone, even when there is a best effort profile that the principal finds optimal to implement with incentive contracts. The mean and variance of the returns depend on the effort levels of both agents. Let E_1 = {e_L, e_M, e_H}, E_2 = {e_L, e_H}, and let the mean and variance figures be given by (31, 1/2), (29, 1), (30, 1), (31, 2), (26.5, 1/2) and (28.3, 4/3), corresponding to the effort profiles (e_H, e_H), (e_H, e_L), (e_M, e_H), (e_M, e_L), (e_L, e_H) and (e_L, e_L), respectively. The costs of effort are c_1(e_L) = 0, c_1(e_M) = 0.75, c_1(e_H) = 1.25; c_2(e_L) = 0 and c_2(e_H) = 0.01. Moreover, W_1 = 1, W_2 = 1.5; and R = 2, r_i = 10 for i = 1, 2.
In this example, the best effort profile is given by (e_H, e_H), yet the agents' effort choices may affect the two moments in opposite directions. That is why returns are not monotone; so, Proposition 1 does not apply. For any given effort profile, the optimal incentive contracts are as follows. For (e_H, e_H), the optimal contract is (γ_1, γ_2, ρ_1, ρ_2) = (0.289898, 0.289898, -6.52673, -7.26673), providing the principal with CE_p = 26.7315. For (e_M, e_H), the optimal contract becomes (γ_1, γ_2, ρ_1, ρ_2) = (0.26411, 0.26411, -5.82453, -6.06453), and the principal gets CE_p = 25.8199. Finally, for (e_L, e_L), the optimal contract is (γ_1, γ_2, ρ_1, ρ_2) = (0.142866, 0.142866, -2.90705, -2.40705), providing the principal with CE_p = 24.8476. Note that the principal cannot make the agents choose (e_H, e_L), (e_M, e_L) or (e_L, e_H), because the incentive and substitution compatibility constraints for these effort profiles result in the constraint set of the principal's problem being empty (see footnote 20).
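The (e_H, e_H) entry above can be verified with a few lines of code. The sketch below (our own check, not the authors' code) assumes, as the reported shares suggest, that the binding constraint at the optimum is (IC_1) against the unilateral deviation to (e_M, e_H) and that both (IR_i) bind:

# Our own check of the (e_H, e_H) incentive contract of Example 2.
# Binding constraint (IC_1) against (e_M, e_H):
#   gamma*(31 - 30) - 5*gamma**2*(1/2 - 1) >= 1.25 - 0.75, i.e. 2.5*gamma**2 + gamma - 0.5 = 0.
gamma = (-1 + (1 + 4*2.5*0.5) ** 0.5) / (2 * 2.5)        # positive root
print(round(gamma, 6))                                    # 0.289898

# Binding (IR_i) pins down the constants: rho_i = W_i + c_i + (r_i/2)*gamma^2*var - gamma*mu.
mu_e, var_e, r, R = 31.0, 0.5, 10.0, 2.0
W, c = [1.0, 1.5], [1.25, 0.01]
rho = [W[i] + c[i] + 0.5*r*gamma**2*var_e - gamma*mu_e for i in range(2)]
print([round(x, 5) for x in rho])                         # [-6.52673, -7.26673]

# Principal's certainty equivalent.
g = 1 - 2*gamma
ce_p = g*mu_e - 0.5*R*g**2*var_e - sum(rho)
print(round(ce_p, 4))                                     # 26.7315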
The optimal strongly collusion-proof contracts for given effort profiles are as follows. For (e_H, e_H), the optimal contract is (γ_1, γ_2, ρ_1, ρ_2) = (0.312377, 0.312377, -7.18975, -7.92975), providing the principal with CE_p = 26.6817. For (e_L, e_L), the optimal contract becomes (γ_1, γ_2, ρ_1, ρ_2) = (0, 0, 1, 1.5), and the principal gets CE_p = 24.4667. With strong collusion, which implies incentive compatibility, the effort profiles (e_H, e_L), (e_M, e_L) and (e_L, e_H) continue to be impossible to obtain. Moreover, with strong collusion, the effort profile (e_M, e_H) is added to this list (see footnote 21). Without strong collusion, the principal has an optimal CE of 26.7315, which results from the contract involving (e_H, e_H) and γ_1 = γ_2 = 0.289898. However, this level of shares does not suffice to make the agents ignore joint deviations in effort choices. In order to do that, the principal has to increase the share of the project allocated to the agents. This, however, is costly, because the agents are more risk averse than the principal. Indeed, the optimal strongly collusion-proof contract involves the same effort profile, (e_H, e_H), but a higher share, 0.312377, allocated to each agent. This decreases the principal's optimal CE from 26.7315 to 26.6817.

C. Example 3

This example involves non-monotone returns and features a best effort profile that is not optimal with incentive contracts. In fact, Example 3 is the same as Example 2 with only c_1(e_H) changed from 1.25 to 1.75. The optimal incentive contracts implementing e ∈ E are as follows. For (e_H, e_H), the optimal contract is (γ_1, γ_2, ρ_1, ρ_2) = (0.463325, 0.463325, -11.0764, -12.3164), providing the principal with CE_p = 25.664. For (e_M, e_H), the optimal contract becomes (γ_1, γ_2, ρ_1, ρ_2) = (0.26411, 0.26411, -5.82453, -6.06453), and the principal gets CE_p = 25.8199. Finally, for (e_L, e_L), the optimal contract is (γ_1, γ_2, ρ_1, ρ_2) = (0.142858, 0.142858, -2.90682, -2.40682), providing CE_p = 24.8476. The best effort profile is (e_H, e_H), but the optimal one for the principal is (e_M, e_H) (see footnote 22).
The optimal strongly collusion-proof contracts implementing e ∈ E are as follows. For (e_H, e_H), the optimal contract is (γ_1, γ_2, ρ_1, ρ_2) = (0.463325, 0.463325, -11.0764, -12.3164), providing CE_p = 25.664. For (e_L, e_L), the optimal contract becomes (γ_1, γ_2, ρ_1, ρ_2) = (0, 0, 1, 1.5) and CE_p = 24.4667. Notice that, besides (e_H, e_L), (e_M, e_L) and (e_L, e_H), strong collusion additionally renders (e_M, e_H) impossible (see footnote 23). The optimal incentive contract involves (e_M, e_H) and a share allocation of 0.26411 to each agent. However, these shares fail to eliminate strong collusion considerations. Indeed, implementing (e_M, e_H) is impossible, and thus, with strong collusion, the principal has to go for (e_H, e_H), allocating a 0.463325 portion of the asset to each of the agents and, in turn, decreasing his CE from 25.8199 to 25.664.

D. Example 4

This example shows that strong collusion cannot be ignored when returns are not monotone and there is no best effort profile. Let E_1 = {e_L, e_H} and E_2 = {e_L, e_H}, and let the mean and variance figures be given by (30, 1), (31, 2), (26.5, 1/2) and (28.3, 4/3), corresponding to the effort profiles (e_H, e_H), (e_H, e_L), (e_L, e_H) and (e_L, e_L), respectively. The costs of effort are c_1(e_L) = 0, c_1(e_H) = 0.75; c_2(e_L) = 0 and c_2(e_H) = 0.01. Moreover, the reserve CE figures are W_1 = 0.50, W_2 = 1.5; and the CARA levels are R = 2, r_i = 10 for i = 1, 2. The optimal incentive contracts are as follows. For (e_H, e_H), the optimal contract is (γ_1, γ_2, ρ_1, ρ_2) = (0.26411, 0.26411, -6.32453, -6.06453), providing the principal with CE_p = 26.3199. For (e_L, e_L), the optimal contract becomes (γ_1, γ_2, ρ_1, ρ_2) = (0.142858, 0.142858, -3.40682, -2.40682), providing the principal with CE_p = 25.3476. Similarly, the optimal strongly collusion-proof contracts are: for (e_H, e_H), the solution is (γ_1, γ_2, ρ_1, ρ_2) = (0.332674, 0.332674, -8.17687, -7.91687), providing the principal with CE_p = 26.0213; for (e_L, e_L), the optimal contract becomes (γ_1, γ_2, ρ_1, ρ_2) = (0, 0, 0.50, 1.50), providing the principal with CE_p = 24.9667. Therefore, strong collusion decreases the optimal CE of the principal from 26.3199 to 26.0213, even though the associated effort profile remains the same (see footnote 24).

E. Example 5

This example shows that incentive contracts may provide the principal strictly higher returns than team contracts. Let E_1 = {e_L, e_M, e_H} and E_2 = {e_L, e_H}, and let the mean and variance figures be given by (31, 1/2), (29, 1), (30, 1), (31, 1.1), (26.5, 1/2) and (28.3, 3), corresponding to the effort profiles (e_H, e_H), (e_H, e_L), (e_M, e_H), (e_M, e_L), (e_L, e_H) and (e_L, e_L), respectively. The costs of effort are c_1(e_L) = 0, c_1(e_M) = 0.1, c_1(e_H) = 0.75; c_2(e_L) = 0 and c_2(e_H) = 0.15. Moreover, the reserve CE figures are W_1 = W_2 = 1; and the CARA levels are R = 6.75, r_i = 10 for i = 1, 2.
The optimal incentive contracts implementing e ∈ E are given in Table A1.
Table A1. Optimal incentive contracts of Example 5.

(e_1, e_2)    γ_1          γ_2          ρ_1          ρ_2          CE_P
(e_H, e_H)    0.347723     0.347723     -8.72712     -9.32712     27.3389
(e_M, e_L)    0.287269     0.287269     -7.35146     -7.45146     27.3202
(e_L, e_L)    0.0331666    0.0331666    0.0778859    0.0778859    17.4407
Table A2. Optimal team contracts of Example 5.

(e_1, e_2)    γ_1          γ_2          ρ_1          ρ_2          CE_P
(e_H, e_H)    0.365148     0.365148     -9.23627     -9.83627     27.3106
(e_M, e_L)    0.287234     0.287234     -7.35049     -7.45049     27.3202
(e_L, e_L)    0.0174474    0.0174474    0.510804     0.510804     16.8602
Therefore, with incentive contracts, the optimal effort profile is (e_H, e_H), delivering a CE of 27.3389 to the principal (see footnote 25). The optimal team contracts implementing e ∈ E are presented in Table A2. Hence, with team contracts, the optimal effort profile is (e_M, e_L), delivering a CE of 27.3202 to the principal (see footnote 26).
It is important to point out that, in this example, incentive contracts outperform team contracts: 27.3389 versus 27.3202. Moreover, the associated effort choices are different, as well: (e_H, e_H) versus (e_M, e_L). Another observation to stress is that, due to Proposition 1 and Lemmas 1 and 2, we know that this incentive contract is not strongly collusion-proof. Thus, strong collusion will be binding and result in a payoff that is less than or equal to that of the team contracts.
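Consistent with that observation, one can verify numerically that the (e_H, e_H) incentive contract of Table A1 violates (CC_1); a minimal check (our own, using the primitives of Example 5):

# Check (CC_1) for the optimal incentive contract of Example 5 at the joint
# deviation e' = (e_M, e_L): the deviation raises agent 1's CE, so (CC_1) fails.
gamma_1, r_1 = 0.347723, 10.0
mu_e, var_e  = 31.0, 0.5     # returns at e  = (e_H, e_H)
mu_d, var_d  = 31.0, 1.1     # returns at e' = (e_M, e_L)
c_H, c_M     = 0.75, 0.1     # agent 1's effort costs

lhs = gamma_1*(mu_e - mu_d) - 0.5*r_1*gamma_1**2*(var_e - var_d)
rhs = c_H - c_M
print(lhs, ">=", rhs, "?", lhs >= rhs)   # lhs ~ 0.3627 < 0.65, so (CC_1) is violated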
  • 1. Agents jointly control the two moments in unrestricted ways; yet, an interesting case arises when the effort choices of one agent, the sales manager, only increase the mean and those of the other, the finance manager, only decrease the variance.
  • 2.A large body of literature, including [3,4,5,6,7,8,9,10,11,12], among many others, has emphasized the significance of collusion in hidden-action models.
  • 3.Some of the significant studies on teamwork include [1,2,13,14,15,16]. It is indicated that agents’ sharing information unobservable to the principal is necessary for them to benefit from teamwork. This is because when only returns, and not chosen effort levels, are contractible, the principal can offer such contracts with efficient risk sharing herself.
  • 4.In [1], agents are engaged in two tasks by providing inputs to both. A performance measure is observed for each activity that depends on the input profile chosen by agents combined with activity-specific and possibly correlated error terms. The principal pays each agent as a function of both performance measures. That study shows that team contracts are beneficial to the principal when compared with incentive contracts with technologically independent production (meaning that the production function of each task depends only on the input of one of the agents) and the sufficiently low correlation coefficient of the error terms. Hence, this result suggests that cooperation may be potentially harmful because of interactions in the production function and/or correlated error terms. In a similar model, [2] shows that when each one of the two agents governs only the mean of his process and his effort choice affects the mean of the other’s returns only through the noise term (both of which are stochastically independent), the principal benefits from the use of team contracts. In an extension, the principal can observe an aggregate output level, the distribution of which depends on the efforts of the two agents. It is found that the same result holds with identical agents. Other relevant papers include [17,18,19,20,21].
  • 5.The finite effort set is assumed to abstract from non-fruitful technicalities and to keep numerical programming simple.
  • 6.The pioneer work in establishing theoretical justifications for the use of linear contracts obtained from normally distributed returns and exponential utility functions is [22] and was generalized by [23,24]. They involve repeated settings with a single agent, and the lack of income effects due to exponential utility functions is employed to obtain the optimality of linearity: among optimal contracts, there is one that is linear in aggregate output. Thus, the situation, given by a complicated repeated agency setting, is as if the agent chooses the mean of a normal distribution only once, and the principal is restricted to employ linear sharing rules. Sung [25] generalizes this result by allowing the single agent to control the variance, as well. The authors of [26] consider the multi-agent version of this generalization with instantaneous efficient risk sharing and/or collusion possibilities and prove that the optimality of linearity continues to hold, in turn, justifying the analysis of the current study.
  • 7. Before and during the accept/reject decisions, we assume that agents cannot communicate with one another. Hence, an agent observes only the contract he is offered and the proposed effort vector. Communication and coordination among agents emerge after the acceptance decisions.
  • 8. Not allowing agents the ability to insure each other is rather restrictive, because it amounts to barring rational decision makers from using the legal system. Indeed, even under such a restriction, an alternative way of obtaining such insurance is as follows: suppose that markets are complete and agents have access to them. Then, there exist portfolios whose returns equal the returns that agents wish to obtain via the insurance contracts. Thus, agents may obtain the insurance they desire by trading these portfolios.
  • 9. The interiority of the solution to the labor union's problem is guaranteed, as the current setting satisfies Assumptions 1–3 of [27]. Moreover, we thank an anonymous referee for pointing out an alternative interpretation in which risk sharing can be seen as the equilibrium of an economy where each agent is endowed with a $(\rho_i, \gamma_i)$ pair determined by the principal's original contract. Agents trade these endowments, which results in the marginal rates of substitution being equalized across the agents.
  • 10. When the principal is risk averse with a CARA given by $1/2$, the optimal contract involves the same compensation scheme and the same effort profile as the one with a risk-neutral principal, $(\gamma_1^*, \gamma_2^*; \rho_1^*, \rho_2^*) = (0.20, 0.20; 0.40, -1.10)$; but it delivers a CE of $6.52$ to the risk-averse principal.
  • 11. To be precise, in that case $(IC_1)$ calls for $\gamma_1 \leq 0.20$ and $(IC_2)$ for $\gamma_2 \geq \sqrt{0.07} \approx 0.26458$. Finally, due to $(SC)$, $\gamma_1 = \gamma_2$, resulting in the constraint set being empty.
  • 12. The side-contract, $S$, is not incentive-compatible for the second agent, because $\gamma_2 = 0.20$ is strictly lower than $\sqrt{0.07} \approx 0.264575$. Moreover, in this example, $S$ does not hurt the principal. However, a risk-neutral principal (or a risk-averse principal with a sufficiently low CARA) may become strictly worse off by side-contracting when it involves an effort profile that results in a lower mean. In order for agents to benefit from such an arrangement, they should have sufficiently high CARA coefficients, and the effort profile agreed upon must result in a variance low enough to compensate for the decrease in total surplus due to the lower mean. To see this, reconsider Example 1 by changing only the mean associated with $(e_H, e_H)$ from $10$ to $9.98$: the optimal incentive contract remains the same and delivers a return of $6.70$ to the risk-neutral principal, and the same side-contract, $S$, is still strictly beneficial to both of the agents and brings about $CE_p(S) = 6.688 < 6.70 = CE_p(S^*)$.
  • 13. Our formulation is consistent with a model in which the principal hires agents at the beginning of a project with an infinite time horizon, using stationary contracts. Agents would interact in an infinitely repeated manner. The stage game would involve agents individually choosing efforts at the beginning of the period and getting paid at the end of the period according to the contract, which would be based on the project's daily and normally distributed returns. Agents would have discounted CARA utilities, and one would not need the assumption that agents observe others' effort choices; it would be enough that these choices become observable to others in the next period. (Due to the Folk Theorem of [28], even more elaborate stage games in which there is imperfect monitoring of others' previous choices can be incorporated.) There may be inefficient subgame perfect equilibrium payoffs, yet the efficient frontier can be obtained with subgame perfection; that frontier is the one we are interested in, and it can be sustained in our one-shot model with the use of binding side-contracts among agents. In such a setting, it is not clear whether or not an agent, a member of the labor union aiming to obtain an efficient payoff, would ever blow the whistle, even if the stage game were to be one with contracts and structures as in [6]. This is because, now, he could be credibly punished for being a “snitch”.
  • 14.The current analysis is appropriate when agents are engaged in a repeated interaction facing a sequence of different short-run principals who cannot observe the history of agents’ actions. As an example, consider a house-owner who wants a renovation job and hires two contractors. The value of the renovated house depends on the effort levels of the contractors, who are compensated based on the sale value of the house. The principal, while not observing the effort choices of the contractors, knows that they can perfectly observe each other’s effort choices and have means, not available to the short-run principal, to verify the effort choices. If there are strictly positive chances that the two contractors would work together in similar renovation works in the future, it is innocuous to assume that they can write binding side-contracts in the corresponding one-shot settings. Moreover, in such dynamic situations, an agent would require a considerable reward to “blow the whistle”.
  • 15. When $(e_H, e_L)$ and $(e_L, e_H)$ are to be implemented, the set of constraints is empty, i.e., they cannot be sustained with strong collusion. For $(e_H, e_L)$: $\gamma_1^2 \leq 0$, due to $(CC_1)$, considering deviations from $(e_H, e_L)$ to $(e_H, e_H)$, and $\gamma_1 \geq 0.20$ from $(CC_1)$, considering deviations from $(e_H, e_L)$ to $(e_L, e_L)$. For $(e_L, e_H)$: $5\gamma_2^2 \geq 0.35$, i.e., $\gamma_2 \geq \sqrt{0.07}$, due to $(CC_2)$, considering a deviation from $(e_L, e_H)$ to $(e_L, e_L)$, and $\gamma_2 \leq 0$ from $(CC_2)$, considering a deviation from $(e_L, e_H)$ to $(e_H, e_H)$.
  • 16. When the principal is risk-averse with a CARA of $1/2$, the optimal contract involves the same compensation scheme and the same effort profile as the one given for the risk-neutral principal, $(\gamma_1^*, \gamma_2^*; \rho_1^*, \rho_2^*) = (0.264575, 0.264575; -0.295751, -1.44575)$; but it delivers a return of $6.39458$ to the risk-averse principal. Recall that the optimal contract with $(IR_i)$, $(IC_i)$, $i = 1, 2$, and $(SC)$ constraints, $(S^*, e^*)$, is given by $e^* = (e_H, e_L)$ and $(\gamma_1^*, \gamma_2^*; \rho_1^*, \rho_2^*) = (0.20, 0.20; 0.40, -1.10)$, with a return of $6.52$ to the risk-averse principal with a CARA of $1/2$.
  • 17. If the principal were risk averse with a CARA of $1/2$, the optimal team contract, $(S_i^{***})_{i=1,2}$, would involve $e^{***} = (e_H, e_L)$ and $(\gamma_1^{***}, \gamma_2^{***}; \rho_1^{***}, \rho_2^{***}) = (0.10, 0.10; 1.10, -0.40)$, delivering the principal a CE of $6.98$, which is strictly higher than the $6.39458$ attained by the optimal strong collusion-proof contract, $S^{**}$, and higher than the $6.52$ provided by $S^*$, the optimal incentive contract. Hence, the conclusions of Example 1 can be sustained with a risk-averse principal.
  • 18. We thank an anonymous referee for suggesting the use of a modified version of our Example 1 in this regard: consider Example 1 with the sole modification of increasing the second agent's cost of high effort from $0.35$ to one. Then, the contract, $S^*$, with $(e_H, e_L)$ is a team contract (so a collusion-proof contract), yet $(CC_1)$ does not hold. In essence, agent 1 would obtain strictly higher returns from agent 2 choosing $e_H$, yet these do not suffice to cover agent 2's loss.
  • 19. Then, there exists an agent $i \in \{1, 2\}$ for whom $(CC_i)$ does not hold, i.e., $i$ contemplates a joint deviation. Yet, this joint deviation does not necessarily lead to the dismissal of the optimal team contract: the situation may be as in the example of footnote 18, where implementing the joint deviation that agent 1 contemplates is too costly. Thus, even though the optimal incentive contract may not be immune against strong collusion, it may still be the optimal (team) collusion-proof contract.
  • 20. When $(e_H, e_L)$ is to be implemented with incentive contracts, the $(IC_1)$ ensuring that agent 1 chooses $e_H$ instead of $e_M$ is $-2\gamma_1 + 5\gamma_1^2 \geq 0.50$ and holds only if $\gamma_1 \geq 0.57417$. However, due to $(SC)$, $\gamma_1 = \gamma_2$; so, $\gamma_1 + \gamma_2 > 1$, an impossibility. For $(e_M, e_L)$, the $(IC_1)$ ensuring that agent 1 chooses $e_M$ instead of $e_L$ is $2.70\gamma_1 - (10/3)\gamma_1^2 \geq 0.75$ and cannot be satisfied for any $\gamma_1 \in [0, 1/2]$. Finally, for $(e_L, e_H)$, the $(IC_2)$ making sure that agent 2 chooses $e_H$ instead of $e_L$ is $-1.80\gamma_2 + (25/6)\gamma_2^2 \geq 0.01$, which is satisfied for every $\gamma_2 \in [0.43749, 0.50]$. However, due to $(SC)$, $\gamma_1 = \gamma_2$, and thus, the $(IC_1)$ constraints guaranteeing that agent 1 does not choose $e_M$ or $e_H$ over $e_L$ (given by $-3.50\gamma_1 + (5/2)\gamma_1^2 \geq -0.75$ and $4.50\gamma_1 \leq 1.25$, respectively) cannot be satisfied. (The roots of these quadratics, and of the related thresholds quoted in notes 11, 15, 22 and 24, are worked out following these notes.)
  • 21. When the principal desires $(e_M, e_H)$ under strong collusion, the $(CC_2)$ guaranteeing that agent 2 prefers this to $(e_H, e_H)$ is $-\gamma_2 - (5/2)\gamma_2^2 \geq 0$; so, $\gamma_2 = 0$. However, due to $(SC)$, $\gamma_1 = \gamma_2$, and $\gamma_1 = 0$ is not compatible with $(CC_1)$, ensuring that agent 1 prefers $(e_M, e_H)$ to $(e_L, e_H)$, which is given by $3.50\gamma_1 - (5/2)\gamma_1^2 \geq 0.75$.
  • 22. $(e_H, e_L)$, $(e_M, e_L)$ and $(e_L, e_H)$ cannot be obtained with incentive contracts. First, note that due to $(SC)$, $\gamma_1 = \gamma_2$. For $(e_H, e_L)$, all of the (IC) constraints can be satisfied only if $\gamma_1 = \gamma_2 \geq 0.83599$, which is not feasible, because $\gamma_1 + \gamma_2$ cannot exceed one. When $(e_M, e_L)$ is to be sustained, the $(IC_1)$ that guarantees that agent 1 chooses $e_M$ over $e_L$ is $2.70\gamma_1 - (10/3)\gamma_1^2 \geq 0.75$, and it cannot be satisfied for any $\gamma_1 \in [0, 1/2]$. Finally, for $(e_L, e_H)$, the $(IC_2)$ implying that agent 2 prefers $e_H$ to $e_L$ is $-1.80\gamma_2 + (25/6)\gamma_2^2 \geq 0.01$, which requires $\gamma_2$ to be greater than or equal to $0.43749$. However, then, the $(IC_1)$ constraints (which are given by $-3.5\gamma_1 + (5/2)\gamma_1^2 \geq -0.75$ and $4.50\gamma_1 \leq 1.75$) cannot be satisfied for $\gamma_1 = \gamma_2 \in [0.43749, 0.50]$.
  • 23. The reason is the very same as that given in footnote 21.
  • 24. In this case, $(e_H, e_L)$ and $(e_L, e_H)$ cannot be implemented. For $(e_H, e_L)$, $(IC_1)$ implies $2.70\gamma_1 - (10/3)\gamma_1^2 \geq 0.75$, which cannot be satisfied for any $\gamma_1 \in [0, 0.50]$. Considering $(e_L, e_H)$, $(IC_1)$ is $-3.50\gamma_1 + (5/2)\gamma_1^2 \geq -0.75$ and holds only if $\gamma_1 \leq 0.26411$. Additionally, $(IC_2)$ is $-1.8\gamma_2 + (25/6)\gamma_2^2 \geq 0.01$ and holds whenever $\gamma_2 \geq 0.43749$. Thus, since $\gamma_1 = \gamma_2$ by $(SC)$, they cannot be satisfied simultaneously.
  • 25. $(e_H, e_L)$, $(e_M, e_H)$ and $(e_L, e_H)$ cannot be obtained with incentive contracts. First, note that due to $(SC)$, $\gamma_1 = \gamma_2 = \gamma$. For $(e_H, e_L)$, the incentive constraint of the first agent that prevents deviations from $e_H$ to $e_M$ cannot be satisfied, because it takes the form $\gamma(29 - 31) - (10/2)\gamma^2(1 - 1.1) - 0.75 + 0.1 \geq 0$. For $(e_M, e_H)$, the incentive constraint of the second agent preventing his deviation from $e_H$ to $e_L$ cannot be satisfied, as it takes the form $\gamma(30 - 31) - (10/2)\gamma^2(1 - 1.1) - (0.15 - 0) \geq 0$. Furthermore, $(e_L, e_H)$ cannot be obtained with incentive contracts: the incentive constraint of the second agent ensuring that he chooses $e_H$ instead of $e_L$, $\gamma(26.5 - 28.3) - (10/2)\gamma^2(1/2 - 3) - 0.15 \geq 0$, and the incentive constraint of the first agent making sure that he chooses $e_L$ instead of $e_M$, $\gamma(26.5 - 30) - (10/2)\gamma^2(1/2 - 1) - (0 - 0.1) \geq 0$, are not compatible.
  • 26. $(e_H, e_L)$, $(e_M, e_H)$ and $(e_L, e_H)$ cannot be obtained with team contracts. Due to $(SC)$, $\gamma_1 = \gamma_2 = \gamma$. Then, considering $(e_H, e_L)$, we observe that the team constraint preventing deviations to $(e_M, e_L)$ takes the following form and cannot be satisfied: $2\gamma(29 - 31) - 10\gamma^2(1 - 1.1) - 0.75 + 0.1 \geq 0$. Next, for $(e_M, e_H)$, the team constraint preventing deviations to $(e_M, e_L)$ cannot hold, as it is given by $2\gamma(30 - 31) - 10\gamma^2(1 - 1.1) - (0.1 + 0.15) + 0.1 \geq 0$. Finally, $(e_L, e_H)$ cannot be obtained by team contracts either: the team constraint preventing deviations to $(e_M, e_L)$ cannot be satisfied and is given by $2\gamma(26.5 - 31) - 10\gamma^2(1/2 - 1.1) - (0 + 0.15) + (0.1 + 0) \geq 0$. (A numerical check of the infeasibility claims in this note and in note 25 is sketched following these notes.)
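As a reader's check of the thresholds quoted in notes 11, 15, 20, 22 and 24, and assuming the inequality directions as reconstructed above, the binding roots of the relevant quadratics (for $\gamma \in [0, 1/2]$) are:
\[
5\gamma^2 \geq 0.35 \iff \gamma \geq \sqrt{0.07} \approx 0.26458,
\]
\[
-2\gamma + 5\gamma^2 \geq 0.50 \iff \gamma \geq \frac{2 + \sqrt{4 + 10}}{10} \approx 0.57417,
\]
\[
-1.80\gamma + \tfrac{25}{6}\gamma^2 \geq 0.01 \iff \gamma \geq \frac{1.80 + \sqrt{3.24 + \tfrac{1}{6}}}{25/3} \approx 0.43749,
\]
\[
-3.50\gamma + \tfrac{5}{2}\gamma^2 \geq -0.75 \iff \gamma \leq \frac{3.50 - \sqrt{12.25 - 7.50}}{5} \approx 0.26411.
\]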
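The infeasibility claims in notes 25 and 26 can also be verified numerically. The following is a minimal sketch, not part of the original analysis: it takes the constraint left-hand sides exactly as written in those notes, imposes $\gamma_1 = \gamma_2 = \gamma \in [0, 1/2]$ as required by $(SC)$, and checks whether any $\gamma$ satisfies all of the listed constraints for each effort profile. The effort-profile labels are ours.

```python
import numpy as np

# Grid for the common share gamma_1 = gamma_2 = gamma, forced by (SC).
grid = np.linspace(0.0, 0.5, 100001)

# Constraint left-hand sides exactly as written in notes 25 (incentive
# contracts) and 26 (team contracts); each listed LHS must be >= 0 for the
# corresponding effort profile to be implementable.
cases = {
    "IC, (e_H, e_L)":   [lambda g: g*(29 - 31) - (10/2)*g**2*(1 - 1.1) - 0.75 + 0.1],
    "IC, (e_M, e_H)":   [lambda g: g*(30 - 31) - (10/2)*g**2*(1 - 1.1) - (0.15 - 0)],
    "IC, (e_L, e_H)":   [lambda g: g*(26.5 - 28.3) - (10/2)*g**2*(1/2 - 3) - 0.15,
                         lambda g: g*(26.5 - 30) - (10/2)*g**2*(1/2 - 1) - (0 - 0.1)],
    "Team, (e_H, e_L)": [lambda g: 2*g*(29 - 31) - 10*g**2*(1 - 1.1) - 0.75 + 0.1],
    "Team, (e_M, e_H)": [lambda g: 2*g*(30 - 31) - 10*g**2*(1 - 1.1) - (0.1 + 0.15) + 0.1],
    "Team, (e_L, e_H)": [lambda g: 2*g*(26.5 - 31) - 10*g**2*(1/2 - 1.1) - (0 + 0.15) + (0.1 + 0)],
}

for label, constraints in cases.items():
    # A profile is implementable only if some gamma satisfies every constraint.
    feasible = np.all([lhs(grid) >= 0 for lhs in constraints], axis=0)
    print(f"{label}: implementable for some gamma in [0, 1/2]? {feasible.any()}")
```

Every case prints False, in line with the claims in notes 25 and 26: the first two incentive constraints and all three team constraints are individually infeasible on $[0, 1/2]$, while the two incentive constraints for $(e_L, e_H)$ are individually satisfiable but mutually incompatible.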
