
Preliminary Investigations for Better Monitoring: Learning in Repeated Insurance Audits

Département d’Economie, École Polytechnique, Route de Saclay, Palaiseau 91128, France
* Author to whom correspondence should be addressed.
Risks 2018, 6(1), 15; https://doi.org/10.3390/risks6010015
Submission received: 5 February 2018 / Revised: 22 February 2018 / Accepted: 26 February 2018 / Published: 28 February 2018

Abstract

Audit mechanisms frequently take place in the context of repeated relationships between auditor and auditee. This paper focuses attention on the insurance fraud problem in a setting where insurers repeatedly verify claims satisfied by service providers (e.g., affiliated car repairers or members of managed care networks). We highlight a learning bias that leads insurers to over-audit service providers at the beginning of their relationship. The paper builds a bridge between the literature on optimal audit in insurance and the exploitation/exploration trade-off in multi-armed bandit problems.

1. Introduction

Claim fraud represents a serious threat to insurance markets: by artificially inflating the frequency and the cost of reported losses, defrauders drive up insurance premiums and contribute to jeopardizing the efficiency of risk-sharing mechanisms. Besides the free-riding problem it poses, large-scale fraud may even endanger the sustainability of the insurance markets that are prone to it.
Insurance claim fraud is sometimes referred to as a form of ex-post moral hazard in that it occurs after an (alleged) accident—for example when policyholders build up their claim or when they announce accidents that never actually happened. It essentially differs from ex-ante moral hazard through the associated timing (before/after the accident) and the modus operandi of addressing these inefficiencies: while principals tend to rely on a contract design approach to distort the agent’s incentives in an ex-ante context (without being able to monitor the agent’s effort), the ex-post situation is usually addressed through costly auditing in order to check what actually occurred.
The economic literature has mainly examined these issues through the lens of the costly state verification approach, whose foundations were laid by the seminal papers of Townsend (1979) and Gale and Hellwig (1985). Within this setting, it is assumed that the insurer can verify the true value of claims by incurring an audit cost.1 The audit may be either deterministic, random or guided by signals perceived by the insurer. In particular, Mookherjee and Png (1989) establish that random auditing dominates deterministic auditing, while Dionne et al. (2008) build a scoring methodology to show how audits are triggered by signals observed by the insurer. In one way or another, an optimal claim monitoring strategy achieves a trade-off between the additional costs of more frequent audits and the advantages of a more efficient fraud detection. The deterrence effect highlighted by Dionne et al. (2008) is an example of such an advantage: they consider a setting where more frequent audits reduce the frequency of fraud, and they show that some individually unprofitable audits should be performed because of this deterrence effect.
Audits may also play an important role in gathering evidence about the auditee (e.g., does he seem to have a penchant for dishonest behavior?), information that may be useful at later stages. Indeed, claimants (or service providers with whom they collude) may have some intrinsic and hard to observe characteristics that affect their propensity to defraud. Audits may help the insurer to mitigate this informational asymmetry about claimants’ type.2
The learning dimension is particularly relevant when it comes to repeated audits. Consider, for instance, the health insurance fraud case, where a third party is involved besides the insurer and the policyholders. Health service providers (doctors, opticians, pharmacists, etc.) play a central role since collusion between providers and policyholders is usually a necessary condition for fraud to take place. Furthermore, health care providers interact on a regular basis with the insurer, as they provide services to many policyholders and during several periods. Because of this repeated interaction, health insurers’ anti-fraud efforts often focus on service providers as much as on policyholders. The same logic applies to property insurance, when insurers interact with car repairers or construction companies, sometimes within a network of affiliated service providers.
The purpose of the present paper is to investigate how such repeated interactions affect the optimal audit strategy. We will show that the insurer may find it optimal to perform unprofitable audits because of a learning effect. In short, auditing is a way to gather information that can be used at later stages of the auditor–auditee interaction. This learning dimension may lead the insurer to perform audits beyond what would be optimal from a purely instantaneous standpoint.
To highlight this effect, we will rely on a simple model with two types of service providers (honest and dishonest) and where the likelihood of submitting an invalid claim is exogenous and type-dependent. Thus, we abstain from analyzing the strategic interaction between the insurer and policyholders. We focus attention on the role of service providers as mandatory intermediaries who certify claims, without examining the collusion process between providers and claimants.3 Dishonest providers may certify invalid claims on purpose, while honest providers may only do it unintentionally. The insurer has beliefs about the type of each service provider and his decision consists of choosing the probability with which he audits each provider over the course of two consecutive periods. Auditing a claim allows the insurer to discover whether it is valid or invalid, but, in the latter case, it does not reveal whether the misbehavior was intentional or not; it does, however, allow the insurer to update his belief about the type of the service provider.
We find that, in the first period, the insurer has an incentive to perform some unprofitable audits, in order to improve his information about the service providers’ type, and this additional information will allow him to more efficiently focus his auditing strategy at the second period. Ultimately, deviating from a strategy that would be guided by instantaneous expected gains proves to be profitable. It corresponds to insisting on preliminary investigations early in the relationship to better monitor the agents later on.
This conclusion reveals an exploration/exploitation dilemma analogous to the multi-armed bandit problem in machine learning.4 In this problem, a player repeatedly faces a slot machine with multiple arms. He must choose an arm at each period, each arm providing a random payoff with an imperfectly known stationary distribution. The player faces a trade-off between playing the most profitable arm according to early beliefs and playing different arms in order to refine his beliefs. Exploring new arms induces an opportunity cost of not exploiting the arm that is the most profitable according to the current information, but it may allow the player to discover that other arms are in fact more profitable. Similarly, in our model, by trading current revenue for information, the better informed insurer gets a higher future payoff that compensates the initial loss.
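To make the analogy concrete, the following minimal sketch simulates an ε-greedy bandit player who occasionally explores instead of exploiting the currently best-looking arm; the arm payoff probabilities, the value of ε and the horizon are illustrative assumptions, not quantities taken from our model.

```python
import random

def epsilon_greedy(true_means, n_rounds=10_000, eps=0.1, seed=0):
    """Minimal epsilon-greedy bandit: explore with probability eps, else exploit."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # number of pulls per arm
    estimates = [0.0] * n_arms   # running average payoff per arm
    total = 0.0
    for _ in range(n_rounds):
        if rng.random() < eps:                                   # explore
            arm = rng.randrange(n_arms)
        else:                                                    # exploit current estimates
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0  # Bernoulli payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return estimates, total

# Assumed arm payoff probabilities; the player does not know them.
print(epsilon_greedy([0.2, 0.5, 0.7]))
```

With ε = 0, this player sticks to whichever arm looked best first and may never discover the truly best one; a small amount of exploration sacrifices current payoff for information, which is the same trade-off the insurer faces below.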
The rest of the paper is organized as follows. In Section 2, as a preliminary stage, we consider a single period model where audits are not repeated and we characterize the corresponding optimal auditing strategy. In Section 3, we extend our model to account for repeated audits. We exhibit the competing roles of auditing as a source of revenue and of information, and we define the insurer’s dynamic optimization problem, which is solved by backward induction. Hence, we start by characterizing how available information is used at the second period and, in a second stage, we deduce how the first period audit should be performed. We show that the learning effect leads the insurer to audit more at the beginning of the relationship, with the magnitude depending on the informativeness of the audit and on the degree of short-sightedness of the insurer. In Section 2 and Section 3, we restrict ourselves to a simple model where all claims have the same value. Section 4 extends our results to a more general setting with variable claim values. The final section concludes. Proofs are in Appendix A.

2. Single Period Auditing

2.1. Setting

Let us start by considering an insurer who interacts with a population of service providers (SPs) during a single period. SPs are mandatory intermediaries between insurer and insured. In particular, they certify the claims filed by policyholders, which means that they attest that the claims actually correspond to the value of the services paid by the policyholders following the event covered by the insurance policy. Each SP transmits exactly one claim with a value normalized to 1 to the insurer, with the claim being either valid or invalid. Invalid claims should not lead to insurance payments. They may be transmitted either in good faith (for instance because the SP makes an error due to imperfect information about the circumstances of the loss) or in bad faith with the intention of defrauding.
SPs are heterogeneous when it comes to their propensity to transmit invalid claims. There are many possible determinants of this propensity, such as a sense of moral values (negatively correlated with the propensity to defraud) or the ability to build complex defrauding schemes.5 Hereafter, we consider that each SP may be either honest (H) or dishonest (D). Honest SPs only transmit invalid claims by error (they are always in good faith), while dishonest SPs may transmit invalid claims either by error or intentionally (they may be in bad faith). Hence, a type H is less likely to transmit invalid claims than a type D. We capture this by defining the probabilities P(Inv|H) = p_H and P(Inv|D) = p_D with which type H and type D, respectively, submit an invalid claim, such that p_H < p_D.
There is a continuum of SPs with mass 1 and the insurer has an initial belief π ∈ [0, 1] for each SP that represents the a priori probability that the SP is of type D. The prior π is distributed in [0, 1] with density f(π) and cumulative distribution function (c.d.f.) F(π) in the population of SPs. While we consider this prior as given, we may consider that it has been induced by signals (including the outcome of audits) that have been previously perceived by the insurer about each SP. These beliefs may be biased or not among SPs, i.e., the expected value ∫_0^1 π f(π) dπ may or may not be equal to the true proportion of dishonest SPs in the continuum.
Each claim may be audited and, in that case, the insurer observes whether it is valid or invalid. The audit is costly and represents the fundamental constraint that the insurer faces when it comes to choosing audit targets. It costs c to investigate a claim, with p_H < c < p_D.
Claims found to be invalid are not paid, inducing a net proceed of 1 − c. No penalty is paid to the insurer by SPs whose invalid claims are detected by the audit. Hence, auditing a claim is profitable (in expected terms) only when it has been certified by a dishonest SP. From the insurer’s point of view, an SP with prior π transmits invalid claims with probability p̄(π) = p_D π + p_H (1 − π) and the corresponding expected net proceed of auditing is p̄(π) − c.
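For concreteness, the following short Python sketch computes these two quantities; the parameter values p_H, p_D and c are illustrative assumptions (chosen so that p_H < c < p_D), not values taken from any dataset.

```python
# Illustrative parameter values (assumed), consistent with p_H < c < p_D.
p_H, p_D, c = 0.1, 0.9, 0.5

def p_bar(pi):
    """Probability that an SP with belief pi transmits an invalid claim."""
    return p_D * pi + p_H * (1.0 - pi)

def expected_audit_proceed(pi):
    """Expected net proceed of auditing one claim of value 1: p_bar(pi) - c."""
    return p_bar(pi) - c

for pi in (0.1, 0.5, 0.9):
    print(pi, round(expected_audit_proceed(pi), 3))
```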

2.2. Auditing Strategy, Objective Function and Optimization Problem

For each SP, the insurer must decide whether an audit will be performed or not. We define an auditing strategy as a function x(·): [0, 1] → [0, 1] that assigns a probability x(π) of being audited to each SP with belief π. For a given auditing strategy, the net expected proceed of audits is written as
$$\Omega(x(\cdot)) = \int_0^1 \left[\bar p(\pi) - c\right] x(\pi)\, f(\pi)\, d\pi.$$
The optimization problem of the insurer is written as
$$\max_{x(\cdot)} \; \Omega(x(\cdot)) \quad \text{s.t.} \quad 0 \le x(\pi) \le 1 \;\; \forall \pi \in [0, 1].$$
Lemma 1.
The single period optimal auditing strategy x*(π) consists in auditing all claims transmitted by SPs with associated beliefs π ≥ π* and not auditing claims when π < π*, i.e.,
$$x^*(\pi) = \begin{cases} 1 & \text{if } \pi \in [\pi^*, 1] \\ 0 & \text{if } \pi \in [0, \pi^*) \end{cases} \;=\; \mathbf{1}_{\{\pi \ge \pi^*\}},$$
where the threshold π* is
$$\pi^* = \frac{c - p_H}{p_D - p_H}, \quad \text{with } \bar p(\pi^*) - c = 0.$$
Lemma 1 is unsurprising: one should only perform audits that are individually profitable, which amounts to focusing audits on SPs with π such that p̄(π) − c ≥ 0.
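As a quick numerical illustration of Lemma 1, the sketch below computes the threshold π* and the corresponding strategy; the parameter values are the same illustrative assumptions as above.

```python
def pi_star(p_H, p_D, c):
    """Lemma 1 threshold: the belief at which p_bar(pi) = c."""
    return (c - p_H) / (p_D - p_H)

def x_star(pi, p_H, p_D, c):
    """Optimal single-period strategy: audit with probability 1 iff pi >= pi_star."""
    return 1.0 if pi >= pi_star(p_H, p_D, c) else 0.0

print(pi_star(0.1, 0.9, 0.5))                                  # 0.5 with these assumed parameters
print(x_star(0.3, 0.1, 0.9, 0.5), x_star(0.7, 0.1, 0.9, 0.5))  # 0.0 1.0
```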

3. Two-Period Auditing: The Learning Effect

Because SPs take care of many policyholders, they repeatedly interact with the insurer. For the sake of simplicity, we assume that this interaction takes place during two consecutive periods i = 0, 1. There are different beliefs at the beginning of each period and the insurer’s strategy is based on these beliefs. From now on, variables of interest will be indexed by the corresponding periods (π_i, x_i, Ω_i), i ∈ {0, 1}. The insurer’s inter-temporal objective function depends on both period specific objective functions Ω_0 and Ω_1, the latter being weighted by γ > 0.6 His optimization problem is written as
$$\max_{x_0(\cdot),\, x_1(\cdot)} \; \Omega_0 + \gamma\, E_0[\Omega_1] \quad \text{s.t.} \quad 0 \le x_0(\pi_0) \le 1 \;\; \forall \pi_0 \in [0, 1], \qquad 0 \le x_1(\pi_1) \le 1 \;\; \forall \pi_1 \in [0, 1],$$
where E 0 corresponds to the expected value operator at the beginning of period 0, i.e., before performing audits during this period.

3.1. Auditing as a Source of Information

Period 0 audits allow the insurer to update his belief at the beginning of period 1. Depending on whether an audit has been performed and, if it has been, whether the claim was valid or invalid (Val and Inv, respectively), the updated beliefs π̃_1 are deduced from the initial beliefs π_0 through Bayes’ Law:
$$\tilde\pi_1 = \begin{cases} P(D \mid \text{audit}, Inv) = A(\pi_0) = \dfrac{p_D\, \pi_0}{\bar p(\pi_0)}, \\[4pt] P(D \mid \text{audit}, Val) = B(\pi_0) = \dfrac{(1 - p_D)\, \pi_0}{1 - \bar p(\pi_0)}, \\[4pt] P(D \mid \text{no audit}) = \pi_0, \end{cases}$$
with
$$B(\pi_0) < \pi_0 < A(\pi_0), \qquad A' > 0,\; A'' < 0,\; B' > 0,\; B'' > 0.$$
Hence, A ( π 0 ) and B ( π 0 ) are the probabilities that the SP is dishonest if a period 0 audit revealed that the claim was invalid or valid, respectively. In particular, an invalid claim detected by audit leads the insurer to increase his beliefs that the SP is dishonest, i.e., A ( π 0 ) > π 0 , and it is the other way around if an audit reveals that the claim was valid, i.e., B ( π 0 ) < π 0 . Of course, beliefs are unchanged if there is no audit.
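The sketch below implements these two updating functions; the parameter values p_H and p_D are again illustrative assumptions.

```python
def A(pi0, p_H=0.1, p_D=0.9):
    """Posterior belief P(D | audit, invalid claim)."""
    return p_D * pi0 / (p_D * pi0 + p_H * (1 - pi0))

def B(pi0, p_H=0.1, p_D=0.9):
    """Posterior belief P(D | audit, valid claim)."""
    return (1 - p_D) * pi0 / (1 - (p_D * pi0 + p_H * (1 - pi0)))

pi0 = 0.3
print(B(pi0), pi0, A(pi0))   # B(pi0) < pi0 < A(pi0)
```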
For illustrative purposes, Figure 1 and Figure 2 describe the degree of informativeness of an audit as a function of the parameters p_H and p_D. In Figure 1, the graphs of the functions A(·) and B(·) are symmetric on each side of the 45° line. This is due to the specific condition p_H + p_D = 1. Maintaining this assumption, Figure 2a shows that bringing p_H and p_D closer makes both learning curves less concave/convex and closer to the 45° line, underlining the fact that the audit is less informative in this case. Figure 2b,c illustrate the extreme cases where the audit is, respectively, totally informative (p_H = 0 and p_D = 1) and not informative at all (p_H = p_D, both types behave the same way). Relaxing the p_H + p_D = 1 assumption, Figure 2d–f exemplify the asymmetry of informativeness between invalidity and validity of a claim: in Figure 2d, both probabilities of defrauding are rather low, so stumbling upon a valid claim does not say much, while finding a claim to be invalid induces a stronger change in the belief. The opposite happens in Figure 2e, where both types defraud often, with the validity status becoming more informative.
This aspect of auditing suggests some influence of period 0 auditing outcomes on period 1 auditing decisions. The information revealed at period 0 may lead to an expected efficiency gain at period 1. To express this idea, let us denote by ω(π_1, x_1) = [p̄(π_1) − c] x_1 the expected gain of an audit performed at period 1 with probability x_1 under belief π_1.
Proposition 1.
The optimal period 1 audit strategy x_1*(·) is such that
$$E_0\!\left[\, \omega(\tilde\pi_1, x_1^*(\tilde\pi_1)) \mid \pi_0, x_0(\pi_0) \,\right] \;\ge\; \omega(\pi_0, x_1^*(\pi_0)) \qquad \forall \pi_0 \in [0, 1],$$
with a strict inequality if there exists π_1 ∈ [0, 1] such that x_1*(π_1) ≠ x_1*(π_0) and P(π̃_1 = π_1 | π_0) > 0.
Proposition 1 implies that period 0 auditing increases the insurer’s period 1 expected payoff if it affects the period 1 auditing strategy: its informational value translates into an increase in period 1 net proceeds besides its period 0 income maximization value.
Let π̃_1 be the updated belief: a random variable, defined by Equation (1), whose distribution depends on the initial belief π_0 and on the period 0 auditing probability x_0(π_0).

3.2. Inter-Temporal Optimization Problem

Period 0 expected auditing profit is written as
$$\Omega_0(x_0(\cdot)) = \int_0^1 \left[\bar p(\pi_0) - c\right] x_0(\pi_0)\, f_0(\pi_0)\, d\pi_0,$$
where f 0 ( π 0 ) is the density of prior beliefs.
The updating process corresponds to a mapping of period 0 beliefs into period 1 beliefs, thus changing the latter’s distribution depending on the chosen period 0 strategy. Let f_1(·) be the density of period 1 updated beliefs. It depends on the period 0 auditing strategy and is therefore written as f_1(π_1 | x_0(·)). Period 1 expected auditing profit is written as
$$\Omega_1(x_0(\cdot), x_1(\cdot)) = \int_0^1 \left[\bar p(\pi_1) - c\right] x_1(\pi_1)\, f_1(\pi_1 \mid x_0(\cdot))\, d\pi_1.$$
The inter-temporal optimal auditing strategy of the insurer can now be characterized by backward induction. At period 1, the insurer follows the optimal single period strategy x 1 * ( · ) characterized by Lemma 1. Hence, the optimal period 1 expected net proceed of auditing Ω 1 * ( x 0 ( · ) ) = Ω 1 ( x 0 ( · ) , x 1 * ( · ) ) is written as
$$\Omega_1^*(x_0(\cdot)) = \int_0^1 \left[\bar p(\pi_1) - c\right] \mathbf{1}_{\{\pi_1 \ge \pi^*\}}\, f_1(\pi_1 \mid x_0(\cdot))\, d\pi_1 = \int_{\pi^*}^1 \left[\bar p(\pi_1) - c\right] f_1(\pi_1 \mid x_0(\cdot))\, d\pi_1.$$
Given this period 1 optimal strategy, the period 0 optimal strategy x_0*(·) is an optimal solution to
$$\max_{x_0(\cdot)} \; \Omega_0(x_0(\cdot)) + \gamma\, E_0\!\left[\Omega_1^*(x_0(\cdot))\right] \quad \text{s.t.} \quad 0 \le x_0(\pi_0) \le 1 \;\; \forall \pi_0 \in [0, 1].$$

3.3. Effect of Period 0 Audit on the Audit Decision in Period 1

In order to show how the period 0 audit affects the decision to audit at period 1, let us define π_a and π_b by
$$A(\pi_a) = \pi^*, \quad \pi_a = \frac{\pi^* p_H}{\pi^* p_H + (1 - \pi^*)\, p_D}, \qquad B(\pi_b) = \pi^*, \quad \pi_b = \frac{\pi^* (1 - p_H)}{\pi^* (1 - p_H) + (1 - \pi^*)(1 - p_D)}.$$
One easily checks that
0 < π a < π * < π b < 1 .
We have A(π) ≥ π* if and only if π ≥ π_a, and B(π) ≥ π* if and only if π ≥ π_b. Hence, π_a is the lowest belief such that, if found invalid at period 0, the SP will be audited at period 1, and π_b is the highest belief such that, if found valid at period 0, the SP will not be audited at period 1.
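Under the same illustrative parameter values as before (an assumption, not part of the model), these thresholds can be computed directly:

```python
p_H, p_D, c = 0.1, 0.9, 0.5               # assumed illustrative parameters
pi_s = (c - p_H) / (p_D - p_H)            # single-period threshold pi*

# Lowest prior such that an invalid finding pushes the posterior up to pi*.
pi_a = pi_s * p_H / (pi_s * p_H + (1 - pi_s) * p_D)
# Threshold prior above which even a valid finding leaves the posterior at or above pi*.
pi_b = pi_s * (1 - p_H) / (pi_s * (1 - p_H) + (1 - pi_s) * (1 - p_D))

print(pi_a, pi_s, pi_b)                   # 0 < pi_a < pi* < pi_b < 1
```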
These thresholds lead us to Lemma 2 in which we express the probability of auditing at period 1 as a function of the period 0 belief and auditing outcomes.
Lemma 2.
The effect of period 0 audit on the audit decision x 1 at period 1 is characterized by Table 1.
Figure 3 and Figure 4 illustrate this relationship between the outcomes of the period 0 audit and the obtained posteriors.
Lemma 2 directly yields the probability of being audited at period 1 conditionally on π 0 and x 0 ( π 0 ) . This is written as
$$\begin{aligned}
P(\pi_1 \ge \pi^* \mid \pi_0 \in [0, \pi_a)) &= 0, \\
P(\pi_1 \ge \pi^* \mid \pi_0 \in [\pi_a, \pi^*)) &= \bar p(\pi_0)\, x_0(\pi_0), \\
P(\pi_1 \ge \pi^* \mid \pi_0 \in [\pi^*, \pi_b)) &= \bar p(\pi_0)\, x_0(\pi_0) + 1 - x_0(\pi_0), \\
P(\pi_1 \ge \pi^* \mid \pi_0 \in [\pi_b, 1]) &= \bar p(\pi_0)\, x_0(\pi_0) + 1 - x_0(\pi_0) + (1 - \bar p(\pi_0))\, x_0(\pi_0) = 1.
\end{aligned}$$
For instance, when π_0 ∈ [π*, π_b), there will be an audit at period 1 either if, at period 0, an audit revealed an invalid claim or if there was no audit, which occurs with probability p̄(π_0) x_0(π_0) and 1 − x_0(π_0), respectively. The other cases can be interpreted similarly.
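These conditional probabilities can be collected into a single piecewise function; a minimal sketch under the same assumed parameter values:

```python
def prob_audit_period1(pi0, x0, p_H=0.1, p_D=0.9, c=0.5):
    """P(pi_1 >= pi*) given the prior pi0 and the period-0 audit probability x0."""
    p_bar = p_D * pi0 + p_H * (1 - pi0)
    pi_s = (c - p_H) / (p_D - p_H)
    pi_a = pi_s * p_H / (pi_s * p_H + (1 - pi_s) * p_D)
    pi_b = pi_s * (1 - p_H) / (pi_s * (1 - p_H) + (1 - pi_s) * (1 - p_D))
    if pi0 < pi_a:
        return 0.0                        # never audited at period 1
    if pi0 < pi_s:
        return p_bar * x0                 # only an invalid finding triggers an audit
    if pi0 < pi_b:
        return p_bar * x0 + (1 - x0)      # invalid finding, or no period-0 audit
    return 1.0                            # audited at period 1 in every scenario

print(prob_audit_period1(0.45, 0.5), prob_audit_period1(0.55, 0.5))
```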
Lemma 3.
The inter-temporal objective function can be rewritten as
$$\Omega_0(x_0(\cdot)) + \gamma\, E_0\!\left[\Omega_1^*(x_0(\cdot))\right] = \int_0^1 \left[ C(\pi_0) + K(\pi_0)\, x_0(\pi_0) \right] f_0(\pi_0)\, d\pi_0,$$
where the functions C(·) and H(·) are defined in Table 2 and K(π_0) = p̄(π_0) − c + γ H(π_0). In particular, K(π_0) is a continuous piecewise linear function such that
$$\begin{aligned}
&K(\pi_0) < 0 \;\text{ if } \pi_0 \le \pi_a \quad \text{and} \quad K(\pi_0) > 0 \;\text{ if } \pi_0 > \pi^*, \\
&K(\pi_0) = \bar p(\pi_0) - c \;\text{ for } \pi_0 \in (0, \pi_a) \cup (\pi_b, 1), \\
&K(\pi_0) > \bar p(\pi_0) - c \;\text{ for } \pi_0 \in (\pi_a, \pi_b).
\end{aligned}$$
Lemma 3 decomposes the inter-temporal objective function into components that make explicit the impact of x 0 ( · ) on current and future audit proceeds. C ( π 0 ) represents the proceeds of period 1 audits when the SP is not audited at period 0, and thus this term is not affected by x 0 ( π 0 ) . Beliefs π 0 in [ 0 , π a ) and [ π a , π * ) are smaller than π * , and, thus, if the corresponding SPs are not audited at period 0, neither will they be at period 1. Hence, any benefit/loss from these beliefs is necessarily the consequence of being audited at period 0, which gives C ( π 0 ) = 0 . This is different for beliefs in [ π * , π b ) and [ π b , 1 ] because they correspond to cases where initial beliefs π 0 are higher than π * . Consequently, the SPs will be audited at period 1 and there are some proceeds from period 1 that do not depend on x 0 ( · ) , hence the presence of γ in C ( · ) .
K(π_0) represents the proceeds over the two periods that are affected by the period 0 audit. The term p̄(π_0) − c in K(π_0) represents the period 0 proceeds resulting from being audited at that period, while H(π_0) corresponds to the period 1 proceeds that are affected by period 0 audits through the belief updating process. For instance, an SP with π_0 ∈ [π_a, π*) will be audited at period 1 if an audit revealed an invalid claim at period 0 (because π_1 ≥ π* in that case) and the term p_D² π_0 + p_H² (1 − π_0) − p̄(π_0) c corresponds to the expected net proceeds. H(π_0) = 0 in [0, π_a) and [π_b, 1], although for different reasons: in [0, π_a), regardless of the outcome of the audit, updated beliefs will remain below π* and will never be audited at period 1, while, in [π_b, 1], whatever happens at period 0, all beliefs remain above π* and the claim will always be audited at period 1. The function K(π_0) is illustrated in Figure 5.
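The following sketch codes K(π_0) piecewise, using Table 2 and the same assumed parameter values; it is only meant to make the sign pattern of Lemma 3 easy to check numerically.

```python
def K(pi0, p_H=0.1, p_D=0.9, c=0.5, gamma=1.0):
    """Two-period proceeds affected by the period-0 audit decision (Lemma 3)."""
    p_bar = p_D * pi0 + p_H * (1 - pi0)
    pi_s = (c - p_H) / (p_D - p_H)
    pi_a = pi_s * p_H / (pi_s * p_H + (1 - pi_s) * p_D)
    pi_b = pi_s * (1 - p_H) / (pi_s * (1 - p_H) + (1 - pi_s) * (1 - p_D))
    # Expected period-1 proceeds unlocked by an invalid finding at period 0.
    gain_invalid = p_D**2 * pi0 + p_H**2 * (1 - pi0) - p_bar * c
    if pi0 < pi_a or pi0 >= pi_b:
        H = 0.0
    elif pi0 < pi_s:
        H = gain_invalid
    else:
        H = gain_invalid - (p_bar - c)
    return p_bar - c + gamma * H

for pi0 in (0.05, 0.3, 0.5, 0.7, 0.95):
    print(pi0, round(K(pi0), 4))   # negative below pi**, positive above
```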

3.4. Inter-Temporal Optimal Auditing Strategy

Proposition 2.
An optimal period 0 strategy x_0*(·) is characterized by a threshold π** ∈ (π_a, π*) such that
$$x_0^*(\pi_0) = 0 \;\text{ if } \pi_0 \in [0, \pi^{**}), \qquad x_0^*(\pi_0) \in [0, 1] \;\text{ if } \pi_0 = \pi^{**}, \qquad x_0^*(\pi_0) = 1 \;\text{ if } \pi_0 \in (\pi^{**}, 1].$$
The threshold π** is given by
$$\pi^{**} = \pi^*\, \frac{1 + \gamma p_H}{1 + \gamma p_H + \gamma (p_D - c)} < \pi^*,$$
and is such that
$$K(\pi^{**}) = 0 \qquad \text{and} \qquad \bar p(\pi_0) - c < 0 \;\; \forall \pi_0 \in [\pi^{**}, \pi^*).$$
Proposition 2 shows that accounting for the impact of period 0 auditing on period 1 posterior beliefs leads the insurer to audit a higher number of SPs at period 0 than in the instantaneous audit problem analyzed in Section 2. The belief threshold above which claims should be audited is now π** instead of π*, with π** < π*. An interesting aspect is that these additional auditees, with π_0 ∈ [π**, π*), are such that the corresponding individual expected net proceeds of audit are negative (Equation (2)): in other words, in spite of the negative impact on period 0 audit proceeds, the information gathered generates enough (discounted) profit at period 1 to compensate for this initial loss. Figure 6 illustrates this deviation from the single period myopic auditing and Figure 7 shows in orange the additional period 1 positive net proceeds γ H(π_0) f_0(π_0) dπ_0 that come from auditing down to π**.7
The extent of the informational value of auditing is illustrated by the comparative statics properties of π**. If γ = 0, i.e., if the insurer at time 0 does not care about period 1, then π** = π* since the informational value of auditing at t = 0 serves no purpose. If γ → +∞, i.e., if the insurer only cares about period 1 profit, then π** → π_a and he seeks to get the maximum information from period 0. Of course, there is no point in having π** lower than π_a as the additional information would not be useful at period 1. The new auditing limit, like the myopic one, also depends on c: if c = p_H, then π** = π* = 0 and, if c = p_D, then π** = π* = 1, as in both cases there is no more trade-off between information and revenue. Finally, if we write c = α p_D + (1 − α) p_H with α ∈ (0, 1), then π* = α and π** → π* when p_D → p_H. When p_D comes closer to p_H, the separating power of the audit decreases until it becomes uninformative, and, at the limit p_D = p_H, there is no more distinction between types D and H.
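A short numerical check of these comparative statics, again under assumed parameter values:

```python
def pi_double_star(p_H, p_D, c, gamma):
    """Period-0 audit threshold of Proposition 2."""
    pi_s = (c - p_H) / (p_D - p_H)
    return pi_s * (1 + gamma * p_H) / (1 + gamma * p_H + gamma * (p_D - c))

p_H, p_D, c = 0.1, 0.9, 0.5                             # assumed illustrative values
pi_s = (c - p_H) / (p_D - p_H)
pi_a = pi_s * p_H / (pi_s * p_H + (1 - pi_s) * p_D)     # limit reached as gamma grows large
for gamma in (0.0, 1.0, 10.0, 1000.0):
    print(gamma, round(pi_double_star(p_H, p_D, c, gamma), 4))
print("pi* =", pi_s, " pi_a =", pi_a)
```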

4. Variable Claim Value

A large part of the economic analysis of insurance fraud has focused attention on optimal auditing strategies when policyholders may file smaller or larger claims, and on the way the insurer’s audit strategy should depend on the size of the claim.8 Let us consider how this approach may be affected by the learning mechanism.

4.1. Deterministic Value

As a preliminary step, let us consider the case where all claims take some arbitrary value ℓ ∈ ℝ_+. This size is still fixed, but not necessarily equal to 1. The belief threshold π* now depends on ℓ and is defined by
$$\pi^*(\ell) = 1 \;\text{ if } \ell \le \frac{c}{p_D}, \qquad \bar p(\pi^*)\,\ell - c = 0 \;\text{ if } \frac{c}{p_D} < \ell < \frac{c}{p_H}, \qquad \pi^*(\ell) = 0 \;\text{ if } \ell \ge \frac{c}{p_H}.$$
Therefore,
$$\pi^*(\ell) = \max\left\{ 0,\; \min\left\{ 1,\; \frac{c/\ell - p_H}{p_D - p_H} \right\} \right\}.$$
Equivalently, we may define a threshold ℓ*(π) for the claim size:
$$\ell^*(\pi) = \frac{c}{p_H + (p_D - p_H)\,\pi} = \frac{c}{\bar p(\pi)} \qquad \forall \pi \in [0, 1],$$
and, for belief π, auditing is profitable if ℓ ≥ ℓ*(π).
A straightforward extension of the results of Section 3, with the same claim size ℓ at each period, shows that the first period optimal auditing threshold becomes
$$\pi^{**}(\ell) = \pi^*(\ell) \times \frac{1 + \gamma p_H}{1 + \gamma p_H + \gamma (p_D - c/\ell)}.$$
As π**(·) is strictly decreasing from (c/p_D, c/p_H) onto (0, 1), we can also define a function ℓ**(·): [0, 1] → [c/p_D, c/p_H] as
$$\ell^{**}(0) = \frac{c}{p_H}, \qquad \ell^{**}(\pi) = (\pi^{**})^{-1}(\pi) \;\text{ for } \pi \in (0, 1), \qquad \ell^{**}(1) = \frac{c}{p_D}.$$
In the two period setting, a claim certified by an SP with associated belief π_0 will be audited if ℓ ≥ ℓ**(π_0).
The set of claims Δ* for which an audit is profitable within a single period setting (with belief π and claim size ℓ) is defined by
$$\Delta^* = \{ (\pi, \ell) \mid \pi^*(\ell) \le \pi \le 1 \} = \{ (\pi, \ell) \mid \ell \ge \ell^*(\pi) \}.$$
In a two period setting, with claim size ℓ at both periods, claims should be audited at period 0 if (π_0, ℓ) ∈ Δ**, where
$$\Delta^{**} = \{ (\pi_0, \ell) \mid \pi^{**}(\ell) \le \pi_0 \le 1 \} = \{ (\pi_0, \ell) \mid \ell \ge \ell^{**}(\pi_0) \},$$
with Δ* ⊂ Δ**. These sets are illustrated in Figure 8.
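A brief numerical sketch of these two frontiers, under the same assumed parameter values (the formula for π**(ℓ) is used on the interval (c/p_D, c/p_H) where it applies):

```python
p_H, p_D, c, gamma = 0.1, 0.9, 0.5, 1.0        # assumed illustrative values

def pi_star_l(l):
    """Single-period threshold pi*(l) for a claim of value l."""
    return max(0.0, min(1.0, (c / l - p_H) / (p_D - p_H)))

def pi_double_star_l(l):
    """Period-0 threshold pi**(l), for c/p_D < l < c/p_H."""
    return pi_star_l(l) * (1 + gamma * p_H) / (1 + gamma * p_H + gamma * (p_D - c / l))

for l in (0.6, 1.0, 2.0, 4.0):                  # values inside (c/p_D, c/p_H) = (0.56, 5.0)
    print(l, round(pi_star_l(l), 3), round(pi_double_star_l(l), 3))
```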

4.2. Random Homogeneous Value

Let us move on now to the more interesting case where the size of the claims is a random variable drawn from a known distribution at the beginning of each period. Let ℓ̃_i denote this random variable, with density g_i(·) and c.d.f. G_i(·) on [0, L_i] for i = 0, 1. For simplicity, we assume that ℓ̃_i has the same probability distribution for both types of SPs, and thus ℓ̃_i and π_0 are independently distributed. The value of a claim is observed at each period before deciding to audit or not. Now, an audit strategy is written as x_i(π_i, ℓ_i) at period i = 0, 1.
Lemma 4.
The inter-temporal objective function can be written as
$$\Omega_0(x_0(\cdot,\cdot)) + \gamma\, E_0\!\left[\Omega_1^*(x_0(\cdot,\cdot))\right] = \int_{[0,1]\times[0,L_0]} \left[ C(\pi_0) + K(\pi_0, \ell_0)\, x_0(\pi_0, \ell_0) \right] g_0(\ell_0)\, f_0(\pi_0)\, d\ell_0\, d\pi_0,$$
where
$$K(\pi_0, \ell_0) = \bar p(\pi_0)\,\ell_0 - c + \gamma H(\pi_0), \qquad H(\pi_0) = \bar p(\pi_0)\,\Phi(A(\pi_0)) + (1 - \bar p(\pi_0))\,\Phi(B(\pi_0)) - \Phi(\pi_0), \qquad C(\pi_0) = \gamma\, \Phi(\pi_0),$$
with H(π_0) > 0 if π_0 ∈ (0, 1) and H(0) = H(1) = 0, and
$$\Phi(\pi) = \int_{\ell^*(\pi)}^{L_1} \left[ \bar p(\pi)\,\ell_1 - c \right] g_1(\ell_1)\, d\ell_1,$$
where ℓ*(π) = c/p̄(π).
Lemmas 3 and 4 are similar and can be interpreted the same way. In particular, the two terms in the integral of Equation (3) correspond to the parts of cumulated expected proceeds according to whether they are affected by period 0 audit or not.
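Since Φ(π) now plays the role that p̄(π) − c played in Section 3, a quick way to build intuition for it is a numerical sketch; here the claim-size distribution is assumed to be uniform on [0, L_1], which is purely an illustrative choice.

```python
# Numerical sketch of Phi under an ASSUMED uniform claim-size density on [0, L1].
p_H, p_D, c, L1 = 0.1, 0.9, 0.5, 2.0       # illustrative values

def p_bar(pi):
    return p_D * pi + p_H * (1 - pi)

def Phi(pi, n=50_000):
    """Expected period-1 net proceeds of auditing an SP with belief pi."""
    lo = min(c / p_bar(pi), L1)            # claims below l*(pi) = c / p_bar(pi) go unaudited
    step = (L1 - lo) / n
    # Midpoint rule for the integral of (p_bar(pi) * l - c) * g1(l), with g1(l) = 1 / L1.
    return sum((p_bar(pi) * (lo + (i + 0.5) * step) - c) * step / L1 for i in range(n))

for pi in (0.0, 0.3, 0.7, 1.0):
    print(pi, round(Phi(pi), 4))           # increasing in pi
```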
Proposition 3.
The optimal period 0 auditing strategy x_0*(π_0, ℓ_0) is such that
$$x_0^*(\pi_0, \ell_0) = 1 \;\text{ if } \pi_0 > \pi^{**}(\ell_0),$$
where π**(·): [0, L_0] → [0, 1], with π**(ℓ_0) < π*(ℓ_0) for all ℓ_0 ∈ [0, L_0].9
Proposition 3 extends Proposition 2 to the case of claims with variable size, and its interpretation is similar. In an instantaneous setting, where learning effects would be ignored, an audit should be performed if the belief π_0 is larger than π*(ℓ_0). This threshold is decreasing in ℓ_0 because the larger the claim, the larger the potential gain from auditing. When learning effects are taken into account, the threshold decreases. A claim (π_0, ℓ_0) should be audited at period 0 when π**(ℓ_0) < π_0 < π*(ℓ_0), although such an audit is not profitable in instantaneous expected terms.

5. Conclusions

This article aimed at characterizing the learning dimension of auditing when there is a repeated interaction between auditor and auditee. The insurance claim fraud problem with potentially dishonest service providers was an application example, but the same question arises in many other settings, such as tax audits and, more generally, the verification of compliance with the law. In our model, the insurer has imperfect information about the service providers’ type, and, as in the machine learning multi-armed bandit approach, he extends his audit activity beyond what the immediate short-term gain would dictate. Compared to a myopic strategy focused only on short term profit, the longsighted insurer faces an inter-temporal trade-off between the immediate gain from fraud detection and the future profit made possible by more intense auditing. This learning effect leads the longsighted insurer to increase his monitoring efforts and to put some individually unprofitable claims under scrutiny. This result remains valid when the setting is extended to a more general framework with claims of varying size: the learning effect shifts, in the belief-claim size space, the frontier beyond which an audit should be performed.
These results may be extended in many directions that would be worth exploring. Firstly, we have limited ourselves to a simple two-period model. Extending our analysis to an arbitrary number of periods would make it possible to take into account the exclusion and replacement of service providers when their dishonesty becomes very probable, and also to analyze the convergence features of our model when the number of periods is large. A parallel could also be drawn between our problem and the so-called greedy/ε-greedy strategies in bandit problems, where the latter outperform the former in the long run (see Chapter 2 in Sutton and Barto (1998)). Another interesting issue would be to consider the case where service providers are concerned by multiple claims at each period; the auditing strategy would then have to specify how many claims should be monitored. Audit costs may also reflect a potential source of heterogeneity between claims that could lead the insurer to abstain from auditing some claims with high audit costs. Furthermore, strategic defrauders could resort to the manipulation of auditing costs as a protective device against auditing (see Picard (2000)). Most importantly, our model postulates an exogenous fraud rate reflected in the frequency of invalid claims for honest and dishonest service providers. Endogenizing the frequency of invalid claims would be of particular interest, in order to study the strategic interaction between insurers, service providers and claimants in a setting where learning and deterrence effects coexist. Finally, the paper highlights the relevance of more intense investigations during the first stages of repeated interactions between principal and agent, and this conclusion is far more general than the insurance fraud problem. Analyzing whether it fits with actual monitoring processes is an empirical extension that would be worth exploring, in order to better understand the behavior of insurers facing claim fraud as well as monitoring behavior in other contexts.

Acknowledgments

The research of Reda Aboutajdine has been supported by a doctoral fellowship financed by IBM France.

Author Contributions

Reda Aboutajdine and Pierre Picard equally conceived and realized the research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Lemma 1.
Let us write the expected profit from the auditing strategy x(·) at a single period:
$$\Omega = \int_0^1 \left[\bar p(\pi) - c\right] x(\pi)\, f(\pi)\, d\pi.$$
The corresponding problem has a point-wise maximization structure, therefore
$$x(\pi) = 1 \;\text{ if } \pi > \pi^*, \qquad x(\pi) \in [0, 1] \;\text{ if } \pi = \pi^*, \qquad x(\pi) = 0 \;\text{ if } \pi < \pi^*,$$
where
$$\pi^* = \frac{c - p_H}{p_D - p_H}.$$
 ☐
Proof of Proposition 1.
Let π̃_1(π_0) be the period 1 belief as a function of the prior belief π_0. It is a random variable defined by
$$\tilde\pi_1(\pi_0) = \begin{cases} \pi_1^a = A(\pi_0) & \text{with probability } q_a = x_0(\pi_0)\, \bar p(\pi_0), \\ \pi_1^b = B(\pi_0) & \text{with probability } q_b = x_0(\pi_0)\,(1 - \bar p(\pi_0)), \\ \pi_1^c = \pi_0 & \text{with probability } q_c = 1 - x_0(\pi_0), \end{cases}$$
with E_0[π̃_1(π_0)] = q_a π_1^a + q_b π_1^b + q_c π_1^c = π_0.
By a point-wise maximization argument, for all π_1, the optimal auditing strategy x_1*(π_1) maximizes
$$x_1\, [\bar p(\pi_1) - c] \quad \text{s.t.} \quad 0 \le x_1 \le 1,$$
and thus we have
$$x_1^*(\pi_1^i)\, [\bar p(\pi_1^i) - c] \;\ge\; x_1^*(\pi_0)\, [\bar p(\pi_1^i) - c] \qquad \text{for } i = a, b, c.$$
Therefore,
$$\begin{aligned}
E_0\!\left[\omega(\tilde\pi_1, x_1^*(\cdot)) \mid \pi_0\right] &= \sum_{i=a,b,c} q_i\, x_1^*(\pi_1^i)\,[\bar p(\pi_1^i) - c] \;\ge\; x_1^*(\pi_0) \sum_{i=a,b,c} q_i\,[\bar p(\pi_1^i) - c] \\
&= x_1^*(\pi_0) \left[ \bar p\!\left( \sum_{i=a,b,c} q_i\, \pi_1^i \right) - c \right] = x_1^*(\pi_0)\,[\bar p(\pi_0) - c] = \omega(\pi_0, x_1^*(\pi_0)).
\end{aligned}$$
This inequality is strict if x 1 * ( π 1 i ) x 1 * ( π 0 ) for some i = a and/or i = b . ☐
Proof of Lemma 2.
We have π_1 ∈ {B(π_0), π_0, A(π_0)} depending on the period 0 scenario.
  • If 0 ≤ π_0 < π_a:
    (a) In all cases: B(π_0) ≤ π_0 ≤ A(π_0) < A(π_a) = π*, hence π_1 < π*.
  • If π_a ≤ π_0 < π*:
    (a) Invalid claim: π_1 = A(π_0) ≥ A(π_a) = π*, hence π_1 ≥ π*.
    (b) Valid claim or no audit:
      • π_1 = B(π_0) < π_0 < π*, hence π_1 < π*.
      • π_1 = π_0 < π*, hence π_1 < π*.
  • If π* ≤ π_0 < π_b:
    (a) Invalid claim or no audit:
      • π_1 = A(π_0) > π_0 ≥ π*, hence π_1 ≥ π*.
      • π_1 = π_0 ≥ π*, hence π_1 ≥ π*.
    (b) Valid claim: π_1 = B(π_0) < B(π_b) = π*, hence π_1 < π*.
  • If π_b ≤ π_0 ≤ 1:
    (a) In all cases: π* = B(π_b) ≤ B(π_0) ≤ π_0 ≤ A(π_0), hence π_1 ≥ π*.
The period 1 audit decision represented in Table 1 follows from the fact that there is an audit at period 1 if and only if π_1 ≥ π*. ☐
Proof of Lemma 3.
Let Π_0(π_0) be the expected inter-temporal net proceeds of auditing an SP of type π_0, and let π̃_1 be the (random) updated belief at period 1. We have
$$\Pi_0(\pi_0) = x_0(\pi_0)\,[\bar p(\pi_0) - c] + \gamma\, E_0\!\left[ x_1^*(\tilde\pi_1)\,(\bar p(\tilde\pi_1) - c) \mid \pi_0 \right] = x_0(\pi_0)\,[\bar p(\pi_0) - c] + \gamma\, E_0\!\left[ \mathbf{1}_{\{\tilde\pi_1 \ge \pi^*\}}\,(\bar p(\tilde\pi_1) - c) \mid \pi_0 \right].$$
If π_0 ∈ [0, π_a), we always have π̃_1 < π* and
$$\Pi_0(\pi_0) = x_0(\pi_0)\,[\bar p(\pi_0) - c].$$
If π_0 ∈ [π_a, π*), then π̃_1 ≥ π* if an audit reveals an invalid claim (i.e., π̃_1(π_0) = A(π_0)), which happens with probability p̄(π_0) x_0(π_0). Thus, in that case, we have
$$\begin{aligned}
\Pi_0(\pi_0) &= x_0(\pi_0)\,[\bar p(\pi_0) - c] + \gamma\, \bar p(\pi_0)\, x_0(\pi_0)\,[\bar p(A(\pi_0)) - c] \\
&= x_0(\pi_0)\,[\bar p(\pi_0) - c] + \gamma\, \bar p(\pi_0)\, x_0(\pi_0) \left[ \frac{p_D^2\, \pi_0 + p_H^2\,(1 - \pi_0)}{\bar p(\pi_0)} - c \right] \\
&= x_0(\pi_0)\,[\bar p(\pi_0) - c] + \gamma\, x_0(\pi_0) \left[ p_D^2\, \pi_0 + p_H^2\,(1 - \pi_0) - \bar p(\pi_0)\, c \right] \\
&= x_0(\pi_0) \left\{ \bar p(\pi_0) - c + \gamma \left[ p_D^2\, \pi_0 + p_H^2\,(1 - \pi_0) - \bar p(\pi_0)\, c \right] \right\}.
\end{aligned}$$
If π_0 ∈ [π*, π_b), then π̃_1 ≥ π* if an audit reveals an invalid claim (i.e., π̃_1(π_0) = A(π_0), with probability p̄(π_0) x_0(π_0)) or if there is no audit (i.e., π̃_1(π_0) = π_0, with probability 1 − x_0(π_0)). Hence,
$$\begin{aligned}
\Pi_0(\pi_0) &= x_0(\pi_0)\,[\bar p(\pi_0) - c] + \gamma \left\{ \bar p(\pi_0)\, x_0(\pi_0)\,[\bar p(A(\pi_0)) - c] + (1 - x_0(\pi_0))\,[\bar p(\pi_0) - c] \right\} \\
&= x_0(\pi_0)\,[\bar p(\pi_0) - c] + \gamma\,[\bar p(\pi_0) - c] + \gamma\, x_0(\pi_0) \left[ p_D^2\, \pi_0 + p_H^2\,(1 - \pi_0) - \bar p(\pi_0)\, c - \bar p(\pi_0) + c \right] \\
&= \gamma\,[\bar p(\pi_0) - c] + x_0(\pi_0) \left\{ \bar p(\pi_0) - c + \gamma \left[ p_D^2\, \pi_0 + p_H^2\,(1 - \pi_0) - \bar p(\pi_0)\, c - \bar p(\pi_0) + c \right] \right\}.
\end{aligned}$$
If π_0 ∈ [π_b, 1], then we always have π̃_1 ≥ π*, and thus
$$\Pi_0(\pi_0) = \gamma\,[\bar p(\pi_0) - c] + x_0(\pi_0)\,[\bar p(\pi_0) - c].$$
The expected net proceeds for an SP of type π_0 can therefore be written as
$$\Pi_0(\pi_0) = C(\pi_0) + x_0(\pi_0)\, K(\pi_0),$$
where
$$K(\pi_0) = \bar p(\pi_0) - c + \gamma H(\pi_0),$$
and the functions C(·) and H(·) are given in Table A1. We obtain
$$\Omega_0(x_0(\cdot)) + \gamma\, \Omega_1^*(x_0(\cdot)) = \int_0^1 \Pi_0(\pi_0)\, f_0(\pi_0)\, d\pi_0 = \int_0^1 \left[ C(\pi_0) + K(\pi_0)\, x_0(\pi_0) \right] f_0(\pi_0)\, d\pi_0.$$
The piecewise linearity of K(π_0) comes from the fact that p̄(π_0) is linear in π_0. In addition,
$$H(\pi_a) = p_D^2\, \pi_a + p_H^2\,(1 - \pi_a) - \bar p(\pi_a)\, c = \bar p(\pi_a)\,[\bar p(A(\pi_a)) - c] = \bar p(\pi_a)\,[\bar p(\pi^*) - c] = 0.$$
Thus,
$$K(\pi_a) = \bar p(\pi_a) - c = \lim_{\pi \to \pi_a^-} K(\pi).$$
Notice also that, since p̄(π*) − c = 0,
$$K(\pi^*) = \gamma \left\{ p_D^2\, \pi^* + p_H^2\,(1 - \pi^*) - \bar p(\pi^*)\, c - [\bar p(\pi^*) - c] \right\} = \gamma \left[ p_D^2\, \pi^* + p_H^2\,(1 - \pi^*) - \bar p(\pi^*)\, c \right] = \lim_{\pi \to \pi^{*-}} K(\pi).$$
Finally, by definition of π* = B(π_b), we have
$$c = \bar p(B(\pi_b)) = p_D\, \frac{(1 - p_D)\, \pi_b}{1 - \bar p(\pi_b)} + p_H \left[ 1 - \frac{(1 - p_D)\, \pi_b}{1 - \bar p(\pi_b)} \right],$$
and
$$p_D^2\, \pi_b + p_H^2\,(1 - \pi_b) - \bar p(\pi_b)\, c = \bar p(\pi_b) - c.$$
This implies
$$K(\pi_b) = \bar p(\pi_b) - c = \lim_{\pi \to \pi_b^-} K(\pi),$$
which proves that K(·) is continuous. Finally, from the definition of K(·) and since H(π_0) ≥ 0,
$$K(\pi_0) = \bar p(\pi_0) - c + \gamma H(\pi_0) \;\ge\; \bar p(\pi_0) - c \qquad \forall \pi_0 \in [0, 1].$$
We also have
$$\pi_0 > \pi^* \;\Longrightarrow\; K(\pi_0) \ge \bar p(\pi_0) - c > 0,$$
and
$$\pi_0 \le \pi_a \;\Longrightarrow\; K(\pi_0) = \bar p(\pi_0) - c < 0.$$
 ☐
Table A1. Definition of C(·) and H(·).

π_0          | C(π_0)            | H(π_0)
[0, π_a)     | 0                 | 0
[π_a, π*)    | 0                 | p_D² π_0 + p_H² (1 − π_0) − p̄(π_0) c
[π*, π_b)    | γ (p̄(π_0) − c)    | p_D² π_0 + p_H² (1 − π_0) − p̄(π_0) c − [p̄(π_0) − c]
[π_b, 1]     | γ (p̄(π_0) − c)    | 0
Proof of Proposition 2.
Point-wise maximization yields
$$x_0^*(\pi_0) = 0 \;\text{ if } K(\pi_0) < 0, \qquad x_0^*(\pi_0) \in [0, 1] \;\text{ if } K(\pi_0) = 0, \qquad x_0^*(\pi_0) = 1 \;\text{ if } K(\pi_0) > 0.$$
Therefore, the threshold π** ∈ [π_a, π*) satisfies K(π**) = 0, with
$$K(\pi^{**}) = \bar p(\pi^{**}) - c + \gamma H(\pi^{**}) = \left[ 1 + \gamma\,(p_D + p_H - c) \right] (p_D - p_H)\, \pi^{**} + (p_H - c)(1 + \gamma p_H).$$
Using K(π**) = 0 gives
$$\pi^{**} = \pi^*\, \frac{1 + \gamma p_H}{1 + \gamma p_H + \gamma (p_D - c)} < \pi^*.$$
 ☐
Proof of Lemma 4.
Let Δ = [0, 1] × [0, L_1]. The optimal strategy at period 1, x_1*(·,·): Δ → [0, 1], is such that
$$x_1^*(\pi_1, \ell_1) = \begin{cases} 1 & \text{if } (\pi_1, \ell_1) \in \Delta^*, \\ 0 & \text{otherwise}. \end{cases}$$
The associated period 1 objective function is
Ω 1 ( x 0 ( · , · ) , x 1 * ( · , · ) ) = Ω 1 * ( x 0 ( · , · ) ) ,
and thus the inter-temporal objective is written as a function of x 0 ( · , · )
Ω 0 ( x 0 ( · , · ) ) + γ E 0 Ω 1 * ( x 0 ( · , · ) ) .
Since the random variable ℓ̃_i is independent of the type, we may write
$$\Omega_0(x_0(\cdot,\cdot)) = \int_{[0,1]\times[0,L_0]} \left[ \bar p(\pi_0)\,\ell_0 - c \right] x_0(\pi_0, \ell_0)\, f_0(\pi_0)\, g_0(\ell_0)\, d\ell_0\, d\pi_0.$$
There is an audit at period 1 if (π_1, ℓ_1) ∈ Δ*, and thus we have
$$\begin{aligned}
\Omega_1^*(x_0(\cdot,\cdot)) &= \int_{(\pi_1, \ell_1)\in\Delta} \left[ \bar p(\pi_1)\,\ell_1 - c \right] x_1^*(\pi_1, \ell_1)\, g_1(\ell_1)\, f_1(\pi_1 \mid x_0(\cdot,\cdot))\, d\pi_1\, d\ell_1 \\
&= \int_{\pi_1\in[0,1]} \left[ \int_{\ell^*(\pi_1)}^{L_1} \left( \bar p(\pi_1)\,\ell_1 - c \right) g_1(\ell_1)\, d\ell_1 \right] f_1(\pi_1 \mid x_0(\cdot,\cdot))\, d\pi_1 \\
&= \int_{\pi_1\in[0,1]} \Phi(\pi_1)\, f_1(\pi_1 \mid x_0(\cdot,\cdot))\, d\pi_1,
\end{aligned}$$
where ℓ*(π_1) = c/p̄(π_1) and
$$\Phi(\pi_1) = \int_{\ell^*(\pi_1)}^{L_1} \left[ \bar p(\pi_1)\,\ell_1 - c \right] g_1(\ell_1)\, d\ell_1.$$
Note that Φ(π_1) is the expected net proceeds at period 1 of auditing an SP with belief π_1. Therefore, by analogy with Section 3 and using a point-wise maximization argument, the inter-temporal expected net proceeds of an SP characterized by (π_0, ℓ_0) are written as
$$\Pi_0(\pi_0, \ell_0) = x_0(\pi_0, \ell_0)\left[ \bar p(\pi_0)\,\ell_0 - c \right] + \gamma \left[ x_0(\pi_0, \ell_0)\, \bar p(\pi_0)\, \Phi(A(\pi_0)) + x_0(\pi_0, \ell_0)\,(1 - \bar p(\pi_0))\, \Phi(B(\pi_0)) + (1 - x_0(\pi_0, \ell_0))\, \Phi(\pi_0) \right].$$
Rearranging the terms in Π_0(π_0, ℓ_0) yields
$$\Pi_0(\pi_0, \ell_0) = C(\pi_0) + K(\pi_0, \ell_0)\, x_0(\pi_0, \ell_0),$$
where
$$K(\pi_0, \ell_0) = \bar p(\pi_0)\,\ell_0 - c + \gamma H(\pi_0), \qquad H(\pi_0) = \bar p(\pi_0)\,\Phi(A(\pi_0)) + (1 - \bar p(\pi_0))\,\Phi(B(\pi_0)) - \Phi(\pi_0), \qquad C(\pi_0) = \gamma\, \Phi(\pi_0).$$
Simple calculations give
$$\Phi'(\pi_1) = (p_D - p_H) \int_{c/\bar p(\pi_1)}^{L_1} \ell_1\, g_1(\ell_1)\, d\ell_1 > 0,$$
and
$$\Phi''(\pi_1) = (p_D - p_H)^2\, \frac{c^2}{\bar p(\pi_1)^3}\, g_1\!\left( \frac{c}{\bar p(\pi_1)} \right) > 0.$$
Thus, Φ(·) is increasing and convex. Using p̄(π_0) A(π_0) + (1 − p̄(π_0)) B(π_0) = π_0 gives H(π_0) > 0 if π_0 ∈ (0, 1). In addition, A(1) = B(1) = 1 and A(0) = B(0) = 0 imply H(0) = H(1) = 0. ☐
Proof of Proposition 3.
Lemma 4 shows that x 0 * ( π 0 , 0 ) = 1 if K ( π 0 , 0 ) > 0 and that x 0 * ( π 0 , 0 ) = 0 if K ( π 0 , 0 ) < 0 . Let π 0 ( π * ( 0 ) , 1 ) . Since H ( π 0 ) > 0 and p ¯ ( π 0 ) 0 c 0 , and H ( 1 ) = 0 and p ¯ ( 1 ) 0 c > 0 , we have
K ( π 0 , 0 ) > 0 .
This implies that Δ * is included in the optimal auditing set at period 0. In addition, K ( π * ( 0 ) , 0 ) > 0 implies, by continuity of K ( · ) , that there exists π * * ( 0 ) smaller than π * ( 0 ) such that
K ( π 0 , 0 ) > 0 π 0 ( π * * ( 0 ) , π * ( 0 ) ) .
We deduce that
x 0 * ( π 0 , l 0 ) = 1 if π 0 > π * * ( 0 ) .
 ☐

References

  1. Bergemann, Dirk, and Juuso Valimaki. 2006. Bandit Problems. Cowles Foundation Discussion Paper No. 1551. Available online: https://ssrn.com/abstract=877173 (accessed on 27 February 2018).
  2. Bourgeon, Jean-Marc, Pierre Picard, and Jerome Pouyet. 2008. Providers’ Affiliation, Insurance and Collusion. Journal of Banking & Finance 32: 170–86.
  3. Crocker, Keith J., and John Morgan. 1997. Is honesty the best policy? Curtailing insurance fraud through optimal incentive contracts. Journal of Political Economy 106: 355–75.
  4. Dionne, Georges, Florence Giuliano, and Pierre Picard. 2008. Optimal Auditing with Scoring: Theory and Application to Insurance Fraud. Management Science 55: 58–70.
  5. Gale, Douglas, and Martin Hellwig. 1985. Incentive-Compatible Debt Contracts: The One-Period Problem. Review of Economic Studies 52: 647–63.
  6. Mookherjee, Dilip, and Ivan Png. 1989. Optimal Auditing, Insurance, and Redistribution. The Quarterly Journal of Economics 104: 399–415.
  7. Picard, Pierre. 2000. On the Design of Optimal Insurance Policies Under Manipulation of Audit Cost. International Economic Review 41: 1049–71.
  8. Picard, Pierre. 2013. Economic Analysis of Insurance Fraud. In Handbook of Insurance. Edited by G. Dionne. Dordrecht: Springer, pp. 349–95.
  9. Sutton, Richard S., and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. Cambridge: MIT Press.
  10. Townsend, Robert. 1979. Optimal Contracts and Competitive Markets with Costly State Verification. Journal of Economic Theory 21: 265–93.
1. Crocker and Morgan (1997) develop the costly state falsification approach, where it is the defrauder who may incur some expenses to misrepresent her loss.
2. Dionne et al. (2008) introduce this hidden heterogeneity under the form of a cost reflecting the policyholder’s moral sense that affects the probability of defrauding. Still, this cost remains unobservable by the insurer and does not come into play to assess the probability of a claim being fraudulent.
3. Bourgeon et al. (2008) analyze the collusion between service providers and policyholders when insurers have networks of affiliated providers.
4. See Bergemann and Valimaki (2006).
5. While defrauding in plain sight may occur (hoping for inattention of the insurer), it usually takes some effort to construct a defrauding scheme. For example, some opticians may provide sunglasses to their clients, but certify that they have delivered regular glasses.
6. If γ ∈ (0, 1), it can simply be interpreted as a discount factor. Period 1 can also be viewed as the aggregation of all future proceeds, without restriction on the value of γ.
7. This result shows some similarity with the analysis of the deterrence effect by Dionne et al. (2008). They show that some claims should be audited although the corresponding expected gain is negative. This is due to the deterrence effect of auditing: more intense monitoring discourages fraud, and it should lead the insurer to audit below the individual claim profitability threshold.
8. See Picard (2013) for a survey.
9. Proposition 3 only states that π**(ℓ_0) < π*(ℓ_0) for all ℓ_0. Additional assumptions would allow us to show that K(π_0, ℓ_0) is monotonic in π_0, and thus that claims should not be audited when π_0 < π**(ℓ_0).
Figure 1. Updating priors according to the auditing status (p_D = 0.9, p_H = 0.1).
Figure 2. Updating functions for different parameters.
Figure 3. Period 0 priors and period 1 auditing.
Figure 4. Period 1 beliefs as a consequence of period 0 audit outcomes.
Figure 5. K(π_0) for (p_D = 0.9, p_H = 0.1, c = 0.5, γ = 1).
Figure 6. Learning vs. myopic.
Figure 7. x_0(π_0) K(π_0) for (p_D = 0.9, p_H = 0.1, c = 0.5, γ = 1).
Figure 8. Auditing thresholds π*(ℓ) and π**(ℓ) with a variable claim value ℓ.
Table 1. Period 1 audit decision x_1 as a function of period 0 beliefs and audit outcomes.

Period 0 outcome    | π_0 ∈ [0, π_a) | π_0 ∈ [π_a, π*) | π_0 ∈ [π*, π_b) | π_0 ∈ [π_b, 1]
Audit and Valid     | 0              | 0               | 0               | 1
No Audit            | 0              | 0               | 1               | 1
Audit and Invalid   | 0              | 1               | 1               | 1
Table 2. Definition of C(·) and H(·).

π_0          | C(π_0)            | H(π_0)
[0, π_a)     | 0                 | 0
[π_a, π*)    | 0                 | p_D² π_0 + p_H² (1 − π_0) − p̄(π_0) c
[π*, π_b)    | γ (p̄(π_0) − c)    | p_D² π_0 + p_H² (1 − π_0) − p̄(π_0) c − [p̄(π_0) − c]
[π_b, 1]     | γ (p̄(π_0) − c)    | 0
