Article

Treatment Level and Store Level Analyses of Healthcare Data

1 Department of Statistics and Probability, Michigan State University, C413 Wells Hall, 619 Red Cedar Rd, East Lansing, MI 48824, USA
2 Department of Mathematics, Michigan State University, C212 Wells Hall, 619 Red Cedar Rd, East Lansing, MI 48824, USA
3 exeResearch, LLC, 32 University Dr, East Lansing, MI 48823, USA
* Author to whom correspondence should be addressed.
Risks 2019, 7(2), 43; https://doi.org/10.3390/risks7020043
Submission received: 23 February 2019 / Revised: 3 April 2019 / Accepted: 13 April 2019 / Published: 17 April 2019
(This article belongs to the Special Issue Young Researchers in Insurance and Risk Management)

Abstract

The presented research discusses general approaches to analyze and model healthcare data at the treatment level and at the store level. The paper consists of two parts: (1) a general analysis method for store-level product sales of an organization and (2) a treatment-level analysis method of healthcare expenditures. In the first part, our goal is to develop a modeling framework to help understand the factors influencing the sales volume of stores maintained by a healthcare organization. In the second part of the paper, we demonstrate a treatment-level approach to modeling healthcare expenditures. In this part, we aim to improve the operational-level management of a healthcare provider by predicting the total cost of medical services. From this perspective, treatment-level analyses of medical expenditures may help provide a micro-level approach to predicting the total amount of expenditures for a healthcare provider. We present a model for analyzing a specific type of medical data, which may arise commonly in a healthcare provider’s standardized database. We do this by using an extension of the frequency-severity approach to modeling insurance expenditures from the actuarial science literature.

1. Introduction

Frequency-severity models are foundational to insurance claims modeling for actuarial applications. In a frequency-severity approach, the analyst utilizes a model for the frequency of insurance claims, as well as a model for their severity. By doing this, the modeler is able to understand the factors that influence both the frequency and the severity of insurance claims. This paper illustrates how the frequency-severity modeling approach can be extended so that it is capable of modeling healthcare expenditures at the treatment level. The idea is to consider not only the frequency of the patients, but also the frequency of the treatments incurred by each patient at each department of a healthcare network. This allows the analyst to understand the variation in the frequency of patients, the frequency of treatments at each department, and the severity of the expenditures via the coefficients estimated from the analysis.
In order to illustrate our approach, we first motivate the frequency-severity modeling approach from basic principles. Then, we explain how the framework can be extended to a treatment-level analysis. In this process, the generalized additive models (GAM) approach can be helpful for capturing non-linear relationships between the response variable and the explanatory variables, and we demonstrate how this can be done. We then utilize a synthetically-generated dataset to illustrate our approach. The synthetic data have been designed to mimic the treatment-level dataset of a generic healthcare provider with multiple departments in a network of clinics. Our approach can be helpful for analyzing datasets of a similar nature. In particular, we found it useful for analyzing the store-level data and treatment-level data obtained from a local non-profit healthcare organization.
The paper proceeds in the following order: In Section 2, a review of relevant literature is provided. Section 3 provides an overview of lognormal regression and how it can be applied to store-level data. Section 4 introduces the reader to GLMs and the motivation for their use on the store data at the county level. In our project, the discussed county-level data were obtained by merging the store data and the total annual payroll data at the county level using data from the U.S. Census Bureau. Section 5.1 describes in detail a treatment-level model that can be applied to generic medical treatment data. In Section 5.2, an overview of how the treatments can be categorized is provided. Section 6 explains how the treatment-level model can be extended using a GAM framework in order to capture the nonlinear relationship between the time variable and the response variable. Section 7 summarizes our findings for the treatment-level analysis, and Section 7.3 describes model validation approaches. Section 8 concludes the paper with final remarks and possible future work.

2. Literature

2.1. Frequency-Severity Modeling

The regression framework in this paper follows closely the frequency-severity approach in the actuarial science literature, where an overview of dependent frequency-severity modeling is provided by Frees et al. (2016). For an overview of regression modeling in the actuarial science context, the reader is referred to the Frees (2009) monograph Regression Modeling with Actuarial and Financial Applications. According to Frees and coworkers (Frees 2014; Frees et al. 2016), there are good reasons for modeling the frequency and the severity (cost) of claims separately. Insurers may impose coverage modifications, such as the application of deductibles, co-insurance, or policy limits on a per-occurrence basis. Only knowing each policy’s aggregate claim amount does not allow for the calculation of mean expenditures under per-occurrence-based coverage modifications. Meanwhile, covariates that aid the understanding of the outcomes can differ substantially between the frequency and the severity. In actuarial applications, different risk mitigation strategies may be suggested depending on the coefficients obtained for the frequency part and the severity part. In addition, the data files encountered in practice (analysis and modeling) often encourage developing independent frequency and severity models. Furthermore, in actuarial applications, it is important to consider the fact that regulators typically require the inclusion of the number of claims along with the amounts (costs) within reports. Finally, the use of a separate severity model allows complex features of the marginal models to be captured, such as the heavy-tailed nature of the severity distribution.
The recent trend in actuarial science research is to incorporate sophisticated marginal distributions into the frequency-severity framework, allowing more flexibility in the modeling. For an overview of heavy-tail data modeling, the reader is referred to works by Sun et al. (2008), Yang (2011), and Shi (2014). Focusing on the treatment frequency aspect, zero-inflated models have been utilized in insurance modeling, and the reader is directed to the excellent overview by Boucher (2014). The zero-inflated model has been extended to zero-one-inflated models by Frees et al. (2016). Such sophisticated marginal models allow for specific features of the data to be captured within the models.

2.2. Longitudinal Modeling

The response and explanatory variables observed in a medical treatment-level analysis may be longitudinal. This means observations occur and recur over time with a certain correlation structure among the observations. The reader is referred to Frees’ book—Longitudinal and Panel Data: Analysis and Applications in the Social Sciences (Frees 2004) for a primer on longitudinal data analysis. Recently, copula models have been used to model longitudinal data, and Ruscone and Osmetti (2016), Shi and Valdez (2014), and Shi (2012) have separately demonstrated the ability of copulas to model longitudinal data. The application of pair-copulas can help model complex dependence structures, as in the work of Smith et al. (2010).
The hierarchical modeling framework utilized in the treatment-level analysis of this article is related to the work of Frees and Valdez (2008). They demonstrated the ability to model insurance claims using a stochastic variable to represent the yearly number of claims, another stochastic variable to denote the claim type, and a third stochastic variable to stand for the claim amount. These variables are observed for each observational unit $\{it\}$, where $i$ corresponds to the risk class and $t$ represents the calendar year. In our research, time was represented by days, weeks, or months, depending on how the data were tabulated. Different tabulations may result in interesting discoveries regarding the clinics.

2.3. Medical Data Analysis

A detailed example of applying regression techniques to healthcare expenditure modeling is demonstrated in the work of Frees et al. (2011). In the health economics literature, Keeler and Rolph (1988) studied insurance claims from the RAND Health Insurance Experiment and approached the data analysis in a similar way as we present herein. Keeler and Rolph’s goal was to analyze the effects of health insurance plans on expenditures using a random coefficients model along with logarithmic expenditures and count distributions to model the episode frequencies. In a related paper by Rosenberg and Farrell (2008), a Bayesian approach was applied to model the inpatient utilization and expenditures at the individual level. The primary goal of these noted studies was the prediction of expenditures (costs). Our work also focuses on the prediction problem, yet we are also interested in the coefficient estimates of certain explanatory variables to understand their relationship with the response variables.

3. Lognormal Regression

3.1. Model

Imagine observing a certain response variable, which is defined on the domain $(0, \infty)$. Examples of such responses may be insurance loss amounts, medical expenditure amounts, or property and crop losses due to natural disasters. One way to model this type of response is to use a lognormal variable, because the lognormal distribution is defined over $(0, \infty)$. In this case, linear modeling is a powerful tool for understanding the relationship between a response variable $\ln Y$ and the explanatory (independent) variables $X_1, X_2, \ldots, X_p$. If observations $\ln y_1, \ldots, \ln y_n$ are obtained from random realizations of the variable $\ln Y$, along with the corresponding explanatory variables $x_{11}, x_{21}, \ldots, x_{n1}$ for $X_1$, and $x_{12}, x_{22}, \ldots, x_{n2}$ for $X_2$, all the way up to $x_{1p}, x_{2p}, \ldots, x_{np}$ for $X_p$, then we use matrix algebra to express the relationship between the responses and the explanatory variables by:
$$\ln\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1p} \\ 1 & x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & x_{n2} & \cdots & x_{np} \end{pmatrix}\begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{pmatrix} + \begin{pmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{pmatrix}$$
or in other words:
$$\ln \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}$$
Different assumptions on ϵ result in different models. The standard linear regression model assumes each component of ϵ is normally distributed or, equivalently, each component of ln y is normally distributed. In this case, assuming the components of ln y are independent, we obtain:
$$\ln y_i \sim N\!\left(\mathbf{x}_i^T\boldsymbol{\beta},\, \sigma^2\right),$$
where $\mathbf{x}_i^T = (x_{i1}, x_{i2}, \ldots, x_{ip})$, and $\boldsymbol{\beta}$ and $\sigma$ are parameters to be estimated. Once normality is assumed, linear regression can be performed to estimate the parameters $\boldsymbol{\beta}$ and $\sigma$. The parameters may be estimated using various approaches, but maximum likelihood estimation (MLE) is considered a standard approach. Intuitively, the probabilities for observing each data point are:
$$f(y_i, \mathbf{x}_i) = \frac{1}{y_i\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{1}{2\sigma^2}\left(\ln y_i - \mathbf{x}_i^T\boldsymbol{\beta}\right)^2\right),$$
and multiplied out to form:
$$L = \prod_{i=1}^{n} f(y_i, \mathbf{x}_i) = \frac{1}{y_1\cdots y_n\,(2\pi\sigma^2)^{n/2}}\exp\!\left(-\frac{1}{2\sigma^2}\left(\ln\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\right)^T\left(\ln\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\right)\right).$$
The logarithm of the expression above is called the log-likelihood. The log-likelihood ln L may be maximized by setting its derivative with respect to the parameter β to zero.
$$\frac{\partial}{\partial\boldsymbol{\beta}}\ln L = -\frac{1}{2\sigma^2}\,\frac{\partial}{\partial\boldsymbol{\beta}}\left(\ln\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\right)^T\left(\ln\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\right) = \mathbf{0}.$$
Solving the above system analytically is a standard exercise in vector calculus, and derivations are found in Generalized Linear Models with Applications in Engineering and the Sciences, by Myers et al. (2002). The result is:
$$\hat{\boldsymbol{\beta}} = \left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T(\ln\mathbf{y}),$$
which is a familiar expression for those who have a background in linear regression. The important point is that standard linear regression can be understood as a procedure where some log-likelihood function is maximized. Numerically minimizing the function $-\ln L$ with respect to the parameter $\boldsymbol{\beta}$ results in a nearly identical answer to the one found analytically. Notice that the mean of a lognormally-distributed random variable with parameters $\mu_i$ and $\sigma$ is:
$$E[y_i] = \exp\!\left(\mu_i + \frac{\sigma^2}{2}\right).$$
Hence, for predictive applications, we have:
$$\hat{y}_i = \exp\!\left(\mathbf{x}_i^T\hat{\boldsymbol{\beta}}\right)\cdot\exp\!\left(\frac{\hat{\sigma}^2}{2}\right),$$
which takes on values within $(0, \infty)$. Notice that the log transform of the response variable ensures that the resulting predictions are positive values.
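As a brief illustration of the procedure above, a lognormal regression can be fit with ordinary least squares on the log response and then mapped back to the original scale with the correction factor $\exp(\hat{\sigma}^2/2)$. The following is a minimal sketch in R using simulated data; the variable names are hypothetical and not taken from the project dataset.

```r
# Minimal sketch of lognormal regression (simulated data, hypothetical names)
set.seed(1)
n  <- 200
x1 <- runif(n, 1, 10)                          # a single explanatory variable
y  <- exp(0.5 + 0.3 * x1 + rnorm(n, 0, 0.4))   # lognormal response

fit      <- lm(log(y) ~ x1)                    # linear regression on ln(y)
beta.hat <- coef(fit)
sigma2   <- summary(fit)$sigma^2               # estimate of sigma^2

# Predicted mean on the original scale: exp(x'beta) * exp(sigma^2 / 2)
y.hat <- exp(fitted(fit)) * exp(sigma2 / 2)
head(y.hat)
```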

3.2. Specific Example

In this section, we present a model that can be used for the analysis of hypothetical store-level data of a healthcare provider. The total square footage of the stores and the population within certain miles of the stores are used as independent variables for the regression modeling. In this case, our model specification is:
$$E[y_i] = \exp\!\left(\beta_0 + \beta_1\ln x_{i,1} + \beta_2\ln x_{i,2}\right)\cdot\exp\!\left(\frac{\sigma^2}{2}\right),$$
where $y_i$ is store sales, $x_{i,1}$ is the total square feet of the $i$th observation, and $x_{i,2}$ is the population within certain miles of the $i$th observation. Collinearity may arise if the two variables $x_{i,1}$ and $x_{i,2}$ are highly correlated. In our analysis, due to the collinearity of these two variables, only one of the variables was significant at the 0.05 level (95% confidence). The adjusted $R^2$ of the model can be used to determine the fit of the model, where a high $R^2$ may indicate that a large portion of the variability in the response variable could be explained by the explanatory variables. The reason why the log amounts of the square feet and population are used instead of the observed values is that the coefficients have an elasticity interpretation when included as log amounts. We see that:
$$\frac{\partial E[y_i]}{\partial x_{i,1}} = \exp\!\left(\beta_0 + \beta_1\ln x_{i,1} + \beta_2\ln x_{i,2}\right)\cdot\exp\!\left(\frac{\sigma^2}{2}\right)\cdot\frac{\beta_1}{x_{i,1}} = E[y_i]\,\frac{\beta_1}{x_{i,1}}$$
Solving this for β 1 , we have:
$$\beta_1 = \frac{\partial E[y_i]/E[y_i]}{\partial x_{i,1}/x_{i,1}} = \frac{\%\Delta E[y_i]}{\%\Delta x_{i,1}} = \frac{\text{percent change in } E[y_i]}{\text{percent change in } x_{i,1}}$$
and hence, $\beta_1$ is the total square feet elasticity of the expected sales. A similar interpretation is possible for $\beta_2$, and hence, we use the log amounts of the explanatory variables along with a log-link. The analyst may utilize Q-Q (quantile-quantile) plots to assess the goodness of fit of the model. A Q-Q plot is a graphical approach to comparing two probability distributions by plotting their quantiles against each other. During our project, we also tried fitting the gamma model (explained later in the paper), and the Q-Q plot was not significantly different from the lognormal model for our particular dataset.
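A sketch of this specification on simulated data follows; the names sqft, pop, and sales are hypothetical placeholders for the store-level variables. The coefficient of log(sqft) is read as the square-footage elasticity of expected sales, and a Q-Q plot of the residuals gives a quick visual check of the lognormal assumption, with a gamma GLM as an alternative for comparison.

```r
# Store-level lognormal model with log covariates (simulated, hypothetical names)
set.seed(2)
n     <- 150
sqft  <- exp(rnorm(n, 8, 0.5))      # total square feet of each store
pop   <- exp(rnorm(n, 10, 0.7))     # population within a radius of each store
sales <- exp(1 + 0.6 * log(sqft) + 0.3 * log(pop) + rnorm(n, 0, 0.3))

fit.ln <- lm(log(sales) ~ log(sqft) + log(pop))
coef(fit.ln)["log(sqft)"]           # square-footage elasticity of expected sales

# Q-Q plot of the residuals to assess the lognormal fit
qqnorm(resid(fit.ln)); qqline(resid(fit.ln))

# Gamma GLM alternative with a log link, for comparison
fit.gamma <- glm(sales ~ log(sqft) + log(pop), family = Gamma(link = "log"))
```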

4. GLMs

4.1. Motivation

The model shown in Section 3.2 helps us tell an interesting story; however, it does not capture the correlation between annual payroll and the location of the stores. For this, we perform an analysis at the county level, meaning that the unit of analysis is a single lattice and the response variable is the number of stores observed in each area. For each lattice, we count the number of stores within the area and create a response variable using the counts. Along with the response variable, the total annual payroll for each county can be obtained by aggregating the payroll data obtained (downloaded) from the U.S. Census Bureau.
The problem with the lognormal approach to modeling the store counts at the county level is that the log response may no longer be assumed to have a normal distribution. Because the response is the number of stores in a county, it is no longer a real number in the range $(0, \infty)$, but instead a number in $\mathbb{Z}^{+} = \{0, 1, 2, \ldots\}$. We used generalized linear models (GLMs), where the regression technique is “generalized” to response variables that are non-normal. The technique was first introduced by Nelder and Wedderburn (1972), and today, there are numerous books covering the topic; the reader is directed to the works of Myers et al. (2002), Ohlsson and Johansson (2010), and Dobson (2008). Because of their inherent flexibility, GLMs have been the workhorse model within the insurance industry, where modeling non-normal responses such as insurance claim counts, claim indicators, or claim amounts is a major part of the actuarial analyst’s workflow.
Today, predictive analytics modeling has become a core component of the daily tasks performed by actuarial analysts. With the abundance of data and the availability of standardized statistical routines, actuaries are now able to analyze insurance claims’ data in a systematic way. The GLM framework has become a central component among the tools used by actuaries to analyze insurance claims’ data. Some advantages of GLM are as follows:
  • GLMs are able to model both the claim frequency and the claim severity in a unified language similar to that used in linear regression modeling.
  • Standardized routines are readily available for software packages such as R, SAS, SPSS, and JMP, allowing the analyst to avoid writing complex maximum likelihood code and scripts.
  • GLMs have flexibility in incorporating covariates into the modeling framework.
One of the main goals of predictive analytics modeling, in an actuarial context, is to provide a rating engine for an insurance provider. A rating engine is a way to express the relationship between explanatory variables and the response variable of interest. The coefficient estimates obtained from regression modeling can be used in the rating engine for this purpose. We believe the GLM framework is also useful for medical data analysis, as we will demonstrate.

4.2. Poisson Regression for Count Data

A GLM model can be understood as the application of the exponential family distributions to a regression problem. An exponential family distribution has the form:
$$f(y) = \exp\!\left(\frac{y\theta - b(\theta)}{a(\phi)} + c(y, \phi)\right)$$
Here, $\theta$ is the location parameter and $\phi$ is the dispersion parameter. To obtain the normal distribution from this, set $\theta = \mu$, $b(\theta) = \mu^2/2$, $a(\phi) = \phi = \sigma^2$, and:
$$c(y, \phi) = -\frac{1}{2}\left(\frac{y^2}{\sigma^2} + \ln(2\pi\sigma^2)\right).$$
Hence, the normal distribution is an exponential family distribution. Another member of the exponential family is the Poisson distribution, which can be obtained by setting $\theta = \ln\mu$, $b(\theta) = e^{\theta}$, and $c(y) = -\ln(y!)$. For the Poisson case, the probability function reduces to:
$$f(y) = \exp\!\left(y\ln\mu - \mu - \ln(y!)\right) = \frac{e^{-\mu}\mu^{y}}{y!}$$
The probability function of the Poisson model gives the probability of an observation within $\mathbb{Z}^{+}$. Hence, the Poisson distribution is used to model counts of events. Now, we informally define the GLM model. Assume the observations $y_1, y_2, \ldots, y_n$ are independent response variables following the exponential family distribution, with means $\mu_1, \mu_2, \ldots, \mu_n$, along with explanatory variables $x_{11}, x_{21}, \ldots, x_{n1}$ for the first explanatory variable, $x_{12}, x_{22}, \ldots, x_{n2}$ for the second explanatory variable, and so on. We use a link function to parameterize the mean of the response variables, so that:
$$E[y_i] = \mu_i = g^{-1}\!\left(\mathbf{x}_i^T\boldsymbol{\beta}\right)$$
where $g$ is a monotonic differentiable function. We use the log-link for the Poisson model, so that $g(\cdot) = \log(\cdot)$. Once the mean of the distribution is parameterized, we apply maximum likelihood estimation to obtain an estimate for the parameter $\boldsymbol{\beta}$. Specifically, the log-likelihood:
$$\ln L = \sum_{i=1}^{n}\left(\frac{y_i\theta_i - b(\theta_i)}{a(\phi)} + c(y_i, \phi)\right)$$
is maximized numerically. The important point here is that by parameterizing the mean of the response, maximizing the log-likelihood defined by the exponential family distribution turns into essentially a regression problem. Hence, using this approach, we were able to regress the count variable (number of stores in a county) onto a set of explanatory variables $\mathbf{x}_i$.
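A minimal sketch of such a county-level count regression in R follows; the data frame and the variable names (stores, payroll) are hypothetical placeholders for the merged store and Census payroll data, and the data are simulated.

```r
# Poisson regression of county store counts on log annual payroll
# (simulated data; 'stores' and 'payroll' are hypothetical placeholders)
set.seed(3)
counties <- data.frame(payroll = exp(rnorm(80, 12, 1)))
counties$stores <- rpois(80, lambda = exp(-4 + 0.4 * log(counties$payroll)))

fit.pois <- glm(stores ~ log(payroll), family = poisson(link = "log"),
                data = counties)
summary(fit.pois)
exp(coef(fit.pois))   # multiplicative effect on the expected count
```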

4.3. Other GLM Models

In the exponential family form above, setting $\theta = -1/\mu$, $a(\phi) = r^{-1}$, $b(\theta) = -\ln(-\theta)$, and $c(y, \phi) = r\ln r - \ln\Gamma(r) + (r-1)\ln y$, where $r$ is a shape parameter satisfying $\mu = r\lambda$ for a scale parameter $\lambda$, we have the gamma distribution:
$$f(y) = \frac{1}{\Gamma(r)}\left(\frac{1}{\lambda}\right)^{r} e^{-y/\lambda}\, y^{r-1}$$
The gamma distribution is often used with a log-link, so that:
$$E[y_i] = \mu_i = \exp\!\left(\mathbf{x}_i^T\boldsymbol{\beta}\right)$$
is the parameterization of the mean, resulting in a regression framework. The support of the gamma distribution is $(0, \infty)$, and hence, the gamma distribution may be used to replace the lognormal model to model insurance claim amounts, treatment amounts, and expenditure sizes. The advantage of the gamma distribution is its theoretical properties when it comes to the sum of independent identically-distributed gamma random variables. The Poisson sum of gamma random variables is called the Tweedie distribution in the literature; see Ohlsson and Johansson (2010). Frees and Lee (2016) have demonstrated the ability to model endorsement premiums via a Tweedie distribution.
Meanwhile, setting $\theta = \ln\!\left(\pi/(1-\pi)\right)$, $a(\phi) = 1$, $b(\theta) = n\ln(1 + e^{\theta})$, and $c(y, \phi) = \ln\binom{n}{y}$ results in the binomial distribution. The probability function for the binomial distribution with $n = 1$ is:
$$f(y) = \binom{n}{y}\pi^{y}(1-\pi)^{n-y} = \begin{cases}\pi & \text{for } y = 1\\ 1-\pi & \text{for } y = 0\end{cases}$$
The support of the binomial distribution is $\{0, 1\}$, meaning that each observation is either true or false. In a predictive application, one is often interested in calculating the probability of a true response; hence, we parameterize $\pi$ using a logit link function.
$$\mathbf{x}_i^T\boldsymbol{\beta} = \operatorname{logit}(\pi_i) = \ln\!\left(\frac{\pi_i}{1-\pi_i}\right) \quad\Longleftrightarrow\quad \pi_i = \operatorname{logit}^{-1}\!\left(\mathbf{x}_i^T\boldsymbol{\beta}\right) = \frac{\exp(\mathbf{x}_i^T\boldsymbol{\beta})}{1+\exp(\mathbf{x}_i^T\boldsymbol{\beta})}$$
Again, notice that we have formulated a regression framework for the case where the response variable $y_i$ takes on values within $\{0, 1\}$. Now, we have a number of tools in our toolbox: we are able to regress binary, count, and continuous response variables onto a set of explanatory variables using a unified regression framework. Next, let us understand the building blocks of a hierarchical model. The most basic hierarchical model is a frequency-severity modeling framework, as explained in Section 4.4.
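The unified framework is visible in R, where the same glm routine handles all three response types by changing the family argument. The sketch below uses simulated data and arbitrary parameter values purely for illustration.

```r
# One routine, three response types (simulated data, arbitrary parameters)
set.seed(4)
x <- rnorm(100)
y.count  <- rpois(100, exp(0.2 + 0.5 * x))                 # count response
y.amount <- rgamma(100, shape = 2, rate = 2 / exp(1 + x))  # positive continuous response
y.binary <- rbinom(100, 1, plogis(-0.5 + x))               # binary response

glm(y.count  ~ x, family = poisson(link = "log"))
glm(y.amount ~ x, family = Gamma(link = "log"))
glm(y.binary ~ x, family = binomial(link = "logit"))
```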

4.4. The Frequency-Severity Model

In the actuarial literature, it is common for an insurance claims model to be applied to a dataset, in which the following variables are observed:
  • $N$, the number of claims (events),
  • $Y_i$, $i = 1, \ldots, N$, the amount of each claim (expense).
In this case, the aggregate amount of the claims for the insurance company, or healthcare provider, becomes:
$$S = Y_1 + \cdots + Y_N$$
and if we assume independence between the frequency ($N$) and the severity ($Y_i$) of the claims, then we can model the two components separately and have:
$$E[S] = E[N]\cdot E[Y_i]$$
According to Frees et al. (2016), the aggregate claim amount S is an important feature for an insurer’s balance sheet because it is the amount paid on claims. The same may be true for a non-profit organization, which might interpret the healthcare expenditures of individuals as “claims”, and the total amount of such expenditures as the cost of operating for the healthcare provider. While the aggregate amount S is often of interest, for an actuarial application, there are benefits to modeling the frequency and severity of the losses separately. If the losses represent healthcare expenditures paid by a healthcare provider, then we may imagine the model as representing the following situation: an individual’s decision to use healthcare may vary (the frequency), and the individual’s cost (the severity) is likely related to the characteristics of the physician or the type of treatment (healthcare provider). Herein, we take the position that the joint modeling of frequency and severity of claims is the point of interest. The long history of studying frequency, severity, and the aggregate claim for independent and identically-distributed realizations of random variables is a cornerstone of actuarial science. For an introduction to actuarial science, we direct the reader to the introductory textbook—Loss Models: From Data to Decisions by Klugman et al. (2012).
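As a quick numerical check of the decomposition $E[S] = E[N]\cdot E[Y_i]$ under independence, the aggregate claim amount can be simulated; the Poisson and gamma parameter values below are arbitrary and serve only to illustrate the identity.

```r
# Monte Carlo check of E[S] = E[N] * E[Y] under independence (arbitrary parameters)
set.seed(5)
sim.S <- replicate(10000, {
  N <- rpois(1, lambda = 3)                                   # claim frequency
  if (N == 0) 0 else sum(rgamma(N, shape = 2, scale = 500))   # claim severities
})
mean(sim.S)     # approximately E[N] * E[Y] = 3 * (2 * 500) = 3000
3 * 2 * 500
```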
We assume the (actuarial) analyst has access to the explanatory variables. For an analyst in the insurance industry, the independent variables—various characteristics of the policyholder—are obtained from the policyholder’s application form. Explanatory variables for auto insurance may include the driver’s age, vehicle type, and region of operation. Recently, there have been attempts to incorporate telematics data into insurance rate-making, where distance driven, distance driven in the city (city mileage), and how often the vehicle’s operator violates speed limits are incorporated into the modeling of the claim frequencies and severities. Additionally, it has been demonstrated that a person’s credit score is an important predictor of auto claims; see Frees’ review Frees (2015) for further discussion.
In healthcare applications, the independent variables may be obtained from the patient’s clinic visitation history. For example, the age and gender of the patient may be likely predictors for the frequency of certain medical treatments. Meanwhile, independent of the patient, the treatment types and frequencies may also be influenced by the season (month of the year) or depend on which day of the week the treatment is administered. The severity of the healthcare expenditures is related to the International Classification of Diseases (ICD) code chapter, sub-chapter, and major designation of the treatment. Unfortunately, such patient-level explanatory variables are often difficult to obtain for academic research because they are the private information of the patients and protected by the Health Insurance Portability and Accountability Act (HIPAA). Based on these constraints, we present a more complex hierarchical model for a general treatment-level analysis of healthcare expenditures.

5. Treatment-Level Analysis

5.1. Model

In order to present a generic treatment-level model, we follow a hierarchical model, where the total cost is determined by several components. The first is the number of patients arriving at a given time, and we call this random variable $N_{tk}$, where $t$ is discretized time in days and $k$ is the treatment category. We assume that treatments are provided through three hypothetical departments within the healthcare provider network. The departments may be categorized into any different number of categories to suit the particular data in hand. Given a realization of $N_{tk}$, each department (Departments 1, 2, and 3) will generate the following number of treatments for each patient $i \in \{1, \ldots, N_{tk}\}$:
  • $M_{1,tk}^{i}$, the number of treatments in Department 1 (the department of interest)
  • $M_{2,tk}^{i}$, the number of treatments in Department 2 (a department being compared with Department 1)
  • $M_{3,tk}^{i}$, the number of treatments in Department 3 (all other departments)
For the first category, given a realization of $M_{1,tk}^{i}$, each treatment will correspond to an ICD-10 code chapter, which may result in either a zero or positive cost. Here, $j$ is the index of the treatment. For an arbitrary treatment in the ICD-10 code category $k$, for patient $i$ at time $t$, the charge for the treatment can be either zero or positive. Thus, we use a binary variable $P_{1,tk}^{ij}$ to model this. Given that the cost is positive, we let $Y_{1,tk}^{ij}$ be the random variable for the cost of treatment $j$ of patient $i$ at time $t$ in treatment category $k$. Similar definitions are applicable for Departments 2 and 3.
Then, we define $TC_{1,tk}^{i}$ as the total cost arising from Department 1 for patient $i$ at time $t$ in treatment category $k$; $TC_{2,tk}^{i}$ and $TC_{3,tk}^{i}$ are defined similarly. In this case, we have the relationship:
$$TC_{tk} = \sum_{i=1}^{N_{tk}}\left(TC_{1,tk}^{i} + TC_{2,tk}^{i} + TC_{3,tk}^{i}\right)$$
where $TC_{tk}$ is the total cost for the medical care provider at time $t$ in category $k$. We assume that there are $K$ different categories of treatments, as shown in Table 1 (these categories are explained in more detail in Section 5.2). We then have:
$$TC_{1,tk}^{i} = \sum_{j=1}^{M_{1,tk}^{i}} Y_{1,tk}^{ij}, \qquad TC_{2,tk}^{i} = \sum_{j=1}^{M_{2,tk}^{i}} Y_{2,tk}^{ij}, \qquad TC_{3,tk}^{i} = \sum_{j=1}^{M_{3,tk}^{i}} Y_{3,tk}^{ij}$$
Then, we have:
$$E[Y_{1,tk}^{ij}] = E[P_{1,tk}^{ij}]\cdot E[Y_{1,tk}^{ij} \mid P_{1,tk}^{ij} = 1], \qquad E[Y_{2,tk}^{ij}] = E[P_{2,tk}^{ij}]\cdot E[Y_{2,tk}^{ij} \mid P_{2,tk}^{ij} = 1], \qquad E[Y_{3,tk}^{ij}] = E[P_{3,tk}^{ij}]\cdot E[Y_{3,tk}^{ij} \mid P_{3,tk}^{ij} = 1]$$
where:
$$P_{1,tk}^{ij} = \begin{cases}1 & \text{if } Y_{1,tk}^{ij} > 0\\ 0 & \text{otherwise}\end{cases}, \qquad P_{2,tk}^{ij} = \begin{cases}1 & \text{if } Y_{2,tk}^{ij} > 0\\ 0 & \text{otherwise}\end{cases}, \qquad P_{3,tk}^{ij} = \begin{cases}1 & \text{if } Y_{3,tk}^{ij} > 0\\ 0 & \text{otherwise}\end{cases}$$
Assuming independence of $N_{tk}$ and $M_{1,tk}^{i}$, the expected costs for each department become:
$$E[TC_{1,tk}^{i}] = E\!\left[\sum_{j=1}^{M_{1,tk}^{i}} Y_{1,tk}^{ij}\right] = E[M_{1,tk}^{i}]\cdot E[Y_{1,tk}^{ij}], \qquad E[TC_{2,tk}^{i}] = E\!\left[\sum_{j=1}^{M_{2,tk}^{i}} Y_{2,tk}^{ij}\right] = E[M_{2,tk}^{i}]\cdot E[Y_{2,tk}^{ij}], \qquad E[TC_{3,tk}^{i}] = E\!\left[\sum_{j=1}^{M_{3,tk}^{i}} Y_{3,tk}^{ij}\right] = E[M_{3,tk}^{i}]\cdot E[Y_{3,tk}^{ij}],$$
where we assume the independence of $M_{1,tk}^{i}$ and $Y_{1,tk}^{ij}$, and so on. Thus, the total cost for the medical provider within treatment category $k$ becomes:
$$E[TC_{tk}] = E\!\left[\sum_{i=1}^{N_{tk}}\left(TC_{1,tk}^{i} + TC_{2,tk}^{i} + TC_{3,tk}^{i}\right)\right] = E[N_{tk}]\cdot\left(E[M_{1,tk}^{i}]\cdot E[Y_{1,tk}^{ij}] + E[M_{2,tk}^{i}]\cdot E[Y_{2,tk}^{ij}] + E[M_{3,tk}^{i}]\cdot E[Y_{3,tk}^{ij}]\right)$$
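To make the decomposition concrete, consider purely hypothetical values (not estimates from the data): suppose that on day $t$ in category $k$ we expect $E[N_{tk}] = 10$ patients, with per-patient treatment counts $E[M_{1,tk}^{i}] = 2$, $E[M_{2,tk}^{i}] = 1$, $E[M_{3,tk}^{i}] = 3$ and expected per-treatment costs $E[Y_{1,tk}^{ij}] = 100$, $E[Y_{2,tk}^{ij}] = 150$, $E[Y_{3,tk}^{ij}] = 50$. Then:
$$E[TC_{tk}] = 10\cdot\left(2\cdot 100 + 1\cdot 150 + 3\cdot 50\right) = 10\cdot 500 = 5000.$$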

5.1.1. Modeling $N_{tk}$ Using GLMs

It is a common practice in actuarial analysis to model $N_{tk}$ based on covariates $\mathbf{x}_{tk}$ via generalized linear models (GLMs). A Poisson or negative binomial distribution is used for count outcomes. It is common for analysts to use zero-inflated models, as described by Boucher (2014) or De Jong and Heller (2008), to accommodate the excessive number of zeros relative to the counts greater than zero implied by these distributions. An advantage of using GLMs is the ability to express the mean in terms of the explanatory variables $\mathbf{x}_{tk}$, where—in actuarial practice—it is common to use a logarithmic link function to express the mean as:
$$E[N_{tk}] = \exp\!\left(\mathbf{x}_{tk}^T\boldsymbol{\beta}\right),$$
where $\boldsymbol{\beta}$ is a vector of parameters associated with the covariates. Logarithmic link functions are typically used because they are adept at fitting the data, allow easy parameter interpretations, and fit nicely with traditional approaches used in actuarial rate-making applications, such as Mildenhall’s systematic relationship between minimum bias and generalized linear models (Mildenhall 1999). Here, we assume that $N_{tk}$ follows a Poisson distribution so that:
$$P_{N_{tk}}(n) = \frac{\lambda_{tk}^{n}\, e^{-\lambda_{tk}}}{n!}$$
A “zero-one-inflated” model may be used—as in the work of Frees et al. (2016)—if there are large numbers of zeros and ones in our data. The zero-one-inflated model expands on the zero-inflated method and employs two generating processes: (1) a multinomial distribution that generates structural zeros and ones and (2) a Poisson or negative binomial distribution that generates counts, some of which may be zero or one. To parameterize the probabilities for the latent variable, a logit specification may be used, and to fit the parameters, maximum likelihood estimation can be used. In our project, for simplicity of coefficient interpretation, we assumed a Poisson distribution; the covariates $\mathbf{x}_{tk}$ (explanatory variables) are explained in Section 5.3. Because we assume the patient frequencies are independent of the other random variables in the model, we can estimate the coefficient $\boldsymbol{\beta}$ separately using maximum likelihood.
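A sketch of how this patient-frequency component might be fit in R follows. The data frame patients and the covariate names (WDay, Month, Chapter, ClinicOpen, following Table 2) are assumed placeholders, and the counts are simulated rather than taken from the project data.

```r
# Poisson model for the number of patients N_tk, one row per (day t, chapter k)
# (simulated stand-in data; covariate names follow Table 2 and are assumptions)
set.seed(7)
patients <- expand.grid(WDay = factor(1:7), Chapter = factor(1:22))
patients$Month      <- factor(sample(1:12, nrow(patients), replace = TRUE))
patients$ClinicOpen <- rbinom(nrow(patients), 1, 0.5)
patients$N          <- rpois(nrow(patients), lambda = 3)

fit.N <- glm(N ~ WDay + Month + Chapter + ClinicOpen,
             family = poisson(link = "log"), data = patients)
head(coef(summary(fit.N)))
```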

5.1.2. Modeling $M_{1,tk}^{i}$, $M_{2,tk}^{i}$, $M_{3,tk}^{i}$ Using GLMs

Assuming the number of treatments is independent of the number of patients arriving in each category, we can model $M_{1,tk}^{i}$, $M_{2,tk}^{i}$, $M_{3,tk}^{i}$ using a Poisson distribution and a framework similar to that in Section 5.1.1.
$$f_{M_{1,tk}^{i}}(m) = \frac{\left(\gamma_{1,tk}^{i}\right)^{m} e^{-\gamma_{1,tk}^{i}}}{m!}, \qquad f_{M_{2,tk}^{i}}(m) = \frac{\left(\gamma_{2,tk}^{i}\right)^{m} e^{-\gamma_{2,tk}^{i}}}{m!}, \qquad f_{M_{3,tk}^{i}}(m) = \frac{\left(\gamma_{3,tk}^{i}\right)^{m} e^{-\gamma_{3,tk}^{i}}}{m!}$$
where:
$$\gamma_{1,tk}^{i} = \exp\!\left(\mathbf{x}_{1,tk}^T\boldsymbol{\alpha}_1\right), \qquad \gamma_{2,tk}^{i} = \exp\!\left(\mathbf{x}_{2,tk}^T\boldsymbol{\alpha}_2\right), \qquad \gamma_{3,tk}^{i} = \exp\!\left(\mathbf{x}_{3,tk}^T\boldsymbol{\alpha}_3\right)$$
where $\boldsymbol{\alpha}_1$, $\boldsymbol{\alpha}_2$, $\boldsymbol{\alpha}_3$ are the coefficients for the number-of-treatments model. Notice that $\mathbf{x}_{1,tk}$, $\mathbf{x}_{2,tk}$, and $\mathbf{x}_{3,tk}$ do not have the subscript $i$, because our model uses explanatory variables at the $tk$ level. Due to the independence assumption, the coefficients may be estimated using a regression routine, or maximum likelihood, separately from the other components of the model.

5.1.3. Modeling $P_{1,tk}^{ij}$, $P_{2,tk}^{ij}$, $P_{3,tk}^{ij}$ Using GLMs

Logit and probit forms are commonly used for binary outcomes; see Guillén (2014). Each treatment $j$ in treatment category $k$ could have a zero cost or a positive cost. We use a logistic regression model specification, where:
$$\operatorname{logit}\!\left(E[P_{1,tk}^{ij}]\right) = \mathbf{x}_{1,tk}^T\boldsymbol{\gamma}_1, \qquad \operatorname{logit}\!\left(E[P_{2,tk}^{ij}]\right) = \mathbf{x}_{2,tk}^T\boldsymbol{\gamma}_2, \qquad \operatorname{logit}\!\left(E[P_{3,tk}^{ij}]\right) = \mathbf{x}_{3,tk}^T\boldsymbol{\gamma}_3$$
Here, $\boldsymbol{\gamma}_1$, $\boldsymbol{\gamma}_2$, $\boldsymbol{\gamma}_3$ are the coefficients to be estimated. Notice that the explanatory variables $\mathbf{x}_{1,tk}$, $\mathbf{x}_{2,tk}$, and $\mathbf{x}_{3,tk}$ do not have subscripts $ij$. Then, for Departments 1, 2, and 3, given a specific category $k$, the probability of a positive treatment cost becomes:
$$\pi_{1,tk}^{ij} = \frac{\exp(\mathbf{x}_{tk}^T\boldsymbol{\gamma}_1)}{1+\exp(\mathbf{x}_{tk}^T\boldsymbol{\gamma}_1)}, \qquad \pi_{2,tk}^{ij} = \frac{\exp(\mathbf{x}_{tk}^T\boldsymbol{\gamma}_2)}{1+\exp(\mathbf{x}_{tk}^T\boldsymbol{\gamma}_2)}, \qquad \pi_{3,tk}^{ij} = \frac{\exp(\mathbf{x}_{tk}^T\boldsymbol{\gamma}_3)}{1+\exp(\mathbf{x}_{tk}^T\boldsymbol{\gamma}_3)}.$$
Estimating $\boldsymbol{\gamma}_1$, $\boldsymbol{\gamma}_2$, and $\boldsymbol{\gamma}_3$ boils down to maximum likelihood estimation. In the R programming language, binary outcomes may be modeled using the binomial family within the glm routine. For some datasets, it may be the case that treatments always result in expenditures. In this case, the model for $P_{1,tk}^{ij}$, $P_{2,tk}^{ij}$, $P_{3,tk}^{ij}$ is not needed.
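For illustration, a minimal sketch of the positive-charge indicator model is given below; the data frame charges and its variable names are simulated placeholders rather than the project data.

```r
# Logistic model for the positive-charge indicator P (simulated stand-in data)
set.seed(8)
charges <- data.frame(Chapter = factor(sample(1:22, 500, replace = TRUE)))
charges$positive <- rbinom(500, 1, 0.95)

fit.P  <- glm(positive ~ Chapter, family = binomial(link = "logit"), data = charges)
pi.hat <- predict(fit.P, type = "response")   # fitted probability of a positive charge
```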

5.1.4. Modeling the Conditional Expenditure Severity

Finally, the cost random variables can be modeled using gamma random variables. For regression purposes, the mean of the gamma random variable may be parametrized so that:
$$E[Y_{1,tk}^{ij} \mid P_{1,tk}^{ij} = 1] = \exp\!\left(\mathbf{x}_{1,tk}^T\boldsymbol{\xi}_1\right), \qquad E[Y_{2,tk}^{ij} \mid P_{2,tk}^{ij} = 1] = \exp\!\left(\mathbf{x}_{2,tk}^T\boldsymbol{\xi}_2\right), \qquad E[Y_{3,tk}^{ij} \mid P_{3,tk}^{ij} = 1] = \exp\!\left(\mathbf{x}_{3,tk}^T\boldsymbol{\xi}_3\right)$$
where $\boldsymbol{\xi}_1$, $\boldsymbol{\xi}_2$, $\boldsymbol{\xi}_3$ are the regression coefficients and $\mathbf{x}_{1,tk}$, $\mathbf{x}_{2,tk}$, $\mathbf{x}_{3,tk}$ are explanatory variables for the $k$th category treatment at time $t$.
There are many ways to model the severity of outcomes. For a dependence model, using a latent variable that affects both frequency and loss amounts induces a positive association. Copulas are another method to model non-linear associations among random variables. A strength of the GLM approach—for insurance analysts—is that the same set of routines can be used for continuous, as well as discrete (binary) outcomes.
GLMs have become the workhorse for insurance industry analysts interested in analyzing and modeling the severity of claims. Due to the industry’s primary focus on claims’ severity (total cost), several alternative approaches have been explored, and Shi (2014) presents an excellent introduction. For example, the generalized beta of the second kind (GB2) distribution is used by Frees et al. (2016). The specific severity distribution to use is an empirical question, and one may employ Q-Q plots or other model diagnostic techniques to select a distribution family that fits the data best. In the research presented here, the gamma GLM approach was used for simplicity.
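A sketch of how the conditional severity component might be fit with a gamma GLM, restricting the fit to positive charges, is given below; the data frame sev and its variable names are simulated placeholders rather than the project's data.

```r
# Gamma GLM with log link for the conditional severity, fit to positive charges only
# (simulated stand-in data; variable names are assumptions)
set.seed(9)
sev <- data.frame(Chapter = factor(sample(1:22, 1000, replace = TRUE)),
                  charge  = rgamma(1000, shape = 1, scale = 1000))

fit.Y <- glm(charge ~ Chapter, family = Gamma(link = "log"),
             data = subset(sev, charge > 0))
exp(coef(fit.Y))[1:5]   # baseline expected charge and chapter multipliers
```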

5.2. Treatment Categories

The ICD-10 codes were used to encode the treatment category. The ICD-10 coding structure has been produced by the World Health Organization (WHO) and is used in countries around the world. The coding structure categorizes diseases into broad categories called “Chapters”, and these broad categories are further categorized into specific disease areas termed “Sub-Chapters”. In our data, the ICD variable contained either the ICD-9 or ICD-10 code for each treatment due to the dataset spanning the United States’ adoption date of 1 October 2015. The ICD variable could be standardized into ICD-10 code chapters using a custom R script. If a valid ICD-10 code is not identified, then the treatment is removed from the dataset and thus the analysis. The treatment categories (chapters) used in the analysis are provided in Table 1.

5.3. Patient Treatment Data

A set of explanatory variables (date of treatment, cost, department, and treatment category code) may be present in the dataset, and the patient-level demographic information (age, gender, ethnicity, height, and weight) may also be observed in the data. In our data, patient demographic information was not observed. Table 2 describes the explanatory variables used in the modeling presented below for the total number of patients visiting the clinics, and the number of treatments within each category of department. For the expenditure severity model, only the treatment categories are used as explanatory variables. The indicator variable ClinicOpen is a binary variable indicating whether Department 1 is open, and this variable has been included to study the influence of the department’s operation on the number of treatments at other departments.

6. GAMs

In order to capture the nonlinear relationship between the Time variable and the response variables, we utilized the generalized additive models (GAM) framework, which is an extension of the GLM framework. The GLM model for the patient frequency $N_{tk}$ is:
$$\ln E[N_{tk}] = \mathbf{x}_{tk}^T\boldsymbol{\beta}$$
In matrix form, this can be expressed as:
$$\ln\begin{pmatrix} E[N_{1,k}] \\ E[N_{2,k}] \\ \vdots \\ E[N_{T,K}] \end{pmatrix} = \begin{pmatrix} 1 & x_{1,k,1} & x_{1,k,2} & \cdots & x_{1,k,p-1} & x_{1,k,p} \\ 1 & x_{2,k,1} & x_{2,k,2} & \cdots & x_{2,k,p-1} & x_{2,k,p} \\ \vdots & & & & & \vdots \\ 1 & x_{T,K,1} & x_{T,K,2} & \cdots & x_{T,K,p-1} & x_{T,K,p} \end{pmatrix}\begin{pmatrix}\beta_0 \\ \beta_1 \\ \vdots \\ \beta_p\end{pmatrix}$$
where we assume that $x_{t,k,1}, \ldots, x_{t,k,p-1}$ correspond to the explanatory variables other than Time and $x_{t,k,p}$ corresponds to Time. The motivation for using a GAM model is that a polynomial of $x_{t,k,p}$ may be included in the design matrix. In this case, we may include the extra terms into the model matrix using the function:
$$f(x) = x + x^2 + \cdots + x^r$$
subject to the identifiability constraint
$$\sum_{k=1}^{K}\sum_{t=1}^{T} f(x_{t,k,p}) = 0$$
which states that the function $f$ must sum to zero over the observed values of $x_{t,k,p}$; this concept is detailed on page 211 of Wood’s book on GAMs (Wood 2017). In this case, we have:
$$\ln\begin{pmatrix} E[N_{1,k}] \\ E[N_{2,k}] \\ \vdots \\ E[N_{T,K}] \end{pmatrix} = \begin{pmatrix} 1 & x_{1,k,1} & x_{1,k,2} & \cdots & x_{1,k,p-1} & x_{1,k,p} & x_{1,k,p}^2 & \cdots & x_{1,k,p}^r \\ 1 & x_{2,k,1} & x_{2,k,2} & \cdots & x_{2,k,p-1} & x_{2,k,p} & x_{2,k,p}^2 & \cdots & x_{2,k,p}^r \\ \vdots & & & & & & & & \vdots \\ 1 & x_{T,K,1} & x_{T,K,2} & \cdots & x_{T,K,p-1} & x_{T,K,p} & x_{T,K,p}^2 & \cdots & x_{T,K,p}^r \end{pmatrix}\begin{pmatrix}\beta_0 \\ \beta_1 \\ \vdots \\ \beta_{p-1} \\ \psi_1 \\ \vdots \\ \psi_r\end{pmatrix}$$
where r is the degree of the polynomial. In other words,
$$\ln E[\mathbf{y}_k] = \beta_0 + \mathbf{X}^{(1)}\boldsymbol{\beta}^{(1)} + \mathbf{X}^{(2)}\boldsymbol{\beta}^{(2)}$$
where:
$$\mathbf{X}^{(1)} = \begin{pmatrix} 1 & x_{1,k,1} & x_{1,k,2} & \cdots & x_{1,k,p-1} \\ 1 & x_{2,k,1} & x_{2,k,2} & \cdots & x_{2,k,p-1} \\ \vdots & & & & \vdots \\ 1 & x_{T,K,1} & x_{T,K,2} & \cdots & x_{T,K,p-1} \end{pmatrix} \qquad \mathbf{X}^{(2)} = \begin{pmatrix} x_{1,k,p} & x_{1,k,p}^2 & \cdots & x_{1,k,p}^R \\ x_{2,k,p} & x_{2,k,p}^2 & \cdots & x_{2,k,p}^R \\ \vdots & & & \vdots \\ x_{T,K,p} & x_{T,K,p}^2 & \cdots & x_{T,K,p}^R \end{pmatrix}$$
and the coefficients are:
$$\beta_0 \quad\text{and}\quad \boldsymbol{\beta}^{(1)} = (\beta_1, \beta_2, \ldots, \beta_{p-1})^T \quad\text{and}\quad \boldsymbol{\beta}^{(2)} = (\psi_1, \psi_2, \ldots, \psi_R)^T$$
In practice, a basis other than the polynomial may be used to implement a GAM. In this case, we first define:
$$\boldsymbol{\Phi}^{(2)} = \begin{pmatrix} B(x_{1,k,p}, 1) & B(x_{1,k,p}, 2) & \cdots & B(x_{1,k,p}, R) \\ B(x_{2,k,p}, 1) & B(x_{2,k,p}, 2) & \cdots & B(x_{2,k,p}, R) \\ \vdots & & & \vdots \\ B(x_{T,K,p}, 1) & B(x_{T,K,p}, 2) & \cdots & B(x_{T,K,p}, R) \end{pmatrix}$$
where $B(x, r)$ for $r = 1, \ldots, R$ are basis functions; for our project, the B-spline basis was used. A recursive definition of the B-spline basis can be found in Wood’s book (Wood 2017). Once the matrix $\boldsymbol{\Phi}^{(2)}$ is defined, we QR decompose the vector $\boldsymbol{\Phi}^{(2)T}\mathbf{1}$ so that:
$$\boldsymbol{\Phi}^{(2)T}\mathbf{1} = \begin{pmatrix}\mathbf{Q}_1 & \mathbf{Q}_2\end{pmatrix}\begin{pmatrix}\mathbf{R} \\ \mathbf{0}\end{pmatrix}$$
Then, we select:
$$\mathbf{X}^{(2)} = \boldsymbol{\Phi}^{(2)}\mathbf{Q}_2$$
Selecting the model matrix this way imposes an identifiability constraint, which states that the smooth function defined by the basis functions must satisfy:
$$\sum_{k=1}^{K}\sum_{t=1}^{T} f(x_{t,k,p}) = \mathbf{1}^T\mathbf{X}^{(2)}\boldsymbol{\beta}^{(2)} = 0$$
Notice that:
$$\mathbf{1}^T\mathbf{X}^{(2)}\boldsymbol{\beta}^{(2)} = \mathbf{1}^T\boldsymbol{\Phi}^{(2)}\mathbf{Q}_2\boldsymbol{\beta}^{(2)} = \begin{pmatrix}\mathbf{R}^T & \mathbf{0}^T\end{pmatrix}\begin{pmatrix}\mathbf{Q}_1^T \\ \mathbf{Q}_2^T\end{pmatrix}\mathbf{Q}_2\boldsymbol{\beta}^{(2)} = \mathbf{R}^T\mathbf{Q}_1^T\mathbf{Q}_2\boldsymbol{\beta}^{(2)} = 0$$
since the columns of $\mathbf{Q}_1$ and $\mathbf{Q}_2$ are orthogonal to each other, so that $\mathbf{Q}_1^T\mathbf{Q}_2 = \mathbf{0}$. Simply stated, if $f(x)$ is a spline, the coefficients for the spline are now given by $\mathbf{Q}_2\boldsymbol{\beta}^{(2)}$ instead of $\boldsymbol{\beta}^{(2)}$ after the transformation. One more detail is that with the Poisson model, the likelihood:
$$\ln L = \sum_{k=1}^{K}\sum_{t=1}^{T}\left(\frac{y_{tk}\theta_{tk} - b(\theta_{tk})}{a(\phi)} + c(y_{tk}, \phi)\right)$$
with $\theta_{tk} = \ln\{E[N_{tk}]\}$, is modified to:
$$\ln L_{\text{new}} = \sum_{k=1}^{K}\sum_{t=1}^{T}\left(\frac{y_{tk}\theta_{tk} - b(\theta_{tk})}{a(\phi)} + c(y_{tk}, \phi)\right) - \lambda\,\boldsymbol{\beta}^{(2)T}\mathbf{Q}_2^T\mathbf{S}\,\mathbf{Q}_2\boldsymbol{\beta}^{(2)}$$
where $\mathbf{S}$ is a matrix that penalizes the wiggliness of the function $f(x)$. In case the B-spline basis is used, difference penalties are imposed, as specified on page 206 of Wood’s book (Wood 2017). This procedure is implemented in the R library mgcv.
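In practice, the penalized fit described above can be obtained directly from the mgcv package. The sketch below uses simulated data with assumed variable names; s(Time, bs = "ps", k = 10) requests a penalized B-spline (P-spline) smooth of the Time variable within a Poisson GAM.

```r
# GAM for patient counts with a penalized B-spline smooth of Time (simulated data)
library(mgcv)
set.seed(10)
d <- data.frame(Time    = 1:1000,
                Chapter = factor(sample(1:22, 1000, replace = TRUE)))
d$N <- rpois(1000, lambda = exp(1 + 0.5 * sin(d$Time / 365 * 2 * pi)))

fit.gam <- gam(N ~ Chapter + s(Time, bs = "ps", k = 10),
               family = poisson(link = "log"), data = d)
plot(fit.gam, select = 1)   # estimated smooth effect of Time
```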

7. Results

In this section, we show the result of a simulation study using synthetically-generated data, similar to the real data that were analyzed during the project. Section 7.1 provides the details of the simulation study. In Section 7.2, we provide qualitative descriptions of the results obtained from the real data study.

7.1. Simulation Study

In order to perform a simulation study, synthetic data have been generated. The data have been generated according to the following rules to mimic the structure of the medical data, which we were unable to use directly for publication purposes due to a non-disclosure agreement; a condensed code sketch of these rules is given after the list.
  • For each $t$, randomly generate the total number of patients from a Poisson distribution, so that $N_t \sim \text{Poisson}\!\left(5\sin\!\left(\tfrac{t}{365}\pi\right) + 2\sin\!\left(\tfrac{t}{365}2\pi\right) + 2\right)$, where $t$ is the number of days elapsed since 1 January 2010, with the maximum $t$ corresponding to 1 January 2019.
  • Each patient $i$ receives one treatment, whose ICD-10 code chapter is randomly generated from a multinomial distribution with the probability of each category following $p_k$, for $k = 1, \ldots, 22$, sampled from a Dirichlet distribution.
  • Department 1 opens at time 1 January 2017.
  • Each treatment is assigned to either Department 1, 2, or 3 using a multinomial distribution with probability $(0, q_2, q_3)$, sampled from a Dirichlet distribution. We fixed $q_1 = 0$ before the opening of Department 1, because the probability that a treatment is assigned to Department 1 should be zero before its opening. We used a different set of probabilities $(q_1, q_2, q_3)$, also sampled from a Dirichlet distribution, after Department 1 opens.
  • Each treatment results in a positive charge with probability 0.95.
  • Given that a charge in ICD-10 code chapter $k$ is positive, it results in a charge amount sampled from a gamma distribution with its scale parameter sampled from an exponential distribution with rate 0.001 and its shape parameter fixed to one.
  • The total number of days in the synthetic data is 3287, with a total of 67,983 patients.
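The following is a condensed R sketch of the generation rules listed above. Two implementation choices are assumptions made here for illustration: the Dirichlet draws are built from normalized gamma variables to avoid extra package dependencies, and the Poisson rate is truncated at zero where the sinusoidal rule would otherwise dip below zero.

```r
# Condensed sketch of the synthetic-data generation rules above (illustrative choices)
set.seed(11)
rdirichlet1 <- function(K) { g <- rgamma(K, 1); g / sum(g) }   # one Dirichlet draw

days    <- 1:3287                                  # 1 Jan 2010 through 1 Jan 2019
open.t  <- as.numeric(as.Date("2017-01-01") - as.Date("2010-01-01"))
p.chap  <- rdirichlet1(22)                         # ICD-10 chapter probabilities
q.pre   <- c(0, rdirichlet1(2))                    # department probabilities, Dept. 1 closed
q.post  <- rdirichlet1(3)                          # department probabilities, Dept. 1 open
scale.k <- rexp(22, rate = 0.001)                  # per-chapter gamma scale parameters

synth <- do.call(rbind, lapply(days, function(t) {
  n <- rpois(1, max(0, 5 * sin(t / 365 * pi) + 2 * sin(t / 365 * 2 * pi) + 2))
  if (n == 0) return(NULL)
  chap <- sample(1:22, n, replace = TRUE, prob = p.chap)
  dept <- sample(1:3, n, replace = TRUE,
                 prob = if (t < open.t) q.pre else q.post)
  pos  <- rbinom(n, 1, 0.95)                       # positive charge with probability 0.95
  amt  <- pos * rgamma(n, shape = 1, scale = scale.k[chap])
  data.frame(t = t, chapter = chap, dept = dept, charge = amt)
}))
```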
The modeling framework described in Section 5.1 has been used for the analysis of the synthetically-generated data. The regression coefficients are shown in Table 3 and Table 4. From the regression coefficients for the number of patients, we see that the number of patients varied by month and chapter. From the charge severity models for Departments 1, 2, and 3, we see that the ICD-10 code chapters were a significant predictor of the charge amount. In Table 4, notice that the ClinicOpen variable was negative and significant for Department 2, indicating that the introduction of Department 1 has reduced the number of treatments given in Department 2. The time since 1 January 2010 has been used in conjunction with a smooth function constructed using the B-spline basis with order 10. Figure 1 shows a plot of the smooth function, which has been estimated using the GAM modeling approach described in Section 6. The plot illustrates that there is a nonlinear effect of the time variable on the number of patients.
Figure 2 shows the out-of-sample comparisons of the predicted expenditures and the synthetically-generated actual expenditures. Here, the out-of-sample data consisted of expenditures falling in the period between 1 January 2018 and 1 January 2019. Each panel shows the predictions when the result was aggregated at a daily, weekly, or monthly level. The Spearman correlation with the out-of-sample claims was 54.40%, 84.46%, and 95.78%, respectively. The Gini indices (explained more in Section 7.3) were 59.57%, 60.08%, and 62.37%, respectively.

7.2. Real Data Analysis

In this section, we present a qualitative analysis of the results from the treatment-level medical data modeling using real data. Because of the non-disclosure agreement, we provide qualitative results only. The model for the total number of patients N t k showed that WDay, Month, Chapter, and ClinicOpen (all explanatory variables) were statistically significant. There was also a nonlinear relationship between the number of patients arriving and time, which could be inferred from the GAM approach to using the Time variable as the independent variable for a smooth function. The coefficient for the variable ClinicOpen could be used to infer the relationship between the total number of patients and the opening of Department 1. The number of treatments model for M 2 , t k i and M 3 , t k i provided interesting results. The ClinicOpen variable indicated that the number of treatments in the other two departments changed following the opening of Department 1.

7.3. Model Validation

After estimating all the parameters of the hierarchical model, the model can be validated using an external test set—commonly called an “out-of-sample” comparison—where a subset of the initial dataset observations is set aside. In our analysis, the subset of observations from a particular year was the out-of-sample set (test set), and the remaining samples (training set) were used to estimate the parameters (train the model). The predicted values of the out-of-sample observations determined using the model were compared with their observed values. A graphical approach plots the observed out-of-sample expenditures against their predicted expenditures to visualize how close the points fall to the 45-degree identity line. The Spearman correlation between the predicted expenditures and the out-of-sample expenditures is a good initial method to measure whether the ranks of the observations were predicted well. A new and more informative measure was developed by Frees et al. (2011) employing the Gini index. The Gini index is defined as twice the area between the 45-degree line of equality and the ordered Lorenz curve. In order to obtain the ordered Lorenz curve, let:
$$\hat{F}_{\Pi}(s) = \frac{\sum_{t=1}^{T}\sum_{k=1}^{K}\Pi(\mathbf{x}_{tk})\, I\!\left(R(\mathbf{x}_{tk}) \le s\right)}{\sum_{t=1}^{T}\sum_{k=1}^{K}\Pi(\mathbf{x}_{tk})} \quad\text{and}\quad \hat{F}_{L}(s) = \frac{\sum_{t=1}^{T}\sum_{k=1}^{K} y_{tk}\, I\!\left(R(\mathbf{x}_{tk}) \le s\right)}{\sum_{t=1}^{T}\sum_{k=1}^{K} y_{tk}},$$
where we define the relativity R ( x t k ) as:
$$R(\mathbf{x}_{tk}) = \frac{S(\mathbf{x}_{tk})}{\Pi(\mathbf{x}_{tk})}, \qquad S(\mathbf{x}_{tk}) = \text{insurance score}, \qquad \Pi(\mathbf{x}_{tk}) = \text{constant score}.$$
The ordered Lorenz curve can be obtained by plotting the points:
$$\left(\hat{F}_{\Pi}(s),\, \hat{F}_{L}(s)\right).$$
The constant charge model assumes that the charge for a day in every treatment category is the average cost over all treatment-days for the in-sample (training set) subset of observations. Our model validation approaches demonstrated that our model had a positive and significant Gini index.
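A sketch of the ordered Lorenz curve and Gini index computation, following the definitions above, is given below. Here pred stands in for the model score $S(\mathbf{x}_{tk})$, base for the constant-charge score $\Pi(\mathbf{x}_{tk})$, and y for the held-out expenditures; all three are simulated for illustration.

```r
# Ordered Lorenz curve and Gini index (simulated scores and outcomes)
set.seed(12)
y    <- rgamma(500, shape = 2, scale = 100)   # out-of-sample expenditures
pred <- y * exp(rnorm(500, 0, 0.5))           # model score S(x), noisy but informative
base <- rep(mean(y), 500)                     # constant-charge score Pi(x)

rel <- pred / base                            # relativity R(x)
ord <- order(rel)
F.P <- cumsum(base[ord]) / sum(base)          # premium (constant score) distribution
F.L <- cumsum(y[ord])    / sum(y)             # loss distribution

plot(F.P, F.L, type = "l", xlab = "F_Pi", ylab = "F_L"); abline(0, 1, lty = 2)
gini <- 2 * sum((F.P - F.L) * diff(c(0, F.P)))  # twice the area between curve and 45-degree line
gini
```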

7.4. Dependence Modeling

The Spearman correlation is a measure of dependency among the ranks of random variables. We tested the Spearman correlation among the numbers of treatments $M_{1,tk}^{i}$, $M_{2,tk}^{i}$, and $M_{3,tk}^{i}$ and discovered evidence of negative dependence. Because there was evidence of dependence, a potentially interesting avenue of future work is the application of copula models to the discrete number-of-treatments variables. For an introduction to copula models, the reader is referred to Nelsen’s introduction to copulas (Nelsen 1999), while Joe (2014) presented an in-depth monograph on the use of copulas to model complex dependence structures. One final consideration is the validity of the independence assumption between the number of patients $N_{tk}$, the numbers of treatments $M_{1,tk}^{i}$, $M_{2,tk}^{i}$, $M_{3,tk}^{i}$, the positivity of the treatment charges $P_{1,tk}^{ij}$, $P_{2,tk}^{ij}$, $P_{3,tk}^{ij}$, and the charge amounts $Y_{1,tk}^{ij}$, $Y_{2,tk}^{ij}$, $Y_{3,tk}^{ij}$. We emphasize that treating these variables as independent is an important assumption for our modeling approach to be valid.
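For reference, a rank-correlation test of this kind can be carried out with cor.test in R; the two count series below are simulated with mild negative dependence purely to illustrate the call.

```r
# Spearman rank-correlation test between two treatment-count series (simulated)
set.seed(13)
m2 <- rpois(200, 4)
m3 <- rpois(200, pmax(0.5, 6 - 0.5 * m2))   # induce mild negative dependence
cor.test(m2, m3, method = "spearman", exact = FALSE)
```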

8. Conclusions

In this paper, we have presented two unique analysis and modeling endeavors: (1) a store-level sales modeling project and (2) a medical treatment-healthcare expenditure modeling project. The store sales project demonstrated the need to combine nearby business information with residential demographics. While the store sales project is not a traditional actuarial science project, it demonstrated to the students the wide applicability of the tools and methods they have learned during their undergraduate careers. The store sales project also demonstrated the power of (and need for) identifying and harvesting data outside of the provided dataset to produce informative predictive models.
The medical treatment-healthcare expenditure project was a more traditional actuarial science project and introduced the undergraduate students to real-world and messy medical data. The treatment-level healthcare data resulted in expenditure models that were motivated by the traditional hierarchical insurance claims models typically found in actuarial science. The healthcare cost models provided insights into the effect of the explanatory variables on the component response variables. In a way, the store-level sales modeling was a general introduction to generalized linear models that led into frequency-severity and hierarchical models.
The presented research provides an overview of our generalized philosophy on modeling and analyzing healthcare expenditures that includes the creation of informative and predictive models to help understand the provided data, answer questions about the system of interest, and provide insights, all while ensuring the models are robust and predictive. The store sales models indicated that the store’s total size and the population count within certain miles of the stores are key features driving store sales for the year of interest. For the medical treatment analysis, our research indicated that the overall number of patients has a nonlinear relationship with the time variable, and this changed after the introduction of Department 1 into the healthcare network. The number of treatments for a given patient also changed in Departments 2 and 3 after the opening of Department 1 to the public. These analyses give insights into the impact of the explanatory variables on the response variables of interest. In addition, we demonstrated a treatment-level approach to estimating the total expenditures incurred by patients in a healthcare system.
We believe that our approach can be useful for predicting healthcare expenditures for the healthcare services sector. The approach can be applied to any healthcare services network with multiple departments and treatment-level records of patients. More information at the patient level (for example, demographic information) may improve the model. Because our model has the ability to predict the number of patients arriving at each department under each category of treatment, we believe that the modeling approach can help efficiently allocate resources (including physician times) at each healthcare department. We believe that this can contribute to improving the quality of healthcare services provided to the patients.

9. Disclaimer

This paper is the result of a project that took place in the form of an undergraduate Teamwork Experience course (MTH491B) at Michigan State University (MSU) during the fall semester of 2017 and continued on to the 2018 spring semester. A non-profit healthcare provider approached the Actuarial Science department of MSU with a treatment-level dataset. Their request was for us to analyze and provide insights to aid their operation. The MSU team consisted of three undergraduate students, a graduate student in the Department of Statistics and Probability Ph.D. program, an Assistant Professor with a joint appointment in the Department of Statistics and Probability and the Department of Mathematics, and a professional predictive analytics modeling consultant. The project initially focused on discovering treatment- and patient-level characteristics of the data and creating summary statistics of the data, along with predicting the total amount of cost for the operation of the healthcare provider. Through the application of statistical analysis and models, the goal was to uncover new aspects of the data for the non-profit’s management team. In addition to the medical data, the non-profit healthcare provider supplied an additional and unrelated dataset of stores containing the sales volume and store features maintained within the non-profit’s network. As there were two datasets to analyze, the project consisted of two components, one being the analysis of the store data, and the other being the treatment-level analysis. The store sales analysis portion of the project turned into a University Undergraduate Research and Arts Forum (UURAF) presentation at MSU by the undergraduate students involved in the project at the end of the spring 2018 semester. The title of the poster presentation was “Combining Population and Nearby Store Information to Aid in Selection of Future Store Locations.” In this paper, we have discussed both components of the project with the goal of reviewing the general approaches that could be used in order to analyze healthcare data at the store level and the treatment level.

Author Contributions

All authors contributed substantially to this work.

Funding

This research received no external funding.

Acknowledgments

This work was made possible by the generous support and resources provided by an anonymous non-profit healthcare provider in Michigan. The authors are grateful for their support throughout the project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boucher, Jean-Philippe. 2014. Regression with count dependent variables. In Predictive Modeling Applications in Actuarial Science. Cambridge: Cambridge University Press. [Google Scholar]
  2. De Jong, Piet, and Gillian Z. Heller. 2008. Generalized Linear Models for Insurance Data. Cambridge: Cambridge University Press. [Google Scholar]
  3. Dobson, Annette J., and Adrian G. Barnett. 2008. An Introduction to Generalized Linear Models. Boca Raton: Chapman and Hall/CRC, Taylor & Francis Group. [Google Scholar]
  4. Frees, Edward W. 2004. Longitudinal and Panel Data: Analysis and Applications in the Social Sciences. Cambridge: Cambridge University Press. [Google Scholar]
  5. Frees, Edward W. 2009. Regression Modeling with Actuarial and Financial Applications. Cambridge: Cambridge University Press. [Google Scholar]
  6. Frees, Edward W. 2014. Frequency and severity models. In Predictive Modeling Applications in Actuarial Science. Cambridge: Cambridge University Press. [Google Scholar]
  7. Frees, Edward W. 2015. Analytics of insurance markets. Annual Review of Financial Economics 7: 253–77. [Google Scholar] [CrossRef]
  8. Frees, Edward W., Jie Gao, and Marjorie A Rosenberg. 2011. Predicting the frequency and amount of health care expenditures. North American Actuarial Journal 15: 377–92. [Google Scholar] [CrossRef]
  9. Frees, Edward W., and Gee Lee. 2016. Rating endorsements using generalized linear models. Variance 10: 51–74. [Google Scholar]
  10. Frees, Edward W., Gee Y. Lee, and Lu Yang. 2016. Multivariate frequency-severity regression models in insurance. Risks 4: 4. [Google Scholar] [CrossRef]
  11. Frees, Edward W., Glenn Meyers, and A. David Cummings. 2011. Summarizing insurance scores using a gini index. Journal of the American Statistical Association 106: 495. [Google Scholar] [CrossRef]
  12. Frees, Edward W., and Emiliano A. Valdez. 2008. Hierarchical insurance claims modeling. Journal of the American Statistical Association 103: 1457–69. [Google Scholar] [CrossRef]
  13. Guillén, Montserrat. 2014. Regression with categorical dependent variables. In Predictive Modeling Applications in Actuarial Science. Cambridge: Cambridge University Press. [Google Scholar]
  14. Joe, Harry. 2014. Dependence Modeling with Copulas. Boca Raton: CRC Press. [Google Scholar]
  15. Keeler, Emmett B., and John E. Rolph. 1988. The demand for episodes of treatment in the health insurance experiment. Journal of Health Economics 7: 337–67. [Google Scholar] [CrossRef]
  16. Klugman, Stuart A., Harry H. Panjer, and Gordon E. Willmot. 2012. Loss Models: From Data to Decisions. Hoboken: John Wiley & Sons, Inc. [Google Scholar]
  17. Mildenhall, Stephen J. 1999. A systematic relationship between minimum bias and generalized linear models. Proceedings of the Casualty Actuarial Society 86: 393–487. [Google Scholar]
  18. Myers, Raymond, Douglas C. Montgomery, G. Geoffrey Vining, and Timothy J. Robinson. 2002. Generlized Linear Models with Applications in Engineering and the Sciences. New York: John Wiley & Sons, Inc. [Google Scholar]
  19. Nelder, John, and Robert Wedderburn. 1972. Generalized linear models. Journal of the Royal Statistical Society. Series A (General) 135: 370–84. [Google Scholar] [CrossRef]
  20. Nelsen, Roger B. 1999. An Introduction to Copulas. New York: Springer Science & Business Media, Inc. [Google Scholar]
  21. Ohlsson, Esbjörn, and Björn Johansson. 2010. Non-Life Insurance Pricing with Generalized Linear Models. Berlin Heidelberg: Springer Verlag. [Google Scholar]
  22. Rosenberg, Marjorie A., and Phillip M. Farrell. 2008. Predictive modeling of costs for a chronic disease with acute high-cost episodes. North American Actuarial Journal 12: 1–19. [Google Scholar] [CrossRef]
  23. Ruscone, Marta Nai, and Silvia Angela Osmetti. 2016. Modelling the Dependence in Multivariate Longitudinal Data by Pair Copula Decomposition. Basel: Springer International Publishing Switzerland. [Google Scholar]
  24. Shi, Peng. 2012. Multivariate longitudinal modeling of insurance company expenses. Insurance: Mathematics and Economics 51: 204–15. [Google Scholar] [CrossRef]
  25. Shi, Peng. 2014. Fat-tailed regression models. In Predictive Modeling Applications in Actuarial Science. Cambridge: Cambridge University Press. [Google Scholar]
  26. Shi, Peng, and Emiliano Valdez. 2014. Longitudinal modeling of insurance claim counts using jitters. Scandinavian Actuarial Journal 2014: 159–79. [Google Scholar] [CrossRef]
  27. Smith, Michael, Aleksey Min, Carlos Almeida, and Claudia Czado. 2010. Modeling longitudinal data using a pair-copula decomposition of serial dependence. Journal of the American Statistical Association 105: 1467–79. [Google Scholar] [CrossRef]
  28. Sun, Jiafeng, Edward W. Frees, and Marjorie A. Rosenberg. 2008. Heavy-tailed longitudinal data modeling using copulas. Insurance: Mathematics and Economics 42: 817–30. [Google Scholar] [CrossRef]
  29. Wood, Simon N. 2017. Generalized Additive Models: An Introduction with R, Second Edition. Boca Raon: CRC Press. [Google Scholar]
  30. Yang, Xipei. 2011. Multivariate Long-Tailed Regression With New Copulas. Ph.D. thesis, University of Wisconsin-Madison, Madison, WI, USA. [Google Scholar]
Figure 1. Fitted smooth function with respect to time.
Figure 2. Log daily, weekly, and monthly charges (predicted versus actual out-of-sample charges).
Table 1. Treatment categories (ICD-10 chapters).

| Chapter | Block | Description |
| --- | --- | --- |
| 1 | A00–B99 | Certain infectious and parasitic diseases |
| 2 | C00–D48 | Neoplasms |
| 3 | D50–D89 | Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism |
| 4 | E00–E90 | Endocrine, nutritional and metabolic diseases |
| 5 | F00–F99 | Mental and behavioral disorders |
| 6 | G00–G99 | Diseases of the nervous system |
| 7 | H00–H59 | Diseases of the eye and adnexa |
| 8 | H60–H95 | Diseases of the ear and mastoid process |
| 9 | I00–I99 | Diseases of the circulatory system |
| 10 | J00–J99 | Diseases of the respiratory system |
| 11 | K00–K93 | Diseases of the digestive system |
| 12 | L00–L99 | Diseases of the skin and subcutaneous tissue |
| 13 | M00–M99 | Diseases of the musculoskeletal system and connective tissue |
| 14 | N00–N99 | Diseases of the genitourinary system |
| 15 | O00–O99 | Pregnancy, childbirth and the puerperium |
| 16 | P00–P96 | Certain conditions originating in the perinatal period |
| 17 | Q00–Q99 | Congenital malformations, deformations and chromosomal abnormalities |
| 18 | R00–R99 | Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified |
| 19 | S00–T98 | Injury, poisoning and certain other consequences of external causes |
| 20 | V01–Y98 | External causes of morbidity and mortality |
| 21 | Z00–Z99 | Factors influencing health status and contact with health services |
| 22 | U00–U99 | Codes for special purposes |
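Before any of the models summarized below can be fitted, each treatment record must be assigned to one of the 22 chapters in Table 1. The sketch below is our illustration (not the authors' code) of one way to map a raw ICD-10 code to its Table 1 chapter; the function name and the example code are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): assign an ICD-10 code to a
# Table 1 chapter by comparing its three-character prefix against each
# chapter's block range. Names here are illustrative assumptions.

# (chapter number, first block code, last block code), as listed in Table 1
ICD10_CHAPTERS = [
    (1, "A00", "B99"), (2, "C00", "D48"), (3, "D50", "D89"),
    (4, "E00", "E90"), (5, "F00", "F99"), (6, "G00", "G99"),
    (7, "H00", "H59"), (8, "H60", "H95"), (9, "I00", "I99"),
    (10, "J00", "J99"), (11, "K00", "K93"), (12, "L00", "L99"),
    (13, "M00", "M99"), (14, "N00", "N99"), (15, "O00", "O99"),
    (16, "P00", "P96"), (17, "Q00", "Q99"), (18, "R00", "R99"),
    (19, "S00", "T98"), (20, "V01", "Y98"), (21, "Z00", "Z99"),
    (22, "U00", "U99"),
]

def icd10_chapter(code: str) -> int:
    """Return the Table 1 chapter for an ICD-10 code such as 'J45.909'."""
    prefix = code.strip().upper()[:3]
    for chapter, lo, hi in ICD10_CHAPTERS:
        # Lexicographic comparison is valid for letter + two-digit prefixes.
        if lo <= prefix <= hi:
            return chapter
    raise ValueError(f"Unrecognized ICD-10 code: {code}")

print(icd10_chapter("J45.909"))  # -> 10 (diseases of the respiratory system)
```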
Table 2. Explanatory variables for the number of patients model (for $N_{tk}$), the number of treatments model (for $M_{1,tki}$, $M_{2,tki}$, $M_{3,tki}$), and the charge amounts model (for $Y_{1,tkij}$, $Y_{2,tkij}$, $Y_{3,tkij}$).

| Variable Name | Description |
| --- | --- |
| ClinicOpen | Indicator variable of whether Department 1 is open |
| WDay | A categorical variable of the weekday (categories: Sun, Mon, Tue, Wed, Thr, Fri, Sat) |
| Month | A categorical variable of the month (categories: Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec) |
| Chapter | A categorical variable of the treatment category (categories: shown in Table 1) |
| Time | Numeric variable corresponding to the current day relative to a reference time point; in our study, the reference time point is the first day for which data are available |
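To make the role of these variables concrete, the following sketch fits a Poisson frequency model for the daily number of patients using the Table 2 covariates. It is a minimal illustration, not the paper's exact specification: the data frame, file name, and column names are assumptions, and the paper models the effect of Time with a smooth function (see Figure 1) rather than the linear term used here.

```python
# Minimal sketch, assuming a daily count data frame with the Table 2 covariates.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("daily_patient_counts.csv")  # hypothetical file

# Categorical effects for weekday, month, and ICD-10 chapter, plus the
# clinic-open indicator; Time enters linearly here for simplicity.
freq_fit = smf.glm(
    "n_patients ~ ClinicOpen + C(WDay) + C(Month) + C(Chapter) + Time",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(freq_fit.summary())
```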
Table 3. Model for number of patients and the treatments in Department 1.

Model for Number of Patients

| | Estimate | Std. Err. | |
| --- | --- | --- | --- |
| (Intercept) | −0.034 | 0.026 | |
| ClinicOpen | 0.017 | 0.050 | |
| Month:2 | 0.131 | 0.019 | *** |
| Month:3 | 0.196 | 0.018 | *** |
| Month:4 | 0.180 | 0.018 | *** |
| Month:5 | 0.079 | 0.019 | *** |
| Month:6 | −0.136 | 0.020 | *** |
| Month:7 | −0.405 | 0.021 | *** |
| Month:8 | −0.666 | 0.023 | *** |
| Month:9 | −0.895 | 0.025 | *** |
| Month:10 | −0.821 | 0.025 | *** |
| Month:11 | −0.550 | 0.024 | *** |
| Month:12 | −0.294 | 0.022 | *** |
| Chapter:2 | 0.824 | 0.024 | *** |
| Chapter:3 | −1.869 | 0.055 | *** |
| Chapter:4 | −0.278 | 0.031 | *** |
| Chapter:5 | 0.234 | 0.027 | *** |
| Chapter:6 | −0.176 | 0.030 | *** |
| Chapter:7 | 0.531 | 0.025 | *** |
| Chapter:8 | 1.501 | 0.022 | *** |
| Chapter:9 | 0.357 | 0.026 | *** |
| Chapter:10 | −1.801 | 0.053 | *** |
| Chapter:11 | −1.217 | 0.042 | *** |
| Chapter:12 | −1.288 | 0.043 | *** |
| Chapter:13 | −0.800 | 0.036 | *** |
| Chapter:14 | 0.191 | 0.027 | *** |
| Chapter:15 | −0.493 | 0.033 | *** |
| Chapter:16 | 0.269 | 0.027 | *** |
| Chapter:17 | −0.494 | 0.033 | *** |
| Chapter:18 | −1.584 | 0.049 | *** |
| Chapter:19 | −2.909 | 0.088 | *** |
| Chapter:20 | 1.174 | 0.023 | *** |
| Chapter:21 | −2.052 | 0.060 | *** |
| Chapter:22 | 0.078 | 0.028 | ** |

Number of Treatments (Department 1)

| | Estimate | Std. Err. | |
| --- | --- | --- | --- |
| (Intercept) | −1.550 | 0.030 | ** |

Probability of Positive Charge (Department 1)

| | Estimate | Std. Err. | |
| --- | --- | --- | --- |
| (Intercept) | 0.952 | 0.004 | *** |

Charge Severity Model (Department 1)

| | Estimate | Std. Err. | |
| --- | --- | --- | --- |
| (Intercept) | 5.902 | 0.115 | *** |
| Chapter:2 | −0.223 | 0.153 | |
| Chapter:3 | 1.168 | 0.491 | * |
| Chapter:4 | 1.309 | 0.216 | *** |
| Chapter:5 | 1.033 | 0.175 | *** |
| Chapter:6 | 0.936 | 0.223 | *** |
| Chapter:7 | 0.648 | 0.166 | *** |
| Chapter:8 | −1.584 | 0.140 | *** |
| Chapter:9 | −0.133 | 0.179 | |
| Chapter:10 | −1.271 | 0.394 | ** |
| Chapter:11 | 0.712 | 0.298 | * |
| Chapter:12 | 0.282 | 0.419 | |
| Chapter:13 | 0.917 | 0.232 | *** |
| Chapter:14 | 0.963 | 0.193 | *** |
| Chapter:15 | −0.246 | 0.229 | |
| Chapter:17 | 0.741 | 0.223 | *** |
| Chapter:18 | 1.197 | 0.419 | ** |
| Chapter:19 | 1.710 | 1.073 | |
| Chapter:20 | 1.399 | 0.145 | *** |
| Chapter:21 | 1.097 | 0.491 | * |
| Chapter:22 | −0.266 | 0.201 | |

Significance: ‘***’: 0.001, ‘**’: 0.01, ‘*’: 0.05, ‘.’: 0.1
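For readers who want to reproduce the structure of Table 3, the sketch below fits the three Department 1 component models on a hypothetical treatment-level data frame: a count model for the number of treatments, a model for the probability of a positive charge, and a severity model for the positive charges. The file name, column names, and the particular distribution and link choices (Poisson counts, logistic positive-charge model, ordinary least squares on log charges) are our assumptions for illustration and may differ from the specification used in the paper.

```python
# Minimal sketch, not the authors' specification, of the Department 1
# component models summarized in Table 3.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

dept1 = pd.read_csv("department1_treatments.csv")  # hypothetical file

# (1) Number of treatments: intercept-only Poisson model, matching the
#     single-coefficient block of Table 3.
m_count = smf.glm("n_treatments ~ 1", data=dept1,
                  family=sm.families.Poisson()).fit()

# (2) Probability that a treatment generates a positive charge.
dept1["positive"] = (dept1["charge"] > 0).astype(int)
m_positive = smf.logit("positive ~ 1", data=dept1).fit()

# (3) Severity of positive charges on the log scale, with ICD-10 chapter
#     effects as in the "Charge Severity Model" block of Table 3.
pos = dept1[dept1["charge"] > 0].copy()
pos["log_charge"] = np.log(pos["charge"])
m_severity = smf.ols("log_charge ~ C(Chapter)", data=pos).fit()

print(m_count.params, m_positive.params, m_severity.params, sep="\n")
```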
Table 4. Model for the treatments at Departments 2 and 3.

Number of Treatments (Department 2)

| | Estimate | Std. Err. | |
| --- | --- | --- | --- |
| (Intercept) | −0.370 | 0.005 | *** |
| ClinicOpen | −0.451 | 0.022 | ** |

Probability of Positive Charge (Department 2)

| | Estimate | Std. Err. | |
| --- | --- | --- | --- |
| (Intercept) | 0.949 | 0.001 | ** |

Charge Severity Model (Department 2)

| | Estimate | Std. Err. | |
| --- | --- | --- | --- |
| (Intercept) | 6.826 | 0.020 | *** |
| Chapter:2 | −1.121 | 0.026 | *** |
| Chapter:3 | 0.199 | 0.067 | ** |
| Chapter:4 | 0.436 | 0.035 | *** |
| Chapter:5 | 0.049 | 0.030 | |
| Chapter:6 | 0.233 | 0.034 | *** |
| Chapter:7 | −0.392 | 0.028 | *** |
| Chapter:8 | −2.479 | 0.023 | *** |
| Chapter:9 | −1.212 | 0.029 | *** |
| Chapter:10 | −1.759 | 0.067 | *** |
| Chapter:11 | −0.236 | 0.050 | *** |
| Chapter:12 | −0.835 | 0.052 | *** |
| Chapter:13 | −0.433 | 0.043 | *** |
| Chapter:14 | 0.004 | 0.031 | |
| Chapter:15 | −1.411 | 0.038 | *** |
| Chapter:16 | −2.991 | 0.030 | *** |
| Chapter:18 | 0.436 | 0.059 | *** |
| Chapter:19 | 0.184 | 0.110 | . |
| Chapter:20 | 0.491 | 0.024 | *** |
| Chapter:21 | −0.014 | 0.071 | |
| Chapter:22 | −1.393 | 0.031 | *** |

Number of Treatments (Department 3)

| | Estimate | Std. Err. | |
| --- | --- | --- | --- |
| (Intercept) | −1.094 | 0.007 | *** |
| ClinicOpen | 0.079 | 0.024 | ** |

Probability of Positive Charge (Department 3)

| | Estimate | Std. Err. | |
| --- | --- | --- | --- |
| (Intercept) | 0.948 | 0.002 | *** |

Charge Severity Model (Department 3)

| | Estimate | Std. Err. | |
| --- | --- | --- | --- |
| (Intercept) | 6.625 | 0.036 | *** |
| Chapter:2 | −0.928 | 0.043 | *** |
| Chapter:3 | 0.456 | 0.101 | *** |
| Chapter:4 | 0.636 | 0.054 | *** |
| Chapter:5 | 0.244 | 0.048 | *** |
| Chapter:6 | 0.374 | 0.053 | *** |
| Chapter:7 | −0.186 | 0.045 | *** |
| Chapter:8 | −2.267 | 0.039 | *** |
| Chapter:9 | −0.978 | 0.047 | *** |
| Chapter:10 | −1.469 | 0.092 | *** |
| Chapter:11 | −0.019 | 0.075 | |
| Chapter:12 | −0.532 | 0.077 | *** |
| Chapter:13 | −0.069 | 0.063 | |
| Chapter:14 | 0.206 | 0.048 | *** |
| Chapter:15 | −1.152 | 0.058 | *** |
| Chapter:16 | −2.827 | 0.048 | *** |
| Chapter:17 | 0.404 | 0.058 | *** |
| Chapter:18 | 0.671 | 0.087 | *** |
| Chapter:19 | 0.373 | 0.157 | * |
| Chapter:20 | 0.682 | 0.041 | *** |
| Chapter:21 | −0.124 | 0.113 | |
| Chapter:22 | −1.150 | 0.050 | *** |

Significance: ‘***’: 0.001, ‘**’: 0.01, ‘*’: 0.05, ‘.’: 0.1
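Fitted component models such as those in Tables 3 and 4 are ultimately combined to predict total expenditures, the operational quantity of interest discussed in the abstract. The sketch below shows one way such a roll-up could be computed under an assumed lognormal severity; the function, its inputs, and the numbers in the usage line are illustrative assumptions, not estimates taken from the tables.

```python
# Minimal sketch, assuming a lognormal severity on the log scale, of how
# frequency and severity components could roll up into an expected charge.
import numpy as np

def expected_daily_charge(n_patients_hat, treatments_per_patient_hat,
                          p_positive_hat, log_mu_hat, log_sigma_hat):
    """E[total] = E[N] * E[M] * P(positive) * E[charge | positive],
    with E[charge | positive] = exp(mu + sigma^2 / 2) for a lognormal."""
    severity_mean = np.exp(log_mu_hat + 0.5 * log_sigma_hat ** 2)
    return (n_patients_hat * treatments_per_patient_hat
            * p_positive_hat * severity_mean)

# Illustrative (made-up) fitted values, not estimates from Tables 3 and 4:
print(expected_daily_charge(40.0, 1.3, 0.95, 6.0, 1.1))
```

Under this kind of decomposition, one such expected value would be computed per department and per day and then summed across the network to obtain a total-cost prediction.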
