Advanced Statistical and Machine Learning Models in Non-life and Health Insurance

A special issue of Risks (ISSN 2227-9091).

Deadline for manuscript submissions: closed (20 October 2022) | Viewed by 10362

Special Issue Editors


Guest Editor
1. Data Science & Governance Department, DKV Belgium, Rue de Loxum 25, 1000 Bruxelles, Belgium
2. Department of Mathematics, Universiteit Antwerpen, Antwerp, Belgium
3. Louvain School of Statistics, Biostatistics and Actuarial Sciences, Université catholique de Louvain, Louvain-la-Neuve, Belgium
Interests: data science and machine learning; actuarial science; reserving; text analysis; image analysis; measures of the predictive ability; deep learning; Bayesian statistics

Guest Editor
Department of Mathematics, Universiteit Antwerpen, Antwerp, Belgium
Interests: statistical data science; actuarial sciences; fraud and anomaly detection; reserving; high-dimensional analysis; robust statistics; deep learning

Guest Editor
Allianz Benelux, Brussels, Belgium
Interests: reinforcement learning; actuarial sciences (non-life and health); fraud detection; machine learning; text mining; quantitative finance

Guest Editor
Allianz Benelux, Brussels, Belgium
Interests: non-linear optimization; actuarial sciences (non-life and health); fraud detection; telematics; strategy and execution

Guest Editor
Allianz Benelux, Brussels, Belgium
Interests: predictive models; actuarial sciences (non-life and health); data science; quantification of risk and uncertainty; telematics; NLP; fraud detection

Special Issue Information

Dear Colleagues,

In recent years, advanced statistical and machine learning models have been implemented in many fields, allowing researchers and practitioners to extract more information from the data at hand than ever before. In addition, modern advances allow for the extraction of information from previously inaccessible data sources such as text and images.

This Special Issue aims to collect papers from the field of non-life and health insurance in which modern statistical and machine learning models are applied to the data typically available in an insurance company and, where possible and relevant, combined with information extracted from ‘new’ data sources such as text and images.

Dr. Robin Van Oirbeek
Prof. Dr. Tim Verdonck
Dr. Florence Guillaume
Dr. Christopher Grumiau
Dr. Mina Mostoufi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Risks is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • actuarial science
  • non-life & health insurance
  • reserving
  • pricing
  • text analysis/NLP
  • image analysis
  • risk estimation
  • deep learning

Published Papers (4 papers)


Research

17 pages, 656 KiB  
Article
Heat Equation as a Tool for Outliers Mitigation in Run-Off Triangles for Valuing the Technical Provisions in Non-Life Insurance Business
by Jan Barlak, Matus Bakon, Martin Rovnak and Martina Mokrisova
Risks 2022, 10(9), 171; https://doi.org/10.3390/risks10090171 - 28 Aug 2022
Viewed by 1415
Abstract
Estimating outstanding claims reserves in the non-life insurance business is often impaired by outlier-contaminated datasets. Widely used methods to eliminate outliers in non-life development triangles either limit their influence with robust statistical methods or change the development factors. However, the whole estimation process is then adversely affected, so that (i) the total sum of all triangle payments is no longer correct or (ii) the difference between the original triangle and its backward estimation via the bootstrap method becomes unacceptable. In this paper, the properties of the heat equation are examined to obtain an outlier smoothing technique for development triangles. The heat equation in two dimensions is applied to an outlier-contaminated dataset where no individual data are available. As a result, we introduce a new methodology that (i) treats outliers in non-life development triangles, (ii) keeps the total sum of all triangle payments, and (iii) provides acceptable differences between the original and the backward-estimated triangle. Consequently, the outlying values are eliminated, and the resulting development triangle can be used as an input for any claims reserving method without a need for further robustification or change of development factors. Additionally, the research on the application of the heat equation in one dimension presented in this paper enables one to employ the bootstrap method using Pearson’s residuals in cases where the method was originally inapplicable due to development factors being lower than one.
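The key sum-preserving property of heat diffusion can be illustrated with a minimal sketch (this is not the authors' implementation, and it works on a full rectangle rather than a run-off triangle): one explicit finite-difference diffusion step with reflective (no-flux) boundaries only redistributes mass between neighbouring cells, so the grid total is conserved while spikes are damped.

```python
import numpy as np

def diffuse(grid, steps=20, alpha=0.2):
    """Explicit finite-difference heat diffusion with no-flux boundaries.

    Reflective (Neumann) boundaries make each step a pure redistribution
    between neighbouring cells, so the total sum of the grid is conserved --
    the analogue of keeping the total of all triangle payments.
    """
    u = np.asarray(grid, dtype=float).copy()
    for _ in range(steps):
        p = np.pad(u, 1, mode="edge")           # ghost cells = boundary cells
        lap = (p[:-2, 1:-1] + p[2:, 1:-1]       # up + down neighbours
               + p[1:-1, :-2] + p[1:-1, 2:]     # left + right neighbours
               - 4.0 * u)                       # 5-point Laplacian
        u = u + alpha * lap                     # alpha <= 0.25 for stability
    return u

# A payment grid with one planted outlier: diffusion pulls the spike
# towards its neighbours while the overall total stays unchanged.
tri = np.full((6, 6), 100.0)
tri[2, 3] = 1000.0                              # planted outlier
smoothed = diffuse(tri)
print(round(tri.sum(), 6) == round(smoothed.sum(), 6))  # total preserved
print(smoothed[2, 3] < tri[2, 3])                       # spike damped
```

An actual run-off application would additionally mask the unobserved lower triangle; the boundary treatment is what carries the sum-preservation property.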

20 pages, 564 KiB  
Article
Unsupervised Insurance Fraud Prediction Based on Anomaly Detector Ensembles
by Alexander Vosseler
Risks 2022, 10(7), 132; https://doi.org/10.3390/risks10070132 - 21 Jun 2022
Cited by 5 | Viewed by 2983
Abstract
The detection of anomalous data patterns is one of the most prominent machine learning use cases in industrial applications. Unfortunately, ground truth labels are very often unavailable, and it is therefore good practice to combine different unsupervised base learners in the hope of improving the overall predictive quality. One challenge here is to combine base learners that are accurate and diverse at the same time; another is to enable model explainability. In this paper, we present BHAD, a fast unsupervised Bayesian histogram anomaly detector, which scales linearly with the sample size and the number of attributes and is shown to have very competitive accuracy compared to other analyzed anomaly detectors. For the problem of model explainability in unsupervised outlier ensembles, we introduce a generic model explanation approach using a supervised surrogate model. For the problem of ensemble construction, we propose a greedy model selection approach using the mutual information of two score distributions as a similarity measure. Finally, we give a detailed description of a real fraud detection application from the corporate insurance domain using an outlier ensemble, share various feature engineering ideas, and discuss practical challenges.
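BHAD itself is not reproduced here, but the core intuition of a histogram-based anomaly score -- observations falling in rare bins are suspicious -- can be sketched as follows (the function name, bin count, and smoothing constant are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def histogram_anomaly_scores(X, n_bins=10):
    """Naive histogram anomaly score: bin each feature, then score every
    observation by the negative log relative frequency of its bin, summed
    over features. Runs in time linear in sample size and attribute count;
    Laplace smoothing (+1) avoids log(0) for empty bins."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    scores = np.zeros(n)
    for j in range(d):
        counts, edges = np.histogram(X[:, j], bins=n_bins)
        idx = np.digitize(X[:, j], edges[1:-1])     # bin index 0..n_bins-1
        rel = (counts[idx] + 1.0) / (n + n_bins)    # smoothed frequency
        scores -= np.log(rel)
    return scores

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 3))
X[0] = [8.0, -8.0, 8.0]                             # planted outlier
scores = histogram_anomaly_scores(X)
print(int(np.argmax(scores)))                       # index of the planted outlier
```

A Bayesian treatment would put a prior on the bin probabilities instead of the fixed +1 smoothing used in this sketch.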

10 pages, 393 KiB  
Article
Variable Selection Algorithm for a Mixture of Poisson Regression for Handling Overdispersion in Claims Frequency Modeling Using Telematics Car Driving Data
by Jennifer S. K. Chan, S. T. Boris Choy, Udi Makov, Ariel Shamir and Vered Shapovalov
Risks 2022, 10(4), 83; https://doi.org/10.3390/risks10040083 - 12 Apr 2022
Cited by 1 | Viewed by 2847
Abstract
In automobile insurance, it is common to adopt a Poisson regression model to predict the number of claims as part of the actuarial pricing process. The Poisson assumption can rarely be justified, often due to overdispersion, and alternative modeling is often considered, typically zero-inflated models, which are special cases of finite mixture distributions. Finite mixture regression modeling of telematics data is challenging to implement, since the huge number of covariates computationally prohibits the variable selection essential to attaining a model with desirable predictive power and no overfitting. This paper aims at devising an algorithm that can carry out this variable selection in the presence of a large number of covariates. This is achieved by generating sub-samples of the data corresponding to each component of the Poisson mixture, within which variable selection is applied after reinforcing the Poisson assumption by controlling the number of zero claims. The resulting algorithm is assessed by measuring the out-of-sample AUC (Area Under the Curve), a machine learning measure of predictive power. Finally, the application of the algorithm is demonstrated using claim-history and telematics data describing driving behavior. It transpires that, unlike alternative algorithms related to Poisson regression, the proposed algorithm is both implementable and enjoys an improved AUC (0.71). The proposed algorithm allows more accurate pricing in an era where telematics data are used for automobile insurance.
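The component-wise sub-sampling described above presupposes attributing each policy to a mixture component. As background, a minimal EM fit of a two-component, intercept-only Poisson mixture -- a deliberate simplification of the paper's regression setting, with all names assumed -- looks like this; the responsibilities r are what a sub-sampling scheme would use to split the data before per-component variable selection:

```python
import numpy as np
from math import lgamma

def poisson_mixture_em(y, n_iter=100):
    """EM for a two-component Poisson mixture (no covariates).

    E-step: responsibilities r[i, k] proportional to pi[k] * Poisson(y[i] | lam[k]);
    M-step: lam[k] = responsibility-weighted mean of y, pi[k] = mean of r[:, k].
    """
    y = np.asarray(y, dtype=float)
    logfact = np.array([lgamma(v + 1.0) for v in y])
    lam = np.array([0.5, 1.5]) * (y.mean() + 1e-9)   # crude asymmetric init
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: unnormalised log posterior of component membership
        ll = (y[:, None] * np.log(lam[None, :]) - lam[None, :]
              - logfact[:, None] + np.log(pi[None, :]))
        ll -= ll.max(axis=1, keepdims=True)          # stabilise the exp
        r = np.exp(ll)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted means and mixing proportions
        lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
        pi = r.mean(axis=0)
    return lam, pi

rng = np.random.default_rng(1)
y = np.concatenate([rng.poisson(1.0, 1000), rng.poisson(10.0, 1000)])
lam, pi = poisson_mixture_em(y)
print(np.sort(lam).round(1))   # close to [1., 10.]
```

The paper's setting replaces the constant rates lam[k] with per-component Poisson regressions on telematics covariates, which is where variable selection becomes the bottleneck.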

11 pages, 421 KiB  
Article
Approximation of Zero-Inflated Poisson Credibility Premium via Variational Bayes Approach
by Minwoo Kim, Himchan Jeong and Dipak Dey
Risks 2022, 10(3), 54; https://doi.org/10.3390/risks10030054 - 3 Mar 2022
Cited by 2 | Viewed by 2335
Abstract
While both zero-inflation and unobserved heterogeneity in risks are prevalent issues in modeling insurance claim counts, determining the Bayesian credibility premium of claim counts with these features is often demanding due to the high computational costs associated with the use of MCMC. This article explores a way to approximate the credibility premium for claim frequency that follows a zero-inflated Poisson distribution via a variational Bayes approach. Unlike many existing industry benchmarks, the proposed method enables insurance companies to capture both zero-inflation and unobserved heterogeneity of policyholders simultaneously with modest computational costs. A simulation study and an empirical analysis using the LGPIF dataset were conducted, and the proposed method turned out to outperform many industry benchmarks in terms of both predictive performance and computation time. These results support the applicability of the proposed method in posterior ratemaking practice.
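For reference, the zero-inflated Poisson model in question mixes a point mass at zero (probability pi) with a Poisson(lambda) count. Its first two moments, E[Y] = (1-pi)*lambda and Var[Y] = (1-pi)*lambda*(1 + pi*lambda), yield simple method-of-moments estimates -- a hedged sketch for intuition and starting values only, unrelated to the paper's variational scheme:

```python
import numpy as np

def zip_moment_estimates(y):
    """Method-of-moments estimates for a zero-inflated Poisson sample.

    From E[Y] = (1-pi)*lam and Var[Y] = (1-pi)*lam*(1 + pi*lam):
        var/mean      = 1 + pi*lam        (overdispersion ratio)
        lam_hat       = mean + var/mean - 1
        pi_hat        = 1 - mean/lam_hat
    """
    y = np.asarray(y, dtype=float)
    m, v = y.mean(), y.var()
    lam = m + v / m - 1.0
    pi = 1.0 - m / lam
    return pi, lam

# Simulate ZIP data: structural zeros with prob. 0.3, else Poisson(4).
rng = np.random.default_rng(2)
n, pi_true, lam_true = 20000, 0.3, 4.0
structural_zero = rng.random(n) < pi_true
y = np.where(structural_zero, 0, rng.poisson(lam_true, n))
pi_hat, lam_hat = zip_moment_estimates(y)
print(round(pi_hat, 2), round(lam_hat, 2))   # close to 0.3 and 4.0
```

A full credibility treatment would additionally place a prior on the policyholder-specific rate and compute the posterior mean frequency, which is the step the paper approximates variationally.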