Statistical Methods and Machine Learning Techniques for Insurance and Risk Management Data Analytics

A special issue of Risks (ISSN 2227-9091).

Deadline for manuscript submissions: closed (31 December 2018) | Viewed by 8655

Special Issue Editor


Dr. Gareth W. Peters
Guest Editor
Department of Statistical Science, University College London (UCL), London, UK
Interests: quantitative risk management; insurance; computational statistics; machine learning; data analytics; econometrics; computational finance

Special Issue Information

Dear Colleagues,

The risk and insurance disciplines are entering a new era of big data analytics. This takes many forms: the growing desire to analyze individual claims records in large claims portfolios; the analysis of large collections of consumer surveys and consumer preferences for insurance products (consumer analytics); telematics and usage-based insurance; catastrophe modelling; weather monitoring and insurance; and demographic and mortality data for large portfolios of annuities, to name but a few examples. A common theme in all of these cases is the importance of extracting salient features: those features that best summarize vast and complex structured data records in a manner more amenable to the modelling and computational tasks central to machine learning, statistics and data analytics, whether in supervised, unsupervised, reinforcement, or offline and online learning settings. Approaches to capturing such salient representations of the data, classically referred to in statistics as model summaries or sufficient and summary statistics, and now referred to more broadly in machine learning as feature extraction, constitute an important emerging challenge in insurance and risk management.

This Special Issue aims to bring together papers that develop innovative machine learning and computational statistical methods for feature extraction in insurance and risk management settings. Such methods should address the challenges posed by these data records: their sheer size and number; mixed data structures such as video, audio and sensor observations (e.g., LIDAR, weather, seismic, bathymetry and GPS sensors); and time-series, panel, longitudinal, survey, topological, graphical, matrix- and tabular-valued records that must be combined for modelling purposes in modern risk and insurance settings. Furthermore, issues of missing data, noise, outliers and misreporting in such large records must be addressed. The aim of this Special Issue is therefore to bring together theoretical, methodological and practical papers on a common theme: the development of robust, computationally efficient methodologies and computational approaches to feature extraction in risk and insurance settings that tackle the issues just identified. Such feature extraction methods should be developed so that the resulting features are then amenable to statistical modelling tasks such as classification, regression, clustering, dimensionality reduction, semi-supervised learning and reinforcement learning, to name but a few.

The scope of the Special Issue is therefore to propose new methodologies and computational approaches, and to provide statistical analyses of existing approaches from machine learning and computational statistics, explored and justified in the development of feature extraction methodologies for insurance and risk management settings. All papers in this Special Issue must focus the development of such methodologies and approaches on a specific discipline of insurance or risk management, which must first be explained in detail before the methods are developed. Where possible, further motivation of the proposed methods should be provided, either through theoretical justification based on classically adopted approaches and how the new methods relate to, enhance or improve them, or through a detailed and thorough investigation of real data applications. In the latter case, we ask that the authors provide all code and data (in sanitized form) in an open access public repository for the purpose of reproducible science.

Dr. Gareth W. Peters
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Risks is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • feature extraction
  • dimension reduction (Principal Component Analysis, Independent Component Analysis, Sparse Matrices, Functional Data Analysis, Graphical Models)
  • topological data analysis
  • functional data analysis
  • kernel methods

Published Papers (2 papers)


Research

20 pages, 1985 KiB  
Article
Defining Geographical Rating Territories in Auto Insurance Regulation by Spatially Constrained Clustering
by Shengkun Xie
Risks 2019, 7(2), 42; https://doi.org/10.3390/risks7020042 - 17 Apr 2019
Cited by 7 | Viewed by 4015
Abstract
Territory design and analysis using geographical loss cost are a key aspect in auto insurance rate regulation. The major objective of this work is to study the design of geographical rating territories by maximizing the within-group homogeneity, as well as maximizing the among-group heterogeneity from statistical perspectives, while maximizing the actuarial equity of pure premium, as required by insurance regulation. To achieve this goal, the spatially-constrained clustering of industry-level loss cost was investigated. Within this study, in order to meet the contiguity requirement, which is a legal constraint on the design of geographical rating territories, a clustering approach based on Delaunay triangulation is proposed. Furthermore, an entropy-based approach was introduced to quantify the homogeneity of clusters, while both the elbow method and the gap statistic are used to determine the initial number of clusters. This study illustrated the usefulness of the spatially-constrained clustering approach in defining geographical rating territories for insurance rate regulation purposes. The significance of this work is to provide a new solution for better designing geographical rating territories. The proposed method can also be useful for other demographic data analyses with a similar spatial constraint.
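As a rough illustration of the general technique the abstract describes (and not the paper's own implementation), the following minimal Python sketch builds a contiguity graph from a Delaunay triangulation of territory centroids and then runs a connectivity-constrained agglomerative clustering of loss costs. The coordinates, loss costs and cluster count are synthetic placeholders, and scikit-learn's AgglomerativeClustering stands in for the paper's clustering and entropy-based validation steps.

import numpy as np
from scipy.sparse import lil_matrix
from scipy.spatial import Delaunay
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
coords = rng.uniform(size=(200, 2))               # synthetic territory centroids (x, y)
loss_cost = rng.gamma(shape=2.0, size=(200, 1))   # synthetic industry-level loss cost per territory

# Build a contiguity (adjacency) matrix from the Delaunay triangulation:
# two territories are treated as neighbours if they share a triangle edge.
tri = Delaunay(coords)
adjacency = lil_matrix((len(coords), len(coords)), dtype=int)
for simplex in tri.simplices:
    for i in simplex:
        for j in simplex:
            if i != j:
                adjacency[i, j] = 1

# Ward agglomerative clustering of loss cost, constrained so that merges only
# occur between contiguous territories; in practice n_clusters would be chosen
# with the elbow method or the gap statistic, as in the paper.
model = AgglomerativeClustering(n_clusters=8,
                                connectivity=adjacency.tocsr(),
                                linkage="ward")
labels = model.fit_predict(loss_cost)
print(np.bincount(labels))   # number of territories assigned to each rating territory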

47 pages, 969 KiB  
Article
General Quantile Time Series Regressions for Applications in Population Demographics
by Gareth W. Peters
Risks 2018, 6(3), 97; https://doi.org/10.3390/risks6030097 - 13 Sep 2018
Cited by 5 | Viewed by 3935
Abstract
The paper addresses three objectives: the first is a presentation and overview of some important developments in quantile time series approaches relevant to demographic applications; the second is the development of a general framework to represent quantile regression models in a unifying manner, which can further enhance practical extensions and assist in forming connections between existing models for practitioners. In this regard, the core theme of the paper is to provide a general audience with perspectives on the core components that go into the construction of a quantile time series model. The third objective is to compare and discuss the application of the different quantile time series models to several interesting demographic and mortality-related time series data sets. This has relevance to life insurance analysis, and the resulting exploration includes applications to mortality, fertility, birth and morbidity data for several countries, with a more detailed analysis of regional data in England, Wales and Scotland.
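As a rough illustration of the kind of model the abstract surveys (and not the paper's general framework itself), the sketch below fits a simple autoregressive quantile regression at several quantile levels using statsmodels' quantreg; the AR(1) series with heavy-tailed noise is a synthetic stand-in for the mortality and fertility series analysed in the paper.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate an AR(1) series with heavy-tailed (Student-t) innovations as a
# placeholder for a demographic time series such as a mortality rate.
rng = np.random.default_rng(1)
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.standard_t(df=3)

df = pd.DataFrame({"y": y[1:], "y_lag1": y[:-1]})

# Quantile autoregression: the intercept and lag coefficient are allowed to
# differ across quantile levels tau, capturing asymmetric dynamics that a
# conditional-mean (OLS) regression would average out.
for tau in (0.1, 0.5, 0.9):
    fit = smf.quantreg("y ~ y_lag1", df).fit(q=tau)
    print(f"tau={tau:.1f}: intercept={fit.params['Intercept']:.3f}, "
          f"lag coefficient={fit.params['y_lag1']:.3f}")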
