Article

Heterogeneous Learning of Functional Clustering Regression and Application to Chinese Air Pollution Data

1 School of Statistics, Huaqiao University, Xiamen 361021, China
2 Department of Economics, Xiamen University, Xiamen 361005, China
3 College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China
* Author to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2023, 20(5), 4155; https://doi.org/10.3390/ijerph20054155
Submission received: 31 December 2022 / Revised: 22 February 2023 / Accepted: 23 February 2023 / Published: 25 February 2023
(This article belongs to the Special Issue The Recent Development of Environmental Management in Asia)

Abstract

Clustering algorithms are widely used to mine the heterogeneity between meteorological observations. However, traditional applications suffer from information loss due to data processing and pay little attention to the interaction between meteorological indicators. In this paper, we combine the ideas of functional data analysis and clustering regression and propose a functional clustering regression heterogeneity learning model (FCR-HL), which respects the data generation process of meteorological data while incorporating the interaction between meteorological indicators into the analysis of meteorological heterogeneity. In addition, we provide an algorithm for FCR-HL that automatically selects the number of clusters and has good statistical properties. In an empirical study of PM2.5 and PM10 concentrations in China, we found that the interaction between PM10 and PM2.5 varies significantly across regions, exhibiting several distinct patterns, which provide meteorologists with new perspectives for further studying the effects between meteorological indicators.

1. Introduction

Environmental pollution is a pressing global issue that has received close attention from governments worldwide. Among air pollutants, particulate matter (PM) poses the greatest risk to human health [1]. The Organization for Economic Cooperation and Development (OECD) estimates that air pollution will be the leading environmental cause of death by 2050. It is therefore important to mine the regional heterogeneity of air pollution and its internal patterns. Meteorological variables such as temperature, humidity, atmospheric pressure, and pollutant concentrations change continuously in the atmosphere at any given location. Unfortunately, the original continuous curves cannot be observed directly. The usual approach is to sample at a given time interval, yielding discrete time-series data, and no matter how densely we sample, some information loss is unavoidable. When mining heterogeneity, the distance between meteorological data from different regions must be calculated. There are generally two types of methods for discrete meteorological data. The first extracts representative statistics from the time series, e.g., the mean and variance [2,3]; the second treats the time series as a point in ordinary high-dimensional Euclidean space. The former incurs further information loss through data processing, while the latter retains the full discrete information but suffers from the "curse of dimensionality" when computing distances between sample observations. Using functional data analysis (FDA) for meteorological data can avoid these problems and brings further advantages in data processing.
First of all, FDA respects the data generation process: it converts discrete data into functional data by interpolation or smoothing [4]. The advantage of this treatment is that it retains as much information as possible about the variation of the sample over the time domain, while also preserving the characteristics of the curve fluctuations. In addition, with functional principal component analysis (FPCA), functional data objects can be projected from the time domain to the frequency domain, providing a frequency-domain perspective for time series analysis. Intuitively, a meteorological time series is decomposed into a combination of functional components, which is consistent with the characteristics of meteorological data. For example, temperature is influenced by diurnal and seasonal variations, and pollutant concentrations are influenced by seasonal variations, traffic peaks, and other cyclical factors. In particular, meteorological data are often a combination of multiple trends over time. After FPCA, we can calculate the distance between sample observations based on a finite number of component scores, which effectively avoids the curse of dimensionality. Therefore, mining the heterogeneity and internal patterns of meteorological data from the FDA perspective has natural advantages.
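To make the FPCA step concrete, the following minimal sketch (our own Python illustration, not code from the paper) discretizes a sample of curves on a grid, eigendecomposes the sample covariance surface, and projects each centered curve onto the leading eigenfunctions to obtain finite-dimensional component scores; the toy curves and all names are hypothetical.

```python
# Minimal FPCA sketch: grid-based eigendecomposition of the covariance surface.
import numpy as np

rng = np.random.default_rng(0)
n, T = 200, 100                      # number of curves, grid points
t = np.linspace(0, 1, T)
dt = t[1] - t[0]

# toy curves: random mix of a diurnal-like and a seasonal-like mode plus noise
X = (rng.normal(size=(n, 1)) * np.sin(2 * np.pi * t)
     + rng.normal(size=(n, 1)) * np.cos(4 * np.pi * t)
     + 0.1 * rng.normal(size=(n, T)))

mu = X.mean(axis=0)                  # estimated mean function u(t)
Xc = X - mu
G = Xc.T @ Xc / n                    # covariance surface G(s, t) on the grid
eigval, eigvec = np.linalg.eigh(G)
order = np.argsort(eigval)[::-1]
phi = eigvec[:, order] / np.sqrt(dt) # eigenfunctions, scaled so that ∫ φ² dt = 1
scores = Xc @ phi * dt               # ξ_ik = ∫ (X_i - u) φ_k dt (Riemann sum)

K = 2
print("leading operator eigenvalues:", (eigval[order][:K] * dt).round(3))
print("score matrix shape:", scores[:, :K].shape)   # finite-dimensional summary
```

Distances between observations can then be computed on the first K score columns instead of on the full discretized curves.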
Heterogeneous learning is achieved through clustering, the typical method for mining heterogeneity. Clustering algorithms are a classical class of unsupervised learning algorithms that group samples based on the similarity of their observations. They have a wide range of applications in mining meteorological data, such as analyzing the spatial and temporal variation of pollutants, monitoring air quality and optimizing monitoring networks, and correlating pollutant concentrations with specific synoptic conditions [4]. The clustering methods commonly used on meteorological data are partitioning clustering [5,6,7], hierarchical clustering [8,9,10], fuzzy clustering [11,12,13], and model-based clustering such as the EM and SOM methods [14,15].
The above applications and models share a common defect: they all cluster directly on the data itself, ignoring the potential structural information between related variables, so the heterogeneity between clusters is insufficiently described. In fact, there are complex correlations between air pollutants. For example, nitrogen oxides are correlated with each other [16], and there are complex correlations between the concentrations of PM10 and PM2.5 [17]. These correlations between meteorological indicators have the potential to improve the clustering. Therefore, following the idea of clustering regression, we want to uncover the potential relationships between different meteorological indicators and use them as auxiliary information to guide the clustering.
Clustering regression was first proposed by Späth [18] and has gained new ideas and vitality in the era of big data. Joki et al. [19] introduced the support vector machine into cluster-wise linear regression (CLR), transformed the problem into an unconstrained non-smooth optimization problem, and designed a method combining an incremental algorithm and a double bundle method with DC optimization; numerical experiments verified the reliability and effectiveness of the method and showed that adding a support vector machine improves the partitioning when outliers are present in the data. Amb et al. [20] designed a CLR algorithm based on the difference-of-convex algorithm (DCA) and an incremental method, solved with a quasi-Newton method, and found on synthetic and real data that the new method effectively solves the CLR problem at large scale. Da Silva and de Carvalho [21] proposed the weighted cluster-wise linear regression (W-CLR) model, which alleviates the overfitting problem of the original model and better describes the linear relationships of subspace samples; experiments on synthetic and benchmark datasets validated its effectiveness. On the application side, Bagirov et al. [22] applied a clustered linear regression (CLR) method to monthly rainfall data from 1889 to 2014 at eight geographic locations in Australia for monthly precipitation forecasting; the results show advantages over multiple linear regression, neural networks, and support vector machines. Torti et al. [23] studied heterogeneity from the CLR perspective on EU mask trade data, selecting the optimal number of clusters and the best combination through a two-step method to obtain an optimal stable solution. However, this line of research remains at the level of linear regression; with increasingly complex data and interrelationships, simple linear regression can hardly describe the potential connections between the data accurately.
This paper combines the advantages of the FDA perspective with clustering regression and proposes a functional clustering regression heterogeneity learning method (FCR-HL). In summary, FCR-HL has three advantages: (1) it clusters from the FDA perspective, which better suits the generation process of meteorological data and greatly reduces the information loss of the original data; (2) it incorporates auxiliary information (i.e., the regression relationship between air pollutants) into the clustering process to optimize the clustering results, and the regression pattern within each group is identified automatically; (3) it adaptively selects the number of clusters, avoiding the limitations of manual setting. Since clustering results directly affect subsequent studies, a model that optimizes the clustering while mining the heterogeneity patterns of different groups has practical significance and application value.

2. Methods

The FCR-HL model addresses four problems: (1) clustering optimization: partitioning the data into clusters from the regression perspective incorporates more information and attains a better clustering effect, and solving this optimization problem is the key point; (2) parameter estimation: the regression parameters that explain the impact of the covariate on the response variable must be estimated within each cluster; (3) estimation of the number of clusters, which decides how many clusters are needed; (4) the iterative algorithm: since the partition and the parameter estimates are difficult to solve simultaneously, our model uses an iterative process to solve the three problems above. The solutions to these four problems are explained in turn below.

2.1. Clustering Optimization and Parameter Estimation

Ramsay and Dalzell [24] proposed functional data analysis, which fits data with non-parametric ideas and can effectively capture the continuous characteristics of the data. Within functional data analysis, functional regression is an effective and convenient tool. This paper focuses on one typical functional regression model, in which the covariate is functional and the response variable is scalar:
$$Y_i = \alpha_0 + \int_0^T X_i(t)\,\alpha_1(t)\,dt + e_i,\qquad i = 1, 2, \dots, n,\quad t \in [0, T] \tag{1}$$
where the response variable $Y_i$ is a scalar with vector form $Y = (Y_1, Y_2, \dots, Y_n)^T$, $n$ is the number of observations, and the covariate $X_i(t)$, $t \in [0, T]$, is the $i$th functional trajectory with bounded upper limit $T$. Assuming $e_i \sim \mathrm{iid}\ N(0, \sigma^2)$, the Karhunen–Loève expansion of the functional covariate gives Equation (2):
$$X_i(t) = u(t) + \sum_{k=1}^{\infty} \xi_{ik}\,\varphi_k(t) \tag{2}$$
where $u(t) = E[X(t)]$ is the mean function of the covariate and $\varphi_k(t)$ is the eigenfunction corresponding to the $k$th largest eigenvalue $\lambda_k$ of the covariance $G(s,t) = \mathrm{Cov}(X(s), X(t))$. The eigenfunctions are orthogonal to each other and satisfy $\int \varphi_k^2(t)\,dt = 1$ and $\int \varphi_k(t)\varphi_l(t)\,dt = 0$ for $k \neq l$. Functional principal component analysis (FPCA) yields the $\xi_{ik}$, called the functional principal component scores of $X_i(t) - u(t)$ in the direction of $\varphi_k(t)$, which satisfy $E[\xi_{ik}] = 0$ and $\mathrm{Var}[\xi_{ik}] = \lambda_k$. Substituting Formula (2) into Formula (1), Formula (1) can be rewritten as:
$$Y_i = \alpha_0 + \int \Big[u(t) + \sum_{k=1}^{\infty} \xi_{ik}\varphi_k(t)\Big]\alpha_1(t)\,dt + e_i \approx \beta_0 + \sum_{k=1}^{K} \beta_k \xi_{ik} + e_i \tag{3}$$
where $\beta_0 = \alpha_0 + \int u(t)\alpha_1(t)\,dt$ and $\beta_k = \int \varphi_k(t)\alpha_1(t)\,dt$; the mean function $u(t)$ of $X_i(t)$ is mapped into the constant parameter $\beta_0$, and each $\varphi_k(t)$ is mapped into the parameter $\beta_k$. In other words, $\beta_0$ contains the mean of $Y_i$ when $X_i(t) = 0$ together with the information on the mean trend of $X_i(t)$, and $\beta_k$ stands for the effect of the $k$th mode of deviation of $X_i(t)$ on $Y_i$. In this way, the auxiliary information between the covariate and the response variable is reflected in the parameters $\beta_k$, and this paper builds the FCR-HL model on this auxiliary information to cluster the data.
In Equation (3), the summation is truncated at $K$, which is determined by the AIC criterion of Li et al. [25]: the optimal $K$ is the one minimizing the sum of the pseudo-Gaussian negative log-likelihood and $K$. Writing $Y = (Y_1, Y_2, \dots, Y_n)^T$, $\xi_i = (1, \xi_{i1}, \xi_{i2}, \dots, \xi_{iK})^T$, $\xi = (\xi_1, \xi_2, \dots, \xi_n)^T$, $\beta = (\beta_0, \beta_1, \beta_2, \dots, \beta_K)^T$, and $e = (e_1, e_2, \dots, e_n)^T$, we rewrite Formula (3) in matrix form:
$$Y = \xi\beta + e \tag{4}$$
The advantage of FPCA is that the infinite-dimensional functional data are converted into low-dimensional data, on which a linear regression model in the principal component scores can be built. On the one hand, this reduces the computational difficulty and algorithmic complexity caused by the curse of dimensionality; on the other hand, it preserves the nonlinear characteristics of the covariate, which are then used in the regression analysis. Moreover, the principal component scores estimated by FPCA have good statistical properties, in particular unbiasedness and consistency, which support the inference for the parameter estimates discussed later.
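The following sketch illustrates, on assumed toy data, how the truncated score regression of Formula (4) can be fit and how a pseudo-AIC of the kind described above (our reading: Gaussian negative log-likelihood plus $K$) selects the truncation level; `xi` stands in for the estimated FPCA scores.

```python
# Truncated score regression with pseudo-AIC selection of K (illustrative).
import numpy as np

rng = np.random.default_rng(1)
n, K_max = 200, 6
xi = rng.normal(size=(n, K_max))               # stand-in for estimated FPCA scores
beta_true = np.array([2.0, -1.0, 0.5, 0.0, 0.0, 0.0])   # only 3 active scores
y = 1.0 + xi @ beta_true + 0.3 * rng.normal(size=n)

best = None
for K in range(1, K_max + 1):
    Z = np.column_stack([np.ones(n), xi[:, :K]])         # design (1, ξ_1, ..., ξ_K)
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    sigma2 = np.mean((y - Z @ b) ** 2)
    # pseudo-Gaussian negative log-likelihood plus K (assumed criterion form)
    crit = 0.5 * n * (np.log(2 * np.pi * sigma2) + 1) + K
    if best is None or crit < best[0]:
        best = (crit, K)
print("selected truncation K =", best[1])
```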
The goal of clustering optimization in this paper is to cluster the data from the perspective of the regression hyperplane. The FCR-HL model iterates two steps: first, obtain the parameter estimates under a given partition; second, re-cluster the samples based on those estimates. This two-step iteration finds the optimal regression clustering result.
First, given a partition, the parameters are estimated from the regression-hyperplane perspective on the partitioned data. Compared with a random partition, this takes the relationship between the covariate and the response variable as auxiliary information for clustering, and the parameters can be estimated more accurately once the partition reflects the heterogeneity in the data. Samples in the same partition are assumed to satisfy:
$$y_{i_m} = \xi_{i_m}^{T}\beta_m + e_{i_m},\qquad e_{i_m} \sim N(0, \sigma_m^2),\quad i_m \in C_m,\quad m = 1, 2, \dots, M \tag{5}$$
where $C = \{C_1, C_2, \dots, C_M\}$ denotes the sub-populations with $\sum_{m=1}^{M} |C_m| = n$, $|C_m|$ is the sample size of cluster $C_m$, and $M$ is the number of clusters, which may grow with the sample size; $y_{i_m}$ is the observed response belonging to cluster $C_m$, $\xi_{i_m}$ is the score vector derived from the observed functional covariate $x_i(t)$ belonging to cluster $C_m$, and $\beta_m = (\beta_{m0}, \beta_{m1}, \beta_{m2}, \dots, \beta_{mK})^T$ are the coefficients of cluster $C_m$.
In (5), the unknown functional principal component scores must be solved first; only then can the parameters $\beta_m$ be estimated. Note that the estimates of the scores directly affect the parameter estimates, so we use the PACE (principal component analysis through conditional expectation) method proposed by Yao et al. [26], which is an unbiased and consistent estimation method for the functional principal component scores. PACE gives the estimators $\hat{\xi}_{ik} = \hat{E}(\xi_{ik} \mid X_i)$, where $X_i = (X(t_{i1}), X(t_{i2}), \dots, X(t_{in_i}))$ and $\xi_{ik}$ and $X(t_{ij})$ are jointly Gaussian. Estimating the scores and the mean function in Formula (2) by PACE, the score estimates $\hat{\xi}$ and the mean function estimate $\hat{u}(t)$ have the following convergence properties:
$$\sup_{t \in \mathcal{T}} \big|\hat{u}(t) - u(t)\big| = O_p\!\left(\frac{1}{\sqrt{n}\,h_u}\right) \tag{6}$$
$$\lim_{n \to \infty} \hat{\xi} = E[\xi \mid X] \quad \text{in probability} \tag{7}$$
where $\hat{u}(t)$ is obtained by a local linear smoother and $h_u$ is the bandwidth used in that smoother. Formulas (6) and (7) show that $\hat{u}(t)$ converges to $u(t)$ and that $\hat{\xi}$ is unbiased for $\xi$ as $n \to \infty$, which are the good statistical properties mentioned above. Thus, $\xi$ can be replaced by the estimates $\hat{\xi}$, giving the new regression model in Formula (8):
$$y_{i_m} = \hat{\xi}_{i_m}^{T}\beta_m + e_{i_m},\qquad e_{i_m} \sim N(0, \sigma_{i_m}^2),\quad i \in C_m,\quad m = 1, 2, \dots, M \tag{8}$$
Based on Formula (8), the log-likelihood function is given in Formula (9):
$$\log L_n\big(M, C, (\beta_1, \sigma_1^2), \dots, (\beta_M, \sigma_M^2)\big) = -\frac{1}{2}\sum_{m=1}^{M}\sum_{i \in C_m}\left(\log 2\pi + \log \sigma_{i_m}^2 + \frac{(y_{i_m} - \hat{\xi}_{i_m}^{T}\beta_m)^2}{\sigma_{i_m}^2}\right) \tag{9}$$
It is difficult to obtain the optimal partition and the parameter estimates in (9) simply by maximizing $\log L_n(M, C, (\beta_1, \sigma_1^2), \dots, (\beta_M, \sigma_M^2))$, so an iterative method is proposed. First, fixing $\beta_m$ at $\hat{\beta}_m$ and $\sigma_{i_m}^2$ at $\hat{\sigma}_m^2$, the clustering objective is to assign each observation $(y_i, x_i(t))$ to the cluster that maximizes the log-likelihood:
$$\hat{C}_m = \underset{1 \le m \le M}{\arg\max}\ \log L_n\big(M, C_m, (\hat{\beta}_m, \hat{\sigma}_m^2)\big) = \underset{1 \le m \le M}{\arg\max}\left\{-\frac{1}{2}\left(\log 2\pi + \log \hat{\sigma}_m^2 + \frac{(y_i - \hat{\xi}_{i_m}^{T}\hat{\beta}_m)^2}{\hat{\sigma}_m^2}\right)\right\} = \underset{1 \le m \le M}{\arg\min}\left\{\log \hat{\sigma}_m^2 + \frac{(y_{i_m} - \hat{\xi}_{i_m}^{T}\hat{\beta}_m)^2}{\hat{\sigma}_m^2}\right\} \tag{10}$$
To solve Formula (10), the parameter estimates $(\hat{\beta}_m, \hat{\sigma}_m^2)$ need to be obtained. The idea is to maximize the log-likelihood of the data within each cluster; Formula (11) is the log-likelihood of the data $i \in C_m$:
$$\log L_n\big(M, C_m, (\beta_m, \sigma_m^2)\big) = -\sum_{i \in C_m}\frac{1}{2}\left(\log 2\pi + \log \sigma_m^2 + \frac{(y_i - \hat{\xi}_{i_m}^{T}\beta_m)^2}{\sigma_m^2}\right) \tag{11}$$
Then, the parameters $\hat{\beta}_m$ are obtained by maximum likelihood estimation:
$$\hat{\beta}_m = \underset{\beta_m}{\arg\max}\left\{-\sum_{i \in C_m}\frac{1}{2}\left(\log 2\pi + \log \sigma_{i_m}^2 + \frac{(y_{i_m} - \hat{\xi}_{i_m}^{T}\beta_m)^2}{\sigma_{i_m}^2}\right)\right\} = \big(\hat{\xi}_{i_m}^{T}\hat{\xi}_{i_m}\big)^{-1}\big(\hat{\xi}_{i_m}^{T} y_{i_m}\big) \tag{12}$$
$$\hat{\sigma}_m^2 = \frac{\sum_{i \in C_m}\big(y_i - \hat{\xi}_{i_m}^{T}\hat{\beta}_m\big)^2}{\hat{n}_m} \tag{13}$$
$$\hat{n}_m = |\hat{C}_m| \tag{14}$$
where $\hat{n}_m$ is the sample size of cluster $\hat{C}_m$ and $\hat{\sigma}_m^2 = \hat{\sigma}_{i_m}^2$ for simplicity. Substituting the estimates $(\hat{\beta}_m, \hat{\sigma}_m^2, \hat{C}_m)$ into Formula (9) gives the log-likelihood of the complete data:
$$\log L_n\big(M, C, (\hat{\beta}_1, \hat{\sigma}_1^2), \dots, (\hat{\beta}_M, \hat{\sigma}_M^2)\big) = -\frac{1}{2}\sum_{m=1}^{M}\sum_{i \in C_m}\left(\log 2\pi + \log \hat{\sigma}_m^2 + \frac{(y_{i_m} - \hat{\xi}_{i_m}^{T}\hat{\beta}_m)^2}{\hat{\sigma}_m^2}\right) \tag{15}$$
When the partition $C_m$ is fixed, $\hat{\beta}_m$ and $\hat{\sigma}_m^2$ are the maximum likelihood estimators of the within-cluster regression, as shown in (12) and (13); when $\hat{\beta}_m$ and $\hat{\sigma}_m^2$ are fixed, the likelihood is maximized by assigning each observation to the cluster $C_m$ given by (10). Since the log-likelihood increases monotonically across these alternating steps, a local maximum is reached after a finite number of iterations. Furthermore, the parameter estimates derived from this optimization have good statistical properties. The principal component scores obtained by FPCA are projections of the data onto the principal component directions, and $\hat{\xi}$ are unbiased estimates of $\xi$; thus $\hat{\xi}$ and $\xi$ can be treated as non-random with respect to the response variable $Y$, and maximum likelihood estimation yields estimates with good statistical properties, for example, unbiasedness:
$$E(\hat{\beta}_m \mid X) = E\big(E(\hat{\beta}_m \mid \hat{\xi}) \mid X\big) = \beta_m \tag{16}$$
$$\mathrm{Var}(\hat{\beta}_m \mid \hat{\xi}) = E(\hat{\beta}_m^2 \mid \hat{\xi}) - \big(E(\hat{\beta}_m \mid \hat{\xi})\big)^2 = \sigma^2\big(\hat{\xi}^{T}\hat{\xi}\big)^{-1} \tag{17}$$
The variance of $\hat{\beta}_m$ can be used to test the significance of the parameters: only if the variance of $\hat{\beta}_m$ is estimated correctly are the significance results for the parameter estimates reliable.
From Formulas (6), (7), (16), and (17), $\hat{\beta}_m$ converges to $\beta_m$ in probability. Therefore, when the data are clustered from the regression-hyperplane perspective, the estimated optimal number of clusters converges to the true number with probability 1.
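A minimal sketch of the two-step iteration under the i.i.d. normal error assumption: given a partition, per-cluster coefficients and variances are computed as in Formulas (12) and (13); given the fits, each observation is reassigned by the rule in Formula (10). The toy two-hyperplane data and all function names are our own illustration, not the authors' code.

```python
# Alternating estimation (Formulas (12)-(13)) and assignment (Formula (10)).
import numpy as np

def fit_clusters(y, Z, labels, M):
    """Per-cluster OLS coefficients and error variance."""
    params = []
    for m in range(M):
        idx = labels == m
        if idx.sum() <= Z.shape[1]:          # degenerate cluster guard
            params.append((np.zeros(Z.shape[1]), np.inf))
            continue
        b, *_ = np.linalg.lstsq(Z[idx], y[idx], rcond=None)
        r = y[idx] - Z[idx] @ b
        params.append((b, r @ r / idx.sum()))
    return params

def assign(y, Z, params):
    """Per-point loss log(sigma_m^2) + residual^2 / sigma_m^2, minimized over m."""
    loss = np.stack([np.log(s2) + (y - Z @ b) ** 2 / s2 for b, s2 in params])
    return loss.argmin(axis=0)

rng = np.random.default_rng(2)
n = 300
Z = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
truth = rng.integers(0, 2, size=n)           # two true regression hyperplanes
y = np.where(truth == 0, Z @ np.array([0.0, 3.0, -1.0]),
             Z @ np.array([5.0, -2.0, 2.0])) + 0.2 * rng.normal(size=n)

labels = rng.integers(0, 2, size=n)          # random initial partition
for _ in range(50):                          # alternate until labels stabilize
    new = assign(y, Z, fit_clusters(y, Z, labels, M=2))
    if (new == labels).all():
        break
    labels = new
print("recovered cluster sizes:", np.bincount(labels))
```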
It should also be noted that the maximum likelihood estimates above rest on the classical assumption that the error term in Formula (8) is independently and identically normally distributed; once this assumption is violated, the maximum likelihood results become problematic. For data violating the i.i.d. normal error assumption, we therefore give a robust estimation (M-estimation) scheme, a generalized maximum likelihood method. A special case of M-estimation is based on the Huber distribution, which is normal near the origin and exponential in the tails. The parameter estimates can be obtained accordingly:
$$\hat{\beta}_m = \underset{\beta_m}{\arg\min}\left\{\sum_{i \in C_m}\rho_c\big(y_{i_m} - \hat{\xi}_{i_m}^{T}\beta_m\big)\right\} \tag{18}$$
$$\rho_c(t) = \begin{cases} \dfrac{1}{2}t^2, & |t| < c \\[4pt] c|t| - \dfrac{1}{2}c^2, & |t| \ge c \end{cases} \tag{19}$$
where $\rho_c(t)$ is the loss function of the Huber distribution and $c$ is a fixed constant. Given the parameter estimates $\hat{\beta}_m$, the clustering objective for a sample observation $(y_{i_m}, x_{i_m}(t))$ is:
$$\hat{C}_m = \underset{1 \le m \le M}{\arg\min}\left\{\rho_c\big(y_{i_m} - \hat{\xi}_{i_m}^{T}\hat{\beta}_m\big)\right\} \tag{20}$$
The function $\rho_c(t)$ is also strictly increasing in $|t|$.
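A small sketch of the Huber criterion in Formulas (18) and (19), minimized numerically; the tuning constant $c = 1.345$ is a conventional choice used for illustration only, not a value from the paper.

```python
# Huber M-estimation sketch: robust regression fit via direct minimization.
import numpy as np
from scipy.optimize import minimize

def rho_c(t, c=1.345):
    """Huber loss (Formula (19)): quadratic for |t| < c, linear beyond."""
    a = np.abs(t)
    return np.where(a < c, 0.5 * t ** 2, c * a - 0.5 * c ** 2)

def m_estimate(y, Z, c=1.345):
    """Formula (18): minimize the summed Huber loss of the residuals."""
    b0, *_ = np.linalg.lstsq(Z, y, rcond=None)       # OLS warm start
    res = minimize(lambda b: rho_c(y - Z @ b, c).sum(), b0, method="Nelder-Mead")
    return res.x

rng = np.random.default_rng(3)
n = 100
Z = np.column_stack([np.ones(n), rng.normal(size=n)])
y = Z @ np.array([1.0, 2.0]) + 0.2 * rng.normal(size=n)
y[:5] += 15.0                                        # gross outliers
print("OLS   estimate:", np.linalg.lstsq(Z, y, rcond=None)[0].round(3))
print("Huber estimate:", m_estimate(y, Z).round(3))  # barely moved by outliers
```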
The partition above is under the condition of a given number of clusters, so the next step is to give an estimation method for the number of clusters.

2.2. Estimation of the Optimal Number of Clusters

After estimating the model parameters and optimizing the clustering scheme, we turn to estimating the optimal number of clusters. In this paper, an information criterion is used as the clustering loss function within the iterative algorithm, so the identification of heterogeneity and the optimal number of clusters are updated simultaneously.
After FPCA, the sample data are $\{(y_1, \hat{\xi}_1), (y_2, \hat{\xi}_2), \dots, (y_n, \hat{\xi}_n)\}$. As in the previous analysis, the sample is assumed to consist of $M$ sub-populations, each characterized by the regression hyperplane determined by its parameters.
Denote the partition $C = \{C_m, m = 1, 2, \dots, M\}$ with $C_m = \{m_1, m_2, \dots, m_{n_m}\}$; the regression model of each sub-population is:
$$Y_{C_m} = \hat{\xi}_{C_m}\beta_m + e_{C_m} \tag{21}$$
$$e_{C_m} \sim N\big(0, \sigma^2 I_{n_m}\big) \tag{22}$$
$$n_m = |C_m| \tag{23}$$
where $n_m$ is the sample size of cluster $C_m$ with $n = \sum_{m=1}^{M} n_m$; $Y_{C_m} = (y_{m_1}, y_{m_2}, \dots, y_{m_{n_m}})^T$ and $\hat{\xi}_{C_m} = (\hat{\xi}_{m_1}, \hat{\xi}_{m_2}, \dots, \hat{\xi}_{m_{n_m}})^T$ are the responses and principal component scores belonging to $C_m$, respectively, with $\hat{\xi}_{m_j} = (\hat{\xi}_{m_j 1}, \hat{\xi}_{m_j 2}, \dots, \hat{\xi}_{m_j K})^T$ for $j = 1, 2, \dots, n_m$, and $I_{n_m}$ is the $n_m \times n_m$ identity matrix. Note that $\hat{\xi}_{C_m}$ is an $n_m \times K$ matrix and both $Y_{C_m}$ and $e_{C_m}$ are $n_m \times 1$ vectors. The number of clusters is estimated with the information criterion based on maximum likelihood estimation proposed by Shao and Wu [27], denoted LS-C:
$$D_n(\hat{C}_M) = \min_{C_M}\sum_{m=1}^{M}\big\|Y_{C_m} - \hat{\xi}_{C_m}\hat{\beta}_m\big\|^2 + q(M)A_n \tag{24}$$
where $\hat{\beta}_m$ is the maximum likelihood estimate, $q(M)$ is a strictly increasing function of $M$ (generally $q(M) = MK$), and $A_n \propto \log(n)$ or $A_n \propto \log\log(n)$. The first part is the residual sum of squares and the second is a penalty depending on $M$ and $n$. Shao and Wu [27] proved that the estimate minimizing $D_n(\hat{C}_M)$ converges to the correct number of regression hyperplanes (i.e., the number of clusters) with probability 1 as the sample size grows ($n \to \infty$). Since LS-C is based on maximum likelihood estimation, a robust counterpart is again needed when the errors are not i.i.d. normal. Rao et al. [28] constructed the robust information criterion, denoted RM-C:
$$R_n(\hat{C}_M) = \min_{C_M}\sum_{m=1}^{M}\sum_{i \in C_m}\rho_c\big(y_{i_m} - \hat{\xi}_{i_m}^{T}\hat{\beta}_m\big) + q(M)A_n \tag{25}$$
where $\hat{\beta}_m$ is now obtained by M-estimation, and $q(M)$ and $A_n$ are the same as in Formula (24). Minimizing LS-C when the error distribution is i.i.d. normal, or RM-C when it is not, yields both the number of clusters and the partition. The advantage of these criteria is that the estimated number of clusters converges to the true number as the sample size grows; details can be found in Shao and Wu [27] and Rao, Wu and Shao [28].
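A sketch of how LS-C in Formula (24) can be evaluated for a candidate partition; `fit_partition` in the usage comment is hypothetical shorthand for one run of the alternating algorithm at a fixed $M$.

```python
# LS-C evaluation sketch: within-cluster RSS plus the penalty q(M) * A_n.
import numpy as np

def ls_c(y, Z, labels, M, A_n):
    """Formula (24) with q(M) = M * K, K being the number of regressors."""
    K = Z.shape[1]
    rss = 0.0
    for m in range(M):
        idx = labels == m
        b, *_ = np.linalg.lstsq(Z[idx], y[idx], rcond=None)
        r = y[idx] - Z[idx] @ b
        rss += r @ r
    return rss + M * K * A_n

# usage sketch (hypothetical glue): refit at each candidate M, keep the minimum
# A_n = np.log(len(y))
# best_M = min(range(1, M_max + 1),
#              key=lambda M: ls_c(y, Z, fit_partition(y, Z, M), M, A_n))
```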

2.3. Iterative Algorithm Design

In the FCR-HL model proposed in this paper, the parameter estimates, the clustering optimization, and the estimate of the number of clusters are all updated continuously during the iterations; this section describes the iterative algorithm.
First, the within-cluster residual sum of squares based on maximum likelihood estimation is denoted RSS (residual sum of squares), and the residual sum based on M-estimation is denoted RRSS (robust residual sum of squares):
$$RSS(C_M, \beta_1, \beta_2, \dots, \beta_M) = \sum_{m=1}^{M}\big\|Y_{C_m} - \hat{\xi}_{C_m}\hat{\beta}_m\big\|^2 \tag{26}$$
$$RRSS(C_M, \beta_1, \beta_2, \dots, \beta_M) = \sum_{m=1}^{M}\rho_c\big(Y_{C_m} - \hat{\xi}_{C_m}\hat{\beta}_m\big) \tag{27}$$
Then, for each candidate $M$, the within-cluster RSS (for least squares regression) or RRSS (for M-estimation-based regression) is computed to approximate the local minimum, and the optimal number of clusters is determined by the LS-C or RM-C criterion, respectively.
In addition, regression-based clustering is easily affected by the initial partition: the global minimum of the information criterion, or a good approximation of it, is attainable only from a good initial partition. It is therefore necessary to determine an initial partition $C_0$. Based on the idea proposed in Qian et al. [29], we extend it to handle functional data. Algorithm 1 gives the iterative initial partition procedure.
Algorithm 1 An iterative algorithm for initial partition.
Step 1: Apply FPCA to $X(t)$ to estimate the functional principal component scores $\hat{\xi}$.
Step 2: Mapping the mean function $u(t)$ and the basis functions $\varphi(t)$ into the parameters $\beta$, build a functional regression model with the functional principal component scores as covariates.
Step 3: Estimate the parameters on the whole dataset by maximum likelihood estimation or robust estimation.
Step 4:
(1) Set a distance threshold $d$ and a sample size constant $c$.
(2) For $l = 1$, calculate the distance of each point to the regression hyperplane obtained in Step 3. If the distance is less than the threshold $d$, assign the point to $C_{0,1}$; otherwise assign it to $C_{0,1}^c$. Continue if $|C_{0,1}| > c$ and $|C_{0,1}^c| > c$; otherwise go to Step 5.
(3) For $l = l + 1$, re-estimate the parameters on the remaining points $\bigcap_{i=1}^{l} C_{0,i}^c$ and calculate the new distances. If a point's distance is less than $d$, assign it to $C_{0,l+1}$, otherwise to $C_{0,l+1}^c$. Continue if $|C_{0,l+1}| > c$ and $|C_{0,l+1}^c| > c$; otherwise go to Step 5.
Step 5: Obtain the initial partition $C_0 = \{C_{0,1}, \dots, C_{0,l}, \bigcap_{i=1}^{l} C_{0,i}^c\}$.
Note that the constants $c$ and $d$ are set according to the data. The initial partition is an iterative, hierarchical binary clustering procedure that fits a regression model (e.g., least squares) at each iteration. Because the regression is robust with a high breakdown point, Algorithm 1 is highly likely to produce a reasonable initial partition. After the initial partition, the iterations of the FCR-HL model proceed as in Algorithm 2:
Algorithm 2 The Partition iteration algorithm based on the initial partition.
Step 1: Let $s = 1$; compute $RSS_0$ or $RRSS_0$ of the initial partition $C_0$ and the parameter estimates $\hat{\beta}$.
Step 2: Let $s = s + 1$; for an observation $(y_i, \hat{\xi}_i)$ currently in $C_{0,i}$, compute the $RSS$ or $RRSS$ obtained by moving it to each other cluster $C_{0,j}$, $j \neq i$, and take the minimum $RSS_{min}$ or $RRSS_{min}$. If $RSS_{min} < RSS_0$ or $RRSS_{min} < RRSS_0$, update the partition as $C_j = C_j + \{(y_i, x_i(t))\}$, $C_i = C_i - \{(y_i, x_i(t))\}$, and set $RSS_0 = RSS_{min}$ or $RRSS_0 = RRSS_{min}$.
Step 3: Repeat Step 2 until $RSS_0$ or $RRSS_0$ no longer decreases.
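A minimal sketch of our reading of Algorithm 2: single-observation moves between clusters are accepted whenever they lower the total within-cluster RSS, and the sweeps stop when no move helps. Refitting every cluster after each trial move is deliberately naive; it keeps the sketch short at the cost of speed.

```python
# Greedy single-point refinement of a regression-based partition (illustrative).
import numpy as np

def total_rss(y, Z, labels, M):
    """Formula (26): sum of within-cluster residual sums of squares."""
    rss = 0.0
    for m in range(M):
        idx = labels == m
        if idx.sum() <= Z.shape[1]:          # degenerate cluster: skip the fit
            continue
        b, *_ = np.linalg.lstsq(Z[idx], y[idx], rcond=None)
        r = y[idx] - Z[idx] @ b
        rss += r @ r
    return rss

def greedy_refine(y, Z, labels, M, max_sweeps=20):
    """Algorithm 2, Steps 2-3: accept single-point moves that lower the RSS."""
    labels = labels.copy()
    best = total_rss(y, Z, labels, M)
    for _ in range(max_sweeps):
        improved = False
        for i in range(len(y)):
            current = labels[i]
            for m in range(M):
                if m == current:
                    continue
                labels[i] = m                # trial move of observation i
                rss = total_rss(y, Z, labels, M)
                if rss < best:               # keep the improving move
                    best, current, improved = rss, m, True
                else:
                    labels[i] = current      # revert
        if not improved:                     # RSS no longer drops: stop (Step 3)
            break
    return labels
```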
In summary, the parameter estimates $\{\hat{\beta}_m, \hat{\sigma}_m^2, \hat{M}\}$ and the partition $\hat{C}_M = \{\hat{C}_1, \hat{C}_2, \dots, \hat{C}_M\}$ are updated throughout the iterations, yielding the final regression clustering result.
In the simulation and empirical analyses, the K-means method is used as a comparison model, as it is a representative clustering method that uses only the distances between the observations themselves. Our model instead emphasizes the auxiliary information between the response and the covariate and clusters the data from the regression perspective to mine the heterogeneity.

3. Results

3.1. Data Simulation

3.1.1. Model Comparison Based on Heterogeneity Partitioning

We simulate data from three groups satisfying $Y_{ij} = \alpha_0^j + \int X_{ij}(t)\alpha_1^j(t)\,dt + \varepsilon_i$, $i = 1, 2, \dots, 500$; $j = 1, 2, 3$. Each group contains 500 units, and $t$ is uniformly designed on $[0, 1]$. The functional covariate is generated by $X_{ij}(t) = \sum_{k=1}^{6}\xi_{ik}\varphi_k(t_{ij})$, where the $\xi_{ik} \sim N(0, 1)$ are mutually independent over $k$ and the $\varphi_k(t)$ are cubic spline basis functions. The coefficient functions of the three groups are $\alpha_1^1(t) = \varphi_1(t) - \varphi_2(t)$, $\alpha_1^2(t) = 10\varphi_1(t) + 7\varphi_2(t)$, and $\alpha_1^3(t) = 4\varphi_1(t)$, where $\varphi_1(t) = 2\sin(2\pi t)$ and $\varphi_2(t) = 2\cos(4\pi t)$.
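A sketch reproducing this simulation design under stated assumptions: only the two trigonometric modes named in the text are used to generate the curves, the intercepts $\alpha_0^j$ are set to zero, and the noise standard deviation (0.5), which the paper does not specify, is our own choice.

```python
# Data-generation sketch for the three-group functional regression simulation.
import numpy as np

rng = np.random.default_rng(4)
n_per, T = 500, 100
t = np.linspace(0, 1, T)
dt = t[1] - t[0]
phi1 = 2 * np.sin(2 * np.pi * t)
phi2 = 2 * np.cos(4 * np.pi * t)

# group-specific coefficient functions alpha_1^j(t) from the text
alphas = [phi1 - phi2, 10 * phi1 + 7 * phi2, 4 * phi1]

X_parts, y_parts, groups = [], [], []
for j, a1 in enumerate(alphas):
    xi = rng.normal(size=(n_per, 2))            # scores on the two active modes
    X = xi[:, [0]] * phi1 + xi[:, [1]] * phi2   # functional covariate X_ij(t)
    y = (X * a1).sum(axis=1) * dt + 0.5 * rng.normal(size=n_per)  # assumed noise
    X_parts.append(X)
    y_parts.append(y)
    groups += [j] * n_per

X_all = np.vstack(X_parts)                      # (1500, 100) discretized curves
y_all = np.concatenate(y_parts)
print(X_all.shape, y_all.shape)
```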
After obtaining the functional principal component scores "Score1" and "Score2" of $X_j(t)$, which are treated as the new explanatory variables in the regression clustering model, our model reduces the regression analysis from infinite dimensions. Since the errors are i.i.d., we use the LS-C information criterion to select the number of clusters, with $q(M) = MK$ and $A_n = c\log(n)$, where $c = 2$. Applying the FCR-HL model proposed in this paper, the information criterion behaves as shown in Figure 1:
Figure 1 shows that LS-C reaches its minimum at $K = 3$ (the scale of the vertical axis is so large that the LS-C values at $K = 3$ and $K = 2$ appear close in the figure, but they are not), which is the estimated optimal number of clusters and agrees with the true number. At $K = 1$, LS-C is largest, meaning that leaving the data unpartitioned performs poorly; this encourages more attention to clustering regression. To show the superiority of the FCR-HL model, we compare its confusion matrix with that of K-means, a popular and widely used clustering method, in Table 1 and Table 2.
Each row of a confusion matrix represents a predicted cluster, and each column a true cluster. Table 1 shows that 440, 477, and 490 samples were correctly clustered into their respective groups. The confusion matrix of K-means in Table 2 indicates that K-means partitions the first group well but cannot effectively separate the second and third groups. By considering the relationship between the response and the functional explanatory variables, our model captures how the regression relationships change between clusters, rather than the distances between the observations themselves. The confusion matrices show the improvement obtained when this auxiliary information is added to the partitioning process. Next, we examine the heterogeneity between the clusters.

3.1.2. Heterogeneous Hidden Information Mining

Traditional clustering methods, such as K-means, provide only the partition, while our model also provides the regression information of each cluster. Moreover, when regression analysis is carried out within the different clusters, the heterogeneity can be mined; if regression is carried out on the unpartitioned data, the heterogeneity is ignored and the regression results may be inexact or even wrong. It is therefore necessary to identify the heterogeneity of the data with the FCR-HL model.
First, the regression results of the data without partition are shown in Table 3.
Table 3 shows that if the data are not clustered, that is, if all the data are assumed to come from one population, the estimated parameters are all significant at the 0.05 level, while the $R^2$ measuring goodness of fit is only 0.3371, which is relatively small. This again shows that the heterogeneity of the data should be identified first and regression then performed within clusters, making the results more reliable.
Applying the FCR-HL model to the simulated data with $d = 0.02$ (adjusted adaptively), we obtain the regression results of the three clusters, shown in Table 4.
The parameters of all three groups are significant at the 0.05 level, and the $R^2$ values of the three groups are 0.9158, 0.9994, and 0.9981, respectively, indicating much better fits. The FCR-HL model thus improves the fitting performance and obtains reliable parameter estimates.
Compared with the K-means method, our model exploits the relationship between the response variable and the functional explanatory variable, incorporating this auxiliary information into the clustering process to improve clustering accuracy. In addition, the FCR-HL model can update the parameter estimates by updating the principal component scores and the number of clusters when new data enter the sample.

3.2. Climate Data

We use the classic Canadian weather data, which contain the annual temperature and rainfall records of 35 stations; annual rainfall is taken as the response variable and temperature as the functional explanatory variable to study the influence of temperature on rainfall. Figure 2 shows the temperature at each site:
Figure 2 shows that, although the temperature at each site follows a similar trend, the sites differ in the magnitude and timing of the changes. If we disregard these characteristics and run a regression on the pooled data directly, the findings may contradict the truth. Moreover, the relationship between temperature and rainfall is important for deciding which cluster each site belongs to. We use the FCR-HL model to partition the data and explain the impact of temperature on rainfall. Because the number of sites is small, maximum likelihood estimation is no longer suitable and the robust estimation algorithm is used instead; accordingly, RM-C is calculated to obtain the optimal number $K$ and the clustering results shown in Figure 3.
The left panel of Figure 3 shows the RM-C values at different $K$ obtained by the iterative algorithm; RM-C clearly reaches its minimum at $K = 2$, so the optimal number of clusters is 2. The sites belonging to the two clusters are listed in Table A3, which shows that the logarithmic annual rainfall of the sites in cluster 1 is consistently higher than that of the sites in cluster 2. This clustering result is convincing.
To illustrate the usefulness of the FCR-HL model more clearly, the parameter estimates for the two clusters given by the FCR-HL model are compared with those from a direct regression on the pooled data in the right panel of Figure 3. The parameter beta in the figure explains the impact of temperature on annual rainfall over time. Although the yellow curve, obtained by regression on the pooled data, is smoother than the black and blue curves, which correspond to cluster 1 and cluster 2 under the FCR-HL model, the latter two curves better reflect the considerable fluctuations in the data. That is, the parameter estimates after clustering highlight the distinct characteristics of the data from different sub-populations. More specifically, the parameter estimates for cluster 1 are larger than those for cluster 2 when $t \le 50$ and $t \ge 230$, indicating that in these periods the influence of temperature on annual rainfall is greater for the sites of cluster 1 than for those of cluster 2, while the estimates for cluster 1 are smaller than those for cluster 2 when $50 \le t \le 230$, indicating the opposite in that period. In addition, the black and blue curves show considerably different degrees of volatility, indicating that the influence of temperature on annual rainfall varies both between groups and over time.
In addition, Table 5 reports the parameter estimates of the regression on the pooled data, with cluster 1 and cluster 2 standing for the regressions on the partitioned data. The $R^2$ values of the pooled data and the two clusters are 0.7567, 0.5692, and 0.6868, respectively. Note that there are only thirty-five sites, and the sample sizes of the two clusters are not large enough to ensure the unbiasedness and consistency of the score estimates $\hat{\xi}$ and the regression estimates $\hat{\beta}$; under this condition, the $R^2$ of the clustered data is not improved. However, Table A3 shows that the partition given by the FCR-HL model is reasonable. Although the $R^2$ performance on the Canadian weather data is not ideal, we still suggest adopting the proposed model for clustering and heterogeneity learning.
The analyses of the simulated data and the climate data show that the FCR-HL model improves the accuracy of clustering and can detect the heterogeneity information.

3.3. China Air Pollution Data

As typical pollutants in the atmosphere, inhalable particulate matter such as PM10 and PM2.5 poses risks to human health [1,30] and has received much attention from researchers. In research on urban air quality, the concentrations of PM10 and PM2.5 are significantly correlated with each other, and this correlation varies across seasons and regions. This paper uses the FCR-HL model to study the heterogeneity and characteristics of air quality in different regions of China. We first obtain national air quality data from the China Meteorological Data Network (https://www.resdc.cn/data.aspx?DATAID=289, accessed on 15 February 2023) and then clean the data, obtaining hourly PM10 and PM2.5 concentration data for 1602 stations across the country from 1 January 2019 to 31 December 2019. The annual average PM10 concentration of each station is used as the response variable, and the daily average PM2.5 concentration as the functional explanatory variable. Figure 4 shows the functional representation of the discrete PM2.5 concentration data:
Figure 4 shows that the PM2.5 concentrations at some stations exhibit the same pattern of change over time, while other stations show different patterns in the same period. The regression coefficients would be inexact if we ran a single regression on all the data, as different groups of stations have different relationships between PM10 and PM2.5. This indicates that the stations need to be partitioned first, and the auxiliary information is beneficial for the clustering.
Using the FCR-HL model proposed in this paper and considering the auxiliary information between PM10 and PM2.5, we first perform functional principal component analysis on PM2.5, and build a functional regression model:
$$y_i = \hat{\xi}_i^{T}\beta + e_i \tag{28}$$
where $y_i$ is the PM10 concentration of station $i$ and $\hat{\xi}_i$ is the vector of estimated PM2.5 scores for station $i$.
To explain the advantages and necessity of the model more clearly, Figure 5 first shows the coefficients over time of the functional regression without partitioning, and Figure 6 then shows the coefficients over time of the functional regression within each cluster.
The coefficients in Figure 5 and Figure 6 show how the impact of PM2.5 on PM10 changes over time, without partitioning and under the partition, respectively. Each coefficient indicates how the annual average PM10 concentration changes with the daily PM2.5 concentration on that day, and its sign indicates whether the correlation is positive or negative. In Figure 6, the iterative clustering algorithm divides the stations into 11 groups, as the optimal number of clusters is estimated to be 11, and the heterogeneity partition for the relationship between PM10 and PM2.5 is also shown.
As can be seen, the regression coefficients in Figure 5 are much smoother over time; most are small positive values, suggesting that the influence of PM2.5 on PM10 is almost always positive and small for all stations. However, this estimated impact may be inexact because it ignores the heterogeneity between stations discussed above. By contrast, the coefficients from the FCR-HL model are more convincing. In Figure 6, the regression coefficients of the 11 groups have entirely different characteristics. First, they take both positive and negative values, unlike those in Figure 5. Second, the coefficients of all 11 clusters vary more steeply than those in Figure 5, and they differ from one cluster to another. For example, the coefficient of cluster 1 rises and then falls, with a local maximum and a local minimum between day 0 and day 100, meaning that the impact of PM2.5 on PM10 first increases and then decreases, while the coefficient of cluster 5 only decreases over days 0 to 100, with a single local minimum, meaning that the impact keeps decreasing. As another example, the coefficients of cluster 1 and cluster 9 follow a similar trend from day 0 to day 100 but differ in value, especially at the local maximum and minimum. All these differences among the 11 clusters demonstrate the existence of heterogeneity and the importance of the heterogeneity partition. Additionally, the coefficients of each cluster show how the impact varies over time: for the stations of cluster 1, the impact of PM2.5 on PM10 is negative at the beginning, then positive until close to day 100, negative until close to day 230, positive until day 300, and negative at the end of the year. By analogy, how the impact of PM2.5 on PM10 varies over time, and its size at any fixed time, can be read off for each of the 11 clusters.
In addition, based on (28), the parameter estimates for the pooled and grouped data are reported in Table A1 and Table A2 (Appendix A), respectively. Table A1 shows that the functional principal component scores 5 and 6 are not significant in the pooled regression, whose $R^2$ is 0.7484. In Table A2, for the 11 groups identified by the FCR-HL model, the functional principal component scores are almost all significant, except score3 in the 4th group and score6 in the 7th group, and the $R^2$ of every group is above 0.9. Note that the functional explanatory variable in the FCR-HL model is expanded by the Karhunen–Loève theorem, so the functional principal component scores carry the essential information of the explanatory variable. The insignificance of some scores in Table A1, which effectively discards important parameters, therefore bears witness to the inaccuracy of regression on pooled data that contains heterogeneity. By contrast, almost all the parameters estimated by the FCR-HL model are significant, meaning that almost all of the essential auxiliary information is used in the clustering. The improvement in $R^2$ across all groups also testifies to the efficiency of the FCR-HL model. These comparisons show that the FCR-HL model can effectively partition heterogeneous data and mine its internal information.
The K-means method is also used to compare the performance of the $R^2$ with that of the FCR-HL model. The comparison is shown in Table 6.
Table 6 makes clear that the FCR-HL model significantly improves the goodness of fit of the regression of PM10 on PM2.5. Considering the auxiliary information improves the clustering because the key distance in the clustering is taken between the data and the regression hyperplane rather than between the data points themselves. The results of the FCR-HL model thus help us better understand how PM2.5 concentrations affect PM10 concentrations.
In summary, the PM2.5 and PM10 data in the empirical analysis are clustered into 11 groups by our model. On the one hand, the impact of PM2.5 on PM10 varies over time and between groups, showing clear heterogeneity. On the other hand, the significance of the parameters in Table A1 and Table A2 demonstrates the importance of partitioning for avoiding the loss of essential information. Within each cluster, the impact of PM2.5 on PM10 fluctuates up and down, meaning that the PM10 concentration moves with the PM2.5 concentration; moreover, the coefficients in all clusters show that the impact is not always positive.

4. Discussion

This paper constructs a heterogeneity learning model from the perspective of data clustering, which solves the clustering problem and simultaneously provides implicit structural information about the heterogeneity. Combining the regression model with the clustering algorithm not only brings more effective information into the clustering and improves its accuracy, but also allows a more precise analysis of the relationship between the explanatory and explained variables in the different clusters (also called sub-populations). In addition, because the complexity of real data prevents classical regression models from capturing its continuous characteristics, functional data analysis techniques are needed to incorporate these continuous characteristics and improve regression clustering. Based on functional data analysis, this paper uses the principal component scores to reconstruct a new regression function; the iterative algorithm and information criterion then yield the number of clusters and the parameter estimates simultaneously. The statistical advantages of the FCR-HL model are: first, every parameter estimate is consistent; second, the iterative algorithm delivers the clustering results at the same time, and they can be updated as new samples are added. The advantage in application is that the model detects the heterogeneity in the data and explains how the covariate impacts the response.
Both the data simulation and the empirical results illustrate the effectiveness of the new model and its broad application prospects. Using simulated data, case data, and empirical data, the results of the FCR-HL model are compared with those of the well-known K-means method. The comparisons show that our model partitions the data better, since regression clustering uses the auxiliary information to explain how the regression differs across clusters, whereas K-means focuses only on how the distances among the data behave across clusters.
In summary, data in environmental research and public health, such as climate data, exhibit heterogeneity, and the FCR-HL model proves to be a powerful tool for it. Two future directions are noted. (1) We will analyze other types of air pollution data and public health data to make our model more systematic and comprehensive. (2) Auxiliary information plays an important role in our model, and data are inextricably linked in complex social networks; therefore, mining more useful information, such as network and text information, and feeding it into the model will further improve heterogeneity learning.

5. Conclusions

In this paper, we proposed the FCR-HL model to handle air pollution data. First, the starting point of the article is heterogeneity learning in data, extended to functional data. Second, we introduced the model design, the parameter estimation, and the iterative algorithm. To verify the validity of our model, both the well-known K-means method and our model were applied to the simulated and climate data, and our model performed better. Finally, an empirical analysis of air pollution data was conducted; the results show that the impact of PM2.5 on PM10 varies between clusters and over time. In sum, the FCR-HL model captures the continuity of the data itself and incorporates auxiliary information to deliver multiple pieces of information, including the number of sub-populations and how PM2.5 impacts PM10 over time. The model may therefore provide useful information for policymaking departments and a new perspective for research.

Author Contributions

Conceptualization, T.W.; Methodology, T.W., L.Q. and Z.W.; Formal analysis, C.D., Z.W. and C.G.; Writing—original draft, T.W., L.Q., C.D. and Z.W.; Writing—review & editing, L.Q.; Funding acquisition, T.W. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by Huaqiao University’s Academic Project Supported by the Fundamental Research Funds for the Central Universities (21SKGC-QT02).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The regression results of the pooled air pollution data.

| Term | Estimate | Std. Error | t Value | Pr(>\|t\|) | Sig. |
|---|---|---|---|---|---|
| (Intercept) | 63.6477 | 0.4041 | 157.519 | <2 × 10−16 | *** |
| score1 | 0.0781 | 0.0012 | 67.745 | <2 × 10−16 | *** |
| score2 | −0.0446 | 0.0046 | −9.764 | <2 × 10−16 | *** |
| score3 | 0.0479 | 0.0067 | 7.201 | 9.15 × 10−13 | *** |
| score4 | 1.2418 | 0.5128 | 2.422 | 0.0156 | * |
| score5 | −0.8467 | 0.5523 | −1.533 | 0.1255 | |
| score6 | 1.9677 | 2.2566 | 0.872 | 0.3834 | |

Significance: '***', '*', and ' ' denote significance at the 0.001, 0.05, and 1 levels, respectively.
Table A2. Regression results of the clustered air pollution data given by the FCR-HL model.

| Cluster | Term | Estimate | Std. Error | t Value | Pr(>\|t\|) | Sig. |
|---|---|---|---|---|---|---|
| cluster1 | (Intercept) | 6.45 × 10 | 1.76 × 10−1 | 365.78 | <2 × 10−16 | *** |
| | score1 | 8.81 × 10−2 | 5.88 × 10−4 | 149.88 | <2 × 10−16 | *** |
| | score2 | −7.03 × 10−2 | 2.64 × 10−3 | −26.59 | <2 × 10−16 | *** |
| | score3 | 5.82 × 10−2 | 3.06 × 10−3 | 19 | <2 × 10−16 | *** |
| | score4 | 6.59 | 2.34 × 10−1 | 28.22 | <2 × 10−16 | *** |
| | score5 | −4.3 | 2.17 × 10−1 | −19.8 | <2 × 10−16 | *** |
| | score6 | −1.44 × 10 | 1.11 | −12.96 | <2 × 10−16 | *** |
| cluster2 | (Intercept) | 55.0751 | 0.1337 | 411.858 | <2 × 10−16 | *** |
| | score1 | 0.0789 | 0.0005 | 160.485 | <2 × 10−16 | *** |
| | score2 | −0.05 | 0.0019 | −26.169 | <2 × 10−16 | *** |
| | score3 | 0.0058 | 0.0023 | 2.573 | 0.011 | * |
| | score4 | 3.3271 | 0.1821 | 18.266 | <2 × 10−16 | *** |
| | score5 | −3.9563 | 0.2132 | −18.555 | <2 × 10−16 | *** |
| | score6 | 2.1775 | 0.7535 | 2.89 | 0.0044 | ** |
| cluster3 | (Intercept) | 60.4412 | 0.2281 | 264.932 | <2 × 10−16 | *** |
| | score1 | 0.0974 | 0.0008 | 123.178 | <2 × 10−16 | *** |
| | score2 | −0.0468 | 0.0032 | −14.712 | <2 × 10−16 | *** |
| | score3 | −0.161 | 0.0032 | −50.531 | <2 × 10−16 | *** |
| | score4 | −6.1258 | 0.2505 | −24.454 | <2 × 10−16 | *** |
| | score5 | −7.8931 | 0.2726 | −28.958 | <2 × 10−16 | *** |
| | score6 | 6.3206 | 1.1952 | 5.288 | 5.50 × 10−7 | *** |
| cluster4 | (Intercept) | 104.0814 | 2.0545 | 50.661 | <2 × 10−16 | *** |
| | score1 | 0.0315 | 0.0053 | 5.981 | 1.30 × 10−6 | *** |
| | score2 | −0.259 | 0.0148 | −17.474 | <2 × 10−16 | *** |
| | score3 | 0.0167 | 0.0291 | 0.574 | 0.57 | |
| | score4 | 20.1699 | 1.6918 | 11.922 | 4.11 × 10−13 | *** |
| | score5 | 7.6024 | 1.6622 | 4.574 | 7.26 × 10−5 | *** |
| | score6 | 52.44 | 6.3217 | 8.295 | 2.27 × 10−9 | *** |
| cluster5 | (Intercept) | 6.12 × 10 | 1.62 × 10−1 | 376.952 | <2 × 10−16 | *** |
| | score1 | 3.28 × 10−2 | 7.63 × 10−4 | 42.908 | <2 × 10−16 | *** |
| | score2 | 3.97 × 10−2 | 2.02 × 10−3 | 19.622 | <2 × 10−16 | *** |
| | score3 | −2.03 × 10−2 | 2.40 × 10−3 | −8.432 | 2.08 × 10−14 | *** |
| | score4 | 9.82 × 10−1 | 2.37 × 10−1 | 4.151 | 5.41 × 10−5 | *** |
| | score5 | 9.92 | 2.71 × 10−1 | 36.566 | <2 × 10−16 | *** |
| | score6 | −2.99 × 10 | 9.93 × 10−1 | −30.08 | <2 × 10−16 | *** |
| cluster6 | (Intercept) | 6.55 × 10 | 1.58 × 10−1 | 415.781 | <2 × 10−16 | *** |
| | score1 | 6.05 × 10−2 | 6.30 × 10−4 | 96.018 | <2 × 10−16 | *** |
| | score2 | −9.49 × 10−2 | 2.13 × 10−3 | −44.605 | <2 × 10−16 | *** |
| | score3 | 1.09 × 10−2 | 2.10 × 10−3 | 5.194 | 6.6 × 10−7 | *** |
| | score4 | −7.36 | 1.80 × 10−1 | −40.973 | <2 × 10−16 | *** |
| | score5 | 1.49 × 10 | 2.27 × 10−1 | 65.489 | <2 × 10−16 | *** |
| | score6 | −5.20 × 10 | 9.65 × 10−1 | −53.847 | <2 × 10−16 | *** |
| cluster7 | (Intercept) | 74.7728 | 0.2767 | 270.233 | <2 × 10−16 | *** |
| | score1 | 0.0388 | 0.001 | 39.286 | <2 × 10−16 | *** |
| | score2 | 0.1413 | 0.0031 | 45.309 | <2 × 10−16 | *** |
| | score3 | −0.0197 | 0.0042 | −4.719 | 7.42 × 10−6 | *** |
| | score4 | 10.5498 | 0.2731 | 38.627 | <2 × 10−16 | *** |
| | score5 | 9.7765 | 0.3227 | 30.291 | <2 × 10−16 | *** |
| | score6 | 0.5924 | 1.7418 | 0.34 | 0.734 | |
| cluster8 | (Intercept) | 77.3713 | 0.3616 | 213.98 | <2 × 10−16 | *** |
| | score1 | −0.0449 | 0.0015 | −30.61 | <2 × 10−16 | *** |
| | score2 | 0.3551 | 0.0048 | 74.57 | <2 × 10−16 | *** |
| | score3 | −0.1436 | 0.0059 | −24.25 | <2 × 10−16 | *** |
| | score4 | −7.0449 | 0.48 | −14.68 | <2 × 10−16 | *** |
| | score5 | 29.8419 | 0.5953 | 50.13 | <2 × 10−16 | *** |
| | score6 | −70.2609 | 2.4218 | −29.01 | <2 × 10−16 | *** |
| cluster9 | (Intercept) | 6.29 × 10 | 1.69 × 10−1 | 372.187 | <2 × 10−16 | *** |
| | score1 | 5.10 × 10−2 | 6.71 × 10−4 | 75.997 | <2 × 10−16 | *** |
| | score2 | −2.14 × 10−2 | 2.41 × 10−3 | −8.901 | 2.45 × 10−15 | *** |
| | score3 | 9.14 × 10−2 | 2.57 × 10−3 | 35.527 | <2 × 10−16 | *** |
| | score4 | −6.33 | 2.25 × 10−1 | −28.095 | <2 × 10−16 | *** |
| | score5 | 5.51 | 2.26 × 10−1 | 24.375 | <2 × 10−16 | *** |
| | score6 | −5.27 × 10 | 1.03 | −51.265 | <2 × 10−16 | *** |
| cluster10 | (Intercept) | 6.10 × 10 | 1.15 × 10−1 | 530.873 | <2 × 10−16 | *** |
| | score1 | 7.02 × 10−2 | 3.82 × 10−4 | 183.837 | <2 × 10−16 | *** |
| | score2 | 1.00 × 10−2 | 1.53 × 10−3 | 6.573 | 4.07 × 10−10 | *** |
| | score3 | −3.93 × 10−2 | 2.21 × 10−3 | −17.808 | <2 × 10−16 | *** |
| | score4 | −5.85 × 10−1 | 1.59 × 10−1 | −3.684 | 0.0003 | *** |
| | score5 | 9.67 × 10−1 | 1.61 × 10−1 | 6.013 | 8.35 × 10−9 | *** |
| | score6 | −4.00 × 10 | 7.40 × 10−1 | −54.035 | <2 × 10−16 | *** |
| cluster11 | (Intercept) | 7.46 × 10 | 1.81 × 10−1 | 411.65 | <2 × 10−16 | *** |
| | score1 | 9.24 × 10−2 | 4.03 × 10−4 | 229.33 | <2 × 10−16 | *** |
| | score2 | −9.89 × 10−2 | 2.14 × 10−3 | −46.2 | <2 × 10−16 | *** |
| | score3 | 1.28 × 10−1 | 3.33 × 10−3 | 38.49 | <2 × 10−16 | *** |
| | score4 | −1.28 × 10 | 2.44 × 10−1 | −52.26 | <2 × 10−16 | *** |
| | score5 | 8.20 | 2.68 × 10−1 | 30.61 | <2 × 10−16 | *** |
| | score6 | 4.40 × 10 | 9.94 × 10−1 | 44.22 | <2 × 10−16 | *** |

Significance: '***', '**', '*', and ' ' denote significance at the 0.001, 0.01, 0.05, and 1 levels, respectively.
Table A3. Two clusters for the Canadian weather case by using the FCR-HL model.
Table A3. Two clusters for the Canadian weather case by using the FCR-HL model.
Cluster | Sites
cluster1 | St. Johns(3.17), Halifax(3.16), Sydney(3.17), Yarmouth(3.10), Charlottvl(3.08), Fredericton(3.05), Scheffervll(2.90), Arvida(2.95), Bagottville(2.97), Quebec(3.08), Sherbrooke(3.05), Montreal(2.97), Ottawa(2.96), Toronto(2.89), London(2.98), Thunderbay(2.85), Vancouver(3.06), Victoria(2.93), Pr. George(2.78), Pr. Rupert(3.41)
cluster2 | Winnipeg(2.71), The Pas(2.65), Churchill(2.61), Regina(2.57), Pr. Albert(2.61), Uranium Cty(2.56), Edmonton(2.67), Calgary(2.60), Kamloops(2.43), Whitehorse(2.43), Dawson(2.52), Yellowknife(2.43), Iqaluit(2.62), Inuvik(2.42), Resolute(2.16)
The values in the brackets are the logarithmic annual rainfall of the sites.
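Table A3's two clusters come from the Canadian weather case, in which each site's temperature curve is first reduced to a few functional principal component (FPC) scores that then enter the clustering regression as covariates (cf. Table 5). Below is a minimal sketch of that score-extraction step, assuming the curves are already sampled on a common grid; it uses plain discretized PCA in place of the paper's basis-expansion route, and all data here are synthetic:

```python
import numpy as np

def fpc_scores(curves: np.ndarray, n_components: int = 4) -> np.ndarray:
    """Discretized FPCA sketch: rows are curves on a common time grid."""
    centered = curves - curves.mean(axis=0)          # subtract mean function
    cov = centered.T @ centered / len(curves)        # pointwise covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    components = eigvecs[:, ::-1][:, :n_components]  # leading eigenfunctions
    return centered @ components                     # FPC scores per curve

# Hypothetical example: 35 sites x 365 daily temperatures.
rng = np.random.default_rng(1)
temps = rng.normal(size=(35, 365))
scores = fpc_scores(temps, n_components=4)           # regressors score1..score4
```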

References

1. Ho, H.C.; Wong, M.S.; Yang, L.; Shi, W.; Yang, J.; Bilal, M.; Chan, T.-C. Spatiotemporal influence of temperature, air quality, and urban environment on cause-specific mortality during hazy days. Environ. Int. 2018, 112, 10–22.
2. Adhikari, A.; Yin, J. Short-term effects of ambient ozone, PM2.5, and meteorological factors on COVID-19 confirmed cases and deaths in Queens, New York. Int. J. Environ. Res. Public Health 2020, 17, 4047.
3. Wibawa, B.S.S.; Maharani, A.T.; Andhikaputra, G.; Putri, M.S.A.; Iswara, A.P.; Sapkota, A.; Sharma, A.; Syafei, A.D.; Wang, Y.-C. Effects of Ambient Temperature, Relative Humidity, and Precipitation on Diarrhea Incidence in Surabaya. Int. J. Environ. Res. Public Health 2023, 20, 2313.
4. Ramsay, J.O.; Silverman, B.W. Functional Data Analysis; Springer: New York, NY, USA, 2005.
5. Falkena, S.K.; de Wiljes, J.; Weisheimer, A.; Shepherd, T.G. Detection of interannual ensemble forecast signals over the North Atlantic and Europe using atmospheric circulation regimes. Q. J. R. Meteorol. Soc. 2022, 148, 434–453.
6. Wu, X.; Ding, Y.; Zhou, S.; Tan, Y. Temporal characteristic and source analysis of PM2.5 in the most polluted city agglomeration of China. Atmos. Pollut. Res. 2018, 9, 1221–1230.
7. Zhan, C.-C.; Xie, M.; Fang, D.-X.; Wang, T.-J.; Wu, Z.; Lu, H.; Li, M.-M.; Chen, P.-L.; Zhuang, B.-L.; Li, S.; et al. Synoptic weather patterns and their impacts on regional particle pollution in the city cluster of the Sichuan Basin, China. Atmos. Environ. 2019, 208, 34–47.
8. Dechpichai, P.; Jinapang, N.; Yamphli, P.; Polamnuay, S.; Injan, S.; Humphries, U. Multivariable Panel Data Cluster Analysis of Meteorological Stations in Thailand for ENSO Phenomenon. Math. Comput. Appl. 2022, 27, 37.
9. Qiao, Z.; Wu, F.; Xu, X.; Yang, J.; Liu, L. Mechanism of spatiotemporal air quality response to meteorological parameters: A national-scale analysis in China. Sustainability 2019, 11, 3957.
10. Tshehla, C.; Djolov, G. Source profiling, source apportionment and cluster transport analysis to identify the sources of PM and the origin of air masses to an industrialised rural area in Limpopo. Clean Air J. 2018, 28, 54–66.
11. Gutiérrez-Álvarez, I.; Aroba, J.; Martin, J.; Adame, J.; Bolivar, J. Use of a fuzzy qualitative model to reanalyze radon relationship with atmospheric variables in a coastal area near a NORM repository. Environ. Technol. Innov. 2022, 28, 102619.
12. Jinpeng, W.; Yang, Z.; Xin, G.; Xin, Z. A hybrid predicting model for the daily photovoltaic output based on fuzzy clustering of meteorological data and joint algorithm of GAPS and RBF neural network. IEEE Access 2022, 10, 30005–30017.
13. Song, X.; Wang, K.; Zhou, L.; Chen, Y.; Ren, K.; Wang, J.; Zhang, C. Multi-factor mining and corrosion rate prediction model construction of carbon steel under dynamic atmospheric corrosion environment. Eng. Fail. Anal. 2022, 134, 105987.
14. Chen, X.; Yang, J. Urban climate monitoring network design: Existing issues and a cluster-based solution. Build. Environ. 2022, 214, 108959.
15. Zhang, S.; Chen, Y.; Luo, Y.; Liu, B.; Ren, G.; Zhou, T.; Martinez-Villalobos, C.; Chang, M. Revealing the circulation pattern most conducive to precipitation extremes in Henan Province of North China. Geophys. Res. Lett. 2022, 49, e2022GL098034.
16. Crutzen, P.J. The influence of nitrogen oxides on atmospheric ozone content. In Paul J. Crutzen: A Pioneer on Atmospheric Chemistry and Climate Change in the Anthropocene; Springer: Cham, Switzerland, 2016; pp. 108–116.
17. Franceschi, F.; Cobo, M.; Figueredo, M. Discovering relationships and forecasting PM10 and PM2.5 concentrations in Bogotá, Colombia, using artificial neural networks, principal component analysis, and k-means clustering. Atmos. Pollut. Res. 2018, 9, 912–922.
18. Späth, H. Algorithmus 39. Klassenweise lineare Regression. Computing 1979, 22, 367–373.
19. Joki, K.; Bagirov, A.M.; Karmitsa, N.; Mäkelä, M.M.; Taheri, S. Clusterwise support vector linear regression. Eur. J. Oper. Res. 2020, 287, 19–35.
20. Bagirov, A.M.; Taheri, S.; Cimen, E. Incremental DC optimization algorithm for large-scale clusterwise linear regression. J. Comput. Appl. Math. 2020, 389, 113323.
21. da Silva, R.A.; de Carvalho, F.D.A. Weighted Clusterwise Linear Regression based on adaptive quadratic form distance. Expert Syst. Appl. 2021, 185, 115609.
22. Bagirov, A.M.; Mahmood, A.; Barton, A. Prediction of monthly rainfall in Victoria, Australia: Clusterwise linear regression approach. Atmos. Res. 2017, 188, 20–29.
23. Torti, F.; Riani, M.; Morelli, G. Semiautomatic robust regression clustering of international trade data. Stat. Methods Appl. 2021, 30, 863–894.
24. Ramsay, J.O.; Dalzell, C. Some tools for functional data analysis. J. R. Stat. Soc. Ser. B Methodol. 1991, 53, 539–561.
25. Li, Y.; Wang, N.; Carroll, R.J. Selecting the number of principal components in functional data. J. Am. Stat. Assoc. 2013, 108, 1284–1294.
26. Yao, F.; Müller, H.-G.; Wang, J.-L. Functional data analysis for sparse longitudinal data. J. Am. Stat. Assoc. 2005, 100, 577–590.
27. Shao, Q.; Wu, Y. A consistent procedure for determining the number of clusters in regression clustering. J. Stat. Plan. Inference 2005, 135, 461–476.
28. Rao, C.R.; Wu, Y.; Shao, Q. An M-estimation-based procedure for determining the number of regression models in regression clustering. J. Appl. Math. Decis. Sci. 2007, 2007, 37475.
29. Qian, G.; Wu, Y.; Ferrari, D.; Qiao, P.; Hollande, F. Semisupervised clustering by iterative partition and regression with neuroscience applications. Comput. Intell. Neurosci. 2016, 2016, 4037380.
30. Pui, D.Y.; Chen, S.-C.; Zuo, Z. PM2.5 in China: Measurements, sources, visibility and health effects, and mitigation. Particuology 2014, 13, 1–26.
Figure 1. LS-C values over the candidate K for the simulation data.
Figure 2. Temperature data for 35 sites (one color stands for one site).
Figure 3. RM-C values over the candidate K (the left panel) and coefficients for both the pooled data and the clustered data (the right panel).
Figure 4. The PM2.5 concentrations at all sites (one color stands for one station).
Figure 5. Regression parameter estimation without partitioning.
Figure 6. Parameter estimation of the different partitions given by the FCR-HL models.
Table 1. Confusion Matrix of the simulation data based on the FCR-HL model.
 | Real cluster1 | Real cluster2 | Real cluster3
Predicted cluster1 | 440 | 9 | 8
Predicted cluster2 | 19 | 477 | 2
Predicted cluster3 | 41 | 14 | 490
Table 2. Confusion Matrix of the simulation data based on the K-means model.
 | Real cluster1 | Real cluster2 | Real cluster3
Predicted cluster1 | 500 | 246 | 241
Predicted cluster2 | 0 | 129 | 127
Predicted cluster3 | 0 | 125 | 132
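Tables 1 and 2 cross-tabulate predicted against real cluster labels for 500 simulated samples per true cluster. A minimal sketch of how such a confusion matrix can be assembled — the labels below are synthetic, and note that in practice predicted cluster labels are only identified up to permutation, so the rows may first need aligning to the truth (e.g., via the Hungarian algorithm):

```python
import numpy as np
import pandas as pd

# Hypothetical labels: 1500 simulated series, 500 per true cluster.
rng = np.random.default_rng(2)
true = np.repeat([1, 2, 3], 500)
pred = true.copy()
noise = rng.random(true.size) < 0.05                 # 5% hypothetical mislabels
pred[noise] = rng.integers(1, 4, size=noise.sum())

# Rows: predicted cluster; columns: real cluster, as in Tables 1 and 2.
print(pd.crosstab(pred, true, rownames=["predicted"], colnames=["real"]))
```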
Table 3. Regression Results for the Pooled Data.
 | Estimate | Std. Error | t value | Pr(>|t|)
(Intercept) | −0.03764 | 0.28243 | −0.133 | 0.894
Score1 | 0.22366 | 0.02075 | 10.781 | <2 × 10^−16 | ***
Score2 | −1.27693 | 0.05027 | −25.401 | <2 × 10^−16 | ***
Significance: ‘***’ denotes p < 0.001.
Table 4. Regression Results based on the FCR-HL Model.
 | Estimate | Std. Error | t value | Pr(>|t|)
cluster1 (Intercept) | 0.0538 | 0.0208 | 2.584 | 0.0101 | *
Score1 | 0.3107 | 0.0049 | 63.228 | <2 × 10^−16 | ***
Score2 | 0.3525 | 0.0087 | 40.321 | <2 × 10^−16 | ***
cluster2 (Intercept) | −0.0704 | 0.0222 | −3.166 | 0.00164 | **
Score1 | 3.2098 | 0.0037 | 858.3 | <2 × 10^−16 | ***
Score2 | −2.1598 | 0.0069 | −312.43 | <2 × 10^−16 | ***
cluster3 (Intercept) | −0.0782 | 0.0201 | −3.899 | 0.000109 | ***
Score1 | 0.0169 | 0.0009 | 18.151 | <2 × 10^−16 | ***
Score2 | −1.2642 | 0.0024 | −536.697 | <2 × 10^−16 | ***
Significance: ‘***’, ‘**’, and ‘*’ denote p < 0.001, p < 0.01, and p < 0.05, respectively.
Table 5. The regression results for both the pooled data and the clustered data given by the FCR-HL model.
 | Estimate | Std. Error | t value | Pr(>|t|)
pool data (Intercept) | 2.8148 | 0.0252 | 111.6310 | 0.0000 | ***
score1 | 0.0016 | 0.0002 | 7.9700 | 0.0000 | ***
score2 | 0.0016 | 0.0007 | 2.4350 | 0.0211 | *
score3 | −0.0062 | 0.0014 | −4.4100 | 0.0001 | ***
score4 | −0.0054 | 0.0027 | −1.9860 | 0.0562 | .
group1 (Intercept) | 2.8476 | 0.0729 | 39.0860 | <2 × 10^−16 | ***
score1 | 0.0017 | 0.0009 | 1.9420 | 0.0711 |
score2 | 0.0021 | 0.0010 | 2.0300 | 0.0605 | *
score3 | −0.0061 | 0.0022 | −2.7980 | 0.0135 | *
score4 | −0.0129 | 0.0085 | −1.5110 | 0.1516 |
group2 (Intercept) | 2.5803 | 0.0421 | 61.2410 | 0.0000 | ***
score1 | 0.0004 | 0.0003 | 1.4130 | 0.1879 |
score2 | −0.0012 | 0.0007 | −1.8560 | 0.0931 |
score3 | −0.0029 | 0.0015 | −1.8790 | 0.0896 |
score4 | −0.0045 | 0.0020 | −2.2300 | 0.0498 | *
Significance: ‘***’, ‘*’, ‘.’, and ‘ ’ denote p < 0.001, p < 0.05, p < 0.1, and p ≥ 0.1, respectively.
Table 6. The performance of the R² of the K-means method and the FCR-HL model.
Method | Cluster | R²
K-means | cluster1 | 0.1761
K-means | cluster2 | 0.5314
K-means | cluster3 | 0.1686
K-means | cluster4 | 0.7081
K-means | cluster5 | 0.4111
K-means | cluster6 | 0.7845
FCR-HL model | cluster1 | 0.9939
FCR-HL model | cluster2 | 0.9947
FCR-HL model | cluster3 | 0.9939
FCR-HL model | cluster4 | 0.9734
FCR-HL model | cluster5 | 0.9723
FCR-HL model | cluster6 | 0.9941
FCR-HL model | cluster7 | 0.9873
FCR-HL model | cluster8 | 0.9916
FCR-HL model | cluster9 | 0.9893
FCR-HL model | cluster10 | 0.9955
FCR-HL model | cluster11 | 0.9983
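Table 6 compares within-cluster goodness of fit via the coefficient of determination, R² = 1 − SS_res/SS_tot, computed from each cluster's own regression. A minimal sketch of that calculation — the inputs here are synthetic; in the paper, y and y_hat would be one cluster's observed and fitted responses:

```python
import numpy as np

def r_squared(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Coefficient of determination for one cluster's fitted regression."""
    ss_res = np.sum((y - y_hat) ** 2)                # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)             # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical usage with a near-perfect fit, cf. the FCR-HL rows of Table 6.
rng = np.random.default_rng(3)
y = rng.normal(size=200)
y_hat = y + rng.normal(scale=0.1, size=200)
print(round(r_squared(y, y_hat), 4))                 # close to 1
```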
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
