Article

Long-Term Evaluation and Calibration of Low-Cost Particulate Matter (PM) Sensor

1 Department of Mechanical Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea
2 Air Quality Analysis and Control Center, Seoul Metropolitan Research Institute of Public Health and Environment, 30, Janggunmaeul 3-gil, Gwacheon-si, Gyeonggi-do, Seoul 08826, Korea
* Author to whom correspondence should be addressed.
Sensors 2020, 20(13), 3617; https://doi.org/10.3390/s20133617
Submission received: 28 April 2020 / Revised: 15 June 2020 / Accepted: 24 June 2020 / Published: 27 June 2020
(This article belongs to the Special Issue Air Quality and Sensor Networks)

Abstract: Low-cost light scattering particulate matter (PM) sensors have been widely researched and deployed to overcome the limited spatio-temporal resolution of government-operated beta attenuation monitors (BAMs). However, the accuracy of low-cost sensors has been questioned, impeding their wide adoption in practice. To evaluate the accuracy of low-cost PM sensors in the field, a multi-sensor platform was developed and co-located with a BAM in Dongjak-gu, Seoul, Korea from 15 January 2019 to 4 September 2019. In this paper, the sample-to-sample variation of low-cost sensors is analyzed using three commercial low-cost PM sensors. The influence of environmental conditions, such as humidity, temperature, and ambient light, on the PM sensors is also described. Based on this information, we developed a novel combined calibration algorithm, which selectively applies multiple calibration models and statistically reduces residuals, using a prebuilt parameter lookup table in which each cell records the statistical parameters of each calibration model for the current input parameters. As our proposed framework significantly improves the accuracy of the low-cost PM sensors (e.g., RMSE: 23.94 → 4.70 μg/m³) and increases the correlation (e.g., R²: 0.41 → 0.89), this calibration model can be transferred to all sensor nodes through the sensor network.

1. Introduction

Particulate matter (PM) is classified into size bins by maximum aerodynamic diameter (e.g., PM10 < 10 μm, PM2.5 < 2.5 μm, and PM1.0 < 1 μm). Exposure to PM is regarded as a major health risk, causing various diseases ranging from respiratory and cardiovascular diseases to neurodevelopmental and mental disorders [1]. According to recent reviews, it contributes globally to a mortality of up to 4.2 million deaths per year [2,3]. Because of this effect on public health, the collection and analysis of PM concentration data is now a major interest of government and non-government organizations. Meanwhile, PM concentrations fluctuate spatially and temporally due to their aerodynamic nature, so achieving higher spatiotemporal resolution of PM concentration data is becoming increasingly important. However, maintaining such high resolution with government-grade air monitoring stations is nearly impossible because of cost. Additionally, their sampling interval is rather long, at the cost of data quality. For these reasons, low-cost light scattering PM sensors have been widely used as a practical alternative to air monitoring stations in dense sensor deployments [4]. Even though these sensors still face a major challenge in data quality, they have the overwhelming advantages of lower price, more compact size, and faster update rate [5,6]. As a result, many countries have densely deployed low-cost sensors in smart cities [7,8,9]. As of April 2020, there are 40 government-operated beta attenuation monitor (BAM) stations in Seoul releasing information to the public every hour [10]. Additionally, approximately 3500 light-scattering PM instruments have been deployed in major Korean cities by leading telecommunication companies [11,12], continuously increasing spatiotemporal resolution, as shown in Figure 1.
As the importance of low-cost sensors has been increasing, more research is being conducted to evaluate and calibrate low-cost light scattering sensors.
Low-cost sensors have been evaluated under various climate and weather conditions around the world, over periods ranging from a day to longer than a year [13,14,15]. These studies have several aims, such as environmental effect analysis [16], validation of newly developed sensors [17], and calibration performance evaluation. We built four rough prototypes to briefly check sample-to-sample variability (PMSA003, PMS7003 (Plantower Inc., Beijing, China [18]), SEN0177 (DFRobot Inc., Shanghai, China [19]), and HPMA115s0 (Honeywell Sensing Inc., Charlotte, NC, USA [20])). Subsequently, we chose the PMS7003 and developed a multi-sensor platform for further long-term evaluation. We describe performance limitations of the raw signal of low-cost sensors, identified by co-locating them with a governmental BAM for about 7.5 months, in Section 3.2. In addition, we compared the performance between raw and calibrated signals under various environmental explanatory variables, sampling intervals, and calibration methods.
Based on previous research on low-cost PM sensors [13,14,15,16,17], the low-cost sensor has limited accuracy and requires a calibration procedure to boost accuracy. The most common calibration methods for PM2.5 are linear calibrations, accounting for two-thirds of total calibration cases according to a technical report from the Joint Research Centre of the European Commission [21] (univariate linear regression (ULR): 46%; multivariate linear regression (MLR): 22%). Linear regression (LR) is widely used for PM calibration since it is a simple and powerful method. However, LR sometimes suffers from under-fitting when the true function underlying the data cannot be adequately approximated by a linear function. For example, MLR suffers severe performance degradation under high-humidity environments [22]. Non-linear calibration, on the other hand, is largely free from this problem, but an appropriate order of function approximation must be selected to avoid over-fitting.
Beyond single calibration models, sequentially combined calibration models have been studied. Lin et al. (2018) introduced a two-phase calibration model using the Akaike information criterion (AIC) and random forests (RF). In the first phase, several linear models are created by selecting subsets from the entire input variable space based on the AIC index; RF is then used to learn the residual of the linear models [23]. However, RF aggregates randomized decision trees and averages their results in regression problems; it is usually good at avoiding over-fitting, but it may present lower accuracy due to the averaging over several decision trees. Cordero et al. (2018) obtained a calibrated PM value through a linear model to generate the difference from the raw PM value; a non-linear calibration among RF, support vector machines (SVM), and artificial neural networks (ANN) is then performed using this difference and the input variables [24]. However, their dataset was small, and the training and test datasets were shared through the k-fold cross-validation method.
This paper introduces a novel combined calibration method that selects the most accurate model among multiple candidate models for each sample. This combined calibration differs from the cited methods in dividing the entire input variable space into segmented cells and applying the best model among multiple models in each cell. In addition, we propose procedures that probabilistically reduce the residuals by managing the sum of residuals generated by the selected model in each cell. This combined calibration is named segmented model and residual treatment calibration (SMART calibration). The performance of the SMART calibration method was analyzed against raw data and compared not only with other state-of-the-art calibration methods, but also with another group's calibrated results based on a 16-month dataset [25]. The comparison results show that our proposed method offers better accuracy than its counterparts.
Our contribution can be summarized, as follows:
  • A field evaluation of low-cost PM2.5 sensors in Seoul, Korea was executed and analyzed under several conditions, such as environmental explanatory variables (humidity/temperature/ambient light), sampling intervals (5 min/1 h/24 h), and calibration methods (linear/non-linear/SMART calibration).
  • A novel combined calibration method has been introduced to increase low-cost sensor accuracy. The performance was compared to other calibration methods. This calibration method can also be applied to an upcoming future dataset with the previously generated models.
The next sections are structured as follows. Section 2 describes the overall method of this research, including data collection, data preprocessing, and data calibration. Section 3 presents the results and discussion, covering the experimental results and their analysis. Section 4 summarizes this paper and explains potential use cases.

2. Methods

This section describes the overall procedures for the evaluation and calibration of low-cost sensors. It includes data collection (Section 2.1), data preprocessing (Section 2.2), data calibration methods (Section 2.3), and metric information (Section 2.4). Figure 2 shows the overall procedures for sensor evaluation and calibration. A multi-sensor platform was developed and co-located with the governmental BAM at the government station (Dongjak-gu, Seoul, Korea) to evaluate low-cost light scattering PM2.5 sensors. The data were collected for around 7.5 months (15 January 2019–4 September 2019). The following subsections explain the procedures we executed in more detail.

2.1. Data Collection

In this section, the sensor configuration and deployment information on the low-cost sensor and reference system is described.

2.1.1. Multi-Sensor Platform—Low-Cost Light Scattering PM Sensor

We developed prototypes and roughly evaluated signal repeatability and sample-to-sample variability to select a proper low-cost sensor among four kinds of commercial low-cost sensors. Based on this analysis, the PMS7003 (Plantower Inc., Beijing, China [18]) was chosen, and the configuration and design of the multi-sensor system proceeded for long-term evaluation and calibration. Detailed information on the prototype evaluation is given in Appendix C. The selected PM sensor and other environmental sensors were built together as a multi-sensor platform, as shown in Figure 3a.
Three low-cost PM sensors are mounted on a single multi-sensor platform to identify sample variation among the three low-cost sensor samples. The platform also includes environmental sensors for humidity, temperature, and ambient light to analyze and calibrate the environmental impact on PM measurements. Low-level data collection from each sensor module is performed through an Arduino Due, and high-level communication with the sensor network is implemented through a Raspberry Pi 3B+, as shown in Figure 3b. Data are measured and stored at 1-s sampling intervals and configured to be transmitted to users via wired LAN or Wi-Fi.

2.1.2. Governmental BAM—High-End PM Monitoring Station

In Korea, BAM is the only regulatory reference that has received formal approval from the Korean Ministry of Environment. As a reference for the experiment, the PM711 model (Kimoto Inc., Osaka, Japan [26]) was selected because it has a relatively fast sampling interval (5 min) compared to the 1 h sampling interval of other BAMs, as shown in Figure 4b. The 5-min sampled output may be less accurate than the 1-h averaged output, since the 5-min interval data are the data source of the 1-h averaged output. This equipment consists of two separate racks of monitoring systems for PM2.5 and PM10 measurements. It features high accuracy, since it includes sampling stabilizers, such as particle separators (PM2.5 impactor and PM10 impactor) and environmental controllers of temperature, humidity, and air flow, to stably supply PM.

2.2. Data Preprocessing

In this step, the different data sampling intervals of the two instruments were matched so that data from the multi-sensor platform could be directly compared with data from the governmental BAM. Data were excluded during preprocessing if any intermittent data were observed from the sensor modules. The data of the multi-sensor platform were averaged with a 5-min fixed window. The preconditioned data were used to build the linear/non-linear calibration models, such as MLR, MLP, and SMART calibration, and to perform the actual calibration with the prebuilt model in the next step. To build and evaluate the calibration models, the dataset was constructed in two ways for comparison, as shown in Figure 5: one samples in a sequential manner (hereinafter, sequential) and the other in a random manner (hereinafter, shuffled) under various separating ratios (unless otherwise stated, 80% of the total dataset was randomly selected to construct a training dataset and the remaining 20% was used as a test dataset). Data preprocessing was done via Matlab R2018b [27] and Python 3. Pandas [28], a widely used Python data manipulation library, was also utilized for data preprocessing.
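As a sketch of this preprocessing step, the 5-min fixed-window averaging and the sequential/shuffled 80/20 split could be implemented with pandas roughly as follows; the column name, seed, and function name are illustrative, not taken from the paper's code:

```python
import numpy as np
import pandas as pd

def preprocess(df, window="5min", train_frac=0.8, shuffled=False, seed=0):
    """Average raw sensor samples into fixed windows, then split into
    training/test sets either sequentially or at random (shuffled)."""
    df = df.dropna()                           # drop intermittent/missing readings
    avg = df.resample(window).mean().dropna()  # non-overlapping 5-min windows
    n_train = int(len(avg) * train_frac)
    if shuffled:
        idx = np.random.default_rng(seed).permutation(len(avg))
        return avg.iloc[idx[:n_train]], avg.iloc[idx[n_train:]]
    return avg.iloc[:n_train], avg.iloc[n_train:]
```

With one hour of 1-s samples, this yields twelve 5-min windows split 9/3 between training and test.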

2.3. Data Calibration

In this paper, calibration does not mean correcting observed data in the training dataset; it means estimating unseen data outside the training dataset. PM2.5 (low-cost sensor), humidity, temperature, and ambient light were selected as explanatory variables, and PM2.5 (BAM) was selected as the response variable. The influence of each explanatory variable is separately analyzed in Section 3.3. The calibration methods were analyzed in three ways: linear, non-linear, and SMART calibration. Data calibration was performed via Python 3 libraries (pandas [28], keras [29], sklearn [30] and tensorflow [31]).

2.3.1. Linear Calibration

Based on multivariate linear regression (MLR), we selected PM (low-cost sensor), humidity, and temperature from the multi-sensor platform as explanatory variables and chose PM (governmental BAM) as the response variable. The least-squares method was applied, and the resulting coefficients are shown in Table 1 (the p-values for each coefficient were all less than 0.00001 and are omitted hereinafter).
$\hat{y} = w_0 + \sum_{i=1}^{N} w_i x_i$, where $\hat{y}$ is the calibrated PM, $w_0$ is the intercept, $w_i$ are the coefficients, and $x_i$ are the measured input variables. (Equation (1))
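An MLR calibration of this form can be reproduced with scikit-learn's ordinary least squares. The data below are synthetic stand-ins for the platform's measurements, and the coefficients are illustrative, not the fitted values of Table 1:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: columns are PM2.5 (low-cost), humidity (%),
# and temperature (deg C); true coefficients here are illustrative only.
rng = np.random.default_rng(0)
X = rng.uniform([0, 20, -10], [100, 100, 35], size=(500, 3))
y = 5.0 + 0.6 * X[:, 0] - 0.05 * X[:, 1] + 0.1 * X[:, 2] \
    + rng.normal(0, 1, 500)

mlr = LinearRegression().fit(X, y)   # ordinary least squares, as in Eq. (1)
pm_calibrated = mlr.predict(X)       # y_hat = w0 + sum_i(w_i * x_i)
```

With enough samples, the estimated intercept and coefficients recover the generating values to within the noise level.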

2.3.2. Nonlinear Calibration

Non-linear calibration was performed based on a multilayer perceptron (MLP) neural network, which consists of an input layer, an output layer, and hidden layers. The calibration is performed by forming an appropriate weighted sum between the neurons in each layer, as shown in Figure 6. The weighted sum passes through a non-linear activation function, the rectified linear unit (ReLU), to generate a non-linear model; the ReLU formulation is shown in Equation (2). PM2.5 (low-cost sensor), humidity, and temperature from the multi-sensor platform were preprocessed and used as input variables in the input layer. PM2.5 (BAM) from the governmental station was used as the output variable in the output layer. Hyperparameters were chosen manually over several trials, as shown in Table 2.
$\hat{y} = W_3 \max(0, W_2 \max(0, W_1 x))$, where $\hat{y}$ is the calibrated PM, $W_i$ are the weight matrices, and $x$ is the measured input variable vector. (Equation (2))
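A minimal MLP of this shape can be sketched with scikit-learn's MLPRegressor rather than the paper's Keras setup; the layer sizes, solver, and toy target below are illustrative assumptions, not the Table 2 hyperparameters:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy inputs standing in for scaled (PM2.5, humidity, temperature) readings;
# the target includes a mild humidity-dependent non-linearity.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 3))
y = X[:, 0] + 0.5 * X[:, 0] * X[:, 1]

# Two ReLU hidden layers, mirroring the nesting in Eq. (2).
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), activation="relu",
                   solver="lbfgs", max_iter=5000, random_state=0)
mlp.fit(X, y)
```

The ReLU layers let the network capture the multiplicative humidity term that a purely linear model cannot.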

2.3.3. SMART Calibration (Combined Calibration)

In this section, we introduce the SMART calibration algorithm, which selectively applies the most probabilistically appropriate model among multiple linear/non-linear calibration models. LR is the most representative methodology for finding a best-fit line for approximation and estimation. However, LR is usually too simple to correctly fit the true function of complex data, and the best-fit line is highly affected by non-linearity, outliers, and data range. Meanwhile, non-linear calibration can generate a model with a lower prediction error on the training dataset as the model complexity increases; however, the prediction error on the test dataset becomes large if the model is overfitted. This is a well-known disadvantage of non-linear calibration (the limitations of linear and non-linear calibration are further described in Appendix A).
Due to the above nature of linear/non-linear calibration models, each model has its own “weak spot” in the domain. For instance, LR has a weak spot in the non-linear region of the domain, and MLP has a weak spot in the overfitted region. The SMART calibration method was developed to overcome this limitation. Figure 7 shows the overall procedures of model build and model selection. First, two training models and residual maps are generated with the training dataset in the model-build step. Second, a prevailing model map is constructed by comparing the residual maps. The prevailing model map can then be utilized in the model selection step.
In more detail, a residual map that divides the full range of the explanatory variable space (e.g., temperature and humidity) into small segmented cells is generated, as shown in Figure 8. Every residual of the training data is allocated to the corresponding partitioned cell of the residual maps. The distribution of residuals in each cell of a residual map is assumed to be Gaussian, since the residual is the error of the estimator. Each cell has its probability density function (PDF), expressed by its average and standard deviation; this information is stored in the residual maps. For each cell, a prevailing calibration model is defined by comparing the residual maps of the linear and non-linear models, and the prevailing calibration model of every cell is stored in a prevailing model map. Once the prevailing model map is completed over the whole training dataset, the corresponding input cell of the test dataset calibrates its data with the predefined suitable model and averaged residual, as shown in Figure 9 (the procedures of SMART calibration are further described in Appendix D). Figure 7, Figure 8 and Figure 9 are illustrative examples; the number and type of calibration models are not limited to MLR and MLP. SMART calibration features simple procedures and compatibility with several models, since it is a hierarchical calibration model. Because it depends on the consistency of the estimators, the accuracy of SMART calibration increases as the number of data points in each cell increases. Additionally, it performs well with high-bias models, but it cannot outperform when only high-variance models are available, since SMART calibration selects a model according to the variance of the data in each segmented cell.
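The procedure above can be sketched in a few dozen lines. This is our reading of the method, not the authors' implementation: it uses a one-dimensional humidity binning instead of the paper's two-dimensional map, and assumes the "prevailing" model is the one with the smallest per-cell residual spread; all names are hypothetical:

```python
import numpy as np

class SMARTCalibration:
    """Sketch of segmented-model-and-residual-treatment calibration.

    Takes already-fitted models exposing .predict(X). The 1-D humidity
    binning and the 'smallest residual spread wins' rule are simplifying
    assumptions; the paper partitions a 2-D (e.g., humidity-temperature)
    space and stores per-cell Gaussian residual statistics."""

    def __init__(self, models, bins=10, lo=0.0, hi=100.0):
        self.models = models
        self.edges = np.linspace(lo, hi, bins + 1)

    def _cell(self, h):  # map humidity values to cell indices
        return np.clip(np.digitize(h, self.edges) - 1, 0, len(self.edges) - 2)

    def fit(self, X, y, humidity):
        n_cells = len(self.edges) - 1
        self.best = np.zeros(n_cells, dtype=int)           # prevailing model map
        self.bias = np.zeros((len(self.models), n_cells))  # mean residual per cell
        cells = self._cell(humidity)
        for c in range(n_cells):
            mask = cells == c
            if not mask.any():
                continue
            spreads = []
            for m, model in enumerate(self.models):
                res = model.predict(X[mask]) - y[mask]     # residual-map cell
                self.bias[m, c] = res.mean()
                spreads.append(res.std())
            self.best[c] = int(np.argmin(spreads))         # prevailing model
        return self

    def predict(self, X, humidity):
        cells = self._cell(humidity)
        preds = np.stack([m.predict(X) for m in self.models])
        m = self.best[cells]
        # Apply each sample's prevailing model, then subtract that model's
        # averaged residual stored for the sample's cell.
        return preds[m, np.arange(len(X))] - self.bias[m, cells]
```

The per-cell mean subtraction is the "residual treatment" part: even a constantly biased model becomes accurate inside a cell once its averaged residual is removed.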

2.4. Metric Information

Four key metrics were used to analyze the performance, as shown in Table 3: mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), and R² (coefficient of determination). RMSE is excluded hereinafter because it can be calculated from MSE. In some analysis cases, slope, intercept, mean and standard deviation, quartiles, and Pearson's correlation coefficient are also used.
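These four metrics, in their standard forms, can be computed in a few lines (the function name is illustrative):

```python
import numpy as np

def metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R^2 in their standard forms (cf. Table 3)."""
    yt, yp = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = yp - yt
    mae = np.abs(err).mean()
    mse = (err ** 2).mean()
    rmse = np.sqrt(mse)  # recoverable from MSE, hence omitted in later tables
    r2 = 1.0 - (err ** 2).sum() / ((yt - yt.mean()) ** 2).sum()
    return mae, mse, rmse, r2
```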

3. Results and Discussions

This section describes a preliminary analysis (Section 3.1) obtained by varying explanatory variables and sampling interval conditions. Subsequently, we compare the performance of SMART calibration under several conditions: before calibration (Section 3.2), after calibration (Section 3.3), against other calibration methods (Section 3.4), and against a previous similar study (Section 3.5).

3.1. Preliminary Analysis

3.1.1. Performance Characteristics: Explanatory Variables

The low-cost sensor features cost-effectiveness, light weight, and rapid, continuous measurements, but its accuracy is limited. This is partly because the low-cost sensor generally excludes any sampling stabilizer for PM size, humidity, temperature, or flow control; as a result, it is directly affected by the surrounding environment. In particular, the influence of humidity and temperature has been continuously researched by several research groups, and calibration models based on meteorological parameters have been introduced, as in Equations (3) and (4) [22,32].
$\hat{y} = \left(\beta_1 + \beta_2 \dfrac{\rho^2}{1-\rho}\right) y + \beta_0$ (Equation (3))
$\hat{y} = \alpha_1 y + \alpha_2 t + \alpha_0$ (Equation (4))
where $\hat{y}$ is the calibrated PM, $\beta_i$ and $\alpha_i$ are coefficients, $y$ is the measured PM, $\rho$ is the measured relative humidity, and $t$ is the measured temperature.
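Equations (3) and (4) translate directly into functions. Note the assumption here that relative humidity is given as a fraction in [0, 1); any real coefficient values would come from fitting against the reference instrument:

```python
def humidity_corrected(pm, rh, beta0, beta1, beta2):
    """Eq. (3): hygroscopic-growth-style humidity correction.
    rh is relative humidity as a fraction in [0, 1)."""
    return (beta1 + beta2 * rh ** 2 / (1.0 - rh)) * pm + beta0

def temperature_corrected(pm, t, alpha0, alpha1, alpha2):
    """Eq. (4): linear correction with temperature t."""
    return alpha1 * pm + alpha2 * t + alpha0
```

The rh²/(1−rh) term grows without bound as humidity approaches saturation, which is why high-humidity periods dominate the non-linearity of the calibration function.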
In this section, a short-term analysis of the effects of humidity, temperature, and ambient light on PM concentration was performed, and a long-term analysis of the effects of humidity and temperature was executed while applying linear and non-linear calibration. As a result, we found that humidity and temperature are important variables for PM concentration calibration.

Performance Characteristics: Explanatory Variables, Short-Term Analysis (45 Days)

The experimental data from 18 July 2019 to 4 September 2019 were analyzed, since ambient light data were only recorded during this limited period. This period was summer in Korea, and the Korean summer climate is characterized by high temperatures and high humidity. As indicated by Equation (3), high humidity introduces strong non-linearity into the calibration function. In our results, the non-linear calibration had a relatively smaller error than the linear calibration, as shown in Table 4.
The comparison of the uncalibrated raw PM signal and the calibrated PM signal showed a significant improvement (e.g., MAE of MLP: 9.78 → 3.55 μg/m³), and calibration including the raw PM signal together with the humidity signal showed a further remarkable improvement (e.g., MAE of MLP: 3.55 → 2.99 μg/m³). For calibrations including temperature and ambient light, the improvement was insignificant. A long-term analysis of the influence of PM, humidity, and temperature is performed in the next section.

Performance Characteristics: Explanatory Variables, Long-Term Analysis (7.5 Months)

The experimental data from 15 January 2019 to 4 September 2019 were analyzed in Table 5. Similar to the short-term analysis, the calibrated PM signal versus the uncalibrated raw signal (e.g., MAE: 15.87 → 4.21 μg/m³), and calibration including the raw PM signal together with the humidity signal (e.g., MAE: 4.21 → 4.04 μg/m³), showed significant improvements. The improvement from the humidity signal was large in the short-term analysis, where high-humidity conditions accounted for the majority of the data, whereas it was only slight in the long-term analysis. However, the performance was highly improved by adding temperature, especially for the non-linear calibration cases (e.g., MAE: 4.04 → 3.52 μg/m³).

3.1.2. Performance Characteristics: Sampling Interval

In this section, the 5-min sampling interval was converted into 1 h and 24 h sampling intervals for comparison with other previous studies. Most PM researchers have analyzed sensor performance at 1 h or 24 h sampling intervals because the high-end BAM used as a reference-grade instrument operates at hourly sampling intervals. In particular, the Met One BAM-1020 (Met One Instruments Inc., Grants Pass, OR, USA [33]), a US EPA [34] certified instrument, was used in many previous studies [14,25].
Non-overlapping sliding windows were applied for the 1 h and 24 h sampling intervals. MAE decreased with longer sampling intervals, since more aggregated data reduced data variation, as shown in Table 6. In the case of the 24 h sampling interval, R², which indicates the proportion of variance explained for the response variable, was lowered. This lowered R² is caused by the reduced data range after aggregation, which can be derived from the R² equation in Table 3 or explained by Figure 10.
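The non-overlapping window aggregation amounts to a pandas resample; the function and column names are illustrative:

```python
import pandas as pd

def to_interval(df, interval):
    """Aggregate 5-min matched data into non-overlapping windows
    (e.g., '1h' or '24h')."""
    return df.resample(interval).mean().dropna()
```

For one day of 5-min data (288 rows), '1h' yields 24 aggregated rows and '24h' yields a single row, illustrating how aggregation compresses the data range.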

3.2. Comparative Analysis: The Low-Cost Sensor and Governmental BAM (Before Calibration)

The performance of the low-cost sensor was analyzed by comparing the raw signals from the sensor platform and the reference signal from the governmental BAM (hereinafter, the three low-cost sensors' raw signals are described as Raw (a/b/c), and the BAM signal is denoted as BAM in tables and figures). Figure 11 shows the correlation between the three low-cost sensors and the BAM. Additionally, their correlation coefficients, evaluation metrics, and statistical summaries are listed in Appendix B (Table A1 and Table A2).
The R² of the low-cost sensors with the BAM was 0.416, 0.546, and 0.417, whereas the R² among the low-cost sensors expressed very strong positive correlations of 0.937, 0.994, and 0.933. Given these strong correlations, the effectiveness of performance improvement via calibration against the BAM output can be expected. Additionally, the high R² among the co-located low-cost sensors indicates that a common calibration model can be shared under logged conditions. The data distribution expresses the overall difference between the low-cost sensors and the BAM, as shown in Figure 11. The reproducibility among the low-cost sensors appeared high, with a very tight output span, but the reproducibility between the low-cost sensors and the BAM output appeared low, with a wide output span.

3.3. Comparative Analysis: The Low-Cost Sensor and Governmental BAM (After Calibration)

MLR, MLP, and SMART calibration were executed to evaluate the performance, following the methods in Section 2, with a single PM sensor instead of all three. All of the results described in this subsection were calculated only on the test dataset, since the training dataset was used for calibration model generation. Figure 12 shows the correlation between the low-cost sensors and the BAM. Additionally, their correlation coefficients, evaluation metrics, and statistical summaries are listed in Appendix B (Table A3 and Table A4).
Means and standard deviations of 38.12 ± 31.18 μg/m³ (raw signal), 23.13 ± 13.74 μg/m³ (MLR), 22.7 ± 13.12 μg/m³ (MLP), and 23.09 ± 13.85 μg/m³ (SMART calibration) were obtained and compared with 23.10 ± 14.84 μg/m³ (BAM). The normalized mean bias error declined from 65% to 1.7% and the standard deviation decreased from 110% to 11.6% by applying the MLP calibration model. R² was observed as 0.41 (raw signal), 0.84 (MLR), 0.86 (MLP), and 0.89 (SMART calibration), respectively. These results show that calibration significantly improves the performance of the low-cost sensors.
As shown in Figure 13 and Table 7, several calibration results were analyzed by applying different data preprocessing conditions. Our dataset was analyzed by the shuffled method as well as the sequential method, since Korea has four distinct seasons and the 7.5-month dataset covered a limited range of climates and seasons. The shuffled dataset features a higher R² than the sequential dataset; on the other hand, the sequential dataset features lower MAE and MSE than the shuffled dataset. Appendix E describes more information on several shuffled methods with successive hourly or daily data chunk sizes.
This calibration can also be applied to an upcoming future dataset with previously generated calibration models under the sequential method. As an example, the sequential datasets of the raw signal, the SMART calibration signal, and the governmental BAM signal are plotted in Figure 14. In detail, a training dataset was constructed under the sequential condition from 15 January 2019 to 8 August 2019 and its calibration model was created. After that, the test dataset was built from 8 August 2019 to 4 September 2019, and the model previously derived from the training dataset was applied. As a result, the calibrated test output closely tracks the BAM output (e.g., MAE = 2.79, MSE = 14.02, and R² = 0.76).

3.4. Comparative Analysis: Other Calibration Methods

The SMART calibration method was compared with other regression methods, such as lasso regularization, ridge regularization, and polynomial linear regression (PLR). Additionally, we applied state-of-the-art ensemble learning methods such as random forests (RF), extreme gradient boosting (XGB), and light gradient boosting (LGB). The hyperparameters of these methods were exhaustively searched over specified grids; a cross-validated grid search algorithm was applied to optimize the hyperparameters, and more information on the hyperparameter grids is given in Appendix F. SMART calibration parameters were also customized with an increased cell size of the residual map and another calibration model. Several dataset ratios under the sequential method were applied as the data preconditioning method. Our calibration method showed the smallest MAE and MSE among the twelve calibration methods, as shown in Figure 15 and Table 8.

3.5. Comparative Analysis: Previous Similar Study

The SMART calibration result was compared with the latest results from a similar study [25], because we could not obtain a long-term dataset from other research under similar conditions. That study conducted a field test for 16 months in North Carolina, USA, comparing a commercial product (PA-II (Purple Air Inc., Draper, UT, USA [36])) with a BAM-1020 (Met One Instruments Inc., Grants Pass, OR, USA [33]). The study included a long-term performance evaluation and calibration on a 1 h sampling interval basis. A 90% training dataset and 10% test dataset split by the shuffled (random) method was used in data preprocessing, and MLR with the raw PM signal, humidity, and temperature was applied as the calibration method.
Before calibration, the results of the other group's study were superior, thanks to a factory calibration during product manufacturing, as shown in Table 9. After calibration, our group's shuffled dataset showed a higher R² than the other group's study, and our group's sequential dataset with SMART calibration was superior in all performance aspects.

4. Conclusions

The low-cost PM sensor was evaluated and calibrated with a co-located governmental BAM at the urban air monitoring station (Dongjak-gu, Seoul, Korea). The performance of the low-cost PM sensor was analyzed using the metrics of MAE, MSE, RMSE, R², slope, intercept, mean, standard deviation, and quartiles. The means and standard deviations of the raw signal of the low-cost sensor and the BAM output were 38.15 ± 31.29 and 23.10 ± 14.84 μg/m³, with around 65% normalized mean bias error. Additionally, a comparison of calibration methods, such as MLR, MLP, and SMART calibration, was performed. The means and standard deviations of the SMART calibration of the low-cost sensor and the BAM output were 23.09 ± 13.85 and 23.01 ± 14.74 μg/m³, with around 0.35% normalized mean bias error. When the raw and calibrated signals of the low-cost sensor were compared to the BAM output by the correlation index R², increased correlations between the low-cost sensor and the BAM output were observed: 0.41 (raw signal), 0.82 (LR), 0.84 (MLR), 0.83 (MLP), and 0.89 (SMART calibration). Furthermore, this calibration model was verified for the possibility of being applied to future datasets. These results show that calibration is essential when low-cost sensors are used for high-accuracy sensing.
Sample-to-sample variability was evaluated among three co-located low-cost sensors, which were very strongly correlated, with correlation coefficients ranging from 0.985 to 0.997. Based on this finding, a calibration model can be continuously updated and improved by co-locating a single multi-sensor platform with the BAM, and the model can then be transferred to all nodes in a sensor network to calibrate every node. This approach is the basic concept of online calibration for low-cost sensors. In future work, a mobile node converted from the co-located multi-sensor platform will travel among all nodes in the sensor network, performing an offline calibration of each node's slope and intercept. This successive calibration is named Hybrid Calibration, as it combines network-wide online calibration with individual offline calibration.

Author Contributions

H.L.: platform improvement, conceptualization, validation, data conditioning/analysis/visualization, writing manuscript text, interpretation of results. J.K.: prototype build/validation/analysis, platform design, software design, interpretation of results. S.K.: platform build/deployment, investigation. Y.I.: governmental BAM data sharing, investigation. S.Y.: writing–review and editing. D.L.: funding acquisition, supervision, project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by City of Seoul through Seoul Urban Data Science Laboratory Project (Grant No. 0660-20170004) administered by Seoul National University Big Data Institute.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Limitations of Linear/Nonlinear Approximation: Anscombe’s Quartet and the Bias–Variance Trade-Off

Figure A1 shows Anscombe’s quartet, which intuitively illustrates the limits of judging an LR fit by its summary statistics [37]. The four datasets have the same best-fit line slope, intercept, and R 2 even though the data are very different. To resolve this ambiguity, we compared calibration effectiveness using additional metrics in Section 2.4.
Figure A1. Anscombe’s quartet—dataset I (simple linear)/dataset II (nonlinear)/dataset III (linear with outlier)/dataset IV (a high-leverage point). Four datasets have the same mean, variance, Pearson correlation coefficient, R2, slope, and intercept of the best-fit line (Figure quoted from [38]).
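The quartet's identical summary statistics are easy to verify numerically. The sketch below fits datasets I (linear) and II (nonlinear) of Anscombe's quartet by ordinary least squares, using the standard published values:

```python
# Verify that two of Anscombe's datasets share near-identical best-fit
# statistics despite very different shapes (see Figure A1).
from statistics import mean

x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]  # linear
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]   # nonlinear

def fit(xs, ys):
    """Ordinary least-squares slope, intercept, and R^2."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(xs, ys))
    ss_tot = sum((b - my) ** 2 for b in ys)
    return slope, intercept, 1 - ss_res / ss_tot

s1, i1, r1 = fit(x, y1)
s2, i2, r2 = fit(x, y2)
# Both fits give slope ~ 0.50, intercept ~ 3.00, R^2 ~ 0.67
```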
As shown in Figure A2, it is important to avoid both underfitting and overfitting and instead to choose an appropriate trade-off in model complexity that minimizes the total error.
Figure A2. Prediction error of training and test dataset according to model complexity. A medium level of model complexity minimizes the prediction error of the test dataset (Figure quoted from [39]).

Appendix B. Additional Figures and Tables

This appendix provides more detailed tables.
Table A1. Correlation coefficient and metrics of low-cost sensor and governmental BAM (before calibration).
|        | Raw(a) | Raw(b) | Raw(c) | BAM |
| Raw(a) | 1.000  | slope = 0.837, intercept = 1.969, R² = 0.937, MAE = 4.700 | slope = 0.998, intercept = 0.003, R² = 0.994, MAE = 1.583 | slope = 0.436, intercept = 6.457, R² = 0.416, MAE = 15.816 |
| Raw(b) | 0.987  | 1.000 | slope = 1.163, intercept = −1.335, R² = 0.933, MAE = 4.737 | slope = 0.512, intercept = 5.732, R² = 0.546, MAE = 11.952 |
| Raw(c) | 0.997  | 0.985 | 1.000 | slope = 0.435, intercept = 6.526, R² = 0.417, MAE = 15.712 |
| BAM    | 0.919  | 0.916 | 0.918 | 1.000 |

(Lower triangle: Pearson correlation coefficients; upper triangle: best-fit statistics between the row and column instruments.)
Table A2. Descriptive statistic summary of low-cost sensor and governmental BAM (before calibration).
|                | Raw(a)   | Raw(b)   | Raw(c)   | BAM      |
| No. of samples | 36911.00 | 36911.00 | 36911.00 | 36911.00 |
| Mean           | 38.15    | 33.89    | 38.07    | 23.10    |
| STD            | 31.29    | 26.52    | 31.32    | 14.84    |
| Min            | 0.00     | 0.00     | 0.00     | 0.00     |
| 25%            | 16.53    | 15.07    | 16.71    | 13.00    |
| 50%            | 28.95    | 26.26    | 28.94    | 20.00    |
| 75%            | 49.60    | 45.31    | 49.39    | 28.00    |
| Max            | 215.42   | 179.73   | 225.46   | 115.00   |
Table A3. Correlation coefficient and metrics of low-cost sensor and governmental BAM (after calibration).
|       | Raw   | MLR   | MLP   | SMART | BAM |
| Raw   | 1.000 | -     | -     | -     | slope = 0.434, intercept = 6.458, R² = 0.41, MAE = 15.87, MSE = 573.23 |
| MLR   | 0.989 | 1.000 | -     | -     | slope = 0.996, intercept = −0.028, R² = 0.84, MAE = 4.00, MSE = 29.90 |
| MLP   | 0.972 | 0.982 | 1.000 | -     | slope = 1.062, intercept = −1.086, R² = 0.86, MAE = 3.52, MSE = 23.88 |
| SMART | 0.954 | 0.964 | 0.979 | 1.000 | slope = 1.008, intercept = −0.258, R² = 0.89, MAE = 3.32, MSE = 22.06 |
| BAM   | 0.919 | 0.929 | 0.945 | 0.947 | 1.000 |

(Lower triangle: Pearson correlation coefficients; last column: best-fit statistics versus the BAM.)
Table A4. Descriptive statistic summary of low-cost sensor and governmental BAM (after calibration).
|                | Raw     | MLR     | MLP     | SMART   | BAM     |
| No. of samples | 7382.00 | 7382.00 | 7382.00 | 7382.00 | 7382.00 |
| Mean           | 38.12   | 23.13   | 22.70   | 23.09   | 23.01   |
| STD            | 31.18   | 13.74   | 13.12   | 13.85   | 14.74   |
| Min            | 0.00    | 2.97    | 2.15    | −6.50   | 0.00    |
| 25%            | 16.42   | 13.81   | 14.24   | 13.94   | 13.00   |
| 50%            | 28.97   | 19.52   | 19.51   | 19.89   | 20.00   |
| 75%            | 49.66   | 28.52   | 27.64   | 28.11   | 28.00   |
| Max            | 210.93  | 100.82  | 104.48  | 98.55   | 115.00  |

Appendix C. Prototype Build/Validation

This experiment was performed to obtain baseline data before developing the sensor platform, and it provided an accuracy comparison between candidate sensors. Four kinds of low-cost sensors—PMSA003 (Plantower Inc., Beijing, China [18]), PMS7003 (Plantower Inc.), SEN0177 (DFRobot Inc., Shanghai, China [19]), and HPMA115S0 (Honeywell Sensing Inc., Charlotte, NC, USA [20])—were selected as candidates, and three units of each were built to roughly evaluate their performance. Their outputs were compared with a co-located governmental BAM from 23 July 2018 to 25 July 2018 (Figure A3). We evaluated the correlation plots and correlation coefficients of the sensors, as shown in Figure A4 and Table A5. Data from the four sensor types were averaged over 5 min intervals and calibrated against the BAM output and humidity, as shown in Figure A5. As a result, PMS7003 was selected for the multi-sensor platform because it showed high repeatability and low sample-to-sample variability in the coefficient of determination among the three homogeneous sensors, as well as good linearity with the BAM.
Figure A3. Information on the governmental BAM and the test environment. The BAM is located at 426, Hakdong-ro, Gangnam-gu, Seoul, Korea, and operated by the Seoul Research Institute of Public Health and Environment.
Figure A4. Correlation plot between inter/hetero sensors.
Table A5. Comparison of the coefficient of determination with three sensors of four kinds each. After calculating the coefficient of determination for each combination of three sensors for each sensor type, the worst value was selected and calculated. (e.g., pmsa003-a, sen0177-c).
| Sensor    | PMSA003 | PMS7003 | SEN0177 | HPMA115S0 |
| PMSA003   | 0.987   | -       | -       | -         |
| PMS7003   | 0.983   | 0.994   | -       | -         |
| SEN0177   | 0.879   | 0.878   | 0.882   | -         |
| HPMA115S0 | 0.918   | 0.910   | 0.921   | 0.994     |
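The worst-pair selection described in the Table A5 caption can be sketched as below; for each sensor type, the coefficient of determination is computed for every pair of the three units and the lowest value is kept. The sensor readings here are synthetic stand-ins:

```python
# Worst-pair R^2 among three co-located units of one sensor type,
# mirroring the selection rule in the Table A5 caption.
# The readings below are hypothetical, not measured data.
from itertools import combinations

def r_squared(xs, ys):
    """Square of the Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy * sxy / (sxx * syy)

sensors = {
    "a": [10, 22, 35, 41, 55, 60],
    "b": [11, 21, 34, 43, 54, 62],
    "c": [9, 24, 33, 40, 57, 59],
}
# Keep the lowest pairwise R^2 as the conservative (worst-case) value.
worst = min(r_squared(sensors[p], sensors[q])
            for p, q in combinations(sensors, 2))
```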
Figure A5. Prototype output analysis.

Appendix D. Procedures of SMART Calibration

This is an example with two calibration models, N input variables, and a three-layer MLP; the number of models, input variables, and layers can be increased.
[Training dataset]
  1. Build a calibration model (a or b).
     - MLR: ŷ = w_0 + Σ_{i=1}^{N} w_i x_i
     - MLP (ReLU activation): ŷ = W_3 max(0, W_2 max(0, W_1 x))
  2. Segment each input space (i × j matrix).
  3. Calculate the residuals of each cell in the i × j matrix from the corresponding data and generate a residual map of the training dataset (n calibration models):
     Σ_{k=1}^{n} ε_k[ij] = Σ_{k=1}^{n} (y_k[ij] − ŷ_k[ij])
  4. Repeat steps 1–3 for the other model.
  5. Compare the residual maps cell by cell and build a prevailing model map:
     prevailing model: selected by min(σ_{ε[ij],MLR}, σ_{ε[ij],MLP})
[Test dataset]
  6. Infer test data from the prevailing model:
     ỹ[ij] = ŷ[ij],prevailing model
  7. Infer test data from the residuals of the prevailing model:
     if σ_{ε[ij],prevailing model} < σ_{ε,bound}, then ỹ[ij] = ỹ[ij] − (1/n) Σ_{k=1}^{n} ε_k[ij],prevailing model
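The steps above can be sketched in Python. This is a minimal illustration, not the paper's implementation: it uses a 1-D humidity segmentation instead of the i × j grid, two stand-in models with hypothetical coefficients, synthetic data, and the sign convention residual = reference − prediction (so the mean residual is added back):

```python
# Minimal sketch of the SMART procedure: per-cell residual maps for two
# models, a prevailing-model map, and residual-corrected inference.
# Data, cell boundaries, and model coefficients are all hypothetical.
from statistics import mean, stdev

train = [(30, 40, 16), (60, 45, 30), (90, 50, 44), (40, 80, 15),
         (80, 85, 28), (120, 90, 41), (35, 42, 18), (100, 88, 35)]
# each tuple: (raw PM, relative humidity, reference PM)

def model_a(pm, rh):            # stand-in linear model ("MLR")
    return 0.5 * pm + 1.0

def model_b(pm, rh):            # stand-in nonlinear model ("MLP")
    return 0.45 * pm - 0.05 * rh + 6.0

def cell(rh):                   # step 2: segment the input space (1-D here)
    return 0 if rh < 60 else 1

# Step 3: residual maps (residual = reference - prediction)
resid = {c: {"a": [], "b": []} for c in (0, 1)}
for pm, rh, ref in train:
    resid[cell(rh)]["a"].append(ref - model_a(pm, rh))
    resid[cell(rh)]["b"].append(ref - model_b(pm, rh))

# Step 5: prevailing model per cell = smaller residual standard deviation
prevailing = {c: min(("a", "b"), key=lambda m: stdev(resid[c][m]))
              for c in resid}

def smart(pm, rh):
    """Steps 6-7: prevailing-model prediction corrected by the cell's mean residual."""
    c = cell(rh)
    m = prevailing[c]
    y_hat = model_a(pm, rh) if m == "a" else model_b(pm, rh)
    return y_hat + mean(resid[c][m])
```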

Appendix E. Data Preprocessing Methods—More on Shuffled Methods

The dataset was also preprocessed and analyzed with several shuffled methods, selecting successive hourly or daily data chunks, as shown in Table A6 and Figure A6.
Table A6. Metric analysis for data preprocessing methods (shuffled - hourly/sequential - daily).
|               |        | Shuffled - Hourly |       |       |       |       | Shuffled - Daily |       |       |       |       |
| Dataset Ratio | Metric | Raw    | LR    | MLR   | MLP   | SMART | Raw    | LR    | MLR   | MLP   | SMART |
| 70%/30%       | MAE    | 14.71  | 4.33  | 3.99  | 3.65  | 3.57  | 15.64  | 4.38  | 4.01  | 3.56  | 3.68  |
|               | MSE    | 527.41 | 36.04 | 30.72 | 26.82 | 26.09 | 580.05 | 34.34 | 29.20 | 23.94 | 28.73 |
|               | R²     | 0.46   | 0.80  | 0.83  | 0.84  | 0.87  | 0.44   | 0.82  | 0.86  | 0.89  | 0.88  |
| 80%/20%       | MAE    | 14.09  | 4.27  | 3.92  | 3.60  | 3.54  | 16.86  | 4.55  | 4.25  | 3.57  | 3.76  |
|               | MSE    | 490.61 | 35.77 | 30.27 | 25.82 | 25.81 | 694.70 | 38.76 | 33.49 | 24.39 | 30.03 |
|               | R²     | 0.46   | 0.79  | 0.83  | 0.85  | 0.87  | 0.41   | 0.83  | 0.86  | 0.89  | 0.88  |
| 90%/10%       | MAE    | 14.14  | 3.92  | 3.75  | 3.45  | 3.41  | 18.99  | 4.85  | 4.48  | 3.85  | 3.77  |
|               | MSE    | 535.50 | 29.96 | 26.73 | 25.21 | 25.09 | 842.96 | 44.03 | 36.97 | 26.19 | 26.88 |
|               | R²     | 0.44   | 0.84  | 0.86  | 0.88  | 0.88  | 0.39   | 0.84  | 0.87  | 0.90  | 0.90  |
| 95%/5%        | MAE    | 14.88  | 4.02  | 3.90  | 3.66  | 3.71  | 14.97  | 5.39  | 4.87  | 4.75  | 4.99  |
|               | MSE    | 607.20 | 33.97 | 31.75 | 29.25 | 30.89 | 605.08 | 62.87 | 51.22 | 46.65 | 63.89 |
|               | R²     | 0.39   | 0.82  | 0.83  | 0.84  | 0.85  | 0.51   | 0.73  | 0.79  | 0.79  | 0.69  |

(Raw and LR use PM only; MLR, MLP, and SMART use PM + humidity + temperature.)
Figure A6. Comparison plot by data preprocessing methods: shuffled - hourly (top)/shuffled - daily (bottom).
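The chunk-based shuffled split described above can be sketched as follows: the series is cut into fixed-size chunks (e.g., one day of hourly samples), the chunks are shuffled whole with a fixed seed, and then divided into training and test sets. The chunk size and data here are illustrative:

```python
# Chunk-based shuffled train/test split: shuffle whole chunks so that
# samples inside a chunk stay contiguous in time.
import random

def chunk_shuffle_split(samples, chunk_size, train_frac, seed=0):
    """Split samples into shuffled chunks, then into train/test sets."""
    chunks = [samples[i:i + chunk_size]
              for i in range(0, len(samples), chunk_size)]
    random.Random(seed).shuffle(chunks)      # fixed seed for comparability
    n_train = int(len(chunks) * train_frac)
    flatten = lambda cs: [s for c in cs for s in c]
    return flatten(chunks[:n_train]), flatten(chunks[n_train:])

data = list(range(240))                      # 10 "days" of 24 hourly samples
train, test = chunk_shuffle_split(data, chunk_size=24, train_frac=0.9)
# 9 of the 10 day-chunks go to training, 1 to test; each day stays intact
```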

Appendix F. Grid Search CV Methods

This is the full list of hyperparameter grids.
  • [Common params] = ‘cross validations’:[10], ‘random state’:[0], ‘scoring’:[MSE]
  • Lasso params = ‘alpha’:[0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 20, 50, 100]
  • Ridge params = ‘alpha’:[0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 20, 50, 100, 200]
  • DT params = ‘max depth’:[4,6, 8,12,16], ‘min samples split’:[8, 16, 24, 32]
  • RF params =‘n estimators’: [100, 200, 500], ‘max depth’: [6, 8,12], ‘min samples split’: [8, 16, 24], ‘min samples leaf’: [8,12,18]
  • GB params = ‘n estimators’: [100, 200, 500], ‘learning rate’: [0.05, 0.1, 0.2]
  • XGB params = ‘n estimators’: [100, 200, 500], ‘learning rate’: [0.05, 0.1, 0.2], ‘colsample bytree’: [0.3,0.5,0.7,1], ‘subsample’:[0.3,0.5,0.7,1], ‘n jobs’:[−1]
  • LGB params = ‘n estimators’:[100, 200, 500], ‘learning rate’:[0.05, 0.1,0.2], ‘colsample bytree’: [0.5,0.7,1], ‘subsample’: [0.3,0.5,0.7,1], ‘num leaves’: [2,4,6], ‘reg lambda’: [10], ‘n jobs’: [−1]
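As an illustration of the mechanism (not the paper's actual scikit-learn GridSearchCV run), the sketch below exhaustively evaluates the Ridge alpha grid above with k-fold cross-validation on a toy 1-D ridge regression and keeps the lowest-MSE setting:

```python
# Toy grid search with k-fold CV over the Ridge alpha grid listed above.
# The 1-D no-intercept ridge model and the data are illustrative only.
def fit_ridge(xs, ys, alpha):
    """No-intercept 1-D ridge: w = sum(x*y) / (sum(x^2) + alpha)."""
    return sum(a * b for a, b in zip(xs, ys)) / (sum(a * a for a in xs) + alpha)

def cv_mse(xs, ys, alpha, k=10):
    """Mean squared error over k contiguous validation folds."""
    fold = len(xs) // k
    total, count = 0.0, 0
    for i in range(k):
        lo, hi = i * fold, (i + 1) * fold
        w = fit_ridge(xs[:lo] + xs[hi:], ys[:lo] + ys[hi:], alpha)
        for a, b in zip(xs[lo:hi], ys[lo:hi]):
            total += (b - w * a) ** 2
            count += 1
    return total / count

grid = [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1,
        0.5, 1, 5, 10, 20, 50, 100, 200]
xs = list(range(1, 21))
ys = [2 * v for v in xs]                     # noiseless y = 2x toy data
best_alpha = min(grid, key=lambda a: cv_mse(xs, ys, a))
```

On this noiseless toy data, shrinkage only hurts, so the smallest alpha in the grid wins; with noisy data the selected alpha would generally be larger.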

References

  1. Lee, S.; Lee, W.; Kim, D.; Kim, E.; Myung, W.; Kim, S.Y.; Kim, H. Short-term PM 2.5 exposure and emergency hospital admissions for mental disease. Environ. Res. 2019, 171, 313–320. [Google Scholar] [CrossRef] [PubMed]
  2. Burnett, R.; Chen, H.; Szyszkowicz, M.; Fann, N.; Hubbell, B.; Pope, C.A.; Apte, J.S.; Brauer, M.; Cohen, A.; Weichenthal, S.; et al. Global estimates of mortality associated with long-term exposure to outdoor fine particulate matter. Proc. Natl. Acad. Sci. USA 2018, 115, 9592–9597. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. World Health Organization (WHO). RHN Workshop on Environment and Health (Air Pollution and Active Mobility), Ljubljana, Slovenia, 30 November 2018; p. 39. Available online: https://www.euro.who.int/en/about-us/networks/regions-for-health-network-rhn/activities/network-updates/rhn-workshop-on-environment-and-health-air-pollution-and-active-mobility-at-the-11th-european-public-health-conference (accessed on 26 June 2020).
  4. Motlagh, N.H.; Petaja, T.; Kulmala, M.; Trachoma, S.; Lagerspetz, E.; Nurmi, P.; Li, X.; Varjonen, S.; Mineraud, J.; Siekkinen, M.; et al. Toward Massive Scale Air Quality Monitoring. IEEE Commun. Mag. 2020, 58, 54–59. [Google Scholar] [CrossRef]
  5. Morawska, L.; Thai, P.K.; Liu, X.; Asumadu-Sakyi, A.; Ayoko, G.; Bartonova, A.; Bedini, A.; Chai, F.; Christensen, B.; Dunbabin, M.; et al. Applications of low-cost sensing technologies for air quality monitoring and exposure assessment: How far have they gone? Environ. Int. 2018, 116, 286–299. [Google Scholar] [CrossRef] [PubMed]
  6. Rai, A.C.; Kumar, P.; Pilla, F.; Skouloudis, A.N.; Di Sabatino, S.; Ratti, C.; Yasar, A.; Rickerby, D. End-user perspective of low-cost sensors for outdoor air pollution monitoring. Sci. Total Environ. 2017, 607–608, 691–705. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Gao, Y.; Dong, W.; Guo, K.; Liu, X.; Chen, Y.; Liu, X.; Bu, J.; Chen, C. Mosaic: A low-cost mobile sensing system for urban air quality monitoring. In Proceedings of the IEEE INFOCOM 2016—The 35th Annual IEEE International Conference on Computer Communications, San Francisco, CA, USA, 10–14 April 2016. [Google Scholar] [CrossRef]
  8. Maag, B. Air Quality Sensor Calibration and Its Peculiarities. Ph.D. Thesis, ETH Zurich, Zürich, Switzerland, 2019. [CrossRef]
  9. Maag, B.; Zhou, Z.; Saukh, O.; Thiele, L. SCAN: Multi-Hop Calibration for Mobile Sensor Arrays. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2017, 1, 1–21. [Google Scholar] [CrossRef]
  10. Air Korea from Government. Available online: https://www.airkorea.or.kr/eng/currentAirQuality?pMENU_NO=68 (accessed on 18 December 2019).
  11. Every Air from SK Telecom. Available online: https://www.onestore.co.kr/userpoc/apps/view?pid=0000745074 (accessed on 18 December 2019).
  12. Air map Korea from KT. Available online: https://iot.airmapkorea.kt.com/info/ (accessed on 18 December 2019).
  13. Bulot, F.M.; Johnston, S.J.; Basford, P.J.; Easton, N.H.; Apetroaie-Cristea, M.; Foster, G.L.; Morris, A.K.; Cox, S.J.; Loxham, M. Long-term field comparison of multiple low-cost particulate matter sensors in an outdoor urban environment. Sci. Rep. 2019, 9, 1–13. [Google Scholar] [CrossRef] [PubMed]
  14. Mukherjee, A.; Stanton, L.G.; Graham, A.R.; Roberts, P.T. Assessing the utility of low-cost particulate matter sensors over a 12-week period in the Cuyama valley of California. Sensors 2017, 17, 1805. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Liu, H.Y.; Schneider, P.; Haugen, R.; Vogt, M. Performance assessment of a low-cost PM 2.5 sensor for a near four-month period in Oslo, Norway. Atmosphere 2019, 10, 41. [Google Scholar] [CrossRef] [Green Version]
  16. Kim, S.; Park, S.; Lee, J. Evaluation of performance of inexpensive laser based PM2.5 sensor monitors for typical indoor and outdoor hotspots of South Korea. Appl. Sci. 2019, 9, 1947. [Google Scholar] [CrossRef] [Green Version]
  17. Mukherjee, A.; Brown, S.G.; Mccarthy, M.C.; Pavlovic, N.R.; Stanton, L.G.; Snyder, J.L.; Andrea, S.D.; Hafner, H.R. Measuring Spatial and Temporal PM2.5 Variations in Sacramento, California, Communities Using a Network of Low-Cost Sensors. Sensors 2019, 19, 4701. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Plantower Inc. Available online: http://www.plantower.com/en/list/?118_1.html (accessed on 3 January 2020).
  19. DFRobot Inc. Available online: https://www.dfrobot.com/product-1272.html?search=sen0177&description=true (accessed on 3 January 2020).
  20. Honeywell Inc. Available online: https://sensing.honeywell.com/hpma115s0-xxx-particulate-matter-sensors (accessed on 3 January 2020).
  21. Karagulian, F.; Gerboles, M.; Barbiere, M.; Kotsev, A.; Lagler, F.; Borowiak, A. Review of Sensors for air Quality Monitoring; Publications Office of the European Union: Luxembourg, 2019. [Google Scholar] [CrossRef]
  22. Crilley, L.R.; Shaw, M.; Pound, R.; Kramer, L.J.; Price, R.; Young, S.; Lewis, A.C.; Pope, F.D. Evaluation of a low-cost optical particle counter (Alphasense OPC-N2) for ambient air monitoring. Atmos. Meas. Tech. 2018, 11, 709–720. [Google Scholar] [CrossRef] [Green Version]
  23. Lin, Y.; Dong, W.; Chen, Y. Calibrating Low-Cost Sensors by a Two-Phase Learning Approach for Urban Air Quality Measurement. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 2, 1–18. [Google Scholar] [CrossRef]
  24. Cordero, J.M.; Borge, R.; Narros, A. Using statistical methods to carry out in field calibrations of low cost air quality sensors. Sens. Actuators B 2018, 267, 245–254. [Google Scholar] [CrossRef]
  25. Magi, B.I.; Cupini, C.; Francis, J.; Green, M.; Hauser, C. Evaluation of PM2.5 measured in an urban setting using a low-cost optical particle counter and a Federal Equivalent Method Beta Attenuation Monitor. Aerosol Sci. Technol. 2019, 54, 1–13. [Google Scholar] [CrossRef]
  26. Kimoto Inc. Available online: https://www.kimoto-electric.co.jp/english/product/air/700.html#lineup (accessed on 3 January 2020).
  27. Matlab R2018b. Available online: https://www.mathworks.com/ (accessed on 3 January 2020).
  28. Pandas. Available online: https://pandas.pydata.org/ (accessed on 3 January 2020).
  29. Keras. Available online: https://keras.io/ (accessed on 3 January 2020).
  30. Scikit-Learn. Available online: https://scikit-learn.org/stable/index.html (accessed on 3 January 2020).
  31. Tensorflow. Available online: https://www.tensorflow.org/ (accessed on 3 January 2020).
  32. Zheng, T.; Bergin, M.H.; Johnson, K.K.; Tripathi, S.N.; Shirodkar, S.; Landis, M.S.; Sutaria, R.; Carlson, D.E. Field evaluation of low-cost particulate matter sensors in high-and low-concentration environments. Atmos. Meas. Tech. 2018, 11, 4823–4846. [Google Scholar] [CrossRef] [Green Version]
  33. Metone Inc. Available online: https://metone.com/products/bam-1020 (accessed on 3 January 2020).
  34. US EPA. Available online: https://www.epa.gov/ (accessed on 3 January 2020).
  35. Alexander, D.L.J.; Tropsha, A.; Winkler, D.A. Beware of R2: Simple, Unambiguous Assessment of the Prediction Accuracy of QSAR and QSPR Models. J. Chem. Inf. Model. 2015, 55, 1316–1322. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Purple Air Inc. Available online: https://www2.purpleair.com/products/purpleair-pa-ii (accessed on 3 January 2020).
  37. Anscombe, F.J. Graphs in Statistical Analysis. Am. Stat. 1973, 27, 17–21. [Google Scholar]
  38. Anscombe’s Quartet. Available online: https://en.wikipedia.org/wiki/Anscombe%27s_quartet (accessed on 26 June 2020).
  39. Nordhausen, K. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition by Trevor Hastie, Robert Tibshirani, Jerome Friedman; Springer: New York, NY, USA, 2009; pp. 37–38. [Google Scholar] [CrossRef]
Figure 1. Comparison of deployment density by responsible organization in Seoul. Each circle indicates an equipment location with the Korean air quality index (AQI) for PM2.5. (a) By the government (BAM); (b) By a company (light scattering) [12].
Figure 2. Overall procedures for sensor evaluation and calibration.
Figure 3. Information on multi-sensor platform. (a) Picture of platform; (b) Configuration of submodules.
Figure 4. Information on the governmental BAM station. (a) Outside; (b) Inside. It is located at 6, Sadang-ro 16a-gil, Dongjak-gu, Seoul, Korea, and operated by the Seoul Research Institute of Public Health and Environment. The inlets of the BAM (red circle) and the multi-sensor platform (blue circle) are located together.
Figure 5. Default data separation methods for the training dataset and test dataset. 20% of the training dataset is used for the validation dataset to prevent the over-fitting calibration model. A shuffled method is controlled by a fixed random seed to compare the performance between calibration algorithms.
Figure 6. The architecture of a fully connected neural network. The input layer (red) takes the explanatory variables and the output layer (green) produces the response variable. The weight matrices (parameters) are built according to the hyperparameters.
Figure 7. Overall procedures for SMART calibration. (e.g., MLR and multilayer perceptron (MLP) model).
Figure 8. Residual maps and a prevailing model map for SMART calibration. Residual map#1 from the linear model (top left) and residual map#2 from the nonlinear model (bottom left) are merged into a prevailing model map (right). Residuals under high humidity and low-temperature condition are indicated in red dotted circles. In this region, the linear model has higher expectations of residuals than the nonlinear model.
Figure 9. Prevailing model map and prevailing model map (segmented cell). A cell (Blue box) is segmented by the allocated inputs, and it has means and standard deviations of calibration models and BAM. The cell offers the prevailing model and its residual for the allocated input.
Figure 10. A characteristic of R 2 as the data range widens. The black dots alone have R 2 = 0.38, while the black dots and red triangles together have R 2 = 0.72, even though the regression line and RMSE are the same (Figure quoted from [35]). Descriptive statistics are therefore also required when R 2 is used as a metric.
Figure 11. Comparison between low-cost sensors and governmental BAM (before calibration).
Figure 12. Comparison between low-cost sensors and governmental BAM (after calibration).
Figure 13. Comparison plot by data preprocessing methods—shuffled (top) and sequential (bottom).
Figure 14. Comparison of output plot between low-cost sensor and governmental BAM–SMART calibration signal (green line)/BAM signal (blue line)/raw signal (red line).
Figure 15. Comparison plot by metrics (sequential). GridsearchCV (10) found best hyperparameters as below. PLR (degree:2)/Lasso (alpha:5)/Ridge (alpha:100)/DT—decision tree (max depth = 12, min samples split =16)/RF (max depth = 6, min samples leaf = 8, min samples split = 24, n estimators = 500)/GB (learning rate = 0.05, n estimators = 200)/XGB (colsample bytree = 1, learning rate = 0.05, n estimators = 200, subsample = 0.3)/LGB (colsample bytree = 0.5, learning rate = 0.05, n estimators = 500, num leaves = 4, reg lambda = 10, subsample = 0.3).
Table 1. Chosen coefficients of multivariate linear regression (MLR) (80%—training dataset, 20%—test dataset, shuffled method, 5 min. sampling interval condition).
| Raw(a) | Humidity | Temperature | Intercept |
| 0.4470 | 0.0581   | 0.0329      | 8.2511    |
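For illustration, applying the coefficients as listed in Table 1 gives the following calibration function (signs as printed; inputs are the raw PM signal, relative humidity, and temperature):

```python
# MLR calibration using the coefficients chosen in Table 1.
def mlr_calibrate(raw_pm, humidity, temp):
    """Calibrated PM = w_pm*raw + w_rh*humidity + w_t*temp + intercept."""
    return 0.4470 * raw_pm + 0.0581 * humidity + 0.0329 * temp + 8.2511
```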
Table 2. Hyperparameter of MLP (80%—training dataset, 20%—test dataset, shuffled method, 5 min. sampling interval condition).
| Hidden Layer | Neurons/Layer | Epoch | Batch | Activation | Dropout Rate | Learning Rate | Optimizer |
| 2            | 24            | 200   | 32    | ReLU       | 0.2          | 0.005         | Adam      |
Table 3. Metrics for performance analysis.
MAE = (1/N) Σ_{i=1}^{N} |y_i − ŷ_i|
MSE = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)²
RMSE = √[(1/N) Σ_{i=1}^{N} (y_i − ŷ_i)²]
R² = 1 − Σ_{i=1}^{N} (y_i − ŷ_i)² / Σ_{i=1}^{N} (y_i − ȳ)²
where y is the reference PM, ŷ the calibrated PM, and ȳ the mean of the reference PM.
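The four metrics in Table 3 can be implemented directly from their definitions (a straightforward sketch; y is the BAM reference series and y_hat the calibrated sensor series):

```python
# MAE, MSE, RMSE, and R^2 as defined in Table 3.
import math

def metrics(y, y_hat):
    """Return (MAE, MSE, RMSE, R^2) of y_hat against reference y."""
    n = len(y)
    errs = [a - b for a, b in zip(y, y_hat)]
    mae = sum(abs(e) for e in errs) / n
    mse = sum(e * e for e in errs) / n
    rmse = math.sqrt(mse)
    y_mean = sum(y) / n
    ss_tot = sum((a - y_mean) ** 2 for a in y)
    r2 = 1 - sum(e * e for e in errs) / ss_tot
    return mae, mse, rmse, r2
```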
Table 4. Comparison of calibration performance by input variables (short-term: 80%—training dataset, 20%—test dataset, shuffled method, 5 min. sampling interval condition).
|                                               | Linear - ULR/MLR |        |      | Nonlinear - MLP |        |      |
| Input Variables                               | MAE  | MSE    | R²   | MAE  | MSE    | R²   |
| [uncalibrated] Raw PM                         | 9.78 | 216.89 | 0.52 | 9.78 | 216.89 | 0.52 |
| [calibrated] Raw PM                           | 3.69 | 24.44  | 0.78 | 3.55 | 23.12  | 0.80 |
| [calibrated] Raw PM + Humidity                | 3.11 | 18.72  | 0.84 | 2.99 | 16.69  | 0.84 |
| [calibrated] Raw PM + Temp                    | 3.22 | 19.56  | 0.83 | 3.11 | 18.39  | 0.83 |
| [calibrated] Raw PM + Light                   | 3.39 | 21.40  | 0.81 | 3.23 | 18.97  | 0.84 |
| [calibrated] Raw PM + Humidity + Temp         | 3.11 | 18.70  | 0.84 | 2.95 | 16.91  | 0.83 |
| [calibrated] Raw PM + Humidity + Light        | 3.09 | 18.61  | 0.84 | 2.99 | 17.01  | 0.83 |
| [calibrated] Raw PM + Temp + Light            | 3.19 | 19.25  | 0.83 | 3.10 | 18.15  | 0.83 |
| [calibrated] Raw PM + Humidity + Temp + Light | 3.08 | 18.41  | 0.84 | 2.93 | 16.76  | 0.83 |
Table 5. Comparison of calibration performance by input variables (long-term: 80%—training dataset, 20%—test dataset, shuffled method, 5 min sampling interval condition)
|                                        | Linear - ULR/MLR |        |      | Nonlinear - MLP |        |      |
| Input Variables                        | MAE   | MSE    | R²   | MAE   | MSE    | R²   |
| [uncalibrated] Raw PM                  | 15.87 | 573.23 | 0.41 | 15.87 | 573.23 | 0.41 |
| [calibrated] Raw PM                    | 4.28  | 33.79  | 0.82 | 4.21  | 33.79  | 0.79 |
| [calibrated] Raw PM + Humidity         | 4.01  | 30.13  | 0.84 | 4.04  | 32.15  | 0.77 |
| [calibrated] Raw PM + Humidity + Temp. | 4.00  | 29.90  | 0.84 | 3.52  | 23.88  | 0.86 |
Table 6. Comparison of performance by sampling intervals (5 min/1 h/24 h: 80%—training dataset, 20%—test dataset, shuffled method).
| Sampling Interval | Metric | Raw    | LR    | MLP   | SMART |
| 5 min             | MAE    | 15.87  | 4.00  | 3.52  | 3.32  |
|                   | MSE    | 573.23 | 29.90 | 23.88 | 22.06 |
|                   | R²     | 0.41   | 0.84  | 0.86  | 0.89  |
| 1 h               | MAE    | 14.72  | 3.68  | 3.29  | 3.51  |
|                   | MSE    | 486.26 | 25.22 | 21.29 | 25.75 |
|                   | R²     | 0.41   | 0.85  | 0.88  | 0.86  |
| 24 h              | MAE    | 12.33  | 2.71  | 2.92  | 2.68  |
|                   | MSE    | 299.55 | 21.72 | 29.62 | 21.99 |
|                   | R²     | 0.37   | 0.77  | 0.75  | 0.77  |
Table 7. Metric analysis for data preprocessing methods (shuffled/sequential—5 min sampling interval condition).
|               |        | Shuffled |       |       |       |       | Sequential |       |       |       |       |
| Dataset Ratio | Metric | Raw    | LR    | MLR   | MLP   | SMART | Raw    | LR    | MLR   | MLP   | SMART |
| 70%/30%       | MAE    | 15.68  | 4.25  | 3.98  | 3.65  | 3.29  | 8.92   | 3.54  | 3.60  | 3.60  | 3.32  |
|               | MSE    | 563.90 | 33.45 | 29.61 | 25.27 | 21.80 | 182.31 | 21.99 | 22.49 | 23.70 | 21.56 |
|               | R²     | 0.41   | 0.82  | 0.84  | 0.83  | 0.89  | 0.47   | 0.66  | 0.66  | 0.58  | 0.61  |
| 80%/20%       | MAE    | 15.87  | 4.28  | 4.00  | 3.52  | 3.32  | 9.06   | 3.36  | 2.91  | 2.97  | 2.79  |
|               | MSE    | 573.23 | 33.79 | 29.90 | 23.88 | 22.06 | 196.35 | 18.70 | 14.84 | 15.20 | 14.02 |
|               | R²     | 0.41   | 0.82  | 0.84  | 0.86  | 0.89  | 0.41   | 0.71  | 0.76  | 0.66  | 0.76  |
| 90%/10%       | MAE    | 15.8   | 4.34  | 4.06  | 3.47  | 3.23  | 11.67  | 3.62  | 2.86  | 2.84  | 2.80  |
|               | MSE    | 570.1  | 34.80 | 30.76 | 22.70 | 20.85 | 311.90 | 21.31 | 14.73 | 15.06 | 14.05 |
|               | R²     | 0.42   | 0.81  | 0.84  | 0.87  | 0.90  | 0.33   | 0.76  | 0.83  | 0.82  | 0.82  |
| 95%/5%        | MAE    | 15.44  | 4.40  | 4.09  | 3.64  | 3.35  | 10.07  | 3.63  | 2.83  | 3.19  | 2.74  |
|               | MSE    | 549.64 | 36.53 | 31.96 | 24.63 | 22.48 | 194.54 | 19.44 | 13.34 | 15.92 | 12.75 |
|               | R²     | 0.42   | 0.80  | 0.83  | 0.86  | 0.88  | 0.18   | 0.57  | 0.72  | 0.60  | 0.74  |

(Raw and LR use PM only; MLR, MLP, and SMART use PM + humidity + temperature.)
Table 8. Metric analysis of various calibration methods (sequential method, 5 min sampling interval condition).
| Dataset Ratio | Metric | Raw    | LR    | MLR   | MLP   | SMART | PLR   | Lasso | Ridge | DT    | RF    | GB    | XGB   | LGB   |
| 70%/30%       | MAE    | 8.92   | 3.54  | 3.60  | 3.60  | 3.32  | 3.31  | 3.40  | 3.60  | 4.00  | 3.00  | 3.00  | 2.98  | 3.15  |
|               | MSE    | 182.31 | 21.99 | 22.49 | 23.70 | 21.56 | 19.21 | 20.71 | 22.49 | 30.18 | 16.69 | 16.45 | 16.26 | 17.82 |
|               | R²     | 0.47   | 0.66  | 0.66  | 0.58  | 0.61  | 0.65  | 0.68  | 0.66  | 0.69  | 0.78  | 0.77  | 0.77  | 0.74  |
| 80%/20%       | MAE    | 9.06   | 3.36  | 2.91  | 2.97  | 2.79  | 2.94  | 2.92  | 2.91  | 3.39  | 2.85  | 2.88  | 2.79  | 2.84  |
|               | MSE    | 196.35 | 18.70 | 14.84 | 15.20 | 14.02 | 14.80 | 14.98 | 14.84 | 21.24 | 14.43 | 14.58 | 13.80 | 14.26 |
|               | R²     | 0.41   | 0.71  | 0.76  | 0.66  | 0.76  | 0.75  | 0.75  | 0.76  | 0.71  | 0.77  | 0.78  | 0.78  | 0.79  |
| 90%/10%       | MAE    | 11.67  | 3.62  | 2.86  | 2.84  | 2.80  | 2.87  | 2.85  | 2.86  | 3.81  | 2.95  | 2.85  | 2.85  | 2.95  |
|               | MSE    | 311.90 | 21.31 | 14.73 | 15.06 | 14.05 | 14.67 | 14.61 | 14.73 | 26.74 | 15.11 | 14.78 | 14.71 | 15.54 |
|               | R²     | 0.33   | 0.76  | 0.83  | 0.82  | 0.82  | 0.83  | 0.83  | 0.83  | 0.72  | 0.81  | 0.83  | 0.84  | 0.83  |
| 95%/5%        | MAE    | 10.07  | 3.63  | 2.83  | 3.19  | 2.74  | 2.81  | 2.80  | 2.83  | 3.33  | 2.86  | 2.84  | 2.86  | 2.89  |
|               | MSE    | 194.54 | 19.44 | 13.34 | 15.92 | 12.75 | 13.21 | 13.01 | 13.34 | 19.20 | 13.37 | 13.88 | 14.07 | 14.14 |
|               | R²     | 0.17   | 0.57  | 0.72  | 0.60  | 0.74  | 0.72  | 0.71  | 0.72  | 0.67  | 0.71  | 0.75  | 0.74  | 0.74  |
Table 9. Performance comparison by group & method.
| Category           | Metric | Other Group - Shuffled (MLR) | Our Group - Shuffled (SMART) | Our Group - Sequential (SMART) |
| Before calibration | MAE    | 5.8  | 15.1 | 11.4 |
|                    | RMSE   | 7.5  | 23.1 | 17.3 |
| After calibration  | MAE    | 3.2  | 3.4  | 2.8  |
|                    | RMSE   | 4.1  | 4.8  | 3.7  |
|                    | R²     | 0.57 | 0.89 | 0.81 |
Share and Cite

Lee, H.; Kang, J.; Kim, S.; Im, Y.; Yoo, S.; Lee, D. Long-Term Evaluation and Calibration of Low-Cost Particulate Matter (PM) Sensor. Sensors 2020, 20, 3617. https://doi.org/10.3390/s20133617