Article

A Methodology to Increase the Accuracy of Particulate Matter Predictors Based on Time Decomposition

by Paulo S. G. de Mattos Neto 1, Manoel H. N. Marinho 2, Hugo Siqueira 3, Yara de Souza Tadano 4, Vivian Machado 5, Thiago Antonini Alves 5,*, João Fausto L. de Oliveira 2 and Francisco Madeiro 6

1 Departamento de Sistemas de Computação, Centro de Informática, Universidade Federal de Pernambuco (UFPE), Recife (PE) 50670-901, Brazil
2 Polytechnic School of Pernambuco, University of Pernambuco (UPE), Recife (PE) 50720-001, Brazil
3 Department of Electronics, Federal University of Technology–Parana (UTFPR), Ponta Grossa (PR) 84017-220, Brazil
4 Department of Mathematics, Federal University of Technology–Parana (UTFPR), Ponta Grossa (PR) 84017-220, Brazil
5 Department of Mechanical Engineering, Federal University of Technology–Parana (UTFPR), Ponta Grossa (PR) 84017-220, Brazil
6 Centro de Ciências e Tecnologia, Universidade Católica de Pernambuco (UNICAP), Recife (PE) 50050-900, Brazil
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(18), 7310; https://doi.org/10.3390/su12187310
Submission received: 25 June 2020 / Revised: 27 August 2020 / Accepted: 31 August 2020 / Published: 7 September 2020
(This article belongs to the Section Environmental Sustainability and Applications)

Abstract:
Particulate matter (PM) is one of the air pollutants most harmful to human health, studied worldwide. In this scenario, it is of paramount importance to monitor and predict PM concentration. Artificial neural networks (ANN) are commonly used to forecast air pollution levels due to their accuracy. The use of partitioning in prediction problems is well known, because the decomposition of a time series reveals the latent components of the original series. It is a matter of extracting the “deterministic” component, which is easier to predict, from the random components. However, there is no evidence of its use in air pollution forecasting. In this work, we introduce a different approach consisting of the decomposition of the time series into contiguous monthly partitions, aiming to develop specialized predictors, since air pollutant concentration has seasonal behavior. The goal is to reach a prediction accuracy higher than the one obtained by using the entire series. Experiments were performed for seven time series of daily particulate matter concentrations (PM2.5 and PM10, particles with diameters of less than 2.5 and 10 micrometers, respectively) in Finland and Brazil, using four ANNs: multilayer perceptron, radial basis function, extreme learning machines, and echo state networks. The experimental results using three evaluation measures showed that the proposed methodology increased all models' prediction capability, leading to higher accuracy compared to the traditional approach, even for extremely high air pollution events. Our study makes an important contribution to air quality prediction studies. It can help governments take measures aimed at air pollution reduction and prepare hospitals for extreme air pollution events, which relates to the following United Nations sustainable development goals: SDG 3, good health and well-being, and SDG 11, sustainable cities and communities.


1. Introduction

According to the World Health Organization [1], nine out of ten people are exposed to high air pollution levels. In this sense, particulate matter (PM) is considered one of the most harmful air pollutants, since it can be deposited in the lungs, reaching the alveoli and the bloodstream [2]. It can also cause severe cardiopulmonary diseases, lung cancer, and even death [3,4]. Many epidemiological studies have already shown the association between particulate matter concentration and cardiopulmonary diseases, or even mortality [3,4,5,6,7]. In recent years, approximately 7 million deaths were caused by fine particles, which have an aerodynamic diameter of less than 2.5 micrometers (PM2.5) [1]. Monitoring and forecasting PM concentrations in the air are essential to the population, health institutions, and governments worldwide [1].
In this context, we highlight the forecasting systems based on artificial neural networks (ANNs), due to the excellent performances demonstrated in the literature [8,9,10,11,12,13,14,15]. These systems generally employ two data-driven approaches for ANN adjustment: the use of only previous (historical) data of PM concentration [12,13,16,17] or the use of the historical data of PM jointly with related features, such as temperature, relative humidity, wind speed and direction, and others [8,9,10,11,14,18,19,20,21].
Traditionally, systems based on ANN use only one forecasting model for the entire series [8,9,10,12,13,14,15,16]. However, some works [19,20,21,22,23,24,25,26,27,28] address a set of predictors, each for a specific part of the series. Based on the literature, the latter approach has not been employed to predict particulate matter time series [29,30,31,32]. The use of partitioning in prediction problems is well known, because the decomposition of a time series reveals the latent components of the original series. It is a matter of extracting the “deterministic” component, which is easier to predict, from the random components. However, there is no evidence of its use in air pollution forecasting [33,34,35]. Yet, it has been used in other scenarios, such as hydrology [36], finance [22,23,25], energy [37], and natural phenomena [24].
This paper introduces a different time series forecasting approach, which consists of decomposing the series into contiguous (non-overlapping) partitions (time windows). The decomposition is performed to generate sub-series from the series under analysis, aiming to obtain a prediction accuracy higher than the one obtained with the conventional approach, which uses the entire series. This process is conducted so as to obtain an average coefficient of variation (CV) lower than the CV calculated over the whole series. This proposal is based on two premises: (i) PM concentration time series have specific patterns in different epochs of the year [38,39,40,41], and (ii) a set of predictors modeled for the specific patterns present in time windows achieves better accuracy than a single forecasting model for the entire series [22,23,24,25,26].
We used seven time series of daily PM levels comprising data from Helsinki, Finland, and from São Paulo, Campinas, and Ipojuca, Brazil, to assess the robustness of the proposed approach. Each series is decomposed into twelve sub-series, one for each month of the year. Subsequently, a forecasting method is specifically modeled for each case. In this paper, we employed four artificial neural network architectures: multilayer perceptron (MLP), radial basis function network (RBF), extreme learning machines (ELM), and echo state networks (ESN), considering both the traditional and the monthly approach.
The rest of this paper is organized as follows: Section 2 introduces the proposed methodology; Section 3 depicts the case studies; Section 4 presents the computational results and critical analysis; and Section 5 shows the main conclusions.

2. Proposed Methodology

The methodology can be divided into two phases: training and testing. Figure 1 depicts the general framework of the first phase. In this case, the dataset is divided into n sub-series (partitions), such that the partitions satisfy a given statistical condition. Then, n prediction models are trained, one for each of the n partitions.
In the first step of Figure 1, a seasonality test is performed to determine the series' seasonal period. In this work, we used Friedman's test and the Kruskal-Wallis test [42]. For example, suppose the statistical tests indicate that the PM time series presents similar patterns in a monthly fashion. In that case, a forecasting model is employed to perform daily predictions in a specific month. The objective is to build models specialized for each month, since similar patterns may be present in the same month. These models are selected based on the current prediction month.
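As an illustration, a minimal sketch of both seasonality tests using SciPy is shown below. It assumes a pandas Series `pm` of daily PM concentrations indexed by date; using years as the blocks of Friedman's test is our assumption for the sketch, not a detail stated in the text.

```python
# Sketch of the seasonality tests: Kruskal-Wallis on the raw daily values
# grouped by month, and Friedman's test with years as blocks and months as
# treatments (monthly means give one value per block/treatment pair).
import pandas as pd
from scipy.stats import friedmanchisquare, kruskal

def seasonality_tests(pm: pd.Series, alpha: float = 0.05) -> dict:
    month_groups = [g.values for _, g in pm.groupby(pm.index.month)]
    _, p_kruskal = kruskal(*month_groups)

    table = pm.groupby([pm.index.year, pm.index.month]).mean().unstack().dropna()
    _, p_friedman = friedmanchisquare(*[table[m].values for m in table.columns])

    return {"kruskal_p": p_kruskal, "friedman_p": p_friedman,
            "monthly_seasonality": p_kruskal < alpha and p_friedman < alpha}
```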
A statistical test is adopted using the coefficient of variation (CV) analysis, which should answer the question: “Is the mean CV calculated from the partitions smaller than the CV of the entire series?” We adopted this step given the relationship between predictability and the CV demonstrated in [43,44,45]. The purpose of employing a seasonality test is to find time series partitions with lower CV values than the entire time series.
Figure 2 depicts the framework of the testing phase. The first part consists of determining the partition used for prediction purposes. For partition j, the system uses the Fj model, which was previously designed (in the training phase) precisely to predict the j-th sub-series (in the j-th partition). The definition of the number of sub-series is based on the seasonality tests mentioned above and on the mean CV values, for which the value n = 12 achieved the best results. Furthermore, n = 12 also represents the seasonal component of the series. Thus, if the prediction is performed in March, the F3 model is used; if it is performed in December, the F12 model is used.
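The testing-phase dispatch can be summarized in a few lines; the sketch below assumes the 12 trained predictors are stored in a dictionary keyed by month, with `predict` being a hypothetical method of each model.

```python
# Choose the specialized predictor F_j according to the month of the sample
# being forecast: F_3 for March, F_12 for December, and so on.
def predict_one_step(models: dict, lagged_inputs, timestamp):
    model = models[timestamp.month]
    return model.predict(lagged_inputs)
```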
This monthly approach has been used in different scenarios. Ballini et al. [46] used fuzzy inference systems to identify two hydrological processes and generate synthetic monthly inflow sequences. Luna and Ballini [37] addressed monthly electrical energy demand forecasting for an energy enterprise in Brazil's Southeastern region, while Siqueira et al. [27,28] proposed a similar methodology for seasonal streamflow series forecasting related to hydroelectric plants. In such studies, the lags (inputs) of the models were the previous months' data. For example, to predict May 2010, the inputs were the April, March, and February 2010 samples. However, there are essential differences between the method introduced in this work and the approaches mentioned above:
  • A seasonality test is conducted to find a suitable seasonal period for the series;
  • The calculation of the CV is employed to determine which partitions present similar patterns;
  • MLP, RBF, ESN, and ELM are used to assess the proposed methodology.
Note that, in the traditional approach, a forecasting model needs to learn the statistical fluctuations of the nonlinear mapping for all months of the year. In the proposed approach, despite having less data for adjustment, each model only needs to learn the statistical fluctuations of the month under evaluation.

2.1. Partition Creation and Calculation of the Coefficient of Variation

Figure 3 shows the partitioning scheme proposed in this work. The training set is composed of daily records of m full years. The 12 partitions are created by grouping the data of each month into a separate partition. Thus, Partition1 corresponds to the training data only of January from Year 1 to Year m, Partition2 to the training data of February from Year 1 to Year m, and so on.
As each partition contains data specific to a particular month, it is expected that similar temporal patterns are clustered. Consequently, one expects a favorable scenario, with lower values of the CV in the partitions in comparison to the CV of the entire series. Since low/high values of the CV point to low/high data fluctuations, a correspondingly high/low predictability is expected. Some works in the literature, e.g., [43,44,45,47,48,49], used this measure in different forecasting applications.
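The statistical condition of the training phase can be checked directly; the following sketch assumes a pandas Series `pm` of daily concentrations indexed by date and uses the population standard deviation, a choice the text does not specify.

```python
# Check whether the mean CV over the 12 monthly partitions is smaller than
# the CV of the entire series, the condition required by the methodology.
import numpy as np
import pandas as pd

def cv(x: np.ndarray) -> float:
    return np.std(x) / np.mean(x)  # coefficient of variation

def monthly_partition_is_favorable(pm: pd.Series) -> bool:
    whole_cv = cv(pm.values)
    partition_cvs = [cv(g.values) for _, g in pm.groupby(pm.index.month)]
    return np.mean(partition_cvs) < whole_cv
```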
Kahn [44] addressed the CV as a measure of predictability in the sales support and operations planning process. Fatichi et al. [49] used it to investigate the inter-annual variability of precipitation on a global scale. Bahra [48] used a measure based on the CV of a market's implied risk-neutral density (RND) function as a volatility forecasting tool.

2.2. Forecasting Models Used in the Proposed Approach

ANNs have been widely used in time series forecasting, including air pollution prediction and control [50,51,52,53]. In this work, we used four different neural models in the framework depicted in Figure 1: multilayer perceptron (MLP), radial basis function network (RBF), extreme learning machines (ELM), and echo state networks (ESN). These models were selected due to their extensive use in time series forecasting [8,9,10,11,12,13,14,15,16,28,46,54,55,56]. These ANN architectures can approximate any nonlinear, continuous, limited, and differentiable function, being universal approximators [57].

2.2.1. Multilayer Perceptron

The MLP is a feedforward artificial neural network (ANN) with multiple layers of artificial neurons (nodes) [27]. The artificial neurons of layer i are fully connected to the nodes of the subsequent layer (i + 1).
This architecture can have three or more layers [57]:
  • Input layer: transmits the input signal to the hidden layers;
  • Hidden (intermediate) layers: this set of neurons performs a nonlinear transformation, mapping the input signal to another space;
  • Output layer: this layer receives the signal from the hidden layers and combines it to form the network's output. Often, this process is based on linear combinations.
In this work, we addressed an MLP with just one hidden layer. Let $u$ be the vector containing the inputs, $b$ the bias vector, and $w_{kn}^{i}$ the intermediate-layer weights, where $n = 1, \ldots, N$ indexes the inputs and $k = 1, \ldots, K$ indexes the hidden neurons; $w_{1k}^{0}$ are the weights of the output neuron. The output response of the network is given by Equation (1):

$$y = f_s\left[\sum_{k=1}^{K} w_{1k}^{0}\, f\!\left(\sum_{n=1}^{N} w_{kn}^{i} u_n + b_k\right)\right], \qquad (1)$$

in which $f$ is the activation function of the hidden neurons and $f_s$ is the linear activation function of the output layer.
The training process of an ANN is defined as the steps to find its best set of weights to solve the task. In an MLP, the most usual way to adjust the network is an iterative process, resorting to the solution of an unconstrained nonlinear optimization problem. The MLP architecture used in this work has three layers, and the training process is based on the modified scaled conjugate gradient, a second-order method [27].
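For reference, Equation (1) translates directly into a few lines of NumPy; the tanh hidden activation below matches the saturating activation mentioned in Section 4, while the trained weight values themselves would come from the scaled conjugate gradient procedure.

```python
# Forward pass of the one-hidden-layer MLP of Equation (1):
# y = f_s( sum_k w0_{1k} * f( sum_n wi_{kn} * u_n + b_k ) ),
# with f = tanh and a linear output activation f_s.
import numpy as np

def mlp_forward(u, Wi, b, w0):
    # u: (N,) inputs; Wi: (K, N) hidden weights; b: (K,) biases; w0: (K,) output weights
    hidden = np.tanh(Wi @ u + b)  # hidden-layer responses, f
    return w0 @ hidden            # linear combination at the output, f_s
```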

2.2.2. Radial Basis Function Network

The radial basis function network (RBF) is also a widely known ANN framework, with two neuron layers: a hidden and an output layer. The first performs an input-output mapping based on kernel (activation) functions of radial basis. In the hidden layer, each neuron presents two free parameters: a center and a dispersion. The most used function is the Gaussian, given by Equation (2):

$$\varphi(u) = e^{-\frac{\|u - c\|^2}{2\sigma^2}}, \qquad (2)$$

where $c$ is the center of the Gaussian function and $\sigma^2$ is its variance.
The second layer combines the previous layer's outputs, in many cases using a linear approach [57].
The training of an RBF is performed in two stages. Initially, the weights of the hidden layer are adjusted by calculating their centers and dispersions. This task is performed using unsupervised clustering methods, like K-Medoids. The second stage is accomplished by tuning the weights of the output layer. One can use a linear procedure to combine the outputs of the hidden layer; in this work, we address the Moore-Penrose pseudoinverse [58]. This operation ensures the minimum mean squared error (MSE) at the output. The solution is presented in Equation (3):

$$W_{out} = \left(X_{hid}^{T} X_{hid}\right)^{-1} X_{hid}^{T} d, \qquad (3)$$

where $X_{hid}$ is the matrix containing the outputs of the hidden layer for the training set, $W_{out}$ the weights of the output layer, and $d$ the desired output.
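A compact sketch of the two-stage RBF training is given below. scikit-learn's KMeans is used as a stand-in for the K-Medoids clustering named above, and the per-cluster standard deviation is a crude dispersion estimate of our own choosing.

```python
# Two-stage RBF training: (1) unsupervised clustering for centers/dispersions,
# (2) linear output layer solved with the Moore-Penrose pseudoinverse, Eq. (3).
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(U, d, K=10):
    # U: (T, N) training inputs; d: (T,) desired outputs
    km = KMeans(n_clusters=K, n_init=10).fit(U)
    centers = km.cluster_centers_
    sigma = np.mean([np.std(U[km.labels_ == k]) for k in range(K)]) + 1e-8

    # Gaussian hidden responses of Equation (2) for every training sample
    dists = np.linalg.norm(U[:, None, :] - centers[None, :, :], axis=2)
    X_hid = np.exp(-dists**2 / (2 * sigma**2))

    W_out = np.linalg.pinv(X_hid) @ d  # Equation (3) via the pseudoinverse
    return centers, sigma, W_out
```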

2.2.3. Extreme Learning Machines

Extreme learning machines (ELM) [55] are single-hidden-layer feedforward neural networks, highly similar to the traditional MLP. The main difference between them is the training, since in this case the hidden layer is not adjusted during the training process [28]. In this phase, just the output layer is tuned [27]. Therefore, its training requires less computational effort than the MLP's. Like the MLP, the ELM is a universal approximator.
Consider $u$ the vector of inputs and $b$ the bias vector. The output of the hidden layer is given by Equation (4):

$$x_h = f_h\left(W_h u + b\right), \qquad (4)$$

where $W_h$ is the matrix containing the hidden-layer weights and $f_h(\cdot)$ is the activation function of these neurons, applied element-wise.
The output of the network is given by Equation (5):

$$y = W_{out}\, x_h, \qquad (5)$$

in which $W_{out}$ is the matrix with the weights of the output layer.
The training process is summarized as finding the weights of the output layer, while $W_h$ is randomly chosen. The most usual way to solve this task is to apply the same Moore-Penrose pseudoinverse operator described for the RBF.
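Because only the output layer is solved, the whole ELM training fits in a few lines; the sketch below assumes tanh hidden activations and weights drawn uniformly from [−1, 1].

```python
# Minimal ELM: random (untrained) hidden layer, Equation (4), and a
# pseudoinverse-solved readout; Equation (5) gives the prediction.
import numpy as np

def train_elm(U, d, K=20, seed=0):
    rng = np.random.default_rng(seed)
    Wh = rng.uniform(-1, 1, size=(K, U.shape[1]))  # hidden weights, never adjusted
    b = rng.uniform(-1, 1, size=K)
    Xh = np.tanh(U @ Wh.T + b)                     # Equation (4), one row per sample
    W_out = np.linalg.pinv(Xh) @ d                 # only the output layer is tuned
    return Wh, b, W_out

def elm_predict(u, Wh, b, W_out):
    return W_out @ np.tanh(Wh @ u + b)             # Equation (5)
```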

2.2.4. Echo State Networks

Echo state networks (ESN) are recurrent ANNs, since they present feedback loops of information in the hidden layer, here named the dynamic reservoir [56]. This characteristic can be an advantage, especially in problems with temporal dependence between the samples. As in the ELM, the training process of an ESN adjusts just the output layer.
Jaeger [56] proved that, under specific conditions [27,28], the weights of the neurons in the reservoir can be set in advance and kept unchanged during training.
The output of the reservoir $x_{n+1}$ is given by Equation (6):

$$x_{n+1} = f\left(W_{in} u_{n+1} + W x_n\right), \qquad (6)$$

where $u_{n+1}$ is the input signal, $W_{in}$ is the matrix containing the weights of the input layer, $W$ is the reservoir matrix holding the feedback connections' synaptic weights, and $f(\cdot)$ denotes the respective activation functions.
The output $y$ of the network is as in Equation (7):

$$y_{n+1} = W_{out}\, x_{n+1}, \qquad (7)$$

where $W_{out}$ contains the weights of the output layer.
There are several ways to generate the weights of the reservoir. In this work, we address the proposal from Jaeger [56], presented in Equation (8):

$$W_{ij} = \begin{cases} +0.4 & \text{with probability } 0.025 \\ -0.4 & \text{with probability } 0.025 \\ 0.0 & \text{with probability } 0.950 \end{cases} \qquad (8)$$
Once again, we use the Moore-Penrose pseudoinverse operator to train the network [59,60,61].
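Putting Equations (6)-(8) together, an ESN sketch looks as follows; the reservoir size, tanh activation, and uniform input weights are our assumptions.

```python
# ESN with the sparse reservoir of Equation (8): entries are +0.4 or -0.4
# with probability 0.025 each, and 0.0 otherwise. States evolve by Eq. (6);
# the readout of Eq. (7) is solved with the Moore-Penrose pseudoinverse.
import numpy as np

def esn_fit(u_seq, d, K=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.choice([0.4, -0.4, 0.0], size=(K, K), p=[0.025, 0.025, 0.95])
    W_in = rng.uniform(-1, 1, size=(K, u_seq.shape[1]))

    states, x = [], np.zeros(K)
    for u in u_seq:                     # one state update per sample, Eq. (6)
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    W_out = np.linalg.pinv(np.array(states)) @ d   # readout weights, Eq. (7)
    return W, W_in, W_out
```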

3. Case Studies

Aiming to analyze whether the proposed methodology brings gains in any studied location, we chose databases from two countries with specific climates and emission sources (Brazil and Finland). The case studies consist of predicting PM10 and PM2.5 for four different areas with distinct characteristics: Helsinki, Finland (Kallio station—PM10 and PM2.5; Vallila station—PM10); São Paulo city (Tietê station—PM10 and PM2.5) and Campinas city (PM10), both in São Paulo State, Brazil; and Ipojuca city, Pernambuco State, Brazil (Ipojuca station—PM10).
São Paulo and Campinas are in Southeast Brazil. São Paulo city is the capital of São Paulo state and the most populous city in Brazil, with 12,252,023 inhabitants, an urban area of 969.32 km2 out of a total area of 1521.11 km2, and a demographic density of 7398.26 inhabitants per km2 [62]. It is the leading business, commercial, and financial center of South America, with a Human Development Index of 0.805 [59,63]. Campinas city is the third most populous municipality in São Paulo state, occupying 797.6 km2, with an urban area of 238.2 km2, 1,194,094 inhabitants, and a demographic density of 1359.60 inhabitants per km2 [62]. Campinas and São Paulo have similar climates, with hot and rainy long summers and shorter winters with mild temperatures. The temperature ranges from 13 °C to 29 °C, rarely reaching values below 9 °C or above 33 °C [64].
Ipojuca city, Pernambuco state, Brazil, is located in the Brazilian Northeast, metropolitan region of Pernambuco state capital—Recife. It is a coastal city with 96,204 inhabitants and a demographic density of 152.98 inhabitants per km2 [62]. With a hot and windy climate, the temperature ranges from 22 °C to 32 °C, rarely reaching values below 21 °C and above 34 °C [64].
Helsinki is the capital and the most populous city of Finland, with 655,276 inhabitants and an urban area population of 1,268,296 [62]. It is the most important center of Finland for politics, education, finance, culture, and research. Helsinki's weather differs significantly from that of the other studied cities, with temperatures ranging from −8 °C to 21 °C, rarely reaching values below −19 °C or above 26 °C [64].
The demographic, business, and meteorological characteristics of each studied city are distinct, which helps ensure the proposed methodology is relevant to any location. Additionally, the cities present varied emission sources.
Figure 4 and Figure 5 show each station's location in Brazil and Finland, respectively. São Paulo and Campinas are mainly affected by vehicular emissions. The São Paulo station is near a ring road, being influenced mainly by heavy-duty emissions. In contrast, the Campinas station is located near one of the city's main avenues (urban area), with predominantly light-duty emissions. The Ipojuca station is near a petrol refinery (industrial area) and the port of Suape, a different emission pattern from Campinas and São Paulo. Beyond the demographic, business, and meteorological differences between Helsinki and the Brazilian cities, Kallio is an urban background station, and Vallila is in the city downtown, with high traffic [13,16,21]. We highlight that the data from Finland were used in important investigations of air pollution forecasting [13,16,21].
Table 1 shows the number of samples and the studied period for each station considered in this work, as well as the data source for each one. It is relevant to highlight the shorter time range of the Ipojuca station, which leaves some months (May and June) with data for only one year. The time range varies across stations so as to comprise different databases and analyze the robustness of the proposed approach. The Kallio and Vallila time series were used even with their older time range, since remarkable works in the literature [10,14,65] used them.
Table 2 shows the mean, standard deviation, and maximum and minimum values of the PM10 and PM2.5 concentrations for each station. There is high variability between databases, with maximum PM10 levels ranging from 78.7 to 138 µg/m3 and means varying from 16.7 to 38 µg/m3. We have only two PM2.5 databases available, but with distinct values: the São Paulo station has higher levels of PM2.5 than Kallio. This shows the diversity of our data, which is important to test the proposed methodology's robustness. Figure A1, Figure A2, Figure A3, Figure A4, Figure A5, Figure A6 and Figure A7 in Appendix A show each station's pollutant concentrations during the studied period to illustrate the databases' temporal variability.

4. Computational Simulations

Before applying the methodology, the database is divided into three sets:
  • Test set, for performance evaluation: composed of the last 10 samples of each month (for example, in January, we used the data from the 22nd to the 31st);
  • Validation set, to avoid excessive adjustment (only for the MLP): formed by the last 10 samples of the remainder of each month, excluding the test samples (in January, we used the data from the 12th to the 21st);
  • Training set, to adjust the free parameters of the neural models: the remaining samples.
We standardized the test set to 10 samples for all months since, for the Ipojuca series, there are two cases in which just one month of daily data (April and May 2016) is available. In this sense, we have 30 samples to adjust and evaluate the predictors' performances on the decomposed series.
For all experiments, we normalized the dataset to the interval [−1, +1]. This is mandatory for the neural models due to the saturation of the hyperbolic tangent activation functions [57].
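A sketch of this per-partition split and scaling is shown below; it assumes each monthly partition is a pandas Series ordered in time, and that the scaling bounds come from the training portion (a common practice the text does not spell out).

```python
# Last 10 samples of the month -> test set; the 10 before them -> validation;
# the rest -> training. Values are then mapped onto [-1, +1].
import pandas as pd

def split_month(pm_month: pd.Series):
    return pm_month[:-20], pm_month[-20:-10], pm_month[-10:]

def scale_to_unit_interval(x, lo, hi):
    return 2 * (x - lo) / (hi - lo) - 1  # linear map of [lo, hi] onto [-1, +1]

# Usage: train, val, test = split_month(partition)
#        lo, hi = train.min(), train.max()
#        train_s, val_s, test_s = (scale_to_unit_interval(s, lo, hi)
#                                  for s in (train, val, test))
```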
The models’ performance was determined based on five measurements, depicted in Equations (9)–(13) [69,70]:
  • Mean squared error (MSE):
    $$MSE = \frac{1}{N_s} \sum_{t=1}^{N_s} \left(x_t - \hat{x}_t\right)^2 \qquad (9)$$
    where $x_t$ is the observed sample at time $t$, $\hat{x}_t$ is the prediction, and $N_s$ is the number of predicted samples.
  • Mean absolute error (MAE):
    $$MAE = \frac{1}{N_s} \sum_{t=1}^{N_s} \left|x_t - \hat{x}_t\right| \qquad (10)$$
  • Mean absolute percentage error (MAPE):
    $$MAPE = \frac{100}{N_s} \sum_{t=1}^{N_s} \left|\frac{x_t - \hat{x}_t}{x_t}\right| \qquad (11)$$
  • Root mean squared error (RMSE):
    $$RMSE = \sqrt{\frac{1}{N_s} \sum_{t=1}^{N_s} \left(x_t - \hat{x}_t\right)^2} \qquad (12)$$
  • Index of agreement (IA):
    $$IA = 1 - \frac{\sum_{t=1}^{N_s} \left(x_t - \hat{x}_t\right)^2}{\sum_{t=1}^{N_s} \left(\left|\hat{x}_t - \bar{x}\right| + \left|x_t - \bar{x}\right|\right)^2} \qquad (13)$$
    in which $\bar{x}$ is the average of the observed series.
For the MSE, MAE, MAPE, and RMSE measures, the lower the values, the more accurate the model. The IA varies in the range [0, 1], and the higher its value, the better the accuracy.
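The five measures transcribe directly from Equations (9)-(13):

```python
# NumPy transcription of the evaluation measures, Equations (9)-(13).
import numpy as np

def evaluate(x, x_hat):
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    err = x - x_hat
    mse = np.mean(err**2)                                   # Eq. (9)
    mae = np.mean(np.abs(err))                              # Eq. (10)
    mape = 100 * np.mean(np.abs(err / x))                   # Eq. (11)
    rmse = np.sqrt(mse)                                     # Eq. (12)
    denom = np.sum((np.abs(x_hat - x.mean()) + np.abs(x - x.mean()))**2)
    ia = 1 - np.sum(err**2) / denom                         # Eq. (13)
    return {"MSE": mse, "MAE": mae, "MAPE": mape, "RMSE": rmse, "IA": ia}
```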
We performed 30 simulations for each PM series. The best configuration was selected based on the smallest MSE value in the validation set. For all experiments, the models were evaluated in one-step-ahead forecasting on the test set. We highlight that the number of inputs, corresponding to time lags of the time series [71], is established with the wrapper method [72], considering the MSE as the evaluation function [73]. We considered up to 10 lags.
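The lag selection can be sketched as a simple wrapper loop; `make_model` is a hypothetical factory returning an untrained predictor with scikit-learn-style fit/predict methods, which is an assumption of this sketch.

```python
# Wrapper-style lag selection: try 1..10 lags and keep the count with the
# smallest validation MSE, the evaluation function used in the paper.
import numpy as np

def build_lagged(series, n_lags):
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    return X, series[n_lags:]          # rows of past lags -> next value

def select_lags(train, val, make_model, max_lags=10):
    best_n, best_mse = 1, np.inf
    for n in range(1, max_lags + 1):
        X_tr, y_tr = build_lagged(train, n)
        X_va, y_va = build_lagged(val, n)
        model = make_model().fit(X_tr, y_tr)
        mse = np.mean((model.predict(X_va) - y_va) ** 2)
        if mse < best_mse:
            best_n, best_mse = n, mse
    return best_n
```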
Table 3 shows the coefficient of variation (CV) for all series under different scenarios: without partition (whole series), with an annual partition (separated by year), and with the monthly partition. The values in bold indicate the months in which the CV is smaller than the CV of the series without partition or with the annual partition. Note that the Campinas series comprises just two years.
The CV (Table 3) tends to decrease when the monthly partition is used. Comparing the CV of the monthly partition to the annual and no-partition cases, it showed lower values in 6 to 11 months, depending on the series. Additionally, the mean coefficient of variation (right column of Table 3) decreased when we employed the monthly partition for all studied time series, which means our approach may lead to better forecasting performances.

4.1. Results

Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 show the results achieved in terms of MSE for the seven previously mentioned stations, separated by month. The subscript “PP” indicates the results of the proposed partition, i.e., the monthly decomposition (12 predictors), while “TR” represents the traditional approach (only one predictor for the whole series). For the sake of comparison, we also applied the linear autoregressive model (AR), from the Box and Jenkins family, and a simple naive persistence method (“PERS”) [74]. The best performances by month are highlighted in bold. The decomposition was used for all three sets (training, validation, and test). Also, Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7 in Appendix A present the values for the MAE, MAPE, RMSE, and IA.
When using the proposed approach, 12 models are adjusted during training, with the samples separated per month. On the other hand, the traditional approach uses just one predictor and all training samples.
The results regarding Kallio PM10 (Table 4) allow an interesting observation about the error metrics: the best performer can diverge depending on the measure (MSE, MAE, MAPE, RMSE, or IA). Although this behavior does not happen very often, for October we can note that the best MSE, RMSE, and IA were achieved by the RBFPP, the best MAE by the ESNPP, and the smallest MAPE by the ELMPP (see Table A1). In this case, we chose the MSE as the primary metric, as it is minimized during the ANNs' training process and in the adjustment of the AR model [28,63,75]. Note that the RMSE is the square root of the MSE, so both rank the models identically.
The comparison between the traditional and the monthly approach reveals that the decomposition increased the predictors' performance in almost all months; the exception was April, in which the ELMTR showed the best result. In the other months, the RBFPP reached the smallest errors most often (seven months), followed by the MLPPP and the ESNPP with two cases each.
Figure 6 shows the real and predicted values for the Kallio PM10 series, considering the best predictor, the RBFPP, its counterpart in the traditional approach (RBFTR), and the samples from the test set. The superior prediction capability of the monthly approach is clear. It is important to highlight that the monthly approach improved the prediction of some extreme events of high air pollutant levels.
Table 5 shows the results for Kallio PM2.5, whose monthly best results behaved differently from those of Kallio PM10. Again, the ELMTR overcame the others for one month (January), while the RBFPP, MLPPP, and ESNPP reached the best MSE in 4, 4, and 3 months, respectively. The monthly proposal was superior to the traditional method.
We noticed again a tendency of error reduction when comparing the two approaches for the same architecture. Despite the ELMTR presenting the best overall result for one month, the ELMPP showed a smaller MSE in eight months, overcoming the traditional methodology. We highlight that the RBFTR gave the worst errors, an intriguing finding since its counterpart RBFPP was one of the best. Once again, we observed a discrepancy between the best predictors across error metrics in Table A2: in seven cases the best IA did not match the best MSE and RMSE, while for the MAE this happened five times and for the MAPE six times.
Figure 7 presents the best predictions in the test set for the RBFPP and MLPPP, and for the same ANNs using the traditional approach (RBFTR and MLPTR). Again, the superiority of the monthly approach is clear. Additionally, extremely high and low air pollution events were well predicted.
Table 6 summarizes the results for Vallila PM10. The predictors' performances for the Vallila station were favorable to the monthly approach concerning the MSE, with the RBFPP reaching the smallest MSE most often (four months).
Since the RBFPP and ESNPP each achieved the best MSE in four months, we present their output responses in Figure 8, together with their traditional counterparts (RBFTR and ESNTR). It is interesting to observe the differences between the proposed and the traditional approaches: while the proposed methodology presented an adequate prediction capability for the whole test set, the original approach fits well for specific samples (some days of March, for example) but shows significant discrepancies in others (the beginning of September and October, for example).
Table 7 shows the errors achieved for São Paulo PM10, whose behavior was very similar to that of Vallila. The monthly approach reached all the best errors based on the MSE metric, with the RBFPP presenting the best results five times, the MLPPP four times, and the ESNPP three times.
Figure 9 presents the predictions achieved by the RBFPP for the São Paulo PM10 time series, compared to the RBFTR and the original data. In this time series, the RBFPP architecture could not achieve small prediction errors for all the air pollution peaks and valleys, probably due to external factors affecting air pollution more than in the previous time series. However, it still performs better than the traditional approach.
Table 8 shows the results for the São Paulo PM2.5 series. Once again, 100% of the best MSE values were favorable to the monthly approach, highlighting the ESNPP and the MLPPP, which were the best in four cases each, followed by the RBFPP (three cases) and the ELMPP (one case). It is the first time the RBF was not the best in most scenarios, showing the importance of applying different ANNs.
Figure 10 shows the MLPPP and MLPTR results compared to the original data to the São Paulo PM2.5 test set.
Table 9 presents the results for Campinas PM10. Analyzing the metrics, we can observe that the traditional approach led to the smallest errors for June (ESNTR) and August (ELMTR). The monthly proposal (RBFPP—four, ESNPP—five, and ELMPP—one) was the best for the remaining ten months. As in the São Paulo PM2.5 time series, the ESN reached the best monthly performances.
Figure 11 presents the predictions found by the ESNPP and ESNTR in the test set compared to the original data.
Finally, Table 10 reveals the results for the last station considered in this work, Ipojuca PM10. The traditional approach achieved the smallest errors only for April (ESNTR). For the other months, the ESNPP showed the best MSE values five times, the MLPPP three times, the RBFPP twice, and the ELMPP once. In most cases, the general performances of the neural models were improved by the monthly approach. Additionally, in some instances, the best results according to distinct metrics did not match.
Figure 12 presents the forecasts achieved by the ESNPP, compared to the ESNTR and the original data of the Ipojuca PM10 time series. Again, the monthly approach shows good predictive power.
Lastly, it is worth mentioning that we applied Friedman's test to the MSE of the test sets for all experiments to verify whether the results were significantly different [43]. The p-values ranged from 5.2771 × 10−12 to 0.0112. Therefore, we can accept the hypothesis that a change in the predictor leads to different results.

4.2. Discussion

The computational results presented in Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 allow some essential remarks. Initially, we elaborated Table 11, which shows how many times each model achieved the best performance by month. The goal is to analyze if the month decomposition brings gains to any neural network performance, independently of the characteristics of the studied time series.
Considering the 12 months of the seven test sets, we have 84 scenarios. The main observation is that in 79 out of 84 months (94%), the monthly approach achieved the best overall results, a clear indication that using disaggregated series and 12 predictors, instead of only one model for the whole series, is advantageous.
Comparing the same neural architecture under the traditional and the monthly approaches (e.g., RBFTR and RBFPP) favors our proposal in almost all cases. It is essential to highlight that we used seven time series from four cities in two countries, covering distinct years and diverse characteristics.
With regard to the neural models, the RBFPP and ESNPP presented similar results concerning the number of best monthly results found. For Kallio PM10 and São Paulo PM10, the RBF stands out. For Kallio PM2.5, both RBF and MLP showed the best results and for Vallila PM10, the RBF, and the ESN. The other three time series were favorable to the ESN.
We cannot state which predictor is the best for such a task. Therefore, in related problems, both the RBF and ESN models should be tested. Related works that used the disaggregation method, as in streamflow series [27,28,58], indicated that the ELM overcame the MLP, but the ELM was the worst performer in this work. On the other hand, in [59] and [63], the fully trained approaches were the winners. It seems clear that evaluating several distinct neural architectures is essential in time series forecasting tasks. In addition, the neural approaches overcame the linear AR model and the persistence model, except for February in the Kallio PM10 time series.
In this work, we followed the premise found in other investigations of prioritizing the MSE [27,28,59,63], since it penalizes higher errors. However, some studies point to other error metrics; in [76], the authors advocate the use of the MAE. In this sense, we calculated the number of times our proposal achieved the best values for each metric: (i) MAE—89%; (ii) MAPE—86%; (iii) RMSE—94%; (iv) IA—61%. These results confirm the gains of applying the new methodology based on decomposition.
During extreme events of high air pollution, health systems collapse due to overcrowding, and governments need to take quick measures to assure treatment for the whole population. To this end, the monthly approach was robust for all considered time series, although the prediction power may differ depending on each time series' behavior.
Lastly, PM emissions are mainly anthropogenic, so life cycle assessment (LCA) thinking may help to better understand their impacts on human health. One scientific challenge of the LCA community is the regionalization of life cycle impact assessment (LCIA) methods, as the source location and its surrounding conditions are paramount to a reliable and accurate result [77,78]. In this sense, our study may assist the development and/or regionalization of PM formation methods in the LCIA context.

5. Conclusions

This paper proposed a methodology based on monthly decomposition for PM concentration forecasting. In this approach, we applied 12 independent predictors, each responsible for predicting the data of one month, whereas the traditional methodology uses just one predictor for the whole series.
Four different neural network architectures, a linear model, and a persistence model were addressed using both methodologies. Our approach based on the time series disaggregation improved air pollutants concentration forecasting with the best capability to predict air pollution in 94% of the scenarios analyzed. It is important to highlight our approach’s robustness, since the predictors increased their performance compared to the traditional method, for various databases from different locations and pollutants.
Regarding the best forecasting approach, the RBF and ESN neural networks achieved the smallest errors in most cases, followed by the MLP, when using the new decomposition approach. However, we cannot state which ANN is the most adequate neural model, reinforcing the necessity of testing various architectures. On the other hand, the ANNs overcame the AR model and the persistence approach.
Considering the United Nations sustainable development goals (SDG) [79], which aim to change the world, our study makes an important contribution to forecasting air quality, helping governments to take measures aimed at air pollution reduction and at preparing hospitals during events of extreme air pollution, which relates to SDG 3 (good health and well-being) and SDG 11 (sustainable cities and communities).
Future works should include the use of variable-sized time windows in the time series decomposition, the selection and combination of predictors, and the application of other computational intelligence approaches, such as deep learning, hybrid methods, and ensembles. Additionally, exogenous variables such as meteorological information could be used as model inputs [80], as well as other time series or even variable sizes for the training, validation, and test sets.

Author Contributions

Conceptualization: P.S.G.d.M.N., M.H.N.M., H.S., Y.d.S.T., and F.M.; methodology: P.S.G.d.M.N., M.H.N.M., H.S., Y.d.S.T., T.A.A., and F.M.; software: P.S.G.d.M.N., H.S., Y.d.S.T., and T.A.A.; validation: H.S., P.S.G.d.M.N., J.F.L.d.O., and M.H.N.M.; formal analysis: P.S.G.d.M.N., M.H.N.M., T.A.A., and J.F.L.d.O.; investigation: Y.d.S.T., V.M., and T.A.A.; resources: M.H.N.M.; writing—original draft preparation: P.S.G.d.M.N., M.H.N.M., J.F.L.d.O., H.S., Y.d.S.T., T.A.A., V.M., and F.M.; writing—review and editing: H.S., Y.d.S.T., T.A.A., V.M., and F.M.; illustrations: T.A.A.; visualization: P.S.G.d.M.N., T.A.A., and J.F.L.d.O.; supervision: F.M. and M.H.N.M.; project administration: M.H.N.M.; funding acquisition: H.S. and M.H.N.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by CNPq, grant number 40558/2018-5, and by the Araucária Foundation, grant number 51497. The APC was funded by DIRPPG-UTFPR-PG.

Acknowledgments

The authors would like to thank CAPES and FACEPE for supporting this research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7 present the error metrics for all series addressed. For each month, we highlight in bold the metrics of the model that attained the best MSE value and, in gray shading, the best value of each metric.
Table A1. Evaluation measures for the Kallio PM10. The best values by month are highlighted in bold.
Model | Metric | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLPPP | MSE | 33.79 | 54.96 | 53.72 | 27.25 | 38.33 | 10.82 | 29.72 | 7.69 | 5.20 | 54.47 | 8.24 | 15.19
MLPPP | MAE | 4.10 | 6.10 | 5.81 | 4.04 | 4.90 | 2.35 | 4.39 | 2.20 | 1.38 | 3.51 | 2.08 | 3.01
MLPPP | MAPE | 27.72 | 17.70 | 48.69 | 37.98 | 28.64 | 16.00 | 16.52 | 30.19 | 13.31 | 17.89 | 25.94 | 27.18
MLPPP | RMSE | 5.81 | 7.41 | 7.33 | 5.22 | 6.19 | 3.29 | 5.45 | 2.77 | 2.28 | 7.38 | 2.87 | 3.90
MLPPP | IA | 0.78 | 0.91 | 0.72 | 0.94 | 0.63 | 0.79 | 0.82 | 0.96 | 0.82 | 0.70 | 0.59 | 0.78
RBFPP | MSE | 21.41 | 35.13 | 38.06 | 24.47 | 30.08 | 9.08 | 31.14 | 4.65 | 4.73 | 34.21 | 6.83 | 15.68
RBFPP | MAE | 3.44 | 5.49 | 4.60 | 3.94 | 4.03 | 2.30 | 4.28 | 1.90 | 1.82 | 3.65 | 2.11 | 3.46
RBFPP | MAPE | 20.86 | 18.45 | 24.54 | 35.91 | 23.76 | 17.03 | 18.69 | 21.11 | 14.08 | 18.57 | 22.68 | 39.49
RBFPP | RMSE | 4.63 | 5.93 | 6.17 | 4.95 | 5.48 | 3.01 | 5.58 | 2.16 | 2.18 | 5.85 | 2.61 | 3.96
RBFPP | IA | 0.86 | 0.96 | 0.88 | 0.94 | 0.73 | 0.83 | 0.88 | 0.98 | 0.89 | 0.87 | 0.79 | 0.74
ELMPP | MSE | 29.42 | 79.28 | 59.10 | 31.86 | 39.42 | 15.09 | 46.66 | 14.10 | 9.09 | 42.57 | 7.58 | 20.39
ELMPP | MAE | 4.16 | 7.70 | 6.30 | 4.70 | 5.03 | 2.81 | 5.94 | 3.02 | 2.81 | 2.98 | 1.92 | 3.58
ELMPP | MAPE | 23.70 | 23.62 | 51.07 | 44.61 | 30.57 | 19.99 | 25.82 | 46.80 | 23.86 | 12.10 | 25.34 | 43.85
ELMPP | RMSE | 5.42 | 8.90 | 7.69 | 5.64 | 6.28 | 3.89 | 6.83 | 3.75 | 3.01 | 6.52 | 2.75 | 4.52
ELMPP | IA | 0.70 | 0.86 | 0.69 | 0.92 | 0.60 | 0.70 | 0.59 | 0.90 | 0.62 | 0.75 | 0.63 | 0.60
ESNPP | MSE | 23.59 | 56.50 | 45.52 | 32.84 | 27.31 | 8.88 | 37.14 | 7.57 | 5.24 | 36.39 | 7.48 | 16.81
ESNPP | MAE | 3.87 | 6.50 | 5.42 | 4.89 | 4.40 | 2.36 | 4.61 | 2.18 | 1.60 | 2.85 | 1.95 | 3.24
ESNPP | MAPE | 25.60 | 19.90 | 49.98 | 41.49 | 23.37 | 17.60 | 23.06 | 32.30 | 14.20 | 15.61 | 24.98 | 39.44
ESNPP | RMSE | 4.86 | 7.52 | 6.75 | 5.73 | 5.23 | 2.98 | 6.09 | 2.75 | 2.29 | 6.03 | 2.74 | 4.10
ESNPP | IA | 0.78 | 0.92 | 0.78 | 0.93 | 0.78 | 0.81 | 0.81 | 0.96 | 0.85 | 0.81 | 0.65 | 0.69
ARPP | MSE | 24.37 | 33.48 | 76.04 | 30.05 | 34.31 | 11.86 | 37.25 | 11.61 | 8.16 | 41.06 | 7.27 | 20.95
ARPP | MAE | 3.60 | 4.98 | 6.98 | 4.59 | 5.09 | 2.64 | 4.54 | 2.61 | 2.29 | 4.33 | 1.88 | 3.73
ARPP | MAPE | 26.18 | 15.20 | 67.26 | 36.45 | 33.53 | 22.09 | 27.20 | 35.93 | 20.58 | 32.84 | 23.64 | 43.16
ARPP | RMSE | 4.94 | 5.79 | 8.72 | 5.48 | 5.86 | 3.44 | 6.10 | 3.41 | 2.86 | 6.41 | 2.70 | 4.58
ARPP | IA | 0.80 | 0.96 | 0.58 | 0.95 | 0.73 | 0.78 | 0.85 | 0.94 | 0.78 | 0.79 | 0.71 | 0.57
MLPTR | MSE | 36.37 | 75.82 | 53.34 | 25.55 | 44.51 | 13.80 | 35.55 | 10.85 | 10.26 | 52.37 | 11.11 | 25.56
MLPTR | MAE | 4.96 | 7.38 | 6.57 | 4.26 | 5.23 | 2.67 | 5.30 | 2.51 | 2.77 | 3.71 | 3.05 | 4.14
MLPTR | MAPE | 29.19 | 22.18 | 44.70 | 38.42 | 28.59 | 18.02 | 25.83 | 37.12 | 23.93 | 19.71 | 30.29 | 41.97
MLPTR | RMSE | 6.03 | 8.71 | 7.30 | 5.05 | 6.67 | 3.71 | 5.96 | 3.29 | 3.20 | 7.24 | 3.33 | 5.06
MLPTR | IA | 0.63 | 0.87 | 0.73 | 0.95 | 0.59 | 0.75 | 0.79 | 0.94 | 0.59 | 0.68 | 0.72 | 0.59
RBFTR | MSE | 38.82 | 78.13 | 46.95 | 22.37 | 47.54 | 15.40 | 32.90 | 12.90 | 12.28 | 57.20 | 13.45 | 26.61
RBFTR | MAE | 5.03 | 7.49 | 6.45 | 4.00 | 5.03 | 2.86 | 4.81 | 2.83 | 3.12 | 3.83 | 3.37 | 4.29
RBFTR | MAPE | 29.45 | 22.51 | 41.27 | 35.57 | 27.14 | 20.14 | 23.87 | 42.50 | 26.45 | 19.06 | 32.94 | 44.71
RBFTR | RMSE | 6.23 | 8.84 | 6.85 | 4.73 | 6.89 | 3.92 | 5.74 | 3.59 | 3.50 | 7.56 | 3.67 | 5.16
RBFTR | IA | 0.62 | 0.86 | 0.79 | 0.95 | 0.55 | 0.72 | 0.82 | 0.92 | 0.53 | 0.68 | 0.67 | 0.51
ELMTR | MSE | 31.62 | 74.93 | 39.17 | 16.79 | 38.90 | 14.18 | 35.60 | 9.09 | 7.50 | 52.72 | 9.94 | 22.00
ELMTR | MAE | 4.33 | 7.25 | 5.61 | 3.51 | 4.67 | 2.72 | 5.07 | 2.21 | 2.14 | 3.37 | 2.68 | 3.75
ELMTR | MAPE | 23.73 | 21.80 | 34.52 | 29.79 | 25.59 | 18.57 | 23.09 | 36.00 | 18.20 | 17.14 | 27.52 | 38.47
ELMTR | RMSE | 5.62 | 8.66 | 6.26 | 4.10 | 6.24 | 3.77 | 5.97 | 3.01 | 2.74 | 7.26 | 3.15 | 4.69
ELMTR | IA | 0.69 | 0.87 | 0.83 | 0.97 | 0.63 | 0.75 | 0.78 | 0.94 | 0.73 | 0.71 | 0.73 | 0.61
ESNTR | MSE | 31.48 | 76.61 | 49.01 | 22.46 | 51.36 | 14.15 | 33.70 | 11.19 | 7.93 | 54.40 | 10.39 | 22.07
ESNTR | MAE | 4.19 | 7.51 | 6.39 | 3.84 | 5.87 | 2.61 | 4.82 | 2.70 | 2.60 | 4.04 | 2.66 | 3.77
ESNTR | MAPE | 26.05 | 22.73 | 39.04 | 35.03 | 31.75 | 17.53 | 23.21 | 40.63 | 21.91 | 21.78 | 26.34 | 42.46
ESNTR | RMSE | 5.61 | 8.75 | 7.00 | 4.74 | 7.17 | 3.76 | 5.81 | 3.35 | 2.82 | 7.38 | 3.22 | 4.70
ESNTR | IA | 0.75 | 0.87 | 0.81 | 0.95 | 0.59 | 0.77 | 0.81 | 0.93 | 0.69 | 0.68 | 0.73 | 0.62
ARTR | MSE | 103.18 | 116.57 | 173.42 | 107.56 | 113.79 | 33.00 | 51.40 | 23.00 | 20.84 | 83.75 | 32.82 | 27.27
ARTR | MAE | 8.24 | 9.50 | 11.50 | 8.56 | 8.84 | 4.27 | 6.02 | 3.63 | 4.02 | 5.03 | 4.74 | 4.34
ARTR | MAPE | 37.94 | 25.70 | 57.66 | 52.68 | 32.98 | 22.49 | 30.27 | 56.51 | 36.02 | 24.16 | 29.04 | 46.94
ARTR | RMSE | 10.16 | 10.80 | 13.17 | 10.37 | 10.67 | 5.74 | 7.17 | 4.80 | 4.57 | 9.15 | 5.73 | 5.22
ARTR | IA | 0.83 | 0.90 | 0.88 | 0.58 | 0.86 | 0.88 | 0.83 | 0.84 | 0.51 | 0.72 | 0.87 | 0.65
PERS | MSE | 34.87 | 57.26 | 78.32 | 32.15 | 44.92 | 16.43 | 36.32 | 15.54 | 16.74 | 72.91 | 15.24 | 32.72
PERS | MAE | 4.19 | 6.64 | 6.88 | 4.97 | 5.59 | 3.56 | 5.09 | 3.40 | 3.45 | 5.57 | 3.41 | 4.70
PERS | MAPE | 27.95 | 21.59 | 41.53 | 38.95 | 33.02 | 25.18 | 26.46 | 45.94 | 28.12 | 30.32 | 35.67 | 40.43
PERS | RMSE | 5.90 | 7.57 | 8.85 | 5.67 | 6.70 | 4.05 | 6.03 | 3.94 | 4.09 | 8.54 | 3.90 | 5.72
PERS | IA | 0.76 | 0.92 | 0.76 | 0.94 | 0.68 | 0.79 | 0.85 | 0.92 | 0.54 | 0.73 | 0.57 | 0.62
Table A2. Evaluation measures for the Kallio PM2.5. The best values by month are highlighted in bold.
Model | Metric | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLPPP | MSE | 6.41 | 13.51 | 29.91 | 12.90 | 7.03 | 1.76 | 14.24 | 1.32 | 2.55 | 32.71 | 0.99 | 14.01
MLPPP | MAE | 1.75 | 2.99 | 4.23 | 2.22 | 2.17 | 1.08 | 3.28 | 0.91 | 1.32 | 3.28 | 0.84 | 3.21
MLPPP | MAPE | 21.69 | 17.09 | 46.94 | 19.36 | 30.25 | 19.27 | 20.73 | 19.70 | 19.68 | 44.55 | 13.94 | 81.17
MLPPP | RMSE | 2.53 | 3.68 | 5.47 | 3.59 | 2.65 | 1.33 | 3.77 | 1.15 | 1.60 | 5.72 | 0.99 | 3.74
MLPPP | IA | 0.91 | 0.95 | 0.47 | 0.81 | 0.56 | 0.85 | 0.78 | 0.98 | 0.87 | 0.70 | 0.86 | 0.49
RBFPP | MSE | 6.43 | 17.22 | 29.86 | 10.82 | 4.84 | 2.23 | 16.10 | 1.40 | 6.27 | 8.17 | 1.49 | 9.18
RBFPP | MAE | 2.18 | 3.23 | 3.98 | 2.73 | 1.85 | 1.29 | 3.16 | 1.01 | 2.23 | 2.22 | 0.98 | 2.29
RBFPP | MAPE | 26.76 | 17.26 | 40.83 | 30.80 | 24.95 | 21.94 | 19.61 | 21.70 | 35.53 | 45.83 | 14.32 | 60.88
RBFPP | RMSE | 2.54 | 4.15 | 5.46 | 3.29 | 2.20 | 1.49 | 4.01 | 1.18 | 2.50 | 2.86 | 1.22 | 3.03
RBFPP | IA | 0.84 | 0.93 | 0.62 | 0.81 | 0.77 | 0.79 | 0.72 | 0.98 | 0.41 | 0.95 | 0.83 | 0.73
ELMPP | MSE | 5.06 | 25.50 | 27.43 | 18.22 | 6.95 | 3.63 | 12.08 | 3.41 | 7.69 | 33.36 | 1.94 | 16.64
ELMPP | MAE | 1.95 | 4.17 | 3.83 | 3.31 | 2.26 | 1.51 | 2.97 | 1.39 | 2.13 | 3.13 | 1.11 | 3.56
ELMPP | MAPE | 25.47 | 27.13 | 45.15 | 32.67 | 30.41 | 28.04 | 18.33 | 34.90 | 32.29 | 45.73 | 18.97 | 87.76
ELMPP | RMSE | 2.25 | 5.05 | 5.24 | 4.27 | 2.64 | 1.90 | 3.48 | 1.85 | 2.77 | 5.78 | 1.39 | 4.08
ELMPP | IA | 0.90 | 0.90 | 0.44 | 0.65 | 0.61 | 0.57 | 0.85 | 0.94 | 0.50 | 0.70 | 0.65 | 0.42
ESNPP | MSE | 8.45 | 15.02 | 24.61 | 11.34 | 6.75 | 2.20 | 4.82 | 2.99 | 5.50 | 28.27 | 1.95 | 12.93
ESNPP | MAE | 2.44 | 3.16 | 3.62 | 2.38 | 2.07 | 1.12 | 1.72 | 1.24 | 1.74 | 3.25 | 1.27 | 2.96
ESNPP | MAPE | 32.46 | 22.20 | 39.22 | 23.44 | 27.79 | 18.75 | 10.75 | 29.01 | 30.70 | 44.02 | 21.27 | 71.69
ESNPP | RMSE | 2.91 | 3.88 | 4.96 | 3.37 | 2.60 | 1.48 | 2.20 | 1.73 | 2.35 | 5.32 | 1.40 | 3.60
ESNPP | IA | 0.83 | 0.95 | 0.51 | 0.83 | 0.76 | 0.81 | 0.95 | 0.96 | 0.62 | 0.79 | 0.71 | 0.66
ARPP | MSE | 11.39 | 16.73 | 56.51 | 22.50 | 7.17 | 2.73 | 11.29 | 2.16 | 7.28 | 30.48 | 1.85 | 14.34
ARPP | MAE | 2.70 | 2.87 | 5.48 | 3.47 | 2.10 | 1.42 | 2.80 | 1.17 | 2.19 | 3.42 | 1.14 | 3.29
ARPP | MAPE | 37.10 | 21.83 | 66.79 | 35.85 | 30.48 | 24.19 | 17.56 | 28.11 | 34.59 | 58.32 | 20.12 | 70.35
ARPP | RMSE | 3.38 | 4.09 | 7.52 | 4.74 | 2.68 | 1.65 | 3.36 | 1.47 | 2.70 | 5.52 | 1.36 | 3.79
ARPP | IA | 0.78 | 0.94 | 0.37 | 0.70 | 0.73 | 0.73 | 0.85 | 0.97 | 0.48 | 0.75 | 0.64 | 0.62
MLPTR | MSE | 9.35 | 36.07 | 43.18 | 20.21 | 7.58 | 4.09 | 20.30 | 2.58 | 9.59 | 37.35 | 2.12 | 20.92
MLPTR | MAE | 2.35 | 5.20 | 4.58 | 3.37 | 2.04 | 1.73 | 3.44 | 1.27 | 2.41 | 3.44 | 0.90 | 4.13
MLPTR | MAPE | 26.07 | 31.45 | 53.36 | 32.38 | 23.56 | 27.21 | 20.62 | 31.77 | 34.39 | 46.31 | 12.54 | 94.29
MLPTR | RMSE | 3.06 | 6.01 | 6.57 | 4.50 | 2.75 | 2.02 | 4.51 | 1.61 | 3.10 | 6.11 | 1.45 | 4.57
MLPTR | IA | 0.80 | 0.83 | 0.42 | 0.65 | 0.54 | 0.73 | 0.68 | 0.95 | 0.44 | 0.68 | 0.77 | 0.42
RBFTR | MSE | 11.22 | 227.99 | 162.36 | 379.87 | 9748.12 | 3.76 | 29977.90 | 3.25 | 17.72 | 45.03 | 3.37 | 23.05
RBFTR | MAE | 2.56 | 11.06 | 7.61 | 9.99 | 32.88 | 1.48 | 81.32 | 1.36 | 3.33 | 4.08 | 1.43 | 4.39
RBFTR | MAPE | 27.34 | 86.81 | 108.48 | 109.85 | 405.43 | 22.21 | 564.63 | 34.17 | 49.45 | 54.56 | 20.62 | 102.25
RBFTR | RMSE | 3.35 | 15.10 | 12.74 | 19.49 | 98.73 | 1.94 | 173.14 | 1.80 | 4.21 | 6.71 | 1.83 | 4.80
RBFTR | IA | 0.73 | 0.24 | 0.16 | 0.27 | 0.01 | 0.77 | 0.00 | 0.94 | 0.51 | 0.50 | 0.71 | 0.35
ELMTR | MSE | 4.95 | 31.75 | 42.73 | 17.92 | 7.33 | 3.56 | 12.72 | 2.23 | 8.06 | 34.14 | 2.15 | 18.79
ELMTR | MAE | 1.95 | 4.72 | 4.63 | 2.87 | 2.03 | 1.58 | 3.00 | 1.16 | 2.27 | 3.10 | 0.96 | 3.86
ELMTR | MAPE | 21.98 | 29.21 | 53.01 | 25.83 | 23.95 | 24.33 | 19.16 | 29.37 | 33.43 | 45.43 | 13.52 | 89.46
ELMTR | RMSE | 2.23 | 5.64 | 6.54 | 4.23 | 2.71 | 1.89 | 3.57 | 1.50 | 2.84 | 5.84 | 1.47 | 4.34
ELMTR | IA | 0.88 | 0.88 | 0.41 | 0.68 | 0.66 | 0.76 | 0.83 | 0.96 | 0.49 | 0.71 | 0.81 | 0.47
ESNTR | MSE | 11.86 | 25.83 | 40.35 | 21.53 | 7.98 | 4.15 | 11.88 | 3.51 | 9.16 | 35.85 | 1.84 | 21.25
ESNTR | MAE | 2.59 | 4.01 | 4.27 | 3.49 | 2.25 | 1.63 | 2.79 | 1.51 | 2.46 | 3.29 | 0.77 | 3.96
ESNTR | MAPE | 33.95 | 27.46 | 49.92 | 33.25 | 27.98 | 23.93 | 18.58 | 38.15 | 37.56 | 47.22 | 10.83 | 93.57
ESNTR | RMSE | 3.44 | 5.08 | 6.35 | 4.64 | 2.82 | 2.04 | 3.45 | 1.87 | 3.03 | 5.99 | 1.36 | 4.61
ESNTR | IA | 0.81 | 0.91 | 0.41 | 0.67 | 0.70 | 0.76 | 0.87 | 0.93 | 0.43 | 0.71 | 0.84 | 0.44
ARTR | MSE | 58.30 | 188.61 | 217.16 | 92.71 | 30.99 | 17.01 | 79.73 | 7.10 | 60.99 | 106.93 | 10.98 | 34.32
ARTR | MAE | 5.91 | 10.89 | 10.48 | 7.38 | 5.17 | 3.45 | 7.52 | 2.14 | 6.02 | 5.94 | 2.61 | 5.06
ARTR | MAPE | 47.51 | 30.60 | 122.91 | 47.20 | 41.33 | 34.82 | 22.86 | 69.88 | 62.43 | 78.85 | 24.03 | 333.31
ARTR | RMSE | 7.64 | 13.73 | 14.74 | 9.63 | 5.57 | 4.12 | 8.93 | 2.67 | 7.81 | 10.34 | 3.31 | 5.86
ARTR | IA | 0.87 | 0.95 | 0.11 | 0.79 | 0.89 | 0.78 | 0.96 | 0.88 | 0.75 | 0.78 | 0.93 | 0.62
PERS | MSE | 16.40 | 22.77 | 67.26 | 29.09 | 9.08 | 4.76 | 16.15 | 3.02 | 14.46 | 44.13 | 2.47 | 33.33
PERS | MAE | 2.73 | 3.90 | 6.05 | 4.00 | 2.54 | 1.72 | 3.31 | 1.48 | 2.98 | 4.57 | 1.18 | 4.87
PERS | MAPE | 32.16 | 28.30 | 75.79 | 41.74 | 33.83 | 28.28 | 22.63 | 33.82 | 48.57 | 66.59 | 18.14 | 93.74
PERS | RMSE | 4.05 | 4.77 | 8.20 | 5.39 | 3.01 | 2.18 | 4.02 | 1.74 | 3.80 | 6.64 | 1.57 | 5.77
PERS | IA | 0.78 | 0.93 | 0.33 | 0.63 | 0.62 | 0.66 | 0.83 | 0.96 | 0.43 | 0.75 | 0.72 | 0.43
Table A3. Evaluation measures for Vallila PM10. The best values by month are highlighted in bold.
Model | Metric | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLPPP | MSE | 42.06 | 76.13 | 75.64 | 30.58 | 27.42 | 23.16 | 17.79 | 6.40 | 74.74 | 73.58 | 16.43 | 22.69
MLPPP | MAE | 5.05 | 6.36 | 6.55 | 4.70 | 4.39 | 3.68 | 3.37 | 1.93 | 6.03 | 6.09 | 3.39 | 4.21
MLPPP | MAPE | 28.39 | 30.44 | 15.77 | 28.15 | 38.05 | 29.09 | 14.31 | 23.03 | 52.15 | 57.99 | 36.69 | 37.35
MLPPP | RMSE | 6.49 | 8.73 | 8.70 | 5.53 | 5.24 | 4.81 | 4.22 | 2.53 | 8.65 | 8.58 | 4.05 | 4.76
MLPPP | IA | 0.58 | 0.78 | 0.90 | 0.96 | 0.77 | 0.61 | 0.78 | 0.98 | 0.69 | 0.62 | 0.75 | 0.83
RBFPP | MSE | 18.75 | 59.08 | 80.90 | 45.65 | 22.84 | 23.91 | 14.08 | 4.33 | 37.84 | 35.48 | 12.67 | 17.96
RBFPP | MAE | 3.21 | 5.15 | 6.60 | 5.42 | 4.02 | 4.06 | 3.28 | 1.79 | 4.71 | 5.17 | 2.88 | 3.50
RBFPP | MAPE | 17.96 | 28.04 | 15.92 | 30.75 | 31.83 | 31.63 | 13.99 | 23.34 | 31.36 | 45.27 | 25.87 | 39.08
RBFPP | RMSE | 4.33 | 7.69 | 8.99 | 6.76 | 4.78 | 4.89 | 3.75 | 2.08 | 6.15 | 5.96 | 3.56 | 4.24
RBFPP | IA | 0.88 | 0.89 | 0.88 | 0.94 | 0.81 | 0.72 | 0.83 | 0.98 | 0.84 | 0.87 | 0.87 | 0.82
ELMPP | MSE | 47.77 | 86.75 | 180.87 | 68.37 | 20.55 | 25.88 | 21.18 | 11.39 | 75.77 | 75.67 | 7.85 | 27.25
ELMPP | MAE | 5.45 | 6.73 | 8.84 | 6.69 | 3.68 | 4.14 | 3.46 | 2.20 | 5.62 | 5.54 | 2.57 | 4.35
ELMPP | MAPE | 31.24 | 32.82 | 22.98 | 39.09 | 28.37 | 35.37 | 14.47 | 31.01 | 51.98 | 52.45 | 24.10 | 49.00
ELMPP | RMSE | 6.91 | 9.31 | 13.45 | 8.27 | 4.53 | 5.09 | 4.60 | 3.38 | 8.70 | 8.70 | 2.80 | 5.22
ELMPP | IA | 0.53 | 0.78 | 0.62 | 0.90 | 0.86 | 0.58 | 0.80 | 0.95 | 0.69 | 0.68 | 0.92 | 0.70
ESNPP | MSE | 34.15 | 31.77 | 176.27 | 83.43 | 25.28 | 13.41 | 9.92 | 7.13 | 59.59 | 34.95 | 14.92 | 17.91
ESNPP | MAE | 4.21 | 4.52 | 10.58 | 7.54 | 4.06 | 2.98 | 2.55 | 1.99 | 5.45 | 4.83 | 2.95 | 3.45
ESNPP | MAPE | 29.16 | 17.82 | 28.85 | 46.04 | 30.34 | 22.84 | 11.30 | 25.34 | 48.74 | 43.30 | 32.54 | 38.72
ESNPP | RMSE | 5.84 | 5.64 | 13.28 | 9.13 | 5.03 | 3.66 | 3.15 | 2.67 | 7.72 | 5.91 | 3.86 | 4.23
ESNPP | IA | 0.80 | 0.95 | 0.75 | 0.87 | 0.81 | 0.81 | 0.91 | 0.97 | 0.72 | 0.82 | 0.78 | 0.82
ARPP | MSE | 48.33 | 74.74 | 199.81 | 58.92 | 31.84 | 44.52 | 10.63 | 8.01 | 84.54 | 84.54 | 15.19 | 26.91
ARPP | MAE | 5.41 | 6.18 | 10.70 | 5.59 | 4.37 | 5.39 | 2.66 | 2.14 | 6.66 | 6.66 | 3.22 | 4.10
ARPP | MAPE | 34.39 | 30.93 | 32.54 | 26.61 | 35.15 | 40.44 | 12.53 | 27.62 | 73.17 | 73.17 | 32.47 | 43.21
ARPP | RMSE | 6.95 | 8.65 | 14.14 | 7.68 | 5.64 | 6.67 | 3.26 | 2.83 | 9.19 | 9.19 | 3.90 | 5.19
ARPP | IA | 0.61 | 0.84 | 0.68 | 0.94 | 0.70 | 0.47 | 0.91 | 0.97 | 0.53 | 0.53 | 0.80 | 0.73
MLPTR | MSE | 60.01 | 107.66 | 260.43 | 38.51 | 34.04 | 40.02 | 26.00 | 7.54 | 75.85 | 75.85 | 9.72 | 21.69
MLPTR | MAE | 5.45 | 7.82 | 11.27 | 4.98 | 4.95 | 5.12 | 4.28 | 1.81 | 5.59 | 5.59 | 2.15 | 4.01
MLPTR | MAPE | 30.03 | 34.39 | 27.31 | 29.65 | 31.94 | 33.95 | 18.00 | 24.54 | 55.45 | 55.45 | 18.04 | 39.34
MLPTR | RMSE | 7.75 | 10.38 | 16.14 | 6.21 | 5.83 | 6.33 | 5.10 | 2.75 | 8.71 | 8.71 | 3.12 | 4.66
MLPTR | IA | 0.53 | 0.68 | 0.45 | 0.95 | 0.80 | 0.60 | 0.69 | 0.97 | 0.62 | 0.62 | 0.93 | 0.80
RBFTR | MSE | 58.16 | 84.31 | 242.66 | 59.80 | 23.17 | 56.10 | 30.12 | 8.30 | 90.22 | 90.22 | 12.79 | 21.92
RBFTR | MAE | 5.45 | 6.63 | 9.74 | 5.97 | 3.96 | 6.00 | 4.40 | 1.87 | 6.50 | 6.50 | 2.71 | 4.04
RBFTR | MAPE | 29.81 | 29.83 | 21.02 | 33.69 | 27.49 | 39.76 | 17.58 | 25.62 | 62.21 | 62.21 | 23.07 | 41.10
RBFTR | RMSE | 7.63 | 9.18 | 15.58 | 7.73 | 4.81 | 7.49 | 5.49 | 2.88 | 9.50 | 9.50 | 3.58 | 4.68
RBFTR | IA | 0.56 | 0.77 | 0.41 | 0.92 | 0.85 | 0.46 | 0.59 | 0.96 | 0.62 | 0.62 | 0.90 | 0.79
ELMTR | MSE | 138.81 | 77.37 | 1208.77 | 35.64 | 98.83 | 78.60 | 143.61 | 7.13 | 88.22 | 88.22 | 15.82 | 13.04
ELMTR | MAE | 8.77 | 7.01 | 21.17 | 4.09 | 8.59 | 6.34 | 9.29 | 1.81 | 6.45 | 6.45 | 3.09 | 2.95
ELMTR | MAPE | 50.80 | 27.96 | 74.37 | 24.01 | 57.49 | 45.14 | 40.00 | 23.81 | 67.45 | 67.45 | 28.01 | 27.19
ELMTR | RMSE | 11.78 | 8.80 | 34.77 | 5.97 | 9.94 | 8.87 | 11.98 | 2.67 | 9.39 | 9.39 | 3.98 | 3.61
ELMTR | IA | 0.48 | 0.79 | 0.32 | 0.96 | 0.65 | 0.59 | 0.66 | 0.97 | 0.61 | 0.61 | 0.89 | 0.89
ESNTR | MSE | 71.34 | 115.27 | 239.23 | 50.10 | 44.84 | 49.51 | 25.52 | 8.67 | 98.79 | 98.79 | 11.26 | 20.20
ESNTR | MAE | 5.87 | 8.44 | 10.93 | 5.27 | 5.81 | 6.07 | 3.74 | 2.07 | 6.67 | 6.67 | 2.50 | 3.79
ESNTR | MAPE | 37.37 | 38.91 | 28.02 | 30.08 | 38.71 | 41.80 | 17.33 | 27.85 | 71.10 | 71.10 | 20.29 | 40.06
ESNTR | RMSE | 8.45 | 10.74 | 15.47 | 7.08 | 6.70 | 7.04 | 5.05 | 2.94 | 9.94 | 9.94 | 3.36 | 4.49
ESNTR | IA | 0.60 | 0.76 | 0.66 | 0.94 | 0.75 | 0.50 | 0.83 | 0.96 | 0.55 | 0.55 | 0.92 | 0.80
ARTR | MSE | 340.26 | 390.36 | 1562.82 | 223.77 | 170.67 | 176.45 | 126.24 | 29.43 | 158.63 | 158.63 | 36.32 | 43.24
ARTR | MAE | 14.14 | 15.08 | 28.68 | 12.33 | 11.01 | 10.87 | 8.66 | 3.90 | 8.53 | 8.53 | 4.66 | 5.70
ARTR | MAPE | 52.31 | 44.46 | 34.82 | 48.39 | 35.63 | 50.34 | 19.68 | 63.15 | 218.30 | 218.30 | 23.13 | 59.52
ARTR | RMSE | 18.45 | 19.76 | 39.53 | 14.96 | 13.06 | 13.28 | 11.24 | 5.42 | 12.59 | 12.59 | 6.03 | 6.58
ARTR | IA | 0.81 | 0.89 | 0.89 | 0.65 | 0.89 | 0.69 | 0.96 | 0.86 | 0.58 | 0.58 | 0.84 | 0.69
PERS | MSE | 73.55 | 105.38 | 332.57 | 56.22 | 46.08 | 58.54 | 23.38 | 10.83 | 135.86 | 135.86 | 13.34 | 29.02
PERS | MAE | 5.70 | 7.89 | 12.35 | 4.94 | 5.93 | 6.44 | 3.91 | 2.73 | 8.93 | 8.93 | 2.67 | 4.43
PERS | MAPE | 36.23 | 38.82 | 32.03 | 26.35 | 41.46 | 45.73 | 18.04 | 33.70 | 82.36 | 82.36 | 22.24 | 37.55
PERS | RMSE | 8.58 | 10.27 | 18.24 | 7.50 | 6.79 | 7.65 | 4.84 | 3.29 | 11.66 | 11.66 | 3.65 | 5.39
PERS | IA | 0.62 | 0.80 | 0.56 | 0.94 | 0.73 | 0.46 | 0.84 | 0.96 | 0.59 | 0.59 | 0.90 | 0.80
Table A4. Evaluation measures for São Paulo PM10. The best values by month are highlighted in bold.
Model | Metric | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLPPP | MSE | 83.18 | 25.43 | 35.11 | 40.66 | 88.37 | 86.17 | 96.45 | 75.08 | 226.37 | 19.76 | 55.82 | 18.78
MLPPP | MAE | 8.08 | 4.58 | 4.75 | 4.72 | 7.63 | 8.20 | 8.54 | 7.87 | 8.88 | 2.87 | 6.13 | 3.06
MLPPP | MAPE | 24.78 | 17.86 | 20.37 | 13.93 | 27.07 | 17.34 | 18.34 | 32.28 | 85.16 | 9.70 | 22.85 | 17.52
MLPPP | RMSE | 9.12 | 5.04 | 5.93 | 6.38 | 9.40 | 9.28 | 9.82 | 8.66 | 15.05 | 4.45 | 7.47 | 4.33
MLPPP | IA | 0.84 | 0.71 | 0.87 | 0.95 | 0.84 | 0.85 | 0.87 | 0.95 | 0.86 | 0.93 | 0.61 | 0.77
RBFPP | MSE | 99.12 | 7.27 | 34.05 | 43.42 | 78.31 | 98.97 | 39.80 | 61.14 | 370.70 | 25.03 | 38.71 | 9.96
RBFPP | MAE | 8.19 | 2.15 | 4.54 | 5.68 | 7.22 | 7.66 | 5.33 | 6.57 | 11.99 | 3.98 | 5.08 | 2.76
RBFPP | MAPE | 23.26 | 7.64 | 20.21 | 17.16 | 22.83 | 17.66 | 9.88 | 26.51 | 133.30 | 11.65 | 18.38 | 15.01
RBFPP | RMSE | 9.96 | 2.70 | 5.84 | 6.59 | 8.85 | 9.95 | 6.31 | 7.82 | 19.25 | 5.00 | 6.22 | 3.16
RBFPP | IA | 0.76 | 0.92 | 0.81 | 0.92 | 0.85 | 0.81 | 0.95 | 0.96 | 0.72 | 0.87 | 0.64 | 0.90
ELMPP | MSE | 88.20 | 16.73 | 34.62 | 62.27 | 107.28 | 93.21 | 101.08 | 135.57 | 241.65 | 27.67 | 51.69 | 16.74
ELMPP | MAE | 8.21 | 3.37 | 4.12 | 6.17 | 8.98 | 8.06 | 8.51 | 10.38 | 10.72 | 3.78 | 6.20 | 3.02
ELMPP | MAPE | 24.44 | 13.69 | 14.91 | 17.48 | 32.09 | 21.22 | 15.26 | 40.43 | 108.52 | 10.73 | 22.04 | 17.21
ELMPP | RMSE | 9.39 | 4.09 | 5.88 | 7.89 | 10.36 | 9.65 | 10.05 | 11.64 | 15.55 | 5.26 | 7.19 | 4.09
ELMPP | IA | 0.82 | 0.73 | 0.81 | 0.86 | 0.78 | 0.82 | 0.88 | 0.89 | 0.82 | 0.86 | 0.64 | 0.77
ESNPP | MSE | 73.61 | 9.31 | 21.02 | 68.25 | 87.18 | 96.13 | 71.79 | 39.88 | 260.88 | 48.37 | 35.96 | 15.69
ESNPP | MAE | 7.18 | 2.58 | 3.66 | 6.89 | 7.12 | 8.75 | 6.71 | 5.12 | 10.70 | 5.61 | 4.61 | 2.87
ESNPP | MAPE | 21.37 | 11.62 | 14.34 | 19.26 | 28.41 | 19.03 | 13.34 | 18.49 | 96.85 | 16.38 | 17.52 | 16.72
ESNPP | RMSE | 8.58 | 3.05 | 4.58 | 8.26 | 9.34 | 9.80 | 8.47 | 6.32 | 16.15 | 6.95 | 6.00 | 3.96
ESNPP | IA | 0.87 | 0.91 | 0.90 | 0.88 | 0.88 | 0.81 | 0.92 | 0.97 | 0.80 | 0.68 | 0.72 | 0.79
ARPP | MSE | 75.76 | 13.27 | 25.09 | 81.89 | 95.36 | 131.43 | 126.04 | 103.74 | 304.84 | 46.40 | 39.04 | 17.09
ARPP | MAE | 7.61 | 2.66 | 3.88 | 6.70 | 7.83 | 9.90 | 9.85 | 8.58 | 11.67 | 5.40 | 5.20 | 3.51
ARPP | MAPE | 24.57 | 12.03 | 15.83 | 20.27 | 31.44 | 22.02 | 19.94 | 38.09 | 110.48 | 15.74 | 19.27 | 18.95
ARPP | RMSE | 8.70 | 3.64 | 5.01 | 9.05 | 9.77 | 11.46 | 11.23 | 10.19 | 17.46 | 6.81 | 6.25 | 4.13
ARPP | IA | 0.86 | 0.83 | 0.87 | 0.83 | 0.85 | 0.74 | 0.85 | 0.92 | 0.79 | 0.67 | 0.67 | 0.84
MLPTR | MSE | 115.94 | 27.97 | 31.68 | 139.21 | 125.65 | 136.33 | 231.53 | 170.03 | 284.37 | 120.30 | 100.27 | 17.80
MLPTR | MAE | 9.42 | 4.44 | 4.53 | 10.11 | 9.42 | 8.27 | 13.52 | 11.16 | 10.91 | 8.13 | 8.68 | 3.75
MLPTR | MAPE | 29.88 | 17.28 | 17.92 | 27.95 | 34.94 | 16.91 | 24.17 | 45.27 | 109.31 | 23.71 | 32.66 | 19.73
MLPTR | RMSE | 10.77 | 5.29 | 5.63 | 11.80 | 11.21 | 11.68 | 15.22 | 13.04 | 16.86 | 10.97 | 10.01 | 4.22
MLPTR | IA | 0.78 | 0.58 | 0.86 | 0.69 | 0.77 | 0.68 | 0.62 | 0.86 | 0.79 | 0.28 | 0.46 | 0.85
RBFTR | MSE | 133.41 | 26.47 | 70.20 | 119.44 | 192.56 | 225.62 | 256.58 | 180.61 | 316.18 | 98.14 | 103.74 | 16.50
RBFTR | MAE | 10.03 | 4.44 | 7.17 | 9.86 | 12.34 | 14.26 | 14.40 | 11.16 | 10.63 | 7.61 | 8.87 | 3.36
RBFTR | MAPE | 29.25 | 17.18 | 28.39 | 28.42 | 42.87 | 32.88 | 26.20 | 37.63 | 108.96 | 21.48 | 32.18 | 17.88
RBFTR | RMSE | 11.55 | 5.14 | 8.38 | 10.93 | 13.88 | 15.02 | 16.02 | 13.44 | 17.78 | 9.91 | 10.19 | 4.06
RBFTR | IA | 0.75 | 0.55 | 0.67 | 0.74 | 0.61 | 0.35 | 0.62 | 0.85 | 0.82 | 0.18 | 0.21 | 0.85
ELMTR | MSE | 124.15 | 28.85 | 32.10 | 135.68 | 118.77 | 125.79 | 207.64 | 151.44 | 266.64 | 110.73 | 92.64 | 14.38
ELMTR | MAE | 9.52 | 4.66 | 4.69 | 9.68 | 9.56 | 7.50 | 12.94 | 10.57 | 8.93 | 7.55 | 8.25 | 3.41
ELMTR | MAPE | 30.23 | 18.24 | 18.23 | 27.39 | 33.14 | 13.98 | 23.75 | 42.93 | 96.96 | 21.75 | 31.15 | 17.45
ELMTR | RMSE | 11.14 | 5.37 | 5.67 | 11.65 | 10.90 | 11.22 | 14.41 | 12.31 | 16.33 | 10.52 | 9.62 | 3.79
ELMTR | IA | 0.77 | 0.61 | 0.86 | 0.71 | 0.79 | 0.75 | 0.64 | 0.88 | 0.83 | 0.34 | 0.50 | 0.89
ESNTR | MSE | 102.98 | 13.89 | 28.01 | 136.71 | 150.38 | 171.41 | 220.79 | 172.95 | 270.97 | 74.31 | 103.01 | 15.37
ESNTR | MAE | 8.76 | 2.54 | 4.72 | 10.27 | 10.02 | 9.90 | 13.03 | 10.87 | 11.38 | 7.23 | 8.57 | 3.40
ESNTR | MAPE | 27.63 | 11.45 | 18.99 | 32.07 | 36.30 | 26.95 | 23.57 | 38.10 | 100.18 | 22.06 | 33.02 | 17.28
ESNTR | RMSE | 10.15 | 3.73 | 5.29 | 11.69 | 12.26 | 13.09 | 14.86 | 13.15 | 16.46 | 8.62 | 10.15 | 3.92
ESNTR | IA | 0.83 | 0.85 | 0.91 | 0.75 | 0.70 | 0.74 | 0.64 | 0.85 | 0.80 | 0.58 | 0.45 | 0.86
ARTR | MSE | 138.64 | 52.17 | 55.57 | 191.96 | 221.42 | 306.05 | 496.17 | 408.73 | 562.43 | 189.19 | 174.39 | 26.71
ARTR | MAE | 10.32 | 5.87 | 6.34 | 11.21 | 12.49 | 14.78 | 20.53 | 16.81 | 15.68 | 10.87 | 11.37 | 4.31
ARTR | MAPE | 30.28 | 17.99 | 17.57 | 28.68 | 38.44 | 22.41 | 27.07 | 75.76 | 89.67 | 24.01 | 32.63 | 17.18
ARTR | RMSE | 11.77 | 7.22 | 7.45 | 13.86 | 14.88 | 17.49 | 22.27 | 20.22 | 23.72 | 13.75 | 13.21 | 5.17
ARTR | IA | 0.86 | 0.91 | 0.93 | 0.83 | 0.84 | 0.95 | 0.88 | 0.81 | 0.48 | 0.85 | 0.78 | 0.79
PERS | MSE | 131.44 | 32.44 | 60.26 | 148.14 | 215.47 | 183.56 | 231.51 | 127.88 | 435.96 | 138.17 | 119.08 | 21.86
PERS | MAE | 10.30 | 5.25 | 6.82 | 9.86 | 12.06 | 11.67 | 12.01 | 8.97 | 12.57 | 8.74 | 8.64 | 3.81
PERS | MAPE | 33.01 | 20.60 | 30.08 | 29.94 | 48.25 | 29.56 | 24.08 | 29.47 | 117.95 | 27.07 | 35.06 | 20.77
PERS | RMSE | 11.46 | 5.70 | 7.76 | 12.17 | 14.68 | 13.55 | 15.22 | 11.31 | 20.88 | 11.75 | 10.91 | 4.68
PERS | IA | 0.81 | 0.67 | 0.82 | 0.76 | 0.74 | 0.69 | 0.75 | 0.92 | 0.79 | 0.32 | 0.46 | 0.80
Table A5. Evaluation measures for São Paulo PM2.5. The best values by month are highlighted in bold.
Model | Metric | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLPPP | MSE | 10.30 | 2.75 | 7.99 | 2.46 | 43.76 | 36.35 | 49.64 | 49.46 | 65.77 | 20.48 | 10.34 | 6.13
MLPPP | MAE | 2.42 | 1.26 | 2.09 | 1.26 | 4.71 | 4.68 | 6.00 | 6.03 | 5.79 | 3.60 | 2.63 | 1.76
MLPPP | MAPE | 12.04 | 9.09 | 17.96 | 6.14 | 30.28 | 17.05 | 22.02 | 37.94 | 65.81 | 16.61 | 16.52 | 14.61
MLPPP | RMSE | 3.21 | 1.66 | 2.83 | 1.57 | 6.62 | 6.03 | 7.05 | 7.03 | 8.11 | 4.53 | 3.21 | 2.48
MLPPP | IA | 0.96 | 0.88 | 0.89 | 0.99 | 0.81 | 0.59 | 0.83 | 0.91 | 0.92 | 0.64 | 0.86 | 0.81
RBFPP | MSE | 27.55 | 3.16 | 10.34 | 8.82 | 34.71 | 21.27 | 46.34 | 47.50 | 96.53 | 13.56 | 17.09 | 4.80
RBFPP | MAE | 3.55 | 1.46 | 2.50 | 2.44 | 4.67 | 3.99 | 5.30 | 5.57 | 7.92 | 2.87 | 3.61 | 1.64
RBFPP | MAPE | 16.13 | 10.78 | 20.59 | 12.40 | 27.59 | 15.16 | 22.95 | 37.20 | 100.39 | 14.79 | 24.54 | 14.14
RBFPP | RMSE | 5.25 | 1.78 | 3.22 | 2.97 | 5.89 | 4.61 | 6.81 | 6.89 | 9.82 | 3.68 | 4.13 | 2.19
RBFPP | IA | 0.89 | 0.88 | 0.85 | 0.94 | 0.84 | 0.80 | 0.83 | 0.89 | 0.78 | 0.83 | 0.59 | 0.83
ELMPP | MSE | 15.71 | 4.26 | 9.18 | 20.40 | 59.60 | 53.02 | 40.52 | 54.93 | 36.99 | 21.25 | 18.95 | 6.26
ELMPP | MAE | 3.23 | 1.67 | 2.21 | 2.79 | 6.23 | 5.84 | 5.64 | 5.75 | 4.80 | 3.26 | 3.60 | 1.83
ELMPP | MAPE | 15.43 | 12.47 | 18.57 | 15.84 | 37.19 | 21.53 | 19.69 | 37.65 | 53.99 | 14.98 | 22.98 | 16.54
ELMPP | RMSE | 3.96 | 2.06 | 3.03 | 4.52 | 7.72 | 7.28 | 6.37 | 7.41 | 6.08 | 4.61 | 4.35 | 2.50
ELMPP | IA | 0.93 | 0.81 | 0.86 | 0.85 | 0.74 | 0.53 | 0.87 | 0.88 | 0.94 | 0.61 | 0.52 | 0.76
ESNPP | MSE | 26.91 | 2.58 | 10.56 | 18.69 | 30.11 | 36.65 | 34.22 | 39.47 | 67.48 | 16.16 | 15.20 | 7.35
ESNPP | MAE | 4.34 | 1.29 | 2.74 | 3.49 | 4.30 | 5.31 | 4.43 | 4.05 | 5.58 | 3.17 | 3.09 | 1.80
ESNPP | MAPE | 21.27 | 8.73 | 23.19 | 17.55 | 27.69 | 22.35 | 19.22 | 26.16 | 60.82 | 14.33 | 21.69 | 17.44
ESNPP | RMSE | 5.19 | 1.61 | 3.25 | 4.32 | 5.49 | 6.05 | 5.85 | 6.28 | 8.21 | 4.02 | 3.90 | 2.71
ESNPP | IA | 0.87 | 0.90 | 0.86 | 0.86 | 0.90 | 0.70 | 0.91 | 0.93 | 0.90 | 0.75 | 0.73 | 0.67
ARPP | MSE | 26.65 | 2.82 | 12.17 | 18.55 | 45.33 | 52.59 | 58.11 | 61.25 | 100.75 | 19.41 | 16.53 | 6.99
ARPP | MAE | 4.22 | 1.40 | 3.07 | 3.66 | 4.98 | 6.02 | 6.31 | 6.58 | 7.32 | 3.64 | 3.40 | 1.96
ARPP | MAPE | 22.10 | 10.12 | 26.45 | 19.49 | 28.69 | 25.11 | 26.46 | 42.02 | 78.81 | 16.80 | 22.97 | 17.49
ARPP | RMSE | 5.16 | 1.68 | 3.49 | 4.31 | 6.73 | 7.25 | 7.62 | 7.83 | 10.04 | 4.41 | 4.07 | 2.64
ARPP | IA | 0.87 | 0.86 | 0.80 | 0.87 | 0.81 | 0.57 | 0.72 | 0.88 | 0.85 | 0.67 | 0.71 | 0.74
MLPTR | MSE | 39.48 | 5.36 | 14.21 | 38.17 | 61.94 | 70.72 | 131.36 | 78.94 | 108.48 | 30.30 | 40.51 | 10.96
MLPTR | MAE | 4.97 | 1.96 | 2.86 | 4.86 | 6.23 | 7.16 | 9.14 | 6.98 | 6.20 | 4.52 | 5.40 | 2.34
MLPTR | MAPE | 25.39 | 13.91 | 20.64 | 25.32 | 34.64 | 28.21 | 37.59 | 42.84 | 78.91 | 21.65 | 38.34 | 20.30
MLPTR | RMSE | 6.28 | 2.32 | 3.77 | 6.18 | 7.87 | 8.41 | 11.46 | 8.89 | 10.42 | 5.50 | 6.36 | 3.31
MLPTR | IA | 0.81 | 0.79 | 0.85 | 0.66 | 0.75 | 0.26 | 0.53 | 0.83 | 0.83 | 0.59 | 0.41 | 0.70
RBFTR | MSE | 39.14 | 6.52 | 19.48 | 31.39 | 83.84 | 55.69 | 107.52 | 67.57 | 100.25 | 38.46 | 46.33 | 9.52
RBFTR | MAE | 5.18 | 2.12 | 3.62 | 4.61 | 7.22 | 6.15 | 8.62 | 6.30 | 5.86 | 4.63 | 6.26 | 2.57
RBFTR | MAPE | 24.47 | 15.16 | 26.46 | 23.73 | 41.62 | 22.57 | 33.56 | 35.82 | 75.62 | 21.95 | 42.80 | 21.49
RBFTR | RMSE | 6.26 | 2.55 | 4.41 | 5.60 | 9.16 | 7.46 | 10.37 | 8.22 | 10.01 | 6.20 | 6.81 | 3.08
RBFTR | IA | 0.79 | 0.66 | 0.77 | 0.72 | 0.67 | 0.49 | 0.61 | 0.86 | 0.86 | 0.34 | 0.21 | 0.71
ELMTR | MSE | 33.93 | 5.48 | 13.55 | 39.54 | 45.88 | 54.51 | 109.71 | 64.24 | 104.51 | 33.89 | 40.00 | 10.29
ELMTR | MAE | 4.67 | 1.77 | 2.70 | 5.01 | 5.64 | 6.28 | 8.35 | 6.77 | 5.40 | 4.33 | 5.36 | 2.62
ELMTR | MAPE | 22.98 | 12.85 | 18.96 | 26.10 | 31.49 | 23.86 | 33.27 | 39.05 | 70.19 | 19.90 | 37.18 | 21.12
ELMTR | RMSE | 5.82 | 2.34 | 3.68 | 6.29 | 6.77 | 7.38 | 10.47 | 8.02 | 10.22 | 5.82 | 6.32 | 3.21
ELMTR | IA | 0.84 | 0.77 | 0.84 | 0.64 | 0.83 | 0.54 | 0.57 | 0.87 | 0.86 | 0.50 | 0.44 | 0.75
ESNTR | MSE | 27.69 | 5.54 | 13.17 | 40.31 | 59.88 | 90.64 | 116.58 | 76.05 | 95.36 | 35.81 | 41.80 | 11.45
ESNTR | MAE | 4.21 | 1.94 | 2.96 | 4.52 | 5.78 | 7.85 | 9.28 | 6.41 | 6.65 | 4.53 | 5.63 | 2.64
ESNTR | MAPE | 20.97 | 12.72 | 22.71 | 24.13 | 33.27 | 32.62 | 35.17 | 39.68 | 79.91 | 21.70 | 39.66 | 22.75
ESNTR | RMSE | 5.26 | 2.35 | 3.63 | 6.35 | 7.74 | 9.52 | 10.80 | 8.72 | 9.77 | 5.98 | 6.46 | 3.38
ESNTR | IA | 0.88 | 0.83 | 0.87 | 0.67 | 0.74 | 0.43 | 0.53 | 0.83 | 0.84 | 0.59 | 0.46 | 0.69
ARTR | MSE | 33.36 | 6.75 | 19.42 | 58.86 | 89.82 | 97.71 | 291.64 | 153.85 | 241.77 | 74.66 | 79.37 | 16.97
ARTR | MAE | 4.62 | 2.17 | 3.39 | 5.55 | 7.76 | 8.02 | 13.53 | 9.55 | 10.10 | 6.90 | 7.44 | 3.05
ARTR | MAPE | 22.47 | 13.30 | 21.86 | 24.89 | 35.50 | 26.22 | 34.59 | 61.07 | 100.09 | 23.75 | 38.96 | 19.22
ARTR | RMSE | 5.78 | 2.60 | 4.41 | 7.67 | 9.48 | 9.88 | 17.08 | 12.40 | 15.55 | 8.64 | 8.91 | 4.12
ARTR | IA | 0.87 | 0.91 | 0.81 | 0.82 | 0.82 | 0.71 | 0.83 | 0.79 | 0.51 | 0.87 | 0.73 | 0.75
PERS | MSE | 30.93 | 8.21 | 22.38 | 42.97 | 100.33 | 79.20 | 127.24 | 75.87 | 146.63 | 48.68 | 56.56 | 11.55
PERS | MAE | 4.38 | 2.20 | 3.91 | 4.67 | 7.52 | 7.63 | 8.86 | 6.79 | 6.59 | 5.09 | 6.34 | 2.58
PERS | MAPE | 23.70 | 16.31 | 31.09 | 26.19 | 45.59 | 29.33 | 38.32 | 30.89 | 82.71 | 25.28 | 47.25 | 22.36
PERS | RMSE | 5.56 | 2.87 | 4.73 | 6.56 | 10.02 | 8.90 | 11.28 | 8.71 | 12.11 | 6.98 | 7.52 | 3.40
PERS | IA | 0.88 | 0.74 | 0.80 | 0.71 | 0.72 | 0.40 | 0.64 | 0.88 | 0.83 | 0.42 | 0.32 | 0.64
Table A6. Evaluation measures for Campinas PM10. The best values by month are highlighted in bold.
Method | Metric | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLP PP | MSE | 14.87 | 11.50 | 64.03 | 33.67 | 91.46 | 59.00 | 76.03 | 302.27 | 42.71 | 37.54 | 32.52 | 43.23
MLP PP | MAE | 3.01 | 2.57 | 6.92 | 4.86 | 7.38 | 6.26 | 6.93 | 14.58 | 5.46 | 4.71 | 4.57 | 5.67
MLP PP | MAPE | 13.97 | 8.71 | 23.62 | 15.23 | 15.92 | 18.70 | 12.43 | 43.74 | 20.18 | 12.19 | 17.59 | 27.07
MLP PP | RMSE | 3.86 | 3.39 | 8.00 | 5.80 | 9.56 | 7.68 | 8.72 | 17.39 | 6.54 | 6.13 | 5.70 | 6.57
MLP PP | IA | 0.84 | 0.80 | 0.76 | 0.81 | 0.89 | 0.90 | 0.37 | 0.59 | 0.98 | 0.70 | 0.82 | 0.83
RBF PP | MSE | 7.84 | 9.42 | 31.84 | 28.78 | 104.30 | 74.89 | 44.87 | 276.74 | 94.13 | 33.39 | 32.20 | 67.83
RBF PP | MAE | 2.33 | 2.71 | 4.61 | 5.02 | 8.15 | 7.55 | 5.02 | 13.92 | 7.63 | 4.38 | 4.55 | 6.97
RBF PP | MAPE | 11.02 | 9.29 | 16.94 | 15.85 | 15.55 | 23.94 | 9.49 | 41.24 | 32.22 | 11.30 | 21.09 | 35.39
RBF PP | RMSE | 2.80 | 3.07 | 5.64 | 5.36 | 10.21 | 8.65 | 6.70 | 16.64 | 9.70 | 5.78 | 5.67 | 8.24
RBF PP | IA | 0.91 | 0.74 | 0.84 | 0.82 | 0.86 | 0.85 | 0.67 | 0.64 | 0.93 | 0.77 | 0.65 | 0.62
ELM PP | MSE | 10.61 | 9.48 | 47.14 | 36.83 | 88.62 | 97.08 | 43.59 | 359.92 | 38.84 | 59.05 | 29.68 | 47.04
ELM PP | MAE | 2.64 | 2.22 | 5.65 | 4.94 | 6.93 | 8.50 | 4.92 | 15.71 | 5.26 | 6.37 | 4.16 | 5.88
ELM PP | MAPE | 11.51 | 7.91 | 21.46 | 15.38 | 15.57 | 27.52 | 9.24 | 48.03 | 20.88 | 16.50 | 19.73 | 28.88
ELM PP | RMSE | 3.26 | 3.08 | 6.87 | 6.07 | 9.41 | 9.85 | 6.60 | 18.97 | 6.23 | 7.68 | 5.45 | 6.86
ELM PP | IA | 0.89 | 0.70 | 0.73 | 0.77 | 0.92 | 0.77 | 0.64 | 0.56 | 0.98 | 0.66 | 0.67 | 0.81
ESN PP | MSE | 16.19 | 14.89 | 43.80 | 20.99 | 46.39 | 107.17 | 38.97 | 238.39 | 39.71 | 40.45 | 17.74 | 36.03
ESN PP | MAE | 3.42 | 3.43 | 5.68 | 3.81 | 5.77 | 8.69 | 5.01 | 12.09 | 5.28 | 4.31 | 3.52 | 5.13
ESN PP | MAPE | 16.05 | 12.45 | 20.56 | 12.04 | 12.93 | 28.54 | 8.81 | 39.28 | 19.88 | 10.59 | 15.23 | 24.07
ESN PP | RMSE | 4.02 | 3.86 | 6.62 | 4.58 | 6.81 | 10.35 | 6.24 | 15.44 | 6.30 | 6.36 | 4.21 | 6.00
ESN PP | IA | 0.79 | 0.49 | 0.77 | 0.89 | 0.96 | 0.77 | 0.77 | 0.65 | 0.98 | 0.72 | 0.86 | 0.85
AR PP | MSE | 14.82 | 12.04 | 79.41 | 21.58 | 71.48 | 118.86 | 56.20 | 384.12 | 79.95 | 60.59 | 27.02 | 45.84
AR PP | MAE | 3.58 | 3.13 | 6.74 | 4.16 | 6.00 | 8.97 | 5.89 | 15.74 | 7.64 | 6.39 | 4.36 | 5.87
AR PP | MAPE | 15.95 | 11.35 | 23.74 | 13.22 | 14.04 | 28.60 | 11.40 | 53.87 | 29.58 | 16.88 | 18.56 | 28.87
AR PP | RMSE | 3.85 | 3.47 | 8.91 | 4.65 | 8.45 | 10.90 | 7.50 | 19.60 | 8.94 | 7.78 | 5.20 | 6.77
AR PP | IA | 0.78 | 0.57 | 0.65 | 0.90 | 0.93 | 0.73 | 0.65 | 0.43 | 0.95 | 0.48 | 0.79 | 0.79
MLP TR | MSE | 12.00 | 12.01 | 75.01 | 36.68 | 122.75 | 91.17 | 56.64 | 354.30 | 51.96 | 72.35 | 26.60 | 39.30
MLP TR | MAE | 2.81 | 2.72 | 7.43 | 5.39 | 8.86 | 7.86 | 6.26 | 15.98 | 5.55 | 6.69 | 3.95 | 5.35
MLP TR | MAPE | 12.29 | 9.17 | 26.74 | 16.20 | 18.07 | 23.82 | 11.72 | 42.90 | 23.66 | 16.86 | 15.90 | 24.67
MLP TR | RMSE | 3.46 | 3.46 | 8.66 | 6.06 | 11.08 | 9.55 | 7.53 | 18.82 | 7.21 | 8.51 | 5.16 | 6.27
MLP TR | IA | 0.86 | 0.72 | 0.66 | 0.81 | 0.81 | 0.84 | 0.57 | 0.54 | 0.96 | 0.57 | 0.84 | 0.87
RBF TR | MSE | 13.47 | 16.47 | 94.06 | 48.51 | 265.75 | 71.91 | 68.07 | 320.24 | 63.64 | 82.74 | 28.48 | 41.46
RBF TR | MAE | 2.78 | 2.96 | 8.18 | 6.22 | 14.29 | 7.06 | 6.43 | 15.41 | 6.24 | 7.46 | 4.42 | 5.52
RBF TR | MAPE | 11.53 | 9.84 | 30.29 | 18.04 | 27.24 | 20.85 | 11.78 | 39.17 | 26.87 | 19.20 | 17.76 | 25.17
RBF TR | RMSE | 3.67 | 4.06 | 9.70 | 6.97 | 16.30 | 8.48 | 8.25 | 17.90 | 7.98 | 9.10 | 5.34 | 6.44
RBF TR | IA | 0.86 | 0.66 | 0.57 | 0.72 | 0.27 | 0.88 | 0.47 | 0.56 | 0.95 | 0.51 | 0.82 | 0.87
ELM TR | MSE | 13.85 | 13.33 | 75.25 | 36.34 | 105.96 | 80.75 | 51.02 | 212.47 | 48.42 | 63.18 | 27.62 | 38.44
ELM TR | MAE | 2.94 | 2.76 | 7.06 | 5.01 | 8.90 | 7.03 | 6.14 | 13.40 | 5.74 | 6.18 | 3.93 | 5.57
ELM TR | MAPE | 12.83 | 9.23 | 25.80 | 14.43 | 17.35 | 20.98 | 11.57 | 33.45 | 24.46 | 15.59 | 15.53 | 25.46
ELM TR | RMSE | 3.72 | 3.65 | 8.67 | 6.03 | 10.29 | 8.99 | 7.14 | 14.58 | 6.96 | 7.95 | 5.26 | 6.20
ELM TR | IA | 0.84 | 0.72 | 0.67 | 0.81 | 0.90 | 0.87 | 0.54 | 0.72 | 0.97 | 0.59 | 0.84 | 0.87
ESN TR | MSE | 8.75 | 17.38 | 57.54 | 35.75 | 78.01 | 49.80 | 52.17 | 380.44 | 39.54 | 63.29 | 21.21 | 48.57
ESN TR | MAE | 2.21 | 3.21 | 6.49 | 4.89 | 6.44 | 6.70 | 5.77 | 14.99 | 5.40 | 6.48 | 3.66 | 5.87
ESN TR | MAPE | 10.00 | 10.96 | 23.45 | 13.20 | 14.21 | 19.98 | 10.67 | 41.49 | 22.61 | 16.14 | 14.37 | 27.96
ESN TR | RMSE | 2.96 | 4.17 | 7.59 | 5.98 | 8.83 | 7.06 | 7.22 | 19.50 | 6.29 | 7.96 | 4.61 | 6.97
ESN TR | IA | 0.92 | 0.71 | 0.75 | 0.82 | 0.93 | 0.91 | 0.75 | 0.64 | 0.97 | 0.54 | 0.88 | 0.85
AR TR | MSE | 47.70 | 30.00 | 200.80 | 188.40 | 340.38 | 219.91 | 127.13 | 993.01 | 119.35 | 269.66 | 85.60 | 103.41
AR TR | MAE | 6.03 | 4.81 | 11.77 | 11.52 | 13.82 | 12.89 | 9.65 | 25.08 | 9.32 | 12.92 | 7.44 | 9.00
AR TR | MAPE | 18.83 | 12.05 | 33.93 | 24.79 | 20.92 | 34.20 | 13.61 | 50.76 | 39.72 | 26.60 | 24.68 | 30.82
AR TR | RMSE | 6.91 | 5.48 | 14.17 | 13.73 | 18.45 | 14.83 | 11.28 | 31.51 | 10.92 | 16.42 | 9.25 | 10.17
AR TR | IA | 0.83 | 0.94 | 0.36 | 0.89 | 0.97 | 0.53 | 0.95 | 0.75 | 0.92 | 0.11 | 0.30 | 0.35
PERS | MSE | 14.13 | 20.62 | 116.89 | 42.52 | 96.52 | 85.77 | 76.56 | 578.28 | 74.19 | 105.49 | 40.91 | 60.30
PERS | MAE | 2.89 | 3.45 | 9.05 | 5.72 | 8.07 | 7.67 | 6.72 | 17.50 | 7.63 | 8.23 | 4.90 | 6.50
PERS | MAPE | 12.21 | 11.79 | 33.44 | 17.98 | 18.34 | 21.91 | 12.80 | 54.16 | 29.50 | 21.37 | 18.98 | 29.60
PERS | RMSE | 3.76 | 4.54 | 10.81 | 6.52 | 9.82 | 9.26 | 8.75 | 24.05 | 8.61 | 10.27 | 6.40 | 7.77
PERS | IA | 0.86 | 0.64 | 0.53 | 0.85 | 0.92 | 0.88 | 0.59 | 0.53 | 0.96 | 0.44 | 0.77 | 0.83
Table A7. Evaluation measures for Ipojuca PM10. The best values by month are highlighted in bold.
Method | Metric | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLP PP | MSE | 41.26 | 81.25 | 19.32 | 20.55 | 176.04 | 72.43 | 53.04 | 21.50 | 19.38 | 43.25 | 20.11 | 6.56
MLP PP | MAE | 5.35 | 7.51 | 3.58 | 4.14 | 11.02 | 7.30 | 4.95 | 4.23 | 3.71 | 5.44 | 3.56 | 1.97
MLP PP | MAPE | 12.13 | 32.17 | 13.06 | 15.26 | 58.20 | 19.43 | 12.96 | 14.76 | 10.73 | 13.83 | 10.68 | 6.58
MLP PP | RMSE | 6.42 | 9.01 | 4.40 | 4.53 | 13.27 | 8.51 | 7.28 | 4.64 | 4.40 | 6.58 | 4.48 | 2.56
MLP PP | IA | 0.80 | 0.29 | 0.81 | 0.32 | 0.55 | 0.78 | 0.68 | 0.91 | 0.84 | 0.75 | 0.88 | 0.89
RBF PP | MSE | 97.72 | 38.99 | 22.46 | 21.20 | 137.01 | 62.80 | 33.18 | 79.39 | 9.88 | 53.88 | 31.14 | 11.34
RBF PP | MAE | 8.60 | 5.55 | 4.14 | 3.98 | 9.86 | 6.57 | 4.59 | 7.25 | 2.62 | 5.87 | 4.52 | 2.93
RBF PP | MAPE | 19.31 | 21.24 | 14.98 | 14.72 | 58.08 | 16.43 | 10.92 | 25.38 | 7.57 | 14.81 | 15.49 | 9.83
RBF PP | RMSE | 9.89 | 6.24 | 4.74 | 4.60 | 11.71 | 7.92 | 5.76 | 8.91 | 3.14 | 7.34 | 5.58 | 3.37
RBF PP | IA | 0.04 | 0.77 | 0.74 | 0.43 | 0.50 | 0.67 | 0.78 | 0.63 | 0.93 | 0.39 | 0.61 | 0.75
ELM PP | MSE | 102.74 | 33.55 | 33.03 | 24.10 | 176.59 | 58.18 | 34.86 | 49.75 | 21.46 | 44.68 | 11.47 | 8.12
ELM PP | MAE | 8.72 | 5.28 | 5.27 | 4.43 | 11.53 | 5.98 | 4.65 | 6.29 | 3.90 | 5.14 | 2.39 | 2.06
ELM PP | MAPE | 19.48 | 19.20 | 21.16 | 16.63 | 62.69 | 15.46 | 11.98 | 22.41 | 11.34 | 12.79 | 7.98 | 7.04
ELM PP | RMSE | 10.14 | 5.79 | 5.75 | 4.91 | 13.29 | 7.63 | 5.90 | 7.05 | 4.63 | 6.68 | 3.39 | 2.85
ELM PP | IA | 0.03 | 0.83 | 0.47 | 0.21 | 0.52 | 0.80 | 0.82 | 0.70 | 0.81 | 0.57 | 0.92 | 0.89
ESN PP | MSE | 23.11 | 16.03 | 26.33 | 16.35 | 108.81 | 46.49 | 40.17 | 73.25 | 11.57 | 30.73 | 19.35 | 7.63
ESN PP | MAE | 3.90 | 3.63 | 3.84 | 3.56 | 9.13 | 5.05 | 4.49 | 5.43 | 2.58 | 4.31 | 3.49 | 2.30
ESN PP | MAPE | 8.95 | 13.38 | 13.36 | 12.87 | 47.20 | 12.17 | 10.90 | 17.62 | 7.75 | 9.37 | 11.91 | 7.68
ESN PP | RMSE | 4.81 | 4.00 | 5.13 | 4.04 | 10.43 | 6.82 | 6.34 | 8.56 | 3.40 | 5.54 | 4.40 | 2.76
ESN PP | IA | 0.86 | 0.93 | 0.72 | 0.57 | 0.73 | 0.84 | 0.75 | 0.81 | 0.92 | 0.85 | 0.83 | 0.86
AR PP | MSE | 21.98 | 20.80 | 26.80 | 15.74 | 87.27 | 48.01 | 45.54 | 127.38 | 20.42 | 52.57 | 23.03 | 3.80
AR PP | MAE | 3.97 | 3.75 | 4.28 | 3.20 | 8.04 | 5.54 | 5.09 | 8.66 | 3.70 | 5.68 | 3.94 | 1.46
AR PP | MAPE | 9.36 | 14.63 | 16.33 | 11.42 | 41.53 | 14.44 | 12.85 | 29.92 | 10.86 | 14.00 | 13.59 | 4.89
AR PP | RMSE | 4.69 | 4.56 | 5.18 | 3.97 | 9.34 | 6.93 | 6.75 | 11.29 | 4.52 | 7.25 | 4.80 | 1.95
AR PP | IA | 0.87 | 0.89 | 0.58 | 0.76 | 0.78 | 0.76 | 0.75 | 0.50 | 0.81 | 0.54 | 0.74 | 0.94
MLP TR | MSE | 34.84 | 44.61 | 38.83 | 15.46 | 147.92 | 100.06 | 57.74 | 89.10 | 13.78 | 72.92 | 14.47 | 8.24
MLP TR | MAE | 5.13 | 5.57 | 5.36 | 2.98 | 10.84 | 7.89 | 6.26 | 7.87 | 2.98 | 7.20 | 3.36 | 2.32
MLP TR | MAPE | 11.80 | 22.45 | 19.70 | 10.50 | 60.17 | 18.61 | 15.79 | 27.74 | 8.62 | 18.39 | 11.02 | 7.57
MLP TR | RMSE | 5.90 | 6.68 | 6.23 | 3.93 | 12.16 | 10.00 | 7.60 | 9.44 | 3.71 | 8.54 | 3.80 | 2.87
MLP TR | IA | 0.74 | 0.73 | 0.47 | 0.68 | 0.51 | 0.48 | 0.60 | 0.62 | 0.90 | 0.39 | 0.89 | 0.87
RBF TR | MSE | 31.00 | 147.25 | 37.94 | 19.70 | 126.84 | 89.11 | 63.83 | 173.77 | 11.00 | 75.68 | 15.58 | 11.54
RBF TR | MAE | 4.67 | 9.99 | 5.15 | 3.74 | 9.79 | 7.89 | 6.62 | 12.28 | 2.69 | 7.56 | 3.57 | 2.72
RBF TR | MAPE | 10.69 | 42.46 | 19.05 | 13.42 | 55.66 | 19.63 | 15.94 | 42.87 | 7.86 | 19.46 | 11.59 | 8.82
RBF TR | RMSE | 5.57 | 12.13 | 6.16 | 4.44 | 11.26 | 9.44 | 7.99 | 13.18 | 3.32 | 8.70 | 3.95 | 3.40
RBF TR | IA | 0.78 | 0.25 | 0.42 | 0.54 | 0.56 | 0.43 | 0.43 | 0.16 | 0.92 | 0.45 | 0.89 | 0.83
ELM TR | MSE | 33.24 | 61.78 | 32.99 | 14.70 | 126.62 | 104.05 | 59.12 | 41.04 | 15.78 | 78.06 | 13.62 | 8.44
ELM TR | MAE | 4.93 | 6.30 | 4.69 | 2.69 | 9.28 | 7.99 | 6.14 | 5.43 | 3.28 | 6.96 | 2.95 | 2.38
ELM TR | MAPE | 11.43 | 22.94 | 17.79 | 9.43 | 59.67 | 18.70 | 15.43 | 18.52 | 9.56 | 18.07 | 9.48 | 7.77
ELM TR | RMSE | 5.77 | 7.86 | 5.74 | 3.83 | 11.25 | 10.20 | 7.69 | 6.41 | 3.97 | 8.84 | 3.69 | 2.90
ELM TR | IA | 0.77 | 0.76 | 0.61 | 0.72 | 0.52 | 0.58 | 0.59 | 0.87 | 0.88 | 0.47 | 0.90 | 0.87
ESN TR | MSE | 23.10 | 43.11 | 31.63 | 9.11 | 121.14 | 86.48 | 39.17 | 135.14 | 17.35 | 62.38 | 13.20 | 8.51
ESN TR | MAE | 4.38 | 5.37 | 4.87 | 2.69 | 9.62 | 7.84 | 5.76 | 9.00 | 3.31 | 6.59 | 3.11 | 2.32
ESN TR | MAPE | 10.37 | 22.22 | 18.12 | 9.77 | 54.94 | 19.56 | 15.28 | 31.35 | 9.75 | 17.45 | 10.35 | 7.48
ESN TR | RMSE | 4.81 | 6.57 | 5.62 | 3.02 | 11.01 | 9.30 | 6.26 | 11.62 | 4.17 | 7.90 | 3.63 | 2.92
ESN TR | IA | 0.87 | 0.68 | 0.59 | 0.78 | 0.58 | 0.56 | 0.79 | 0.45 | 0.86 | 0.47 | 0.90 | 0.88
AR TR | MSE | 45.91 | 46.49 | 79.73 | 61.37 | 563.32 | 238.31 | 102.43 | 248.19 | 34.55 | 99.80 | 29.17 | 21.24
AR TR | MAE | 6.02 | 5.64 | 7.97 | 7.23 | 21.00 | 12.75 | 7.92 | 12.21 | 4.40 | 8.21 | 4.41 | 3.66
AR TR | MAPE | 13.18 | 54.68 | 25.59 | 27.82 | 512.60 | 28.11 | 16.23 | 75.21 | 12.01 | 29.81 | 14.91 | 11.43
AR TR | RMSE | 6.78 | 6.82 | 8.93 | 7.83 | 23.73 | 15.44 | 10.12 | 15.75 | 5.88 | 9.99 | 5.40 | 4.61
AR TR | IA | 0.82 | 0.94 | 0.77 | 0.24 | 0.41 | 0.59 | 0.91 | 0.78 | 0.26 | 0.84 | 0.75 | 0.50
PERS | MSE | 37.05 | 49.25 | 46.55 | 19.90 | 227.81 | 141.80 | 66.80 | 213.59 | 18.60 | 84.85 | 16.96 | 16.98
PERS | MAE | 5.12 | 5.31 | 6.09 | 3.13 | 12.87 | 11.21 | 7.05 | 7.26 | 3.80 | 7.65 | 3.59 | 3.41
PERS | MAPE | 12.33 | 17.07 | 23.33 | 10.59 | 61.79 | 28.93 | 19.10 | 23.17 | 10.86 | 18.44 | 11.10 | 10.90
PERS | RMSE | 6.09 | 7.02 | 6.82 | 4.46 | 15.09 | 11.91 | 8.17 | 14.61 | 4.31 | 9.21 | 4.12 | 4.12
PERS | IA | 0.81 | 0.84 | 0.53 | 0.70 | 0.52 | 0.52 | 0.75 | 0.59 | 0.88 | 0.55 | 0.91 | 0.79
Figure A1. Concentration of PM10 for Kallio station (1 January 2001 to 31 December 2003).
Figure A2. Concentration of PM2.5 for Kallio station (1 January 2001 to 31 December 2003).
Figure A3. Concentration of PM10 for Vallila station (1 January 2001 to 31 December 2003).
Figure A4. Concentration of PM10 for São Paulo station (1 January 2017 to 31 December 2019).
Figure A5. Concentration of PM2.5 for São Paulo station (1 January 2017 to 31 December 2019).
Figure A6. Concentration of PM10 for Campinas station (1 January 2007 to 31 December 2008).
Figure A7. Concentration of PM10 for Ipojuca station (17 July 2015 to 9 April 2017).
Table A8 lists the acronyms used in this paper.
Table A8. Acronym list.
Acronym | Meaning
ANN | Artificial Neural Network
ELM | Extreme Learning Machines
ESN | Echo State Networks
IA | Index of Agreement
MAE | Mean Absolute Error
MAPE | Mean Absolute Percentage Error
MLP | Multilayer Perceptron
MSE | Mean Squared Error
PM10 | Particulate matter with aerodynamic diameter less than or equal to 10 μm
PM2.5 | Particulate matter with aerodynamic diameter less than or equal to 2.5 μm
PP | Proposed forecasting method (which considers decomposition)
RBF | Radial Basis Function Networks
RMSE | Root Mean Squared Error
TR | Traditional forecasting method
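For reference, the error measures listed in Table A8 can be computed as in the following sketch. This is illustrative code, not the authors' implementation; it assumes two NumPy arrays of observed and predicted daily concentrations, uses the percentage form of MAPE, and uses the common Willmott formulation of the index of agreement.

```python
import numpy as np

def evaluation_measures(obs: np.ndarray, pred: np.ndarray) -> dict:
    """MSE, MAE, MAPE (%), RMSE, and Willmott's index of agreement (IA)."""
    err = pred - obs
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    mape = float(100.0 * np.mean(np.abs(err) / np.abs(obs)))  # assumes obs != 0
    rmse = float(np.sqrt(mse))
    # IA = 1 - sum((p - o)^2) / sum((|p - mean(o)| + |o - mean(o)|)^2)
    o_bar = obs.mean()
    ia = float(1.0 - np.sum(err ** 2)
               / np.sum((np.abs(pred - o_bar) + np.abs(obs - o_bar)) ** 2))
    return {"MSE": mse, "MAE": mae, "MAPE": mape, "RMSE": rmse, "IA": ia}
```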

References

1. World Health Organization. Available online: https://www.who.int/news-room/detail/02-05-2018-9-out-of-10-people-worldwide-breathe-polluted-air-but-more-countries-are-taking-action (accessed on 22 August 2019).
2. Kryza, M.; Werner, M.; Dudek, J.; Dore, A.J. The effect of emission inventory on modelling of seasonal exposure metrics of particulate matter and ozone with the WRF-Chem model for Poland. Sustainability 2020, 12, 5414.
3. Langrish, J.P.; Li, X.; Wang, S.; Lee, M.M.; Barnes, G.D.; Miller, M.R.; Cassee, F.R.; Boon, N.A.; Donaldson, K.; Li, J. Reducing personal exposure to particulate air pollution improves cardiovascular health in patients with coronary heart disease. Environ. Health Perspect. 2012, 120, 367–372.
4. Wu, C.F.; Li, Y.R.; Kuo, I.C.; Hsu, S.C.; Lin, L.Y.; Su, T.C. Investigating the association of cardiovascular effects with personal exposure to particle components and sources. Sci. Total Environ. 2012, 431, 176–182.
5. Maestrelli, P.; Canova, C.; Scapellato, M.; Visentin, A.; Tessari, R.; Bartolucci, G.; Simonato, L.; Lotti, M. Personal exposure to particulate matter is associated with worse health perception in adult asthma. J. Investig. Allergol. Clin. Immunol. 2011, 21, 120–128.
6. Maji, K.J.; Dikshit, A.K.; Arora, M.; Deshpande, A. Estimating premature mortality attributable to PM2.5 exposure and benefit of air pollution control policies in China for 2020. Sci. Total Environ. 2018, 612, 683–693.
7. Ardiles, L.G.; Tadano, Y.S.; Costa, S.; Urbina, V.; Capucim, M.N.; Silva, I.; Braga, A.; Martins, J.A.; Martins, L.D. Negative binomial regression model for analysis of the relationship between hospitalization and air pollution. Atmos. Pollut. Res. 2018, 9, 333–341.
8. Kukkonen, J.; Partanen, L.; Karppinen, A.; Ruuskanen, J.; Junninen, H.; Kolehmainen, M.; Niska, H.; Dorling, S.; Chatterton, T.; Foxall, R.; et al. Extensive evaluation of neural network models for the prediction of NO2 and PM10 concentrations, compared with a deterministic modelling system and measurements in central Helsinki. Atmos. Environ. 2003, 37, 4539–4550.
9. Niska, H.; Rantamaki, M.; Hiltunen, T.; Karppinen, A.; Kukkonen, J.; Ruuskanen, J.; Kolehmainen, M. Evaluation of an integrated modelling system containing a multilayer perceptron model and the numerical weather prediction model HIRLAM for the forecasting of urban airborne pollutant concentrations. Atmos. Environ. 2005, 39, 6524–6536.
10. Vlachogianni, A.; Kassomenos, P.; Karppinen, A.; Karakitsios, S.; Kukkonen, J. Evaluation of a multiple regression model for the forecasting of the concentrations of NOx and PM10 in Athens and Helsinki. Sci. Total Environ. 2011, 409, 1559–1571.
11. Siwek, K.; Osowski, S. Improving the accuracy of prediction of PM10 pollution by the wavelet transformation and an ensemble of neural predictors. Eng. Appl. Artif. Intell. 2012, 25, 1246–1258.
12. Albuquerque, F.; Madeiro, F.; Fernandes, S.M.M.; de Mattos Neto, P.S.G.; Ferreira, T.A.E. Time-series forecasting of pollutant concentration levels using particle swarm optimization and artificial neural networks. Quim. Nova 2013, 36, 783–789.
13. De Mattos Neto, P.S.G.; Madeiro, F.; Ferreira, T.A.E.; Cavalcanti, G.D.C. Hybrid intelligent system for air quality forecasting using phase adjustment. Eng. Appl. Artif. Intell. 2014, 32, 185–191.
14. Biancofiore, F.; Busilacchio, M.; Verdecchia, M.; Tomassetti, B.; Aruffo, E.; Bianco, S.; Di Tommaso, S.; Colangeli, C.; Rosatelli, G.; Di Carlo, P. Recursive neural network model for analysis and forecast of PM10 and PM2.5. Atmos. Pollut. Res. 2017, 8, 652–659.
15. Polezer, G.; Tadano, Y.S.; Siqueira, H.; Godoi, A.F.L.; Yamamoto, C.I.; de André, P.A.; Pauliquevis, T.; Andrade, M.F.; Oliveira, A.; Saldiva, P.H.N.; et al. Assessing the impact of PM2.5 on respiratory disease using artificial neural networks. Environ. Pollut. 2018, 235, 394–403.
16. De Mattos Neto, P.S.G.; Cavalcanti, G.D.C.; Madeiro, F.; Ferreira, T.A.E. An approach to improve the performance of PM forecasters. PLoS ONE 2015, 10, e0138507.
17. Garcia Nieto, P.J.; Lasheras, F.S.; Garcia-Gonzalo, E.; de Cos Juez, F.J. PM10 concentration forecasting in the metropolitan area of Oviedo (Northern Spain) using models based on SVM, MLP, VARMA and ARIMA: A case study. Sci. Total Environ. 2018, 621, 753–761.
18. Gennaro, G.; Trizio, L.; Di Gilio, A.; Pey, J.; Perez, N.; Cusack, M.; Alastuey, A.; Querol, X. Neural network model for the prediction of PM10 daily concentrations in two sites in the Western Mediterranean. Sci. Total Environ. 2013, 463, 875–883.
19. Antanasijevic, D.Z.; Pocajt, V.V.; Povrenovic, D.S.; Ristic, D.D.; Peric-Grujic, A.A. PM10 emission forecasting using artificial neural networks and genetic algorithm input variable optimization. Sci. Total Environ. 2013, 443, 511–519.
20. Zhou, Q.; Jiang, H.; Wang, J.; Zhou, J. A hybrid model for PM2.5 forecasting based on ensemble empirical mode decomposition and a general regression neural network. Sci. Total Environ. 2014, 496, 264–274.
21. Campos, D.S.; Tadano, Y.S.; Antonini Alves, T.; Siqueira, H.V.; Marinho, M.H.N. Unorganized machines and linear multivariate regression model applied to atmospheric pollutants forecasting. Acta Sci. Technol. 2020, 42, e18203.
22. Dablemont, S.; Simon, G.; Lendasse, A.; Ruttiens, A.; Blayo, F.; Verleysen, M. Time series forecasting with SOM and local non-linear models—Application to the DAX30 index prediction. In Proceedings of the Workshop on Self-Organizing Maps, Kitakyushu, Japan, 11–14 September 2003.
23. Ni, H.; Yin, H. Exchange rate prediction using hybrid neural networks and trading indicators. Neurocomputing 2009, 72, 2815–2823.
24. Ismail, S.; Shabri, A.; Samsudin, R. A hybrid model of self-organizing maps (SOM) and least square support vector machine (LSSVM) for time-series forecasting. Expert Syst. Appl. 2011, 38, 10574–10578.
25. Hsu, C.M. A hybrid procedure for stock price prediction by integrating self-organizing map and genetic programming. Expert Syst. Appl. 2011, 38, 14026–14036.
26. Miranian, A.; Abdollahzade, M. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 207–218.
27. Siqueira, H.V.; Boccato, L.; Attux, R.; Lyra, C. Unorganized machines for seasonal streamflow series forecasting. Int. J. Neural Syst. 2014, 24, 1430009–1430016.
28. Siqueira, H.V.; Boccato, L.; Luna, I.; Attux, R.; Lyra, C. Performance analysis of unorganized machines in streamflow forecasting of Brazilian plants. Appl. Soft Comput. 2018, 68, 494–506.
29. Paschalidou, A.K.; Karakitsios, S.; Kleanthous, S.; Kassomenos, P.A. Forecasting hourly PM10 concentration in Cyprus through artificial neural networks and multiple regression models: Implications to local environmental management. Environ. Sci. Pollut. Res. 2011, 18, 316–327.
30. Shekarrizfard, M.; Karimi-Jashni, A.; Hadad, K. Wavelet transform-based artificial neural networks (WT-ANN) in PM10 pollution level estimation, based on circular variables. Environ. Sci. Pollut. Res. 2012, 19, 256–268.
31. Arhami, M.; Kamali, N.; Rajabi, M. Predicting hourly air pollutant levels using artificial neural networks coupled with uncertainty analysis by Monte Carlo simulations. Environ. Sci. Pollut. Res. 2013, 20, 4777–4789.
32. Ding, W.; Zhang, J.; Leung, Y. Prediction of air pollutant concentration based on sparse response back-propagation training feedforward neural networks. Environ. Sci. Pollut. Res. 2016, 23, 19481–19494.
33. Li, S.; Griffith, D.A.; Shu, H. Temperature prediction based on the space-time regression-kriging model. J. Appl. Stat. 2020, 47, 1168–1190.
34. Ouaret, R.; Ionescu, A.; Petrehus, V.; Candau, Y.; Ramalho, O. Spectral band decomposition combined with nonlinear models: Application to indoor formaldehyde concentration forecasting. Stoch. Environ. Res. Risk Assess. 2018, 32, 985–997.
35. Zhu, J.; Wu, P.; Chen, H.; Zhou, L.; Tao, Z. A hybrid forecasting approach to air quality time series based on endpoint condition and combined forecasting model. Int. J. Environ. Res. Public Health 2018, 15, 1941.
36. Di, D.; Yang, X.; Wang, X. A four-stage hybrid model for hydrological time series forecasting. PLoS ONE 2014, 9, e104663.
37. Luna, I.; Ballini, R. Monthly electric energy demand forecasting by fuzzy inference system. Learn. Nonlinear Models Rev. Soc. Bras. Redes Neurais 2012, 10, 137–144.
38. Huertas, J.I.; Huertas, M.E.; Sols, D.A. Characterization of airborne particles in an open pit mining region. Sci. Total Environ. 2012, 423, 39–46.
39. Shi, W.; Wong, M.S.; Wang, J.; Zhao, Y. Analysis of airborne particulate matter (PM2.5) over Hong Kong using remote sensing and GIS. Sensors 2012, 12, 6825–6836.
40. Pouliot, G.; Pierce, T.; Van der Gon, H.D.; Schaap, M.; Moran, M.; Nopmongcol, U. Comparing emission inventories and model-ready emission datasets between Europe and North America for the AQMEII project. Atmos. Environ. 2012, 53, 4–14.
41. Javed, W.; Wexler, A.S.; Murtaza, G.; Ahmad, H.R.; Basra, S.M. Spatial, temporal and size distribution of particulate matter and its chemical constituents in Faisalabad, Pakistan. Atmosfera 2015, 28, 99–116.
42. Corder, G.W.; Foreman, D.I. Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach; John Wiley & Sons: New York, NY, USA, 2014.
43. Silver, E.; Pyke, E.; Peterson, R. Inventory Management and Production Planning and Scheduling; John Wiley & Sons: New York, NY, USA, 1998.
44. Kahn, K. In search of forecastability. In Proceedings of the Forecasting Summit Conference, International Institute of Forecasters, Orlando, FL, USA, 17 February 2006.
45. Hill, A.V.; Zhang, W.; Burch, G.F. Forecasting the forecastability quotient for inventory management. Int. J. Forecast. 2015, 31, 651–663.
46. Ballini, R.; Luna, I.; Soares, S.; Filho, D.S. Fuzzy inference systems for synthetic monthly inflow time series generation. In Proceedings of the 7th Conference of the European Society for Fuzzy Logic and Technology, Aix-les-Bains, France, 18–22 July 2011; Atlantis Press: Paris, France, 2011.
47. Koster, R.D.; Suarez, M.J.; Heiser, M. Variance and predictability of precipitation at seasonal-to-interannual time scales. J. Hydrometeorol. 2000, 1, 26–46.
48. Bahra, B. Implied risk-neutral probability density functions from option prices: A central bank perspective. In Forecasting Volatility in the Financial Markets; Knight, J., Satchell, S., Eds.; Butterworth-Heinemann: Oxford, UK, 2007; pp. 201–226.
49. Fatichi, S.; Ivanov, V.Y.; Caporali, E. Investigating interannual variability of precipitation at the global scale: Is there a connection with seasonality? J. Clim. 2012, 25, 5512–5523.
50. Xayasouk, T.; Lee, H.; Lee, G. Air pollution prediction using long short-term memory (LSTM) and deep autoencoder (DAE) models. Sustainability 2020, 12, 2570.
51. Oh, H.-J.; Kim, J. Monitoring air quality and estimation of personal exposure to particulate matter using an indoor model and artificial neural network. Sustainability 2020, 12, 3794.
52. Wang, P.; Feng, H.; Zhang, G.; Yu, D. A period-aware hybrid model applied for forecasting AQI time series. Sustainability 2020, 12, 4730.
53. Rahman, M.M.; Shafiullah, M.; Rahman, S.M.; Khondaker, A.N.; Amao, A.; Zahir, M.H. Soft computing applications in air quality modeling: Past, present, and future. Sustainability 2020, 12, 4045.
54. Chang, J.R.; Wei, L.Y.; Cheng, C.H. A hybrid ANFIS model based on AR and volatility for TAIEX forecasting. Appl. Soft Comput. 2011, 11, 1388–1395.
55. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
56. Jaeger, H. The Echo State Approach to Analyzing and Training Recurrent Neural Networks; Tech. Rep. GMD Report 148; German National Research Center for Information Technology: Bremen, Germany, 2001.
57. Haykin, S. Neural Networks and Learning Machines; Pearson: Toronto, ON, Canada, 2009.
58. Siqueira, H.; Luna, I. Performance comparison of feedforward neural networks applied to stream flow series forecasting. Math. Eng. Sci. Aerosp. 2019, 10, 41–53.
59. Araujo, N.A.; Belotti, J.T.; Antonini Alves, T.; Tadano, Y.S.; Siqueira, H. Ensemble method based on Artificial Neural Networks to estimate air pollution health risks. Environ. Model. Softw. 2020, 123, 104567.
60. Siqueira, H.V.; Boccato, L.; Attux, R.R.F.; Lyra Filho, C. Echo state networks and extreme learning machines: A comparative study on seasonal streamflow series prediction. Lect. Notes Comput. Sci. 2012, 7664, 491–500.
61. Siqueira, H.V.; Boccato, L.; Attux, R.R.F.; Lyra Filho, C. Echo state networks for seasonal streamflow series forecasting. Lect. Notes Comput. Sci. 2012, 7435, 226–236.
62. IBGE—Brazilian Institute of Geography and Statistics (in Portuguese: Instituto Brasileiro de Geografia e Estatística). Censo 2010. 2019. Available online: https://censo2010.ibge.gov.br/ (accessed on 22 August 2019).
63. Kachba, Y.; Chiroli, D.M.G.; Belotti, J.; Antonini Alves, T.; de Souza Tadano, Y.; Siqueira, H. Artificial neural networks to estimate the influence of vehicular emission variables on morbidity and mortality in the largest metropolis in South America. Sustainability 2020, 12, 2621.
64. Weather Spark. Mean meteorological conditions of Campinas, São Paulo, Ipojuca, Helsinki, and region (in Portuguese: Condições meteorológicas médias de Campinas, São Paulo, Ipojuca, Helsinki e região). 2020. Available online: https://pt.weatherspark.com (accessed on 22 August 2020).
65. Voukantsis, D.; Karatzas, K.; Kukkonen, J.; Raanen, T.; Karppinen, A.; Kolehmainen, M. Intercomparison of air quality data using principal component analysis, and forecasting of PM10 and PM2.5 concentrations using artificial neural networks, in Thessaloniki and Helsinki. Sci. Total Environ. 2011, 409, 1266–1276.
66. Statistics Finland. Population Projection 2019: Vital Statistics by Sex and Area, 2019–2040. 2020. Available online: http://pxnet2.stat.fi/PXWeb/pxweb/en/StatFin/StatFin__vrm__vaenn/statfin_vaenn_pxt_128w.px/ (accessed on 15 May 2020).
67. CETESB—Environmental Company of São Paulo State (in Portuguese: Companhia Ambiental do Estado de São Paulo). Qualidade do Ar. 2020. Available online: https://cetesb.sp.gov.br/ar/qualar/ (accessed on 15 May 2020).
68. APAC—Environmental Agency of Pernambuco (in Portuguese: Agência Pernambucana de Águas e Clima). Meteorologia. 2019. Available online: http://www.apac.pe.gov.br/meteorologia/ (accessed on 16 July 2019).
69. De Mattos Neto, P.S.G.; Silva, D.; Ferreira, T.; Cavalcanti, G.D.C. Market volatility modelling for short time window. Physica A 2011, 390, 3444–3453.
70. Rodrigues, A.L.J.; Silva, D.A.; de Mattos Neto, P.S.G.; Ferreira, T.A.E. An experimental study of fitness function and time series forecasting using artificial neural networks. In Proceedings of the Genetic and Evolutionary Computation Conference, Portland, OR, USA, 7–11 July 2010.
71. Santana, C.J., Jr.; Macedo, M.; Siqueira, H.; Gokhale, A.; Bastos-Filho, C.J.A. A novel binary artificial bee colony algorithm. Future Gener. Comput. Syst. 2019, 98, 180–196.
72. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182.
73. Siqueira, H.; Macedo, M.; Tadano, Y.S.; Antonini Alves, T.; Stevan, S.L., Jr.; Oliveira, D.S., Jr.; Marinho, M.H.N.; de Mattos Neto, P.S.G.; de Oliveira, J.F.L.; Luna, I.; et al. Selection of temporal lags for predicting riverflow series from hydroelectric plants using variable selection methods. Energies 2020, 13, 4236.
74. Box, G.E.P.; Jenkins, G.M.; Reinsel, G.C. Time Series Analysis, Forecasting and Control; John Wiley & Sons: New York, NY, USA, 2008.
75. Tadano, Y.S.; Antonini Alves, T.; Siqueira, H.V. Unorganized machines to predict hospital admissions for respiratory diseases. In Proceedings of the IEEE Latin American Congress on Computational Intelligence, Cartagena de Las Índias, Colombia, 2–4 November 2016.
76. Willmott, C.J.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82.
77. Girotto, S.B.F.T.; Simioni, F.J.; Tadano, Y.S.; Costa, V.J.; Alvarenga, R.A.F. Evaluation of characterization models for the photochemical smog impact category focused on the Brazilian reality. Rev. Lat. Am. Avaliação Ciclo Vida 2019, 3, e34263.
78. Potting, J.; Hertel, O.; Schöpp, W.; Bastrup-Birk, A. Spatial differentiation in the characterisation of photochemical ozone formation: The EDIP2003 methodology. Int. J. Life Cycle Assess. 2006, 11, 72–80.
79. United Nations. Sustainable Development Goals: Knowledge Platform. 2019. Available online: https://sustainabledevelopment.un.org/ (accessed on 25 August 2020).
80. Cabaneros, S.M.; Calautit, J.K.; Hughes, B.R. A review of artificial neural network models for ambient air pollution prediction. Environ. Model. Softw. 2019, 119, 285–304.
Figure 1. Proposed method—training phase.
Figure 2. Proposed method—test phase.
Figure 3. Proposed partitioning scheme.
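To make the partitioning scheme of Figure 3 concrete, the sketch below splits a daily concentration series into twelve contiguous calendar-month partitions, one per specialized predictor. It is a minimal illustration under the assumption that the data sit in a pandas Series indexed by date; it is not the authors' actual pipeline.

```python
import pandas as pd

def monthly_partitions(series: pd.Series) -> dict:
    """Group a daily PM series (with a DatetimeIndex) into 12 monthly partitions.

    Partition m collects every observation whose date falls in calendar month m,
    across all years; a specialized predictor is then trained on each partition.
    """
    return {m: series[series.index.month == m] for m in range(1, 13)}

# Usage sketch: partitions[1] holds all January samples, partitions[2] all
# February samples, and so on.
# partitions = monthly_partitions(pm_series)
```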
Figure 4. Location of Brazilian stations: Ipojuca (green spot), Campinas (yellow spot), and São Paulo (red spot). The map is from Google Maps (Map data ©2020 Google, INEGI; https://www.google.com/maps/place/Brazil/); the satellite images are from Google Earth Pro (Map data ©2020 Google; https://www.google.com/maps/@-23.2636702,-47.1095854,9.5z and https://www.google.com/maps/@-8.0624551,-34.9114682,11.92z).
Figure 5. Location of Finland stations at Helsinki: Kallio (blue spot) and Vallila (purple spot). The map is from Google Maps (Map data ©2020 Google; https://www.google.com/maps/place/Finland/); the satellite image is from Google Earth Pro (Map data ©2020 Google; https://www.google.com/maps/@60.1834096,24.8975655,12.83z).
Figure 6. Forecasting of the last 120 points of the test set for the Kallio PM10 concentration time series.
Figure 7. Forecasting of the last 120 points of the test set for the Kallio PM2.5 concentration time series.
Figure 8. Forecasting of the last 120 points of the test set for the Vallila PM10 concentration time series.
Figure 9. Forecasting of the last 120 points of the test set for the São Paulo PM10 concentration time series.
Figure 10. Forecasting of the last 120 points of the test set for the São Paulo PM2.5 concentration time series.
Figure 11. Forecasting of the last 120 points of the test set for the Campinas PM10 concentration time series.
Figure 12. Forecasting of the last 120 points of the test set for the Ipojuca PM10 concentration time series.
Table 1. Number of samples, time range, and data source for each studied station and pollutant.
Station | Number of Samples | Time Range | Data Source
Kallio–PM10 | 1090 | Jan 1st 2001 to Dec 31st 2003 | [66]
Kallio–PM2.5 | 1095 | Jan 1st 2001 to Dec 31st 2003 | [66]
Vallila–PM10 | 1092 | Jan 1st 2001 to Dec 31st 2003 | [66]
São Paulo–PM10 | 1095 | Jan 1st 2017 to Dec 31st 2019 | [67]
São Paulo–PM2.5 | 1095 | Jan 1st 2017 to Dec 31st 2019 | [67]
Campinas–PM10 | 731 | Jan 1st 2007 to Dec 31st 2008 | [67]
Ipojuca–PM10 | 632 | Jul 17th 2015 to Apr 9th 2017 | [68]
Table 2. Mean, standard deviation, maximum, and minimum values for each studied station.
Pollutant | Station | Mean | S. Deviation | Max | Min
PM10 [µg/m³] | Kallio | 16.7 | 9.91 | 80.0 | 2.9
PM10 [µg/m³] | Vallila | 20.3 | 12.69 | 138.0 | 3.1
PM10 [µg/m³] | São Paulo | 32.1 | 17.30 | 102.4 | 5.5
PM10 [µg/m³] | Campinas | 38.0 | 15.25 | 128.7 | 12.2
PM10 [µg/m³] | Ipojuca | 35.8 | 11.13 | 78.7 | 2.8
PM2.5 [µg/m³] | Kallio | 8.8 | 5.48 | 56.8 | 1.8
PM2.5 [µg/m³] | São Paulo | 19.7 | 11.37 | 64.1 | 3.4
Table 3. Coefficient of variation (CV) for particulate matter (PM) concentration time series.
Series | Partition | CV Values | Mean CV
Kallio PM10 | Without partition (whole series) | 0.60 | 0.60
Kallio PM10 | Annual partition | 0.55, 0.68, 0.54 | 0.59
Kallio PM10 | Monthly partition | 0.44, 0.69, 0.49, 0.53, 0.40, 0.36, 0.48, 0.57, 0.68, 0.56, 0.39, 0.54 | 0.51
Kallio PM2.5 | Without partition (whole series) | 0.62 | 0.62
Kallio PM2.5 | Annual partition | 0.54, 0.69, 0.60 | 0.61
Kallio PM2.5 | Monthly partition | 0.52, 0.64, 0.71, 0.48, 0.48, 0.37, 0.39, 0.74, 0.60, 0.57, 0.48, 0.80 | 0.57
Vallila PM10 | Without partition (whole series) | 0.63 | 0.63
Vallila PM10 | Annual partition | 0.52, 0.71, 0.58 | 0.60
Vallila PM10 | Monthly partition | 0.43, 0.65, 0.53, 0.60, 0.40, 0.35, 0.32, 0.59, 0.55, 0.71, 0.46, 0.57 | 0.51
São Paulo PM10 | Without partition (whole series) | 0.52 | 0.52
São Paulo PM10 | Annual partition | 0.54, 0.56, 0.51 | 0.53
São Paulo PM10 | Monthly partition | 0.33, 0.40, 0.37, 0.45, 0.46, 0.44, 0.46, 0.59, 0.50, 0.42, 0.36, 0.37 | 0.43
São Paulo PM2.5 | Without partition (whole series) | 0.57 | 0.57
São Paulo PM2.5 | Annual partition | 0.56, 0.60, 0.56 | 0.57
São Paulo PM2.5 | Monthly partition | 0.34, 0.38, 0.39, 0.45, 0.47, 0.48, 0.47, 0.59, 0.52, 0.42, 0.37, 0.35 | 0.44
Campinas PM10 | Without partition (whole series) | 0.41 | 0.41
Campinas PM10 | Annual partition | 0.41, 0.37 | 0.39
Campinas PM10 | Monthly partition | 0.28, 0.22, 0.30, 0.27, 0.35, 0.38, 0.35, 0.32, 0.44, 0.35, 0.24, 0.26 | 0.31
Ipojuca PM10 | Without partition (whole series) | 0.31 | 0.31
Ipojuca PM10 | Annual partition | 0.28, 0.32, 0.27 | 0.29
Ipojuca PM10 | Monthly partition | 0.32, 0.31, 0.31, 0.24, 0.48, 0.24, 0.35, 0.30, 0.23, 0.21, 0.18, 0.22 | 0.28
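The coefficients of variation in Table 3 follow the usual definition, CV = standard deviation divided by mean. A minimal sketch of how the whole-series, annual, and monthly values could be reproduced (assuming a pandas Series `pm` of daily concentrations indexed by date, which is an assumption about the data layout) is:

```python
import pandas as pd

def cv(x: pd.Series) -> float:
    """Coefficient of variation: standard deviation over mean."""
    return x.std() / x.mean()

def cv_table(pm: pd.Series) -> dict:
    """CV for the whole series, per annual partition, and per monthly partition."""
    return {
        "whole series": cv(pm),
        "annual": pm.groupby(pm.index.year).apply(cv),
        "monthly": pm.groupby(pm.index.month).apply(cv),
    }

# The "Mean CV" column of Table 3 is then, e.g., cv_table(pm)["monthly"].mean()
```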
Table 4. MSE for Kallio PM10. The best values by month are highlighted in bold.
Method | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLP PP | 33.79 | 54.96 | 53.72 | 27.25 | 38.33 | 10.82 | 29.72 | 7.69 | 5.20 | 54.47 | 8.24 | 15.19
RBF PP | 21.41 | 35.13 | 38.06 | 24.47 | 30.08 | 9.08 | 31.14 | 4.65 | 4.73 | 34.21 | 6.83 | 15.68
ELM PP | 29.42 | 79.28 | 59.10 | 31.86 | 39.42 | 15.09 | 46.66 | 14.10 | 9.09 | 42.57 | 7.58 | 20.39
ESN PP | 23.59 | 56.50 | 45.52 | 32.84 | 27.31 | 8.88 | 37.14 | 7.57 | 5.24 | 36.39 | 7.48 | 16.81
AR PP | 24.37 | 33.48 | 76.04 | 30.05 | 34.31 | 11.86 | 37.25 | 11.61 | 8.16 | 41.06 | 7.27 | 20.95
MLP TR | 36.37 | 75.82 | 53.34 | 25.55 | 44.51 | 13.80 | 35.55 | 10.85 | 10.26 | 52.37 | 11.11 | 25.56
RBF TR | 38.82 | 78.13 | 46.95 | 22.37 | 47.54 | 15.40 | 32.90 | 12.90 | 12.28 | 57.20 | 13.45 | 26.61
ELM TR | 31.62 | 74.93 | 39.17 | 16.79 | 38.90 | 14.18 | 35.60 | 9.09 | 7.50 | 52.72 | 9.94 | 22.00
ESN TR | 31.48 | 76.61 | 49.01 | 22.46 | 51.36 | 14.15 | 33.70 | 11.19 | 7.93 | 54.40 | 10.39 | 22.07
AR TR | 103.18 | 116.57 | 173.42 | 107.56 | 113.79 | 33.00 | 51.40 | 23.00 | 20.84 | 83.75 | 32.82 | 27.27
PERS | 34.87 | 57.26 | 78.32 | 32.15 | 44.92 | 16.43 | 36.32 | 15.54 | 16.74 | 72.91 | 15.24 | 32.72
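In Tables 4–10, PP and TR refer to the proposed (partition-based) and traditional (whole-series) schemes defined in Table A8, while PERS denotes what appears to be a persistence baseline, i.e., each day forecast by the previous day's observation. A one-line sketch of such a baseline, under that assumption about the convention, is:

```python
import numpy as np

def persistence_forecast(y: np.ndarray) -> np.ndarray:
    """Naive persistence baseline: y_hat[t] = y[t-1] for t = 1..n-1."""
    return y[:-1]  # aligns with the observations y[1:]
```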
Table 5. MSE for Kallio PM2.5. The best values by month are highlighted in bold.
Method | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLP PP | 6.41 | 13.51 | 29.91 | 12.90 | 7.03 | 1.76 | 14.24 | 1.32 | 2.55 | 32.71 | 0.99 | 14.01
RBF PP | 6.43 | 17.22 | 29.86 | 10.82 | 4.84 | 2.23 | 16.10 | 1.40 | 6.27 | 8.17 | 1.49 | 9.18
ELM PP | 5.06 | 25.50 | 27.43 | 18.22 | 6.95 | 3.63 | 12.08 | 3.41 | 7.69 | 33.36 | 1.94 | 16.64
ESN PP | 8.45 | 15.02 | 24.61 | 11.34 | 6.75 | 2.20 | 4.82 | 2.99 | 5.50 | 28.27 | 1.95 | 12.93
AR PP | 11.39 | 16.73 | 56.51 | 22.50 | 7.17 | 2.73 | 11.29 | 2.16 | 7.28 | 30.48 | 1.85 | 14.34
MLP TR | 9.35 | 36.07 | 43.18 | 20.21 | 7.58 | 4.09 | 20.30 | 2.58 | 9.59 | 37.35 | 2.12 | 20.92
RBF TR | 39.13 | 46.18 | 46.49 | 52.77 | 20.02 | 6.78 | 22.89 | 3.73 | 26.26 | 31.27 | 10.69 | 36.23
ELM TR | 4.95 | 31.75 | 42.73 | 17.92 | 7.33 | 3.56 | 12.72 | 2.23 | 8.06 | 34.14 | 2.15 | 18.79
ESN TR | 11.86 | 25.83 | 40.35 | 21.53 | 7.98 | 4.15 | 11.88 | 3.51 | 9.16 | 35.85 | 1.84 | 21.25
AR TR | 58.30 | 188.61 | 217.16 | 92.71 | 30.99 | 17.01 | 79.73 | 7.10 | 60.99 | 106.93 | 10.98 | 34.32
PERS | 16.40 | 22.77 | 67.26 | 29.09 | 9.08 | 4.76 | 16.15 | 3.02 | 14.46 | 44.13 | 2.47 | 33.33
Table 6. MSE for Vallila PM10. The best values by month are highlighted in bold.
Method | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLP PP | 42.06 | 76.13 | 75.64 | 30.58 | 27.42 | 23.16 | 17.79 | 6.40 | 74.74 | 73.58 | 16.43 | 22.69
RBF PP | 18.75 | 59.08 | 80.90 | 45.65 | 22.84 | 23.91 | 14.08 | 4.33 | 37.84 | 35.48 | 12.67 | 17.96
ELM PP | 47.77 | 86.75 | 180.87 | 68.37 | 20.55 | 25.88 | 21.18 | 11.39 | 75.77 | 75.67 | 7.85 | 27.25
ESN PP | 34.15 | 31.77 | 176.27 | 83.43 | 25.28 | 13.41 | 9.92 | 7.13 | 59.59 | 34.95 | 14.92 | 17.91
AR PP | 48.33 | 74.74 | 199.81 | 58.92 | 31.84 | 44.52 | 10.63 | 8.01 | 84.54 | 84.54 | 15.19 | 26.91
MLP TR | 60.01 | 107.66 | 260.43 | 38.51 | 34.04 | 40.02 | 26.00 | 7.54 | 75.85 | 75.85 | 9.72 | 21.69
RBF TR | 58.16 | 84.31 | 242.66 | 59.80 | 23.17 | 56.10 | 30.12 | 8.30 | 90.22 | 90.22 | 12.79 | 21.92
ELM TR | 59.38 | 102.80 | 228.67 | 42.17 | 36.35 | 36.13 | 24.36 | 8.16 | 84.25 | 84.25 | 9.53 | 22.17
ESN TR | 71.34 | 115.27 | 239.23 | 50.10 | 44.84 | 49.51 | 25.52 | 8.67 | 98.79 | 98.79 | 11.26 | 20.20
AR TR | 340.26 | 390.36 | 1562.82 | 223.77 | 170.67 | 176.45 | 126.24 | 29.43 | 158.63 | 158.63 | 36.32 | 43.24
PERS | 73.55 | 105.38 | 332.57 | 56.22 | 46.08 | 58.54 | 23.38 | 10.83 | 135.86 | 135.86 | 13.34 | 29.02
Table 7. MSE for São Paulo PM10. The best values by month are highlighted in bold.
Method | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLP PP | 83.18 | 25.43 | 35.11 | 40.66 | 88.37 | 86.17 | 96.45 | 75.08 | 226.37 | 19.76 | 55.82 | 18.78
RBF PP | 99.12 | 7.27 | 34.05 | 43.42 | 78.31 | 98.97 | 39.80 | 61.14 | 370.70 | 25.03 | 38.71 | 9.96
ELM PP | 88.20 | 16.73 | 34.62 | 62.27 | 107.28 | 93.21 | 101.08 | 135.57 | 241.65 | 27.67 | 51.69 | 16.74
ESN PP | 73.61 | 9.31 | 21.02 | 68.25 | 87.18 | 96.13 | 71.79 | 39.88 | 260.88 | 48.37 | 35.96 | 15.69
AR PP | 75.76 | 13.27 | 25.09 | 81.89 | 95.36 | 131.43 | 126.04 | 103.74 | 304.84 | 46.40 | 39.04 | 17.09
MLP TR | 115.94 | 27.97 | 31.68 | 139.21 | 125.65 | 136.33 | 231.53 | 170.03 | 284.37 | 120.30 | 100.27 | 17.80
RBF TR | 133.41 | 26.47 | 70.20 | 119.44 | 192.56 | 225.62 | 256.58 | 180.61 | 316.18 | 98.14 | 103.74 | 16.50
ELM TR | 124.15 | 28.85 | 32.10 | 135.68 | 118.77 | 125.79 | 207.64 | 151.44 | 266.64 | 110.73 | 92.64 | 14.38
ESN TR | 102.98 | 13.89 | 28.01 | 136.71 | 150.38 | 171.41 | 220.79 | 172.95 | 270.97 | 74.31 | 103.01 | 15.37
AR TR | 138.64 | 52.17 | 55.57 | 191.96 | 221.42 | 306.05 | 496.17 | 408.73 | 562.43 | 189.19 | 174.39 | 26.71
PERS | 131.44 | 32.44 | 60.26 | 148.14 | 215.47 | 183.56 | 231.51 | 127.88 | 435.96 | 138.17 | 119.08 | 21.86
Table 8. MSE for São Paulo PM2.5. The best values by month are highlighted in bold.
Method | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLP PP | 10.30 | 2.75 | 7.99 | 2.46 | 43.76 | 36.35 | 49.64 | 49.46 | 65.77 | 20.48 | 10.34 | 6.13
RBF PP | 27.55 | 3.16 | 10.34 | 8.82 | 34.71 | 21.27 | 46.34 | 47.50 | 96.53 | 13.56 | 17.09 | 4.80
ELM PP | 15.71 | 4.26 | 9.18 | 20.40 | 59.60 | 53.02 | 40.52 | 54.93 | 36.99 | 21.25 | 18.95 | 6.26
ESN PP | 26.91 | 2.58 | 10.56 | 18.69 | 30.11 | 36.65 | 34.22 | 39.47 | 67.48 | 16.16 | 15.20 | 7.35
AR PP | 26.65 | 2.82 | 12.17 | 18.55 | 45.33 | 52.59 | 58.11 | 61.25 | 100.75 | 19.41 | 16.53 | 6.99
MLP TR | 39.48 | 5.36 | 14.21 | 38.17 | 61.94 | 70.72 | 131.36 | 78.94 | 108.48 | 30.30 | 40.51 | 10.96
RBF TR | 39.14 | 6.52 | 19.48 | 31.39 | 83.84 | 55.69 | 107.52 | 67.57 | 100.25 | 38.46 | 46.33 | 9.52
ELM TR | 33.93 | 5.48 | 13.55 | 39.54 | 45.88 | 54.51 | 109.71 | 64.24 | 104.51 | 33.89 | 40.00 | 10.29
ESN TR | 27.69 | 5.54 | 13.17 | 40.31 | 59.88 | 90.64 | 116.58 | 76.05 | 95.36 | 35.81 | 41.80 | 11.45
AR TR | 33.36 | 6.75 | 19.42 | 58.86 | 89.82 | 97.71 | 291.64 | 153.85 | 241.77 | 74.66 | 79.37 | 16.97
PERS | 30.93 | 8.21 | 22.38 | 42.97 | 100.33 | 79.20 | 127.24 | 75.87 | 146.63 | 48.68 | 56.56 | 11.55
Table 9. MSE for Campinas PM10. The best values by month are highlighted in bold.
Method | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLP PP | 14.87 | 11.50 | 64.03 | 33.67 | 91.46 | 59.00 | 76.03 | 302.27 | 42.71 | 37.54 | 32.52 | 43.23
RBF PP | 7.84 | 9.42 | 31.84 | 28.78 | 104.30 | 74.89 | 44.87 | 276.74 | 94.13 | 33.39 | 32.20 | 67.83
ELM PP | 10.61 | 9.48 | 47.14 | 36.83 | 88.62 | 97.08 | 43.59 | 359.92 | 38.84 | 59.05 | 29.68 | 47.04
ESN PP | 16.19 | 14.89 | 43.80 | 20.99 | 46.39 | 107.17 | 38.97 | 238.39 | 39.71 | 40.45 | 17.74 | 36.03
AR PP | 14.82 | 12.04 | 79.41 | 21.58 | 71.48 | 118.86 | 56.20 | 384.12 | 79.95 | 60.59 | 27.02 | 45.84
MLP TR | 12.00 | 12.01 | 75.01 | 36.68 | 122.75 | 91.17 | 56.64 | 354.30 | 51.96 | 72.35 | 26.60 | 39.30
RBF TR | 13.47 | 16.47 | 94.06 | 48.51 | 265.75 | 71.91 | 68.07 | 320.24 | 63.64 | 82.74 | 28.48 | 41.46
ELM TR | 13.85 | 13.33 | 75.25 | 36.34 | 105.96 | 80.75 | 51.02 | 212.47 | 48.42 | 63.18 | 27.62 | 38.44
ESN TR | 8.75 | 17.38 | 57.54 | 35.75 | 78.01 | 49.80 | 52.17 | 380.44 | 39.54 | 63.29 | 21.21 | 48.57
AR TR | 47.70 | 30.00 | 200.80 | 188.40 | 340.38 | 219.91 | 127.13 | 993.01 | 119.35 | 269.66 | 85.60 | 103.41
PERS | 14.13 | 20.62 | 116.89 | 42.52 | 96.52 | 85.77 | 76.56 | 578.28 | 74.19 | 105.49 | 40.91 | 60.30
Table 10. MSE for Ipojuca PM10. The best values by month are highlighted in bold.
Method | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
MLP PP | 41.26 | 81.25 | 19.32 | 20.55 | 176.04 | 72.43 | 53.04 | 21.50 | 19.38 | 43.25 | 20.11 | 6.56
RBF PP | 97.72 | 38.99 | 22.46 | 21.20 | 137.01 | 62.80 | 33.18 | 79.39 | 9.88 | 53.88 | 31.14 | 11.34
ELM PP | 102.74 | 33.55 | 33.03 | 24.10 | 176.59 | 58.18 | 34.86 | 49.75 | 21.46 | 44.68 | 11.47 | 8.12
ESN PP | 23.09 | 16.03 | 26.33 | 16.35 | 108.81 | 46.49 | 40.17 | 73.25 | 11.57 | 30.73 | 19.35 | 7.63
AR PP | 21.98 | 20.80 | 26.80 | 15.74 | 87.27 | 48.01 | 45.54 | 127.38 | 20.42 | 52.57 | 23.03 | 3.80
MLP TR | 34.84 | 44.61 | 38.83 | 15.46 | 147.92 | 100.06 | 57.74 | 89.10 | 13.78 | 72.92 | 14.47 | 8.24
RBF TR | 31.00 | 147.25 | 37.94 | 19.70 | 126.84 | 89.11 | 63.83 | 173.77 | 11.00 | 75.68 | 15.58 | 11.54
ELM TR | 33.24 | 61.78 | 32.99 | 14.70 | 126.62 | 104.05 | 59.12 | 41.04 | 15.78 | 78.06 | 13.62 | 8.44
ESN TR | 23.10 | 43.11 | 31.63 | 9.11 | 121.14 | 86.48 | 39.17 | 135.14 | 17.35 | 62.38 | 13.20 | 8.51
AR TR | 45.91 | 46.49 | 79.73 | 61.37 | 563.32 | 238.31 | 102.43 | 248.19 | 34.55 | 99.80 | 29.17 | 21.24
PERS | 37.05 | 49.25 | 46.55 | 19.90 | 227.81 | 141.80 | 66.80 | 213.59 | 18.60 | 84.85 | 16.96 | 16.98
Table 11. Ranking of the best predictors by month regarding the mean squared error (MSE) in the test sets.
Method | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Total
MLP PP | 1 | 1 | 3 | 3 | - | 1 | 1 | 2 | 2 | 1 | 2 | 1 | 18
RBF PP | 3 | 2 | 2 | 1 | 2 | 1 | 2 | 2 | 3 | 3 | 2 | 4 | 27
ELM PP | - | - | - | - | 1 | - | - | - | 2 | - | 2 | - | 5
ESN PP | 1 | 3 | 2 | 1 | 3 | 4 | 4 | 2 | - | 3 | 1 | 1 | 25
AR PP | 1 | 1 | - | - | 1 | - | - | - | - | - | - | 1 | 4
MLP TR | - | - | - | - | - | - | - | - | - | - | - | - | -
RBF TR | - | - | - | - | - | - | - | - | - | - | - | - | -
ELM TR | 1 | - | - | 1 | - | - | - | 1 | - | - | - | - | 3
ESN TR | - | - | - | 1 | - | 1 | - | - | - | - | - | - | 2
AR TR | - | - | - | - | - | - | - | - | - | - | - | - | -
PERS | - | - | - | - | - | - | - | - | - | - | - | - | -
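Table 11 can be read as a win count: for each month, a method scores one point per time series (out of the seven studied) on which it attains the lowest MSE, so every month column sums to seven. A sketch of how such a ranking could be assembled, assuming each of Tables 4–10 is available as a pandas DataFrame with methods as rows and the twelve months as columns (an assumption about the data layout, not the authors' code), is:

```python
import pandas as pd

def best_predictor_counts(mse_tables: list[pd.DataFrame]) -> pd.DataFrame:
    """Count, per month, how often each method attains the lowest MSE.

    Each DataFrame in mse_tables has methods as its index and the 12 months
    as its columns (one table per time series, as in Tables 4-10).
    """
    counts = None
    for table in mse_tables:
        wins = table.eq(table.min(axis=0), axis=1).astype(int)  # 1 where best in that month
        counts = wins if counts is None else counts.add(wins)
    counts["Total"] = counts.sum(axis=1)
    return counts
```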
