Article

A Short-Term Air Pollutant Concentration Forecasting Method Based on a Hybrid Neural Network and Metaheuristic Optimization Algorithms

1 Department of Energy Management and Optimization, Institute of Science and High Technology and Environmental Sciences, Graduate University of Advanced Technology, Kerman 7631885356, Iran
2 Department of Computer Engineering and Information Technology, Islamic Azad University of Kerman, Kerman 7635131167, Iran
3 Department of Astronautical, Electrical and Energy Engineering (DIAEE), Sapienza University of Rome, 00184 Rome, Italy
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(11), 4829; https://doi.org/10.3390/su16114829
Submission received: 9 March 2024 / Revised: 24 May 2024 / Accepted: 31 May 2024 / Published: 5 June 2024

Abstract

In the contemporary era, global air quality has been adversely affected by technological progress, urban development, population expansion, and the proliferation of industries and power plants. Recognizing the urgency of addressing the consequences of air pollution, predicting the concentration levels of air pollutants has become crucial. This study focuses on the short-term prediction of nitrogen dioxide (NO2) and sulfur dioxide (SO2), prominent air pollutants emitted by the Kerman Combined Cycle Power Plant, from May to September 2019. The proposed method combines a new two-step feature selection (FS) process, a hybrid neural network (HNN), and the Coot optimization algorithm (COOT). The combination of FS and COOT selects the most relevant input features while eliminating redundant ones, leading to improved prediction accuracy, and training with the HNN enhances the accuracy further. To assess the model’s performance, two datasets containing real data from two different parts of the Combined Cycle Power Plant in Kerman, Iran, from 1 May 2019 to 30 September 2019 (datasets A and B) are utilized. Mean square error (MSE), mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) were employed to measure the accuracy of FS-HNN-COOT. Experimental results showed that, for NO2, the MSE of FS-HNN-COOT ranged from 0.0002 to 0.0054, MAE from 0.016 to 0.0492, RMSE from 0.0142 to 0.0736, and MAPE from 4.21% to 8.69%. For SO2, MSE, MAE, RMSE, and MAPE ranged from 0.0001 to 0.0137, 0.0108 to 0.0908, 0.0137 to 0.1173, and 9.03% to 15.93%, respectively.

1. Introduction

It is evident that air pollution has become one of the most critical challenges faced by modern societies. Air pollutants originate from a variety of sources, both man-made and natural. Naturally occurring causes of pollutant emissions into the atmosphere include wildfires and volcanic activity; however, human activities, such as burning fossil fuels and industrial processes, contribute significantly to overall pollutant emissions [1]. Another substantial man-made contributor is the ever-increasing demand for electricity. According to reports from the International Energy Agency, global electricity generation in 2019 had surged by 129% compared to 1990, reaching 27,000 terawatt-hours, and fossil fuels accounted for 62.8% of the world’s electricity production in the same year; a significant portion of global electricity demand is therefore met by fossil fuel power plants. These power plants burn coal, oil, and natural gas to generate electricity, releasing significant amounts of hazardous gases, including ozone (O3), carbon dioxide (CO2), carbon monoxide (CO), nitrogen oxides (NOx), sulfur oxides (SOx), and hydrocarbons, into the atmosphere. These gases can have adverse effects on both human health and the environment. Addressing air pollution is a complex and ongoing process that requires collaboration among all members of society and governments. Efficient pollution forecasting methods, however, make it possible to issue advance warnings about air quality to the public, authorities, and decision makers.
There is a wide variety of approaches to predicting air pollutant concentrations with data-driven methods. These methods can be broadly divided into two groups: statistical and artificial intelligence (AI)-based methods. Statistical methods rely on historical data to predict a future event; the most frequently used are the Autoregressive Moving Average (ARMA) and Autoregressive Integrated Moving Average (ARIMA). While statistical methods can capture linear features in time series data, they may not adequately handle non-linear characteristics. AI-based approaches use past experiences, observations, and patterns to predict future values. Examples include artificial neural networks (ANNs), Extreme Learning Machine (ELM), Multi-Layer Perceptron (MLP), support vector machine (SVM), long short-term memory (LSTM), bidirectional LSTM (BiLSTM), recurrent neural networks (RNNs), generative adversarial networks (GANs), convolutional neural networks (CNNs), and gated recurrent unit (GRU). Unlike statistical methods, AI-based methods capture non-linear features well; however, they are sensitive to the values of input and learning parameters, risk falling into local optima, and are computationally complex [2,3]. To overcome these limitations, researchers have developed hybrid models, which combine data decomposition, feature selection, optimization algorithms, and learning approaches to predict future values accurately. Decomposition methods break a sequence down into multiple sub-sequences and reduce noise. Feature selection methods choose the most effective input features and can enhance the accuracy of the prediction model significantly. Since decomposition, feature selection, and learning approaches have many fine-tuning parameters, using optimization algorithms to find optimal values for these parameters improves prediction accuracy and reduces training time.
In recent years, many studies have predicted air pollution concentrations using the above-mentioned methods. For example, the authors in [4] employed a hybrid model including a data decomposition technique, a multi-objective optimization algorithm, and ELM to predict air pollution. Gu et al. [5] applied a hybrid prediction model combining Nonlinear Auto-Regressive Moving Average with Exogenous Input and neural networks to predict particulate matter 2.5 (PM2.5). Researchers in [6] developed an air pollution prediction model including two decomposition methods, an optimization algorithm, and BiLSTM neural networks. They first decomposed the air pollutant sequence with the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) method to obtain sub-series, then used variational mode decomposition (VMD) as a secondary decomposition step for further denoising; the grey wolf optimizer was applied to find the optimal VMD parameters, and BiLSTM neural networks were employed to train the model. In [7], the authors introduced a model for PM2.5 prediction based on a GRU encoder–decoder combination and demonstrated that it outperformed many benchmark models. Asaei-Moamam et al. [8] proposed an air quality prediction framework including a GAN network. Tao et al. [9] introduced a model for air pollution prediction in which partial correlation and simulated annealing were applied for feature selection, a combination of extremely randomized trees (ERT) and LSTM was used for learning, and the LSTM hyperparameters were optimized with Bayesian optimization. In another study [10], the authors used LSTM for the learning process of air pollution prediction and a genetic algorithm (GA) to fine-tune the LSTM hyperparameters. Researchers in [11] used SVM to predict the air pollution index. The authors in [12] developed an air pollution model using Pearson correlation for feature selection and BiLSTM with an attention mechanism for learning. Bekkar et al. [13] developed a novel model to predict PM2.5 that used Pearson correlation for feature selection and a combination of CNN and LSTM for learning. The authors in [14] employed a combination of LSTM and a deep autoencoder to predict PM concentrations. In another study [15], the authors used a combination of linear regression, ANN, and LSTM to forecast PM2.5 concentrations. To predict PM10 and PM2.5 concentrations, the authors in [16] employed a combination of SVM, geographically weighted regression, ANN, and an auto-regressive nonlinear neural network with external input. Mihirani et al. [17] proposed a model to predict PM2.5, SO2, NO2, and CO using linear regression, lasso regression, random forest regression, and K-nearest neighbor regression; their experiments showed that random forest regression outperformed the other models. Srivastava et al. [18] proposed using SVM, a random forest classifier, logistic regression, linear regression, and random forest regression to forecast air pollution; their results showed that random forest regression and classification outperformed the other models. In another study [19], the authors introduced a novel air pollution prediction method based on Spiking Neural Networks.
The authors in [20] compared different deep learning models (LSTM, Bi-LSTM, Bi-RNN) and a statistical method (Kernel Ridge Regression) for air quality index prediction; their findings showed that the Bi-RNN model significantly outperformed all other models. Considering the difficulty of air pollution monitoring in megacities, Rabie et al. [21] developed a hybrid forecasting model combining CNN and BiLSTM neural networks. Ozone pollution is highlighted in [22] as a major concern contributing to climate warming and harming crop productivity. In response, the researchers employed a time series forecasting approach to analyze and predict future ozone levels and introduced a new method, the time selection layer, in deep learning models to improve feature selection, enhancing prediction accuracy, model performance, and interpretability.
This study develops an air pollution prediction model based on a two-step feature selection approach, an optimization algorithm, and neural networks. For this purpose, we used real-world air pollutant data collected at the Kerman Combined Cycle Power Plant (Kerman, Iran) from May to September 2019. The main contributions and novelties of this study are the following:
  • This paper employs a two-step feature selection model (FS) to carefully choose the most effective input variables, recognizing their crucial role in enhancing the forecasting model’s performance.
  • To optimize the two-step feature selection process, this paper uses the COOT optimization algorithm.
  • A novel forecasting model (FS-HNN-COOT) is introduced for the prediction of NO2 and SO2 emissions from the Combined Cycle Power Plant. The COOT optimization algorithm is also employed to fine-tune the hyperparameters of the HNN.
  • The impact of air pollution from manufacturing industry production is investigated across various months, utilizing two datasets. The effectiveness of the analysis is validated using real-world datasets.
The remainder of this paper is organized as follows. Section 2 describes the case study, and Section 3 presents the methodology. The simulation and discussion of results are presented in Section 4. Finally, Section 5 and Section 6 present the study limitations and the conclusions.

2. Case Study

The Kerman Combined Cycle Power Plant, located at the third kilometer of Baghin road and the twentieth kilometer of the Kerman-Rafsanjan highway, has a capacity of 1912 MW. It comprises eight gas units of 159 MW each and four steam units of 160 MW each. The power plant occupies 120 hectares of land, of which about 60 hectares are green space. Its energy source is about 70% natural gas and 30% diesel fuel. To generate one megawatt-hour (MWh) of electricity, the plant uses approximately 335 cubic meters of natural gas and 335 L of diesel fuel (with a ±10% tolerance).

Air Pollution Data

To assess the proposed method, we collected meteorological data, including wind speed, air temperature, and air pollutant concentrations (specifically nitrogen dioxide (NO2) and sulfur dioxide (SO2)) from the Kerman Combined Cycle Power Plant in Iran. The data spanned from 1 May 2019 to 30 September 2019. Notably, we recorded SO2 and NO2 levels at 3 h intervals. Our data collection involved two sets: Set A and Set B. Table 1 and Table 2 present statistical information on air pollutants for both sets, including average, minimum, maximum, and standard deviation values by month.

3. Methodology

3.1. Normalization

Normalization refers to a statistical method in which values measured on different scales are adjusted to a common scale. For example, in this survey, the scales of the meteorological data, such as air temperature, differ from those of the air pollutant data. Additionally, air temperature values are not directly comparable to air pollutant values. Consequently, in this survey, all input values are rescaled using the feature scaling method to fall within the range of [0, 1]. Equation (1) describes normalization within the range of [0, 1]; in this equation, Xi is an original value, X′i is the normalized value, Xmax is the maximum value, and Xmin is the minimum value of original values:
$$X'_i = \frac{X_i - X_{min}}{X_{max} - X_{min}} \quad (1)$$
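As a concrete illustration, Equation (1) amounts to min-max scaling; a minimal NumPy sketch (the function name and sample values are illustrative, not part of the original implementation) is:

```python
import numpy as np

def min_max_scale(x):
    """Rescale a 1-D series to [0, 1] as in Equation (1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# e.g., air temperature readings on their own scale
temperature = np.array([18.0, 24.5, 31.0, 27.2])
print(min_max_scale(temperature))  # [0.     0.5    1.     0.7077]
```

In practice, the minimum and maximum computed on the training period should also be applied to the test period so that no information leaks from the test data.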

3.2. Data Preparation

Data preparation refers to the process of cleaning and transforming the raw data. Converting a time series into a supervised learning model involves transforming the sequential data into a tabular format suitable for standard machine learning algorithms. Time series data consist of observations collected at regular intervals over time. Each observation has a timestamp and a corresponding value. In this study, we have four time series, including wind speed (W), air temperature (T), NO2 concentration (N), and SO2 concentration (S) time series. Therefore, we constructed four initial matrices. Then, these four matrices were combined to build the input matrix (known as the feature matrix). Target variables are the next time point’s value of NO2 or SO2. Shifting the target variables by the desired prediction horizon (BS) results in the generation of the target vector. The columns of the target vector and input matrix represent the number of training samples (TS). We used Equations (2) and (3) to build the initial input matrix and target vector.
$$\mathrm{InputMatrix}_{BS \times TS} = \begin{bmatrix} IV_{1} \\ \vdots \\ IV_{i} \\ IV_{i+1} \\ \vdots \\ IV_{2i} \\ IV_{2i+1} \\ \vdots \\ IV_{3i} \\ IV_{3i+1} \\ \vdots \\ IV_{4i} \end{bmatrix} = \begin{bmatrix} W_{h-j} & \cdots & W_{h-2} & W_{h-1} \\ \vdots & & \vdots & \vdots \\ W_{h-j+i-1} & \cdots & W_{h+i-3} & W_{h+i-2} \\ T_{h-j} & \cdots & T_{h-2} & T_{h-1} \\ \vdots & & \vdots & \vdots \\ T_{h-j+i-1} & \cdots & T_{h+i-3} & T_{h+i-2} \\ N_{h-j} & \cdots & N_{h-2} & N_{h-1} \\ \vdots & & \vdots & \vdots \\ N_{h-j+i-1} & \cdots & N_{h+i-3} & N_{h+i-2} \\ S_{h-j} & \cdots & S_{h-2} & S_{h-1} \\ \vdots & & \vdots & \vdots \\ S_{h-j+i-1} & \cdots & S_{h+i-3} & S_{h+i-2} \end{bmatrix} \quad (2)$$
$$\mathrm{Target}_{1 \times j} = y = \begin{bmatrix} x_{h-j+1} & \cdots & x_{h-2} & x_{h-1} & x_{h} \end{bmatrix} \quad (3)$$
In the next step, the input matrix and target vector are divided into two subsets, including training and test sets. A total of 80% of the data were allocated to training and the remaining 20% to testing.
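To make the construction concrete, the sketch below turns the four series into a lagged feature matrix and target vector and applies the 80/20 split described above; the lag length and the random stand-in data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def to_supervised(series_list, n_lags):
    """Stack n_lags lagged values of each series into a feature matrix.

    series_list: equal-length 1-D arrays (here W, T, N, S).
    Returns X with one row per training sample and n_lags columns per series.
    """
    n_samples = len(series_list[0]) - n_lags
    return np.column_stack([
        np.column_stack([s[k:k + n_samples] for k in range(n_lags)])
        for s in series_list
    ])

# stand-in data for wind speed, temperature, NO2, and SO2
W, T, N, S = (np.random.rand(100) for _ in range(4))

n_lags = 8                               # illustrative window length
X = to_supervised([W, T, N, S], n_lags)
y = N[n_lags:]                           # next time point's NO2 as the target

split = int(0.8 * len(y))                # 80% training / 20% test, as in the text
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```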

3.3. Two-Step Feature Selection Model

In the process of developing a predictive model, feature selection refers to the procedure of applying some algorithms to reduce the dimensionality of data. Applying feature selection methods results in the elimination of irrelevant, redundant, and inconsistent input features [23]. Furthermore, feature selection methods improve the performance of the model, reduce computational complexity and training time, diminish required storage space, and build a model with generalizability. In this section, a two-step feature selection approach based on mutual information (MI) is presented (Figure 1). In the first step, to remove irrelevant features, MI measures the mutual dependence between the target vector and each feature (x).
  • The removal of irrelevant features
Take, for instance, Stotal = {x1, x2, x3, …}, a set of input features, and let y be the target vector. MI is calculated by Equation (4), where P(x_i, y_j) represents the joint probability distribution of the input variable (x_i) and the target variable (y_j), and P(x) represents the probability distribution of the random variable x.
$$MI(x, y) = \sum_{i=1}^{n} \sum_{j=1}^{m} P(x_i, y_j) \log_2 \frac{P(x_i, y_j)}{P(x_i)\,P(y_j)} \quad (4)$$
The higher the MI (x, y) value, the higher the correlation between the input variable (xi) and target vector. Furthermore, Equation (5) states that if the correlation value between an input variable and target vector is equal to or higher than TH1, the input variable is relevant to the target vector and should be selected as one of the input variables of the prediction model.
$$MI(x, y) \geq TH_1 \quad (5)$$
In the second step, to eliminate redundant features, MI measures the mutual dependence between every two variables of the initial matrix.
  • The removal of redundant features
After the removal of irrelevant features from Stotal, redundant features must be eliminated. To this end, take S1 as a set of input variables completely relevant to the target vector, more specifically, S1 ⊂ Stotal. To find redundant features, we used Equations (6) and (7). In these equations, xi and xj are two selected inputs.
$$RE(x_i, x_j) = IG(x_i; x_j; y) \quad (6)$$
$$IG(x_i; x_j; y) = MI(\{x_i, x_j\}; y) - MI(x_i; y) - MI(x_j; y) \quad (7)$$
After calculating IG, we used Equation (8) in order to find redundant features.
$$IG(x_i; x_j; y) \geq TH_2 \quad (8)$$
According to Equation (8), two input variables x_i and x_j that possess an IG value equal to or higher than TH2 carry the same information, so one of them should be eliminated. In this study, we calculated TH1 and TH2 using optimization methods, which are described in the following sections. After the removal of irrelevant and redundant features, the final set S2, S2 ⊂ S1, forms the input matrix of the neural networks. Figure 1 represents the process of two-step feature selection.
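A minimal sketch of the two-step procedure, assuming a histogram-based MI estimate and treating a binned feature pair as the joint term in Equation (7), might look as follows; the bin count and helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mutual_info(a, b, bins=10):
    """Histogram-based estimate of MI(a; b) in bits, per Equation (4)."""
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal P(x)
    py = pxy.sum(axis=0, keepdims=True)      # marginal P(y)
    nz = pxy > 0                             # skip empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())

def two_step_fs(X, y, th1, th2, bins=10):
    # Step 1: keep features whose relevance MI(x, y) meets TH1 (Equation (5))
    relevant = [i for i in range(X.shape[1])
                if mutual_info(X[:, i], y, bins) >= th1]
    # Step 2: apply Equations (6)-(8); the joint term MI({xi, xj}; y) is
    # estimated by encoding the binned feature pair as a single variable
    selected = list(relevant)
    for a in relevant:
        for b in relevant:
            if a < b and a in selected and b in selected:
                xa = np.digitize(X[:, a], np.histogram_bin_edges(X[:, a], bins))
                xb = np.digitize(X[:, b], np.histogram_bin_edges(X[:, b], bins))
                joint = xa * (bins + 2) + xb          # one code per (xa, xb) pair
                ig = (mutual_info(joint, y, bins)
                      - mutual_info(X[:, a], y, bins)
                      - mutual_info(X[:, b], y, bins))
                if ig >= th2:                         # redundant pair: drop one
                    selected.remove(b)
    return selected
```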

3.4. Learning Models

Air pollution data are nonlinear and time-variant. Because of these characteristics, a simple neural network cannot create a high-performance prediction model; compared with a standard neural network, a hybrid model achieves better performance and requires fewer training examples. In this study, we employed a learning model comprising three MLP neural networks trained using the Levenberg–Marquardt (LM) and Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithms. LM is among the fastest methods for training medium-sized neural networks, and BFGS is a second-order optimization algorithm belonging to the class of Quasi-Newton methods.
Weights are the learnable parameters of a neural network. A neural network is initialized with a set of random weights, which are optimized during training; in the proposed model they are then handed over, so that each stage uses the incoming weights to continue the training process. The proposed model consists of three MLPs. The first and third MLPs are trained with LM, and the second MLP is trained with BFGS. After the first MLP is trained, its final weights serve as the initial weights of the second MLP; the second MLP refines these weights and transfers them to the third MLP, which makes the final prediction.
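The weight hand-off can be sketched as follows. PyTorch offers no built-in Levenberg–Marquardt optimizer, so this illustration substitutes Adam for the two LM stages and uses torch.optim.LBFGS (a BFGS-family quasi-Newton method) for the middle stage; the layer sizes, learning rates, and toy data are our assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

def make_mlp(n_in, n_hidden):
    return nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh(),
                         nn.Linear(n_hidden, 1))

def train_first_order(model, X, y, epochs=300, lr=1e-2):
    """Stand-in for the LM stages (Adam substituted for LM)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()

def train_lbfgs(model, X, y, steps=30):
    """Middle stage: BFGS-family quasi-Newton training."""
    opt = torch.optim.LBFGS(model.parameters())
    def closure():
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        return loss
    for _ in range(steps):
        opt.step(closure)

# toy data standing in for the selected features and targets
X = torch.randn(200, 8)
y = torch.randn(200, 1)

mlp1 = make_mlp(8, 16)
train_first_order(mlp1, X, y)             # stage 1
mlp2 = make_mlp(8, 16)
mlp2.load_state_dict(mlp1.state_dict())   # hand final weights to stage 2
train_lbfgs(mlp2, X, y)                   # stage 2 (quasi-Newton)
mlp3 = make_mlp(8, 16)
mlp3.load_state_dict(mlp2.state_dict())   # hand weights to stage 3
train_first_order(mlp3, X, y)             # stage 3 produces the final predictor
y_pred = mlp3(X)
```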
The most common error functions for regression problems are mean square error (MSE, Equation (9)), mean absolute error (MAE, Equation (10)), root mean square error (RMSE, Equation (11)), and mean absolute percent error (MAPE, Equation (12)).
$$MSE = \frac{1}{N} \sum_{i=1}^{N} \left( x_i^{ACT} - x_i^{FOR} \right)^2 \quad (9)$$
$$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| x_i^{ACT} - x_i^{FOR} \right| \quad (10)$$
$$RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( x_i^{ACT} - x_i^{FOR} \right)^2} \quad (11)$$
$$MAPE = \frac{100}{N} \sum_{i=1}^{N} \left| \frac{x_i^{ACT} - x_i^{FOR}}{x_i^{ACT}} \right| \quad (12)$$
In these equations, $x_i^{FOR}$ is the ith predicted value and $x_i^{ACT}$ is the ith actual value.
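These four measures translate directly into code; a minimal NumPy version (with illustrative names) is:

```python
import numpy as np

def regression_errors(actual, forecast):
    """MSE, MAE, RMSE, and MAPE per Equations (9)-(12)."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    err = actual - forecast
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(err / actual))  # actual values must be nonzero
    return mse, mae, rmse, mape
```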

3.5. Optimization Algorithms

Optimization is the problem of finding the best solution among all possible solutions. Single-objective problems are an integral part of research and practical applications, but most real-world problems are multi-objective. In single-objective optimization there is just one objective function to optimize, whereas multi-objective problems involve more than one objective function to be optimized.

3.5.1. Non-Dominated Sorting Genetic Algorithm-II (NSGA-II)

Non-dominated Sorting Genetic Algorithm II (NSGA-II), proposed by Deb et al. in 2002, is a multi-objective optimization algorithm designed to address limitations of the earlier NSGA algorithm [24]. NSGA faced challenges like high computational cost and difficulty in setting the optimal value for the sharing parameter. NSGA-II addresses these issues by employing a fast non-dominated sorting approach and incorporating elitism, resulting in lower computational complexity. Additionally, NSGA-II excels at finding well-distributed solutions (superior spread) and achieving convergence near the true Pareto-optimal front. We will now delve into the core concepts of NSGA-II.
  • Dominance concept
Solution A dominates solution B if A is not worse than B in all objectives and A is superior to B in at least one objective. Because the dominance concept makes it possible to compare solutions with multiple objectives, it is used in multi-objective optimization to find non-dominated solutions.
  • Crowding distance
To assess the density of solutions around a specific solution within the population, the mean distance between the two points on either side of this solution is computed along each objective. This measure approximates the perimeter of the cuboid formed by taking the nearest neighbors as vertices and is referred to as the crowding distance. For the ith solution within its front, the crowding distance is the average side length of this cuboid and is calculated with Equations (13)–(15).
$$d_i^1 = \frac{f_1(X_{i+1}) - f_1(X_{i-1})}{f_1^{max} - f_1^{min}} \quad (13)$$
$$d_i^2 = \frac{f_2(X_{i+1}) - f_2(X_{i-1})}{f_2^{max} - f_2^{min}} \quad (14)$$
$$d_i = d_i^1 + d_i^2 \quad (15)$$
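A compact implementation of Equations (13)–(15) for two objectives is sketched below; assigning infinite distance to boundary solutions follows the usual NSGA-II convention from [24], and the helper name is ours.

```python
import numpy as np

def crowding_distance(front):
    """front: (n, 2) array of objective values for one non-dominated front."""
    n, m = front.shape
    dist = np.zeros(n)
    for k in range(m):                            # one pass per objective f_k
        order = np.argsort(front[:, k])
        f = front[order, k]
        dist[order[0]] = dist[order[-1]] = np.inf # boundary solutions kept
        span = f[-1] - f[0] or 1.0                # f_k^max - f_k^min
        dist[order[1:-1]] += (f[2:] - f[:-2]) / span
    return dist
```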
  • Crowded-Comparison Operator
The crowded-comparison operator (≺n) directs the selection process at different algorithmic stages, aiming for a uniformly distributed Pareto-optimal front. Each individual in the population has two attributes: nondomination rank (i_rank) and crowding distance (i_distance). A partial order (≺n) is then established, as described in Equation (16). When two solutions have distinct nondomination ranks, preference is given to the solution with the better rank; when both belong to the same front, priority is given to the solution situated in the less crowded region.
$$i \prec_n j \quad \text{if} \quad (i_{rank} < j_{rank}) \;\; \text{or} \;\; \left( i_{rank} = j_{rank} \;\text{and}\; i_{distance} > j_{distance} \right) \quad (16)$$
  • Main Loop
In each generation t, the offspring population Qt is first created from the parent population Pt using the genetic operators. The two populations are then combined into a new population Rt of size 2N. Rt is sorted into non-dominated fronts, and the next population is filled front by front, starting with the first non-dominated front, then the second, and so on. Because the overall size of Rt is 2N, not all fronts can be accommodated in the N available slots. To reduce Rt from 2N to the parent population size N, the crowding distance is calculated for the members of the admissible fronts, which are then sorted with the crowded-comparison operator, and the first N individuals are selected to form the parent population Pt+1. Pt+1 then undergoes selection, crossover, and mutation to create a new offspring population Qt+1 of size N. This cycle continues until the stop conditions are met. Figure 2 shows the main loop of the NSGA-II algorithm.
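The survivor-selection step of this loop can be sketched as follows, reusing crowding_distance from the previous sketch; the simple dominance-based sort below is an illustrative stand-in for the fast non-dominated sort of [24].

```python
import numpy as np

def dominates(a, b):
    """a dominates b: no worse in all objectives, better in at least one."""
    return np.all(a <= b) and np.any(a < b)

def nondominated_sort(F):
    """Split objective matrix F (n, 2) into fronts of row indices."""
    remaining = set(range(len(F)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

def survivor_selection(pop, F, N):
    """Reduce the combined population R_t (size 2N) to the next parents P_{t+1}."""
    chosen = []
    for front in nondominated_sort(F):
        if len(chosen) + len(front) <= N:
            chosen.extend(front)                  # whole front fits
        else:
            d = crowding_distance(F[front])
            order = np.argsort(-d)                # least crowded first
            chosen.extend(front[i] for i in order[: N - len(chosen)])
            break
    return pop[chosen], F[chosen]

# per generation: build Q_t from P_t via selection/crossover/mutation,
# then apply survivor_selection to the concatenation of P_t and Q_t.
```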

3.5.2. Coot Optimization Algorithm

The coot optimization algorithm is a population-based metaheuristic. The coot is a type of water bird, and the collective behavior of coots foraging for food is the source of inspiration for the algorithm [25].
Like other population-based optimization algorithms, it starts from a randomly generated initial population, which is divided into two groups: group leaders and ordinary coots. The fitness of each solution is calculated; the best fitness values of the group leaders and of the ordinary coots are stored in two separate variables, and the best value of both is stored in a variable representing the best solution. While the stop conditions are not met, the coots first perform random movements, followed by chain movement: the new position of coot i under chain movement is the average of the positions of coot i and coot i − 1. Next, coots adjust their positions based on the group leaders; several groups are formed according to the number of leaders, and each coot chooses a leader and moves toward that leader's group. Finally, all leaders move toward the optimal area. Figure 3 shows the flowchart of the coot optimization algorithm.
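A simplified sketch of these movements is given below; the exact update formulas, step sizes, and randomization are specified in [25], so the rules here are illustrative paraphrases of the behavior just described, not the published algorithm.

```python
import numpy as np

def coot_optimize(fitness, dim, n_coots=20, n_leaders=4, iters=100,
                  lb=0.0, ub=1.0, rng=np.random.default_rng(0)):
    """Simplified COOT sketch following the movements described above."""
    pop = rng.uniform(lb, ub, size=(n_coots, dim))
    leaders = pop[:n_leaders].copy()
    gbest = min(np.vstack([pop, leaders]), key=fitness).copy()
    for _ in range(iters):
        for i in range(len(pop)):
            r = rng.uniform(lb, ub, dim)
            pop[i] += rng.random() * (r - pop[i])        # random movement
            if i > 0:
                pop[i] = 0.5 * (pop[i] + pop[i - 1])     # chain movement
            k = i % n_leaders                            # choose a group leader
            pop[i] += 2 * rng.random() * (leaders[k] - pop[i])
            pop[i] = np.clip(pop[i], lb, ub)
        for k in range(n_leaders):                       # leaders move to the optimal area
            leaders[k] += rng.random() * (gbest - leaders[k])
            leaders[k] = np.clip(leaders[k], lb, ub)
        cand = min(np.vstack([pop, leaders]), key=fitness)
        if fitness(cand) < fitness(gbest):
            gbest = cand.copy()
    return gbest, fitness(gbest)
```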

4. Simulation and Discussion of Results

In this study, after data normalization and construction of the input matrix and target vector, two-step feature selection was employed. The feature selection approach relies on two key parameters, TH1 and TH2, which significantly affect prediction accuracy. After feature selection, the preprocessed data are fed into the MLPs. The number of hidden layer nodes (NH) in each MLP can vary considerably, and this value also significantly affects the model's prediction accuracy. To find the optimal values of TH1, TH2, and NH, we applied the coot optimization algorithm, which minimizes the MAPE of the test data.
To verify the efficiency of the proposed model, we constructed several benchmark models: FS-HNN-NSGA_II, FS-MLP-NSGA_II, FS-MLP-COOT, FS-HNN, and FS-MLP. In the FS-HNN-NSGA_II model, the NSGA-II optimization algorithm searches for solutions that simultaneously minimize the MAPE of the test data and the RMSE of the training data, allowing a direct comparison between NSGA-II and the coot optimization algorithm. Figure 4 shows the process of finding the optimal values of TH1, TH2, and NH with NSGA-II and the coot optimization algorithm.
As illustrated in Figure 4, in the first step of the proposed method, we initialized TH1, TH2, and NH randomly. Then, we ran two-step feature selection and fed its outputs to the hybrid neural network. The hybrid neural network generates a prediction, and the MAPE and RMSE are calculated to evaluate the prediction accuracy. This process continues iteratively until a stopping criterion is met. The stopping criteria can be reaching the maximum number of iterations allowed or identifying the optimal values for TH1, TH2, and NH. The overall flowchart of the proposed method is illustrated in Figure 5.
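Conceptually, the quantity minimized in this loop is a function of the decision vector (TH1, TH2, NH). A schematic fitness wrapper, reusing two_step_fs and regression_errors from the earlier sketches and substituting a single MLP for the full HNN cascade, could look like this; the mapping of the third variable to a hidden-layer size is our illustrative choice.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fitness(params, X_train, y_train, X_test, y_test):
    """MAPE of the test set for a candidate decision vector (TH1, TH2, NH)."""
    th1, th2 = params[0], params[1]
    nh = 1 + int(round(params[2] * 49))       # map [0, 1] to 1..50 hidden nodes (our choice)
    cols = two_step_fs(X_train, y_train, th1, th2)
    if not cols:                              # no feature survived: heavy penalty
        return 1e9
    model = MLPRegressor(hidden_layer_sizes=(nh,), max_iter=500)
    model.fit(X_train[:, cols], y_train)
    *_, mape = regression_errors(y_test, model.predict(X_test[:, cols]))
    return mape

# best, best_mape = coot_optimize(
#     lambda p: fitness(p, X_train, y_train, X_test, y_test), dim=3)
```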

Results

In this study, we leveraged air pollutant data, including NO2 and SO2, in conjunction with meteorological variables such as wind speed and air temperature to predict future values of NO2 and SO2. Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 provide experimental results of applying the proposed model and benchmark models on dataset A and dataset B.
Analyzing the results reveals an interesting finding. The MAPE of FS-MLP-NSGA_II for NO2 and SO2 is 21.06% and 18.41%, respectively, whereas the MAPE of FS-MLP-COOT is 21% and 17.91%. This comparison demonstrates the superiority of the coot optimization algorithm. Likewise, the MAPE of the FS-HNN-NSGA_II model for NO2 and SO2 is 18.644% and 15.159%, while the MAPE of the proposed model is 4.215% and 12.534%. Interestingly, although NSGA-II was designed to optimize two objective functions for parameter tuning, the COOT optimization algorithm consistently achieved superior model performance for both datasets across the various test months. Comparing the hybrid neural network (HNN) with the multilayer perceptron (MLP), FS-HNN outperforms FS-MLP for NO2 prediction in terms of MAPE, whereas the opposite holds for SO2 prediction, where FS-MLP achieves better MAPE.
As the numerical results demonstrate, FS-HNN-COOT outperformed the other models. This improved prediction accuracy is attributed to two main factors: the combination of the two-step feature selection method with coot, and the use of the HNN. First, the two-step feature selection method ranks each input feature using mutual information, which measures the dependency between the feature and the target variable; the ranking is expressed as numerical values indicating each feature's relevance. While this feature selection approach is powerful, it raises the question of which features should be selected as most relevant and which should be eliminated as redundant. This is where coot enhances the process: by determining the optimal threshold values for selecting the most relevant input features and eliminating redundancies, the approach demonstrated its superiority over other feature selection models. Second, the HNN ensures that, at each stage, optimized weights are transferred to the next stage, which effectively enhances prediction accuracy.
In addition, Figure 6 provides a visual representation of the results, showcasing two distinct sections. The first section presents a comprehensive overview of the actual air pollution data alongside the corresponding predictions. Meanwhile, the second section offers a magnified view specifically focused on the test data and its corresponding predictions.
Furthermore, Figure 7 illustrates the Pareto front for NO2 and SO2 values within dataset A for the same month. These visualizations contribute significantly to enhancing our understanding of the forecasting capabilities of the proposed model, providing valuable insights into the trade-offs between NO2 and SO2 predictions.
In summary, using coot optimization algorithms, integrated with a two-step feature selection method and a hybrid neural network architecture, demonstrates promising results in forecasting air pollution levels. The presented tables and figures offer a comprehensive evaluation and visualization of the model’s performance on datasets A and B, substantiating its effectiveness in predicting NO2 and SO2 concentrations.

5. Limitations

While this study offers valuable insights into the dynamics of NO2 and SO2 pollution and their impacts, several limitations must be acknowledged. A more comprehensive analysis would consider the role of other air pollutants, but data on other pollutants were not available for this study. This constraint highlights the need for more extensive data collection in future research to account for the multifaceted nature of air pollution.

6. Conclusions

Air pollution is a growing global crisis, posing significant threats to public health and the environment. Accurate air pollution prediction can not only prevent environmental risks to public health but also inform policymakers in developing effective strategies for air pollution control. In this study, a hybrid air pollution prediction method was proposed, comprising a two-step feature selection approach, three MLPs with different learning algorithms, and the coot optimization algorithm. This study also constructed five benchmark models (FS-HNN-NSGA_II, FS-MLP-NSGA_II, FS-MLP-COOT, FS-HNN, and FS-MLP) to predict concentrations of major air pollutants using pollutant and meteorological data collected from the Kerman Combined Cycle Power Plant. A comparison between the results of the proposed model and the benchmark models demonstrated the superiority of the proposed model for air pollution prediction.

Author Contributions

Conceptualization, all authors; methodology, H.J. and F.A.; software, H.J. and F.A.; validation, H.J. and F.A.; writing—original draft preparation, H.J. and F.A.; writing—review and editing, all authors; supervision, F.K. and A.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nakhjiri, A.; Kakroodi, A.A. Air pollution in industrial clusters: A comprehensive analysis and prediction using multi-source data. Ecol. Inform. 2024, 80, 102504. [Google Scholar] [CrossRef]
  2. Maciąg, P.S.; Bembenik, R.; Piekarzewicz, A.; Del Ser, J.; Lobo, J.L.; Kasabov, N.K. Effective air pollution prediction by combining time series decomposition with stacking and bagging ensembles of evolving spiking neural networks. Environ. Model. Softw. 2023, 170, 105851. [Google Scholar] [CrossRef]
  3. Ding, Z.; Chen, H.; Zhou, L.; Wang, Z. A forecasting system for deterministic and uncertain prediction of air pollution data. Expert Syst. Appl. 2022, 208, 118123. [Google Scholar] [CrossRef]
  4. Bai, L.; Liu, Z.; Wang, J. Novel hybrid extreme learning machine and multi-objective optimization algorithm for air pollution prediction. Appl. Math. Model. 2022, 106, 177–198. [Google Scholar] [CrossRef]
  5. Gu, Y.; Li, B.; Meng, Q. Hybrid interpretable predictive machine learning model for air pollution prediction. Neurocomputing 2022, 468, 123–136. [Google Scholar] [CrossRef]
  6. Wu, F.; Min, P.; Jin, Y.; Zhang, K.; Liu, H.; Zhao, J.; Li, D. A novel hybrid model for hourly PM2.5 prediction considering air pollution factors, meteorological parameters, and GNSS-ZTD. Environ. Model. Softw. 2023, 167, 105780. [Google Scholar] [CrossRef]
  7. Shakya, D.; Deshpande, V.; Goyal, M.K.; Agarwal, M. PM2.5 air pollution prediction through deep learning using meteorological, vehicular, and emission data: A case study of New Delhi, India. J. Clean. Prod. 2023, 427, 139278. [Google Scholar] [CrossRef]
  8. Asaei-Moamam, Z.-S.; Safi-Esfahani, F.; Mirjalili, S.; Mohammadpour, R.; Nadimi-Shahraki, M.-H. Air quality particulate-pollution prediction applying GAN network and the Neural Turing Machine. Appl. Soft Comput. 2023, 147, 110723. [Google Scholar] [CrossRef]
  9. Tao, H.; Jawad, A.H.; Shather, A.; Al-Khafaji, Z.; Rashid, T.A.; Ali, M.; Al-Ansari, N.; Marhoon, H.A.; Shahid, S.; Yaseen, Z.M. Machine learning algorithms for high-resolution prediction of spatiotemporal distribution of air pollution from meteorological and soil parameters. Environ. Int. 2023, 175, 107931. [Google Scholar] [CrossRef] [PubMed]
  10. Drewil, G.I.; Al-Bahadili, R.J. Air pollution prediction using LSTM deep learning and metaheuristics algorithms. Meas. Sens. 2022, 24, 100546. [Google Scholar] [CrossRef]
  11. Leong, W.C.; Kelani, R.O.; Ahmad, Z. Prediction of air pollution index (API) using support vector machine (SVM). J. Environ. Chem. Eng. 2020, 8, 103208. [Google Scholar] [CrossRef]
  12. Jia, T.; Cheng, G.; Chen, Z.; Yang, J.; Li, Y. Forecasting urban air pollution using multi-site spatiotemporal data fusion method (Geo-BiLSTMA). Atmos. Pollut. Res. 2024, 15, 102107. [Google Scholar] [CrossRef]
  13. Bekkar, A.; Hssina, B.; Douzi, S.; Douzi, K. Air-pollution prediction in smart city, deep learning approach. J. Big Data 2021, 8, 161. [Google Scholar] [CrossRef]
  14. Xayasouk, T.; Lee, H.; Lee, G. Air pollution prediction using long short-term memory (LSTM) and deep autoencoder (DAE) models. Sustainability 2020, 12, 2570. [Google Scholar] [CrossRef]
  15. Sinnott, R.O.; Guan, Z. Prediction of Air Pollution through Machine Learning Approaches on the Cloud. In Proceedings of the 2018 IEEE/ACM 5th International Conference on Big Data Computing Applications and Technologies (BDCAT), Zurich, Switzerland, 17–20 December 2018. [Google Scholar]
  16. Delavar, M.R.; Gholami, A.; Shiran, G.R.; Rashidi, Y.; Nakhaeizadeh, G.R.; Fedra, K.; Afshar, S.H. A novel method for improving air pollution prediction based on machine learning approaches: A case study applied to the capital city of Tehran. ISPRS Int. J. Geo-Inf. 2019, 8, 99. [Google Scholar] [CrossRef]
  17. Mihirani, M.; Yasakethu, L.; Balasooriya, S. Machine Learning-based Air Pollution Prediction Model. In Proceedings of the 2023 IEEE IAS Global Conference on Emerging Technologies (GlobConET), London, UK, 19–21 May 2023. [Google Scholar]
  18. Srivastava, H.; Sahoo, G.K.; Das, S.K.; Singh, P. Performance Analysis of Machine Learning Models for Air Pollution Prediction. In Proceedings of the 2022 International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON), Bangalore, India, 23–25 December 2022. [Google Scholar]
  19. Maciąg, P.S.; Kasabov, N.; Kryszkiewicz, M.; Bembenik, R. Air pollution prediction with clustering-based ensemble of evolving spiking neural networks and a case study for London area. Environ. Model. Softw. 2019, 118, 262–280. [Google Scholar] [CrossRef]
  20. Pande, C.B.; Kushwaha, N.L.; Alawi, O.A.; Sammen, S.S.; Sidek, L.M.; Yaseen, Z.M.; Pal, S.C.; Katipoğlu, O.M. Daily scale air quality index forecasting using bidirectional recurrent neural networks: Case study of Delhi, India. Environ. Pollut. 2024, 351, 124040. [Google Scholar] [CrossRef]
  21. Rabie, R.; Asghari, M.; Nosrati, H.; Niri, M.E.; Karimi, S. Spatially resolved air quality index prediction in megacities with a CNN-Bi-LSTM hybrid framework. Sustain. Cities Soc. 2024, 109, 105537. [Google Scholar] [CrossRef]
  22. Jiménez-Navarro, M.J.; Martínez-Ballesteros, M.; Martínez-Álvarez, F.; Asencio-Cortés, G. Explaining deep learning models for ozone pollution prediction via embedded feature selection. Appl. Soft Comput. 2024, 157, 111504. [Google Scholar] [CrossRef]
  23. Amjady, N.; Keynia, F. A new prediction strategy for price spike forecasting of day-ahead electricity markets. Appl. Soft Comput. 2011, 11, 4246–4256. [Google Scholar] [CrossRef]
  24. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  25. Naruei, I.; Keynia, F. A new optimization method based on COOT bird natural life model. Expert Syst. Appl. 2021, 183, 115352. [Google Scholar] [CrossRef]
Figure 1. The general process of two-step feature selection.
Figure 2. Schematic diagram of the NSGA-II algorithm.
Figure 3. Coot optimization algorithm.
Figure 4. The structure of the NSGA-II and coot optimization algorithms.
Figure 5. The process of finding the optimal values of TH1, TH2, and NH.
Figure 6. Training, test, and predicted NO2 and SO2 values of dataset A in May 2019.
Figure 7. Pareto front for NO2 and SO2 values of dataset A in May 2019.
Table 1. Statistical data of dataset A.

Month | Air Pollutant | Average | Minimum | Maximum | Standard Deviation
May | NO2 | 1.2174 | 0.73 | 2.02 | 0.2193
May | SO2 | 0.2672 | 0.11 | 0.43 | 0.0791
June | NO2 | 1.2210 | 0.4 | 1.84 | 0.2004
June | SO2 | 0.2780 | 0.11 | 0.48 | 0.0772
July | NO2 | 1.0176 | 0.26 | 1.73 | 0.2333
July | SO2 | 0.2812 | 0.11 | 0.43 | 0.0756
August | NO2 | 1.1163 | 0.69 | 1.47 | 0.1653
August | SO2 | 0.2776 | 0.06 | 0.43 | 0.0795
September | NO2 | 0.1556 | 0.74 | 1.68 | 0.1946
September | SO2 | 0.0549 | 0.06 | 0.43 | 0.0693
Table 2. Statistical data of dataset B.

Month | Air Pollutant | Average | Minimum | Maximum | Standard Deviation
May | NO2 | 1.4680 | 0.01 | 8.76 | 1.4308
May | SO2 | 0.2748 | 0.04 | 1.91 | 0.1435
June | NO2 | 1.1494 | 0.1 | 1.9 | 0.2015
June | SO2 | 0.2767 | 0.06 | 0.48 | 0.0767
July | NO2 | 1.0363 | 0.21 | 2.19 | 0.2601
July | SO2 | 0.3268 | 0.06 | 2.26 | 0.2784
August | NO2 | 1.1126 | 0.71 | 1.55 | 0.1712
August | SO2 | 0.2792 | 0.11 | 0.43 | 0.0731
September | NO2 | 1.0782 | 0.01 | 1.66 | 0.3094
September | SO2 | 0.2775 | 0.11 | 0.48 | 0.0689
Table 3. MSE, MAE, and RMSE errors of the training data versus MAPE error of the test data using FS-HNN-NSGA_II on dataset A. Input variables for both outputs: wind speed, air temperature, the value of NO2 one hour ago, and the value of SO2 one hour ago.

Output Variable | Month | MSE | MAE | RMSE | MAPE
NO2 | May | 0.0037 | 0.0468 | 0.0615 | 10.5709
NO2 | June | 0.0030 | 0.0460 | 0.0549 | 7.7062
NO2 | July | 0.0053 | 0.0580 | 0.0734 | 10.55
NO2 | August | 0.0086 | 0.0648 | 0.0929 | 9.6795
NO2 | September | 0.0152 | 0.0887 | 0.1234 | 18.6441
SO2 | May | 0.0132 | 0.0697 | 0.1149 | 13.8924
SO2 | June | 0.0112 | 0.0770 | 0.1061 | 19.2530
SO2 | July | 0.0197 | 0.0996 | 0.1404 | 18.0187
SO2 | August | 0.0268 | 0.1237 | 0.1638 | 22.2917
SO2 | September | 0.1161 | 0.0908 | 0.1161 | 15.1599
Table 4. MSE, MAE, and RMSE errors of the training data versus MAPE error of the test data using FS-MLP-NSGA_II on dataset A. Input variables for both outputs: wind speed, air temperature, the value of NO2 one hour ago, and the value of SO2 one hour ago.

Output Variable | Month | MSE | MAE | RMSE | MAPE
NO2 | May | 0.0057 | 0.0475 | 0.0701 | 14.50
NO2 | June | 0.0044 | 0.0499 | 0.0669 | 9.64
NO2 | July | 0.0071 | 0.0593 | 0.0802 | 11.34
NO2 | August | 0.0092 | 0.0689 | 0.0991 | 12.36
NO2 | September | 0.0172 | 0.0907 | 0.1934 | 21.06
SO2 | May | 0.0161 | 0.0921 | 0.1232 | 14.92
SO2 | June | 0.0133 | 0.0991 | 0.1436 | 23.91
SO2 | July | 0.0199 | 0.1216 | 0.1623 | 20.11
SO2 | August | 0.0281 | 0.1657 | 0.1782 | 23.31
SO2 | September | 0.1201 | 0.1778 | 0.1389 | 18.41
Table 5. MSE, MAE, and RMSE errors of the training data versus MAPE error of the test data using FS-MLP-COOT on dataset A. Input variables for both outputs: wind speed, air temperature, the value of NO2 one hour ago, and the value of SO2 one hour ago.

Output Variable | Month | MSE | MAE | RMSE | MAPE
NO2 | May | 0.0047 | 0.0465 | 0.0700 | 13.50
NO2 | June | 0.0039 | 0.0474 | 0.0654 | 9.34
NO2 | July | 0.0066 | 0.0592 | 0.0791 | 11.21
NO2 | August | 0.0087 | 0.0676 | 0.0988 | 12.31
NO2 | September | 0.0112 | 0.0934 | 0.1911 | 21.00
SO2 | May | 0.0134 | 0.0916 | 0.1211 | 14.77
SO2 | June | 0.0116 | 0.0988 | 0.1422 | 23.81
SO2 | July | 0.0100 | 0.1211 | 0.1589 | 20.00
SO2 | August | 0.0266 | 0.1599 | 0.1779 | 23.01
SO2 | September | 0.1160 | 0.1777 | 0.1374 | 17.91
Table 6. MSE, MAE, and RMSE errors of the training data versus MAPE error of the test data using FS-HNN on dataset A. Input variables for both outputs: wind speed, air temperature, the value of NO2 one hour ago, and the value of SO2 one hour ago.

Output Variable | Month | MSE | MAE | RMSE | MAPE
NO2 | May | 0.0073 | 0.0518 | 0.0755 | 17.01
NO2 | June | 0.0081 | 0.0559 | 0.0689 | 10.31
NO2 | July | 0.0165 | 0.0664 | 0.0841 | 12.34
NO2 | August | 0.0145 | 0.0679 | 0.0963 | 15.58
NO2 | September | 0.0186 | 0.1156 | 0.1944 | 23.26
SO2 | May | 0.0156 | 0.0988 | 0.1277 | 15.99
SO2 | June | 0.0142 | 0.1457 | 0.1465 | 26.35
SO2 | July | 0.0189 | 0.1276 | 0.1687 | 25.15
SO2 | August | 0.0232 | 0.1721 | 0.1788 | 27.11
SO2 | September | 0.1255 | 0.1782 | 0.1400 | 21.06
Table 7. MSE, MAE, and RMSE errors of the training data versus MAPE error of the test data using FS-MLP on dataset A. Input variables for both outputs: wind speed, air temperature, the value of NO2 one hour ago, and the value of SO2 one hour ago.

Output Variable | Month | MSE | MAE | RMSE | MAPE
NO2 | May | 0.0093 | 0.0523 | 0.0767 | 17.50
NO2 | June | 0.0082 | 0.0568 | 0.0699 | 11.34
NO2 | July | 0.0134 | 0.0667 | 0.0843 | 13.34
NO2 | August | 0.0167 | 0.0699 | 0.0978 | 16.38
NO2 | September | 0.0194 | 0.1127 | 0.1965 | 25.06
SO2 | May | 0.0189 | 0.0987 | 0.1282 | 17.64
SO2 | June | 0.0151 | 0.1453 | 0.1488 | 27.94
SO2 | July | 0.0211 | 0.1289 | 0.1699 | 26.17
SO2 | August | 0.0299 | 0.1691 | 0.1791 | 28.77
SO2 | September | 0.1243 | 0.1797 | 0.1411 | 20.66
Table 8. MSE, MAE, and RMSE errors of the training data versus MAPE error of the test data using FS-HNN-COOT on dataset B. Input variables for both outputs: wind speed, air temperature, the value of NO2 one hour ago, and the value of SO2 one hour ago.

Output Variable | Month | MSE | MAE | RMSE | MAPE
NO2 | May | 0.0002 | 0.0160 | 0.0142 | 8.6992
NO2 | June | 0.0016 | 0.0331 | 0.0405 | 5.6254
NO2 | July | 0.0011 | 0.0492 | 0.0333 | 7.0860
NO2 | August | 0.0054 | 0.0439 | 0.0736 | 8.3494
NO2 | September | 0.0007 | 0.0241 | 0.0273 | 4.2154
SO2 | May | 0.0006 | 0.0645 | 0.0247 | 11.0186
SO2 | June | 0.0049 | 0.0526 | 0.0706 | 9.0310
SO2 | July | 0.0001 | 0.0108 | 0.0137 | 11.1684
SO2 | August | 0.0137 | 0.0538 | 0.1173 | 15.9380
SO2 | September | 0.0042 | 0.0908 | 0.0649 | 12.5348
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
