Article

Deep-Learning-Based Water Quality Monitoring and Early Warning Methods: A Case Study of Ammonia Nitrogen Prediction in Rivers

1
School of Applied Chemistry and Materials, Zhuhai College of Science and Technology, Zhuhai 519041, China
2
Department of Industrial Electronics, School of Engineering, University of Minho, 4704-553 Braga, Portugal
3
School of Computer Science, Zhuhai College of Science and Technology, Zhuhai 519041, China
4
Key Laboratory of Symbol Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University, 2699 Qianjin Street, Changchun 130012, China
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(22), 4645; https://doi.org/10.3390/electronics12224645
Submission received: 16 October 2023 / Revised: 10 November 2023 / Accepted: 12 November 2023 / Published: 14 November 2023
(This article belongs to the Special Issue Applications of Computational Intelligence, Volume 2)

Abstract:
In line with rapid economic development and accelerated urbanization, increasing wastewater discharge and agricultural fertilizer usage have led to a gradual rise in ammonia nitrogen levels in rivers. High concentrations of ammonia nitrogen pose a significant challenge, causing eutrophication and adversely affecting aquatic ecosystems and the sustainable utilization of water resources. Traditional ammonia nitrogen detection methods suffer from cumbersome sample handling and analysis, low sensitivity, and a lack of real-time, dynamic feedback. Automated monitoring and ammonia nitrogen prediction technologies offer more efficient and accurate solutions, but existing approaches still have shortcomings, including complex sample processing, susceptibility to interference, and the absence of real-time and dynamic information feedback. Consequently, deep learning techniques have emerged as promising methods to address these challenges. In this paper, we propose the application of a neural network model based on Long Short-Term Memory (LSTM) to analyze and model ammonia nitrogen monitoring data, enabling high-precision prediction of ammonia nitrogen indicators. Moreover, through correlation analysis between water quality parameters and ammonia nitrogen indicators, we identify a set of key feature indicators to enhance prediction efficiency and reduce costs. Experimental validation demonstrates the potential of the proposed approach to improve the accuracy, timeliness, and precision of ammonia nitrogen monitoring and prediction, providing support for environmental management and water resource governance.

1. Introduction

In recent years, rapid economic development and accelerated urbanization have led to improvements in industrial and agricultural production, as well as the living standards of urban residents. However, this progress has resulted in increased wastewater discharge and agricultural fertilizer usage, leading to a gradual rise in the concentration of ammonia nitrogen in rivers [1]. While ammonia nitrogen is an essential nutrient in river water, excessive levels can cause environmental issues, with water eutrophication being one of the most serious problems [2]. Eutrophication refers to the excessive nutrient content in river water, which triggers a rapid increase in biomass and fundamental changes in the aquatic ecosystem [3]. High concentrations of ammonia nitrogen promote the growth of algae and other aquatic plants, leading to an abundance of algae and phytoplankton, discoloration, and the emergence of harmful algae such as “Blue-Green Algae”. The proliferation and death of these organisms result in a sharp decline in dissolved oxygen, deteriorating water quality and creating “Dead Zones”. These not only affect the river’s aquatic ecosystem but also have significant negative consequences for water resource utilization and ecological conservation. Moreover, excessive ammonia nitrogen levels pose risks to other organisms, including fish and invertebrates, affecting their respiratory and reproductive systems, and potentially causing respiratory difficulties, toxin accumulation, and even death. Additionally, ammonia nitrogen can react with other substances in water to form compounds such as nitrites and nitrates, which can harm human and animal health [4,5,6].
As a result, monitoring ammonia nitrogen concentrations in rivers has become a crucial task for environmental management. By monitoring ammonia nitrogen levels, pollution in river water can be promptly detected, enabling appropriate measures to be taken to prevent water eutrophication and other environmental problems. Furthermore, monitoring ammonia nitrogen levels provides scientific evidence for environmental management and protection, serving as a basis for formulating environmental protection policies and supporting sustainable water resource utilization [7,8,9].
To address water quality concerns, various water quality monitoring technologies, including ammonia nitrogen detection and early warning techniques, have been developed [10,11,12]. Traditional methods for ammonia nitrogen detection, such as the Nessler method, evaporation determination method, indicator method, and fluorescence method, have limitations in terms of cumbersome operations, low sensitivity, and limited accuracy. In recent years, automated monitoring technologies such as chromatography, electrochemical methods, optical methods, and biosensors have been widely adopted for ammonia nitrogen detection [13,14]. These methods offer advantages such as simplified operations, high efficiency, and improved accuracy, some of which enable real-time monitoring of water quality. Additionally, current ammonia nitrogen early warning technologies utilize a combination of monitoring instruments and information systems to achieve real-time monitoring and early warning of water quality conditions through data collection, transmission, processing, and analysis. Despite the numerous studies conducted on surface water ammonia nitrogen monitoring and early warning, practical applications still face limitations. Traditional chemical analysis methods involve laborious sample handling and analysis procedures, leading to potential errors. Novel techniques such as biosensors exhibit high sensitivity but encounter interference issues in complex environments. Furthermore, conventional monitoring methods often provide static data information and lack real-time and dynamic information feedback [15]. Therefore, improving the accuracy, timeliness, and precision of ammonia nitrogen monitoring and early warning in surface water remains an important research direction [16,17,18].
With the rapid development of artificial intelligence, machine learning has become a popular technology in environmental and water resource management. Traditional machine learning methods have notable advantages: they are easy to understand and interpret, lend themselves to visual analysis and rule extraction, can produce feasible and effective results on large data sources in a relatively short time, and can handle both categorical and numerical data with fast running speeds at test time. However, they also have clear disadvantages, such as a tendency to overfit and a failure to capture correlations between attributes in the dataset. Practical applications have shown that deep learning outperforms traditional machine learning and statistical methods in many tasks [19]. Deep learning models can learn and capture complex features of the data, including nonlinear relationships and high-order interactions, which gives them greater flexibility and an advantage when dealing with complex, dynamic, and unknown data. They have strong representational power, handling high-dimensional features, nonlinear relationships, and complex patterns; they tolerate noise and outliers well, adapt better to real-world applications, and offer improved robustness and generalization. As data volumes grow, traditional machine learning methods may run into problems such as the curse of dimensionality, whereas deep learning models scale well and can learn more complex patterns from large datasets. They also have strong memory capabilities, storing and recalling large amounts of information.
This gives deep learning a great advantage in application scenarios that require long-term memory and historical information. Finally, in many application scenarios, deep learning achieves higher prediction accuracy than traditional machine learning methods; in the field of water quality prediction in particular, deep learning algorithms perform significantly better than traditional machine learning algorithms [20]. In this study, a deep learning model called Long Short-Term Memory (LSTM) was employed to process water quality monitoring data and achieved high-precision prediction of ammonia nitrogen indicators through data analysis and modeling [21,22,23]. LSTM is a recurrent neural network (RNN) that mitigates the vanishing and exploding gradient problems of traditional RNNs on long-sequence data by introducing memory cells, allowing it to better capture the time-series characteristics of such data. At the same time, the gating mechanism in the LSTM model effectively controls the flow of information, further guarding against vanishing or exploding gradients. When dealing with water quality data, LSTM can therefore better capture the long-term dependencies between water quality indicators and improve predictive performance, predicting water quality data more accurately [24,25,26]. Furthermore, correlation analysis between the various water quality indicators and the ammonia nitrogen indicator helps identify key feature indicators for model input, enhancing prediction efficiency and reducing costs [12,22]. To achieve these objectives, a series of experiments were conducted using historical monitoring data from the Qianshan River in Zhuhai City.

2. Materials and Methods

2.1. Study Area and Data Collection

The Qianshan River waterway plays a vital role as a major inland transportation route in Zhuhai City, China. It is located at 21°48′–22°27′ north latitude and 113°03′–114°19′ east longitude, in the south of Guangdong Province, on the west bank of the Pearl River estuary. Its source can be traced back to Lianshiwan in Tantou Town, Zhongshan City, where water is introduced from the Madaomen waterway and flows eastward, passing through Tantou Town and Qianshan Street in Zhuhai City, until it merges into the Pearl River Estuary at Wanzai Shikaoju lock. With a total length of approximately 23 km, the river encompasses a stretch of about 15 km in Tantou Town, Zhongshan, and varies in width from 58 to 220 m. In Zhuhai, the river extends for about 8 km with a width ranging from 200 to 300 m. The Qianshan River basin covers a watershed area of around 338 km², experiencing an annual runoff volume of 1.54 billion cubic meters, an average annual runoff depth of 1100 mm, and an average runoff coefficient of 0.58. The river basin predominantly consists of sedimentary plain landforms, sloping from the northeast to the southwest.
Since 2015, the Qianshan River basin has experienced a total of 107 industrial pollution sources. Out of these, 20 are located in Sanxiang Town, Zhongshan City, representing 18.7% of the total sources, while 45 are situated in Tantou Town, accounting for 42.1%. Additionally, the Zhuhai area hosts 42 industrial pollution sources, making up 39.3% of the overall count. Urban domestic pollution primarily consists of sewage from urban villages and scattered old villages along the river. Figure 1 shows the specific locations of monitoring areas and monitoring stations.
For the purpose of this study, water quality data was collected from the Shijiaoju monitoring point within the Qianshan Street waterway network. The dataset spans from 8 November 2020 to 28 February 2023, providing historical water quality data at four-hour intervals. The dataset comprises a total of 5058 samples, encompassing nine water quality parameters: ammonia nitrogen (NH3-N), water temperature (Temp), potential of hydrogen (pH), dissolved oxygen (DO), potassium permanganate index (KMnO4), total phosphorus (TP), total nitrogen (TN), conductivity (Cond), and turbidity (Turb).

2.2. Data Preprocessing

During the operation of automated water quality monitoring stations, various factors, including sensor malfunctions, network failures, and unexpected events such as pollutant leaks or extreme weather conditions, can lead to data loss and anomalies. The objective of data preprocessing is to cleanse the raw data by eliminating outliers, noise, and missing values, thereby improving the performance and reliability of water quality prediction models. Thorough data preprocessing ensures that the models are built upon high-quality data, enhancing prediction accuracy and providing a more dependable scientific foundation for water quality monitoring and management decisions [27,28,29].
In the context of handling missing values, two primary approaches, namely single imputation (SI) and multiple imputation (MI), are commonly used. While MI is more complex in operation and relatively costly, this study, considering the nature of the Qianshan River water quality data, adopts linear interpolation as the method for filling missing values. Linear interpolation, widely employed for filling missing values, is particularly suitable for data with a time dimension, such as time series data. Its fundamental concept involves estimating the missing values by performing linear interpolation between the preceding and subsequent observed values [30,31].
To implement linear interpolation, the positions of the missing values within the time series, referred to as interpolation positions, must be determined. Subsequently, the interpolation values are calculated by applying linear interpolation based on the available observed values, thereby obtaining estimates for the missing values [26,27]. Finally, it is essential to verify the interpolation results by ensuring that the post-interpolation data align with the actual situation, adhere to data distribution characteristics, and maintain consistency with other variables.
Let (X1, Y1) represent the preceding observed value of the missing value, (X2, Y2) represent the subsequent observed value, and X0 represent the position of the missing value. The estimated missing value Y0 can be calculated using the following formula:
Y0 = Y1 + (X0 − X1) × (Y2 − Y1)/(X2 − X1)
Here, Y1 and Y2 represent the values of the observed values preceding and following the missing value, respectively, while X1 and X2 represent the corresponding time or position information. X0 represents the position of the missing value [32]. Figure 2 and Table 1 show the basic situation of the water quality data.
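As an illustration, the interpolation formula above can be implemented directly with NumPy. This is a minimal sketch; the 4-hour readings below are hypothetical values, not data from the Qianshan River dataset:

```python
import numpy as np

def linear_interpolate(times, values):
    """Fill missing (NaN) entries by linear interpolation between the
    nearest preceding and following observed values."""
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    missing = np.isnan(values)
    # np.interp evaluates Y0 = Y1 + (X0 - X1) * (Y2 - Y1) / (X2 - X1)
    # at each missing position X0, using the observed (X, Y) pairs.
    values[missing] = np.interp(times[missing], times[~missing], values[~missing])
    return values

# Hypothetical 4-hour NH3-N series (mg/L) with one missing reading at t = 8
t = [0, 4, 8, 12]
y = [0.30, 0.40, float("nan"), 0.60]
print(linear_interpolate(t, y))  # the gap at t=8 becomes 0.5
```

In practice the same result can be obtained with `pandas.Series.interpolate(method="linear")` on a time-indexed series.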

2.3. Feature Dataset

The dataset was thoroughly analyzed prior to model construction to gain insights into the relationships among variables, particularly focusing on the correlations between the input variables and the output variable [33]. Strong correlations between input and output variables indicate that the input values can effectively predict the output values, enabling the model to utilize this information during the modeling process [34]. Consequently, the model is expected to exhibit superior predictive performance by accurately capturing the relationships between inputs and outputs [22]. Conversely, weak correlations between input and output variables imply limited predictive capability of the input variables for the output variables [35]. In such cases, the model may struggle to capture these relationships, resulting in restricted predictive performance as it fails to extract sufficient information from the input variables to accurately predict the output variables.
In this paper, the Pearson correlation coefficient, a widely used measure for assessing linear correlations between random variables, was employed to analyze the correlations among input variables and between input variables and the output variables. By calculating the Pearson correlation coefficient, we were able to evaluate the strength of correlations among input variables and the association between the input variables and the output variable [36]. This data analysis facilitated the identification of strong correlations among input variables, addressing the issue of redundant information and enhancing the model’s efficiency [23]. Table 2 shows the calculation results of Pearson correlation coefficient.
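For reference, the Pearson coefficient used here follows directly from its definition (covariance of the two series divided by the product of their standard deviations). The sketch below uses toy numbers rather than the monitoring data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Toy check: a perfectly linear inverse relationship gives r = -1
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))  # -1.0
```

For a full parameter table like Table 2, `pandas.DataFrame.corr(method="pearson")` computes the pairwise matrix in one call; the associated p-values can come from `scipy.stats.pearsonr`.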
The Table 2 analysis revealed statistically significant correlations (p < 0.01) between NH3-N and six parameters: pH, DO, KMnO4, TP, TN, and Cond. NH3-N was negatively correlated with pH (r = −0.420), DO (r = −0.394), and Cond (r = −0.038), although the correlation with Cond, while significant, is very weak. NH3-N was positively correlated with KMnO4 (r = 0.209), TN (r = 0.447), and TP (r = 0.613), with TP showing the strongest association.
Conversely, no significant correlations (p > 0.05) were observed between NH3-N and Temp or Turb, suggesting no significant relationship between NH3-N and these two parameters.
Based on the results of the correlation analysis, each parameter was ranked according to the magnitude of their correlation coefficients. The parameters were then divided into nine groups, with increasing correlation coefficient values, as visually depicted in Figure 3. This grouping allows for a better understanding of the relationships between NH3-N and other parameters, with parameters exhibiting higher correlation coefficients being considered more strongly associated with NH3-N levels.

2.4. LSTM Model

2.4.1. Model Construction and Training

The design and training stages of deep learning models are pivotal in water quality modeling and prediction. Given the multifaceted influences on NH3-N concentrations in surface water and their temporal-spatial patterns, the Long Short-Term Memory (LSTM) model, a prominent type of recurrent neural network (RNN), is a natural choice: its memory cells allow it to capture the long-term dependencies inherent in time series data [37]. In the field of water quality prediction especially, the LSTM algorithm represents a significant improvement over traditional machine learning algorithms [38].
During the model training phase, historical NH3-N monitoring data necessitate partitioning into training, validation, and testing sets, designated for model training, validation, and testing, respectively. This partitioning can be realized through either time-series-based or random division, ensuring that the data in these subsets remain representative both temporally and spatially. In this work, the validation set encompassed 10% of the dataset, totaling 506 samples, while the testing set comprised 5% of the dataset, amounting to 253 samples. The remaining samples were allocated for model training.
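A minimal sketch of such a time-series-based partition reproduces the sample counts reported above. The placement of the validation and test sets at the chronological end of the data is an assumption on our part, since the paper does not specify it:

```python
def chronological_split(samples, val_frac=0.10, test_frac=0.05):
    """Time-ordered split: the final test_frac of samples for testing,
    the preceding val_frac for validation, and the rest for training."""
    n = len(samples)
    n_val = round(n * val_frac)
    n_test = round(n * test_frac)
    n_train = n - n_val - n_test
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

# 5058 samples, as in the Qianshan River dataset
train, val, test = chronological_split(list(range(5058)))
print(len(train), len(val), len(test))  # 4299 506 253
```

A chronological split (rather than a random one) avoids leaking future observations into the training set, which matters for time series evaluation.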
For model construction, training, and optimization, established deep learning frameworks such as TensorFlow and Keras enable efficient model design and training, and techniques including grid search and cross-validation are useful for hyperparameter tuning. Grid search trains and validates the model with every combination of hyperparameters within specified ranges, then selects the best combination according to validation set performance. Cross-validation segments the training set into multiple folds, trains the model on all but one fold while validating on the held-out fold, repeats this for each fold, and averages the performance metrics to reduce the randomness of the evaluation and improve generalization. It should be noted that grid search can demand considerable computational resources and time, so hyperparameter ranges and resource allocation must be chosen carefully for tuning to be effective [22,23].
LSTM models typically encompass input layers, LSTM layers, and output layers, among other constituents. Model structure can be tailored to data attributes by adjusting parameters such as the number of LSTM neurons and activation functions. During the model training process, setting appropriate hyperparameters—such as learning rate and batch size—assumes significance. Learning rate governs the magnitude of weight updates per iteration, with extremes preventing convergence or inducing local optima. Batch size dictates the number of samples per parameter update, with excessively large batches causing aggressive updates, while overly small batches yield unstable adjustments. Pragmatic experimentation and optimization are indispensable to ascertain suitable hyperparameter values, fostering superior model performance.
In this work, an LSTM model was built within the TensorFlow-GPU 2.9 framework. The model comprises an input layer; an LSTM layer with 50 neurons; a second LSTM layer with 80 neurons; and an output layer consisting of a single fully connected neuron that produces the prediction. Sample data from the past 30 time periods are used to predict the data for the next time period. A dropout layer with a dropout rate of 0.2 is placed between the second LSTM layer and the output layer, randomly discarding a fraction of neuron outputs during training to reduce the risk of overfitting.
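The 30-step lookback described above corresponds to a sliding-window transformation of the monitoring series. A framework-free sketch (illustrative only; in the actual pipeline the windows would feed the Keras LSTM stack):

```python
import numpy as np

def make_windows(series, lookback=30, horizon=1):
    """Turn a 1-D series into (X, y) pairs: each X row holds `lookback`
    consecutive values and y is the value `horizon` steps later."""
    series = np.asarray(series, dtype=float)
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback + horizon - 1])
    return np.array(X), np.array(y)

# Toy series of 100 time steps with a 30-step lookback
X, y = make_windows(np.arange(100.0), lookback=30)
print(X.shape, y.shape)  # (70, 30) (70,)
```

For multivariate input, each window would be shaped `(lookback, n_features)` before being passed to the LSTM layers; in Keras, the stack described in the text would be roughly `Sequential([LSTM(50, return_sequences=True), LSTM(80), Dropout(0.2), Dense(1)])`, though the exact construction is our assumption, not code from the paper.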

2.4.2. Model Evaluation

Assessing a model's predictive performance is essential to confirming its efficacy, and appropriate evaluation metrics must be used to quantify the predicted outcomes. In this study, the mean square error (MSE) and coefficient of determination (R2) serve as the primary indices for evaluating the model's predictive performance [39]. The mean absolute error (MAE) and root mean square error (RMSE) are also used, providing a fuller picture of the model's ability to predict ammonia nitrogen concentration [40,41,42].
These four evaluation methods are briefly introduced as follows:
  • Mean square error (MSE): MSE encapsulates the average of squared differences between predicted values and actual values. It provides a measure of prediction accuracy, with lower MSE values denoting enhanced precision in the model’s predictions.
  • Coefficient of determination (R2): R2 quantifies the proportion of the variability in the dependent variable that the model can explain. Its maximum value is 1 (and it can fall below 0 for models that fit worse than the mean), with higher R2 values indicating stronger model performance in explaining the variance in the data.
  • Mean absolute error (MAE): MAE computes the average absolute difference between predicted values and actual values. It reflects the average magnitude of the prediction error, with lower MAE values indicating superior prediction accuracy.
  • Root mean square error (RMSE): RMSE calculates the square root of the average of squared prediction errors. It provides an estimation of the model’s predictive error spread, with smaller RMSE values signifying improved prediction precision.
By leveraging these evaluation metrics, the model’s performance in forecasting ammonia nitrogen concentration can be rigorously assessed, affording a comprehensive understanding of its predictive capabilities.
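The four metrics above can be computed directly from their definitions. This is a generic sketch with toy values, not the study's results:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, MAE, RMSE, and R^2 as defined in the text."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(mse))
    ss_res = float(np.sum(err ** 2))                      # residual sum of squares
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "R2": r2}

m = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
print(m["MSE"], m["R2"])  # ≈ 0.025, ≈ 0.98
```

Equivalent implementations exist in `sklearn.metrics` (`mean_squared_error`, `mean_absolute_error`, `r2_score`) for production use.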

2.4.3. Model Optimization

In the realm of model optimization, the consideration of model interpretability assumes significance. Deep learning models are often perceived as “black-box” entities, challenging the explanation of the rationale behind their predictions. To address this challenge, visualization techniques and feature importance analysis can be harnessed to unveil the model’s prediction process. This augments model interpretability, streamlining model application and refinement. It is imperative to recognize that model evaluation and optimization represent iterative processes. Depending on the context, multiple cycles of evaluation and optimization may be warranted, entailing continuous adjustments to model design and parameters until the desired performance benchmarks are met.
In this study, optimization relied on grid search and cross-validation. The model was wrapped as a regressor via KerasRegressor, enabling seamless integration with scikit-learn, and a GridSearchCV object was instantiated to run grid search with cross-validation over the designated parameter space, which comprised the batch size, the number of epochs, and the optimizer. The "cv" parameter, which sets the number of cross-validation folds, was set to 2, i.e., 2-fold cross-validation [43,44,45]. After rigorous experimental comparison, the following hyperparameters were selected: a batch size of 32, 50 epochs, and the RMSprop (root mean square propagation) optimizer.
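The exhaustive search over batch size, epochs, and optimizer can be sketched framework-free. The scoring function below is a toy stand-in for the real train-and-validate step, which the study delegated to scikit-learn's GridSearchCV:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustive grid search: evaluate every hyperparameter combination
    and keep the one with the lowest validation score."""
    keys = list(param_grid)
    best_params, best_score = None, float("inf")
    for combo in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, combo))
        score = score_fn(params)  # stand-in for mean cross-validation loss
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical parameter space; the toy score prefers batch_size=32, epochs=50
grid = {"batch_size": [16, 32, 64], "epochs": [25, 50],
        "optimizer": ["rmsprop", "adam"]}
toy_score = lambda p: abs(p["batch_size"] - 32) + abs(p["epochs"] - 50)
best, best_score = grid_search(grid, toy_score)
print(best)  # {'batch_size': 32, 'epochs': 50, 'optimizer': 'rmsprop'}
```

With GridSearchCV the equivalent call is `GridSearchCV(estimator, param_grid, cv=2)` followed by `fit`, which handles the fold splitting and score averaging internally.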
RMSprop serves as an optimization algorithm for training neural network models. Operating as an adaptive learning rate technique rooted in the gradient descent algorithm, RMSprop leverages exponentially weighted moving averages of gradients to dynamically adjust the learning rate. In contrast to conventional gradient descent approaches, RMSprop employs the moving average of squared gradients to modulate the learning rate. The central steps of RMSprop entail:
  • Parameter initialization: Weights of the model and exponentially weighted moving average of squared gradients are initialized.
  • Iterative training:
    • Gradients of the model’s loss function concerning the weights are computed.
    • The exponentially weighted moving average of squared gradients is updated.
    • Adjustment value for the learning rate is computed based on the moving average.
    • Weights are updated based on the learning rate adjustment value and gradients.
    • The above steps are reiterated until a termination criterion is satisfied, such as reaching the maximum number of iterations or convergence of the loss function.
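The update steps listed above can be sketched in a few lines. The learning rate and decay values below are common illustrative defaults, not the study's settings:

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSprop update: maintain an exponentially weighted moving
    average of squared gradients, then scale the step by its square root."""
    cache = decay * cache + (1.0 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Minimize f(w) = w^2 (gradient 2w) starting from w = 5.0
w, cache = 5.0, 0.0
for _ in range(2000):
    w, cache = rmsprop_step(w, 2.0 * w, cache, lr=0.01)
print(f"w after 2000 steps: {w:.4f}")
```

Note how the effective step size is roughly `lr` regardless of the raw gradient magnitude, which is exactly the adaptive behavior described in the merits below: large gradients are damped and small gradients are amplified.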
RMSprop brings forth several merits, including:
  • Adaptive learning rate: RMSprop dynamically tunes the learning rate in response to gradient changes. Large gradients prompt diminished learning rates, curbing parameter updates, while smaller gradients engender augmented learning rates, hastening parameter updates.
  • Applicability to non-stationary data: RMSprop excels in scenarios with non-stationary gradients, augmenting model training stability and convergence pace.
  • Ameliorating gradient explosion and vanishing: Through the utilization of exponentially weighted moving averages of gradients, RMSprop mitigates the adverse effects of gradient explosion and vanishing, thereby amplifying model training effectiveness.
It should be acknowledged that RMSprop requires manual hyperparameter configuration, including the initial learning rate and decay coefficient. Additionally, RMSprop is not universally the optimal optimization algorithm; alternatives such as Adam or Adagrad may outperform it on specific problems [46,47,48].

3. Results

3.1. Analysis of Spatiotemporal Variation in NH3-N Content in River Water Quality

Figure 4 illustrates the fluctuations in NH3-N concentrations within the Qianshan River. The average NH3-N concentration follows a clear diurnal rhythm, peaking in the early morning (04:00–08:00) and reaching its minimum in the afternoon (16:00–20:00). This oscillation can be attributed to the rhythm of urban life. The morning peak arises from activities such as waking and personal hygiene, which increase the discharge of organic wastewater and thereby raise NH3-N levels; during afternoon working and studying hours, organic wastewater discharge falls, and NH3-N concentration declines. Temperature variation between these periods may also contribute: lower nighttime water temperatures slow microbial metabolic activity, allowing NH3-N to accumulate, whereas daytime warmth accelerates microbial metabolism and promotes NH3-N consumption.
Photosynthesis is another potential influence on NH3-N fluctuations. Aquatic phytoplankton convert carbon dioxide and water into organic matter and oxygen, consuming NH3-N and other inorganic nitrogen compounds in the process. NH3-N concentration therefore dips during periods of strong daytime photosynthesis and rises at night, when photosynthesis ceases.
The daily average NH3-N concentration is typically elevated in the early-to-middle part of each month, peaking around the 14th and 15th, and lower thereafter, with lows around the 18th and 19th. This pattern is closely tied to pollutant emissions and environmental factors: these mid-month days mark peaks for domestic and industrial water usage, producing wastewater with higher NH3-N content and correspondingly higher NH3-N concentrations, while towards the end of the month, as environmental pressures and pollutant sources diminish, NH3-N concentration gradually declines.
Monthly average NH3-N concentrations tend to peak in August and bottom out in April, which likely stems from temperature and climatic variation. Summer heat accelerates chemical reactions in the water, spurs bacterial proliferation, and yields additional NH3-N through organic matter decomposition; higher summer temperatures, reduced rainfall, and slower water flow also foster biological growth and heightened microbial metabolic activity, further raising NH3-N concentration. Conversely, spring's cooler temperatures, greater rainfall, and faster water flow suppress these processes, lowering NH3-N concentration.
The distinct NH3-N concentration trends across varied time spans underscore its cyclic variations in the Qianshan River. The multifaceted factors influencing NH3-N concentration warrant comprehensive consideration for the formulation of effective management strategies against NH3-N pollution. Moreover, these analytical insights provide pivotal reference points, guiding the development, forecasting, and refinement of subsequent deep learning models.

3.2. Evaluation of NH3-N Prediction Performance Based on the LSTM Model

The developed NH3-N concentration model was evaluated on the validation dataset using three metrics: R2, MSE, and MAE. The results, shown in Figure 5, confirm the suitability of the LSTM model for this task.
The model converged and stabilized within 50 iterations, with the MAE remaining below 0.045 and the MSE below 0.004 (Figure 5a,b). The proximity of both metrics to their minima confirms the LSTM model's competence in forecasting ammonia nitrogen concentration.
Furthermore, the model's predictions closely match the measured values (Figure 5c), with an R2 of 0.89. Overall, the LSTM model captures the concentration variations of NH3-N in the Qianshan River and serves as a robust predictive model.
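As a concrete illustration (not the authors' code), the three metrics used above can be computed directly from an observed and a predicted NH3-N series; the concentration values below are hypothetical:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 minus residual over total variance."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def mse(y_true, y_pred):
    """Mean squared error."""
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

# Hypothetical observed and predicted NH3-N concentrations (mg/L).
observed = np.array([0.42, 0.55, 0.61, 0.48, 0.39, 0.52])
predicted = np.array([0.44, 0.53, 0.58, 0.50, 0.41, 0.49])

print(f"R2={r2_score(observed, predicted):.3f} "
      f"MSE={mse(observed, predicted):.5f} "
      f"MAE={mae(observed, predicted):.4f}")
```

Because MSE squares each residual, it penalizes occasional large deviations (such as missed concentration peaks) more heavily than MAE, which is why the paper reports both.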

3.3. Comparison of NH3-N Prediction Performance Based on Different Feature Sets

To identify the key combinations of input variables that influence the prediction of ammonia nitrogen concentration, the LSTM model was run on the test dataset with different combinations of the nine input variables. The nine variables were sorted in descending order of their correlation coefficients with the target output, and input feature combinations were formed by cumulatively adding variables in that order, as shown in Figure 3.
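The ranking-and-accumulation procedure described above can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' code; the indicator names mirror those in Table 2, and the correlation structure is simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical water-quality matrix: columns are candidate input variables.
names = ["Temp", "pH", "DO", "KMnO4", "TP", "TN", "Cond", "Turb"]
X = rng.normal(size=(n, len(names)))
# Simulated NH3-N target, driven mostly by TP and TN plus noise.
target = 0.6 * X[:, 4] + 0.4 * X[:, 5] + 0.1 * rng.normal(size=n)

# Pearson correlation of each candidate variable with the target.
r = np.array([np.corrcoef(X[:, j], target)[0, 1] for j in range(len(names))])
order = np.argsort(-np.abs(r))  # strongest absolute correlation first

# Combination k uses the k most strongly correlated variables.
combinations = [[names[j] for j in order[:k]] for k in range(1, len(names) + 1)]
for k, combo in enumerate(combinations, start=1):
    print(f"Combination {k}: {combo}")
```

Each combination is then used to train and evaluate the model, so only one pass over the ranked list is needed rather than an exhaustive search over all variable subsets.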
Among the evaluation metrics, R2 indicates how well the model explains the target variable; it ranges from 0 to 1, with values approaching 1 indicating stronger explanatory power. Feature combination 6, comprising six variables, achieved the highest R2 of 0.82, underscoring its strong explanatory power over the target variable.
We also examined the mean squared error (MSE) and root mean squared error (RMSE), which measure the discrepancy between predictions and observations. Feature combination 6 again performed well, with an MSE of 0.0047 and an RMSE of 0.0655, reflecting good predictive accuracy.
Finally, the mean absolute error (MAE), the average absolute deviation between predictions and observations, was likewise low for feature combination 6, at 0.0460, indicating limited prediction bias.
Figure 6 and Table 3 together reveal further patterns. Feature combination 1, using a single indicator, achieved a high R2 of 0.79 with low MSE, RMSE, and MAE values, showing that even a single feature carries substantial explanatory power over the target variable. In contrast, combinations 2, 3, 4, and 7 showed lower R2 values and higher errors, reflecting weaker explanatory power and reduced predictive precision. Combinations 5, 8, and 9 performed consistently but with lower R2 and slightly higher MSE, RMSE, and MAE values than combination 6, indicating marginally reduced predictive accuracy.
Taken together, these results identify feature combination 6, a set of six variables, as the best performer across the evaluation metrics: it achieves the highest R2 alongside low MSE, RMSE, and MAE values. It is therefore a compelling set of input variables for further research and practical deployment.
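For reference, the selection of the best-performing combination can be reproduced programmatically from the metric values in Table 3:

```python
# Metric values copied from Table 3; keys are combination numbers.
results = {
    1: {"R2": 0.79, "MSE": 0.0045, "RMSE": 0.0674, "MAE": 0.0453},
    2: {"R2": 0.74, "MSE": 0.0053, "RMSE": 0.0730, "MAE": 0.0503},
    3: {"R2": 0.73, "MSE": 0.0048, "RMSE": 0.0695, "MAE": 0.0518},
    4: {"R2": 0.75, "MSE": 0.0052, "RMSE": 0.0722, "MAE": 0.0543},
    5: {"R2": 0.75, "MSE": 0.0067, "RMSE": 0.0816, "MAE": 0.0629},
    6: {"R2": 0.82, "MSE": 0.0047, "RMSE": 0.0655, "MAE": 0.0460},
    7: {"R2": 0.75, "MSE": 0.0066, "RMSE": 0.0814, "MAE": 0.0630},
    8: {"R2": 0.72, "MSE": 0.0081, "RMSE": 0.0900, "MAE": 0.0695},
    9: {"R2": 0.78, "MSE": 0.0057, "RMSE": 0.0752, "MAE": 0.0576},
}

# Combination 6 maximizes R2 and also has the lowest RMSE.
best = max(results, key=lambda k: results[k]["R2"])
print(best, results[best])  # prints: 6 and its metric values
```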

4. Discussion

4.1. Feasibility of Using the LSTM Model with Easily Measurable Water Quality Data to Predict Ammonia Nitrogen Concentrations

The LSTM model developed in this study establishes a nonlinear mapping between readily measurable water quality parameters (NH3-N, Temp, pH, DO, KMnO4, TP, TN, Cond, and Turb) and the target variable (NH3-N), enabling accurate prediction of ammonia nitrogen concentration in river systems. The predicted NH3-N concentrations closely track observed values from real-time data collected at river water monitoring sites, reaching an average R2 of 0.82. In particular, the model predicts NH3-N concentration peaks well, allowing reliable early warnings that mitigate the impact of elevated NH3-N levels on water quality. This capability is of great significance for intelligent monitoring and management of aquatic environments.
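The nonlinear mapping an LSTM learns is built from a gated recurrence over time steps. The sketch below implements a single LSTM cell's forward pass in NumPy to make that recurrence explicit; the dimensions, random weights, and linear readout are illustrative assumptions, not the study's trained configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(x_seq, W, U, b, h0, c0):
    """Run one LSTM cell over a sequence.

    x_seq: (T, n_in) inputs; W: (4H, n_in); U: (4H, H); b: (4H,).
    Returns the final hidden and cell states.
    """
    H = h0.shape[0]
    h, c = h0, c0
    for x_t in x_seq:
        z = W @ x_t + U @ h + b
        i = sigmoid(z[0:H])        # input gate: how much new info to admit
        f = sigmoid(z[H:2*H])      # forget gate: how much old state to keep
        o = sigmoid(z[2*H:3*H])    # output gate: how much state to expose
        g = np.tanh(z[3*H:4*H])    # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(1)
n_in, H, T = 9, 16, 24  # 9 water-quality parameters, 16 hidden units, 24-step window
x_seq = rng.normal(size=(T, n_in))           # hypothetical standardized inputs
W = rng.normal(scale=0.1, size=(4*H, n_in))  # random weights for illustration
U = rng.normal(scale=0.1, size=(4*H, H))
b = np.zeros(4*H)

h, c = lstm_forward(x_seq, W, U, b, np.zeros(H), np.zeros(H))
w_out = rng.normal(scale=0.1, size=H)
nh3n_pred = w_out @ h  # linear readout to a scalar NH3-N estimate
print(h.shape, float(nh3n_pred))
```

In practice such a cell is stacked and trained end to end (the paper tunes layer and neuron counts), but the gating above is what lets the model retain information across the periodic daily and monthly patterns described in Section 3.1.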
It is worth noting that, in contrast to its accuracy at concentration peaks, the model is slightly less effective at predicting NH3-N concentration valleys within specific time intervals, as shown in Figure 6c–i. During training, the model did not fully learn the data characteristics of NH3-N concentration troughs, so it cannot predict trough values accurately. Moreover, the low NH3-N concentrations in water samples during valley periods, combined with possible measurement deviations in Internet of Things (IoT) real-time monitoring devices, may reduce the accuracy of the raw data. These circumstances inevitably degrade the training data and, in turn, the model's predictive performance, so acquiring high-fidelity training data is critical for prediction precision. Furthermore, as Figure 6 suggests, incorporating additional input variables with substantial influence on the output could improve prediction accuracy. Exploring further indicators that affect NH3-N concentration in river water is therefore a meaningful avenue for enhancing the model's predictive capabilities.

4.2. Potential for Reducing Model Prediction Costs

In contrast to conventional mechanistic models, data-driven prediction methods overcome the time constraints of sample collection, analysis, and detection while reducing the demand for human, financial, and material resources. However, for indicators measured at intervals of minutes to a day, high-temporal-resolution sensors can be costly, notably in instrument probe maintenance. It is therefore important to identify key variables for model training that sacrifice little prediction performance; doing so improves the model's operational efficiency and reduces computational energy consumption and prediction cost. Our results indicate that a single input-output pairing is not sufficient for accurate NH3-N prediction, while iteratively adding input variables to find the combination with the best NH3-N prediction accuracy requires considerable time and effort. By contrast, Pearson correlation coefficient analysis efficiently identifies a subset of input variables that contribute materially to the model's output; in this study, these were NH3-N, pH, DO, KMnO4, TP, and TN (Figure 3). The input variable set can thus be adjusted according to the ordering of correlations, yielding an optimal input indicator combination selected on prediction performance.

5. Conclusions

In this study, a data-driven Long Short-Term Memory (LSTM) model was designed to predict NH3-N concentrations in river water networks, achieving good accuracy. An exploratory assessment of deep learning methods for NH3-N prediction showed that the data-driven model generalizes well, reaching an R2 of 0.82 for the optimal combination of input indicators. The model's performance could be further improved by tuning the number of layers and neurons in the LSTM. Pearson correlation coefficient analysis efficiently quantified the contributions of the input variables to the model's predictions, enriching our understanding of the deep learning results and facilitating model optimization. Overall, the proposed LSTM-based NH3-N prediction model overcomes the time and cost limitations of traditional monitoring methods and enables fast, low-cost modeling. It provides a feasible solution for early warning of high NH3-N concentrations in river water, helping water environmental management departments develop inspection plans and reduce water quality incidents caused by excessive NH3-N concentrations.
However, the proposed model has limitations. As with any deep learning approach, modeling effectiveness depends on the relationships between input and output variables. This study focused on the correlation between inputs and the output, overlooking correlations among the input variables themselves. That omission may exclude important variables: some features may carry complementary information that would improve prediction, and ignoring internal feature correlations can leave such features out, reducing the model's precision and performance. Conversely, retaining redundant, highly correlated features adds unnecessary complexity that hampers training and generalization. When features are correlated, the model may assign disproportionate weight to highly correlated features while sidelining weakly correlated ones, biasing feature attributions and under-using the available information. Over-reliance on specific features during training also amplifies the risk of overfitting. Feature engineering should therefore consider both input-output correlations and the correlations among input features, which could yield more comprehensive and accurate predictive models.
In summary, future research should focus on improving model performance, expanding application domains, streamlining workflows, and enhancing model interpretability to better support water quality environmental management and governance. To understand the model's performance at different times, we plan to incorporate seasonality and other temporal patterns as input features, capturing seasonal variations in water quality more accurately and providing precise data support for water quality management. We will also investigate data processing and feature selection methods such as principal component analysis and causal analysis to better understand the sources of performance differences, and continue optimizing the model to improve its generalization and robustness under diverse conditions [49]. In parallel, we will compare different deep learning models, streamline model algorithms, improve interpretability, and quantify model costs to keep workflows efficient while improving performance. This approach will better serve the needs of water quality prediction and water quality environmental management.

Author Contributions

Conceptualization, X.W., Y.L. (Yanchun Liang) and A.T.; methodology, M.Q.; software, X.W. and Y.L. (Ying Li); validation, A.T., X.W. and M.Q.; formal analysis, X.W. and A.T.; investigation, X.W.; resources, M.Q. and Q.Q.; data curation, X.W. and Y.L. (Ying Li); writing—original draft preparation, X.W.; writing—review and editing, Q.Q., A.T. and Y.L. (Yanchun Liang); visualization, X.W.; supervision, Y.L. (Yanchun Liang); project administration, Y.L. (Ying Li). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the NSFC (62372494, 61972174), the Guangdong Universities’ Innovation Team (2021KCXTD015), the Key Disciplines Projects (2021ZDJS138), and the Guangdong Provincial Junior Innovative Talents Project for Ordinary Universities (2022KQNCX146).

Data Availability Statement

Data will be provided by the first and corresponding authors upon reasonable request.

Acknowledgments

Thanks to Qingyue Data (data.epmap.org) for providing environmental data.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mohapatra, J.B.; Jha, P.; Jha, M.K.; Biswal, S. Efficacy of Machine Learning Techniques in Predicting Groundwater Fluctuations in Agro-Ecological Zones of India. Sci. Total Environ. 2021, 785, 147319.
2. Wang, S.; Peng, H.; Liang, S. Prediction of Estuarine Water Quality Using Interpretable Machine Learning Approach. J. Hydrol. 2022, 605, 127320.
3. Rostam, N.A.P.; Malim, N.H.A.H.; Abdullah, R.; Ahmad, A.L.; Ooi, B.S.; Chan, D.J.C. A Complete Proposed Framework for Coastal Water Quality Monitoring System with Algae Predictive Model. IEEE Access 2021, 9, 108249–108265.
4. Ransom, K.M.; Nolan, B.T.; Traum, J.A.; Faunt, C.C.; Bell, A.M.; Gronberg, J.A.M.; Wheeler, D.C.; Rosecrans, C.Z.; Jurgens, B.; Schwarz, G.E.; et al. A Hybrid Machine Learning Model to Predict and Visualize Nitrate Concentration throughout the Central Valley Aquifer, California, USA. Sci. Total Environ. 2017, 601–602, 1160–1172.
5. Mejía, L.; Barrios, M. Identifying Watershed Predictors of Surface Water Quality through Iterative Input Selection. Int. J. Environ. Sci. Technol. 2022, 20, 7201–7216.
6. Azrour, M.; Mabrouki, J.; Fattah, G.; Guezzaz, A.; Aziz, F. Machine Learning Algorithms for Efficient Water Quality Prediction. Model. Earth Syst. Environ. 2022, 8, 2793–2801.
7. Lin, K.; Zhu, Y.; Zhang, Y.; Lin, H. Determination of Ammonia Nitrogen in Natural Waters: Recent Advances and Applications. Trends Environ. Anal. Chem. 2019, 24, e00073.
8. Li, D.; Xu, X.; Li, Z.; Wang, T.; Wang, C. Detection Methods of Ammonia Nitrogen in Water: A Review. TrAC Trends Anal. Chem. 2020, 127, 115890.
9. Insausti, M.; Timmis, R.; Kinnersley, R.; Rufino, M.C. Advances in Sensing Ammonia from Agricultural Sources. Sci. Total Environ. 2020, 706, 135124.
10. Wan, H.; Xu, R.; Zhang, M.; Cai, Y.; Li, J.; Shen, X. A Novel Model for Water Quality Prediction Caused by Non-Point Sources Pollution Based on Deep Learning and Feature Extraction Methods. J. Hydrol. 2022, 612, 128081.
11. Du, Z.; Qi, J.; Wu, S.; Zhang, F.; Liu, R. A Spatially Weighted Neural Network Based Water Quality Assessment Method for Large-Scale Coastal Areas. Environ. Sci. Technol. 2021, 55, 2553–2563.
12. Hu, Z.; Zhang, Y.; Zhao, Y.; Xie, M.; Zhong, J.; Tu, Z.; Liu, J. A Water Quality Prediction Method Based on the Deep LSTM Network Considering Correlation in Smart Mariculture. Sensors 2019, 19, 1420.
13. Akbar, M.A.; Selvaganapathy, P.R.; Kruse, P. Nanocarbon Based Chemiresistive Detection of Monochloramine in Water. ECS Meet. Abstr. 2022, MA2022-01, 2137.
14. Kruse, P.; Akbar, M.A.; Sharif, O.; Selvaganapathy, P.R. Single-Walled Carbon Nanotube Chemiresistive Sensors for the Identification and Quantification of Disinfectants. In Proceedings of the ECS Meeting Abstracts; The Electrochemical Society, Inc.: Pennington, NJ, USA, 2021; Volume 2021, p. 1613.
15. Wei, L.; Zhang, Y.; Han, Y.; Zheng, J.; Xu, X.; Zhu, L. Effective Abatement of Ammonium and Nitrate Release from Sediments by Biochar Coverage. Sci. Total Environ. 2023, 899, 165710.
16. Yu, H.; Yang, L.; Li, D.; Chen, Y. A Hybrid Intelligent Soft Computing Method for Ammonia Nitrogen Prediction in Aquaculture. Inf. Process. Agric. 2021, 8, 64–74.
17. Jiang, Y.; Dong, X.; Li, Y.; Li, Y.; Liang, Y.; Zhang, M. An Environmentally-Benign Flow-Batch System for Headspace Single-Drop Microextraction and on-Drop Conductometric Detecting Ammonium. Talanta 2021, 224, 121849.
18. Zhao, Y.; Shi, R.; Bian, X.; Zhou, C.; Zhao, Y.; Zhang, S.; Wu, F.; Waterhouse, G.I.; Wu, L.-Z.; Tung, C.-H.; et al. Ammonia Detection Methods in Photocatalytic and Electrocatalytic Experiments: How to Improve the Reliability of NH3 Production Rates? Adv. Sci. 2019, 6, 1802109.
19. Zhang, S.-Z.; Chen, S.; Jiang, H. A Back Propagation Neural Network Model for Accurately Predicting the Removal Efficiency of Ammonia Nitrogen in Wastewater Treatment Plants Using Different Biological Processes. Water Res. 2022, 222, 118908.
20. Wang, X.; Li, Y.; Qiao, Q.; Tavares, A.; Liang, Y. Water Quality Prediction Based on Machine Learning and Comprehensive Weighting Methods. Entropy 2023, 25, 1186.
21. Yu, X.; Cui, T.; Sreekanth, J.; Mangeon, S.; Doble, R.; Xin, P.; Rassam, D.; Gilfedder, M. Deep Learning Emulators for Groundwater Contaminant Transport Modelling. J. Hydrol. 2020, 590, 125351.
22. Jiang, Y.; Li, C.; Song, H.; Wang, W. Deep Learning Model Based on Urban Multi-Source Data for Predicting Heavy Metals (Cu, Zn, Ni, Cr) in Industrial Sewer Networks. J. Hazard. Mater. 2022, 432, 128732.
23. Jiang, Y.; Li, C.; Zhang, Y.; Zhao, R.; Yan, K.; Wang, W. Data-Driven Method Based on Deep Learning Algorithm for Detecting Fat, Oil, and Grease (FOG) of Sewer Networks in Urban Commercial Areas. Water Res. 2021, 207, 117797.
24. Kumar, L.; Afzal, M.S.; Ahmad, A. Prediction of Water Turbidity in a Marine Environment Using Machine Learning: A Case Study of Hong Kong. Reg. Stud. Mar. Sci. 2022, 52, 102260.
25. Wang, K.; Band, S.S.; Ameri, R.; Biyari, M.; Hai, T.; Hsu, C.-C.; Hadjouni, M.; Elmannai, H.; Chau, K.-W.; Mosavi, A. Performance Improvement of Machine Learning Models via Wavelet Theory in Estimating Monthly River Streamflow. Eng. Appl. Comput. Fluid Mech. 2022, 16, 1833–1848.
26. Zhi, W.; Feng, D.; Tsai, W.-P.; Sterle, G.; Harpold, A.; Shen, C.; Li, L. From Hydrometeorology to River Water Quality: Can a Deep Learning Model Predict Dissolved Oxygen at the Continental Scale? Environ. Sci. Technol. 2021, 55, 2357–2368.
27. Tung, T.M.; Yaseen, Z.M. A Survey on River Water Quality Modelling Using Artificial Intelligence Models: 2000–2020. J. Hydrol. 2020, 585, 124670.
28. Diez-Gonzalez, J.; Alvarez, R.; Prieto-Fernandez, N.; Perez, H. Local Wireless Sensor Networks Positioning Reliability under Sensor Failure. Sensors 2020, 20, 1426.
29. Gaddam, A.; Wilkin, T.; Angelova, M.; Gaddam, J. Detecting Sensor Faults, Anomalies and Outliers in the Internet of Things: A Survey on the Challenges and Solutions. Electronics 2020, 9, 511.
30. Huang, G. Missing Data Filling Method Based on Linear Interpolation and Lightgbm. In Proceedings of the Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2021; Volume 1754, p. 012187.
31. Park, I.; Kim, H.S.; Lee, J.; Kim, J.H.; Song, C.H.; Kim, H.K. Temperature Prediction Using the Missing Data Refinement Model Based on a Long Short-Term Memory Neural Network. Atmosphere 2019, 10, 718.
32. Li, Y.; Kong, B.; Yu, W.; Zhu, X. An Attention-Based CNN-LSTM Method for Effluent Wastewater Quality Prediction. Appl. Sci. 2023, 13, 7011.
33. Kayhomayoon, Z.; Arya Azar, N.; Ghordoyee Milan, S.; Kardan Moghaddam, H.; Berndtsson, R. Novel Approach for Predicting Groundwater Storage Loss Using Machine Learning. J. Environ. Manag. 2021, 296, 113237.
34. Zavareh, M.; Maggioni, V. Application of Rough Set Theory to Water Quality Analysis: A Case Study. Data 2018, 3, 50.
35. Li, Q.; Yang, Y.; Yang, L.; Wang, Y. Comparative Analysis of Water Quality Prediction Performance Based on LSTM in the Haihe River Basin, China. Environ. Sci. Pollut. Res. 2022, 30, 7498–7509.
36. Barzegar, R.; Razzagh, S.; Quilty, J.; Adamowski, J.; Pour, H.K.; Booij, M.J. Improving GALDIT-Based Groundwater Vulnerability Predictive Mapping Using Coupled Resampling Algorithms and Machine Learning Models. J. Hydrol. 2021, 598, 126370.
37. Zhang, Y.; Li, C.; Jiang, Y.; Sun, L.; Zhao, R.; Yan, K.; Wang, W. Accurate Prediction of Water Quality in Urban Drainage Network with Integrated EMD-LSTM Model. J. Clean. Prod. 2022, 354, 131724.
38. Yu, Q. Enhancing Streamflow Simulation Using Hybridized Machine Learning Models in a Semi-Arid Basin of the Chinese Loess Plateau. J. Hydrol. 2023, 617, 129115.
39. Zhao, Z.; Wang, Z.; Yuan, J.; Ma, J.; He, Z.; Xu, Y.; Shen, X.; Zhu, L. Development of a Novel Feedforward Neural Network Model Based on Controllable Parameters for Predicting Effluent Total Nitrogen. Engineering 2021, 7, 195–202.
40. Deng, T.; Chau, K.-W.; Duan, H.-F. Machine Learning Based Marine Water Quality Prediction for Coastal Hydro-Environment Management. J. Environ. Manag. 2021, 284, 112051.
41. Li, H.; Zhang, G.; Zhu, Y.; Kaufmann, H.; Xu, G. Inversion and Driving Force Analysis of Nutrient Concentrations in the Ecosystem of the Shenzhen-Hong Kong Bay Area. Remote Sens. 2022, 14, 3694.
42. Hadjisolomou, E.; Stefanidis, K.; Herodotou, H.; Michaelides, M.; Papatheodorou, G.; Papastergiadou, E. Modelling Freshwater Eutrophication with Limited Limnological Data Using Artificial Neural Networks. Water 2021, 13, 1590.
43. Ranjan, G.; Verma, A.K.; Radhika, S. K-Nearest Neighbors and Grid Search Cv Based Real Time Fault Monitoring System for Industries. In Proceedings of the 2019 IEEE 5th International Conference for Convergence in Technology (I2CT), Bombay, India, 29–31 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5.
44. Ahmad, G.N.; Fatima, H.; Ullah, S.; Saidi, A.S. Efficient Medical Diagnosis of Human Heart Diseases Using Machine Learning Techniques with and without GridSearchCV. IEEE Access 2022, 10, 80151–80173.
45. Alhakeem, Z.M.; Jebur, Y.M.; Henedy, S.N.; Imran, H.; Bernardo, L.F.; Hussein, H.M. Prediction of Ecofriendly Concrete Compressive Strength Using Gradient Boosting Regression Tree Combined with GridSearchCV Hyperparameter-Optimization Techniques. Materials 2022, 15, 7432.
46. Xu, D.; Zhang, S.; Zhang, H.; Mandic, D.P. Convergence of the RMSProp Deep Learning Method with Penalty for Nonconvex Optimization. Neural Netw. 2021, 139, 17–23.
47. Kumar, A.; Sarkar, S.; Pradhan, C. Malaria Disease Detection Using Cnn Technique with Sgd, Rmsprop and Adam Optimizers. In Deep Learning Techniques for Biomedical and Health Informatics; Springer: Cham, Switzerland, 2020; pp. 211–230.
48. Shi, N.; Li, D.; Hong, M.; Sun, R. RMSprop Converges with Proper Hyper-Parameter. In Proceedings of the International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020.
49. Zavareh, M.; Maggioni, V.; Sokolov, V. Investigating Water Quality Data Using Principal Component Analysis and Granger Causality. Water 2021, 13, 343.
Figure 1. Spatial distribution of monitoring points within the study area.
Figure 2. Temporal variation curves of water quality parameters.
Figure 3. (a) Pearson correlation coefficient between each indicator and ammonia nitrogen; (b) a multiple indicator dataset with progressive accumulation of Pearson correlation coefficient values.
Figure 4. Temporal variations of NH3-N within the study area at different time scales.
Figure 5. Learning curves and prediction results of the LSTM model on the validation dataset, along with the corresponding R2 values: (a) learning curve (MAE); (b) learning curve (MSE); (c) observed and predicted NH3-N concentrations, along with the corresponding R2 value.
Figure 6. Comparison of observed and predicted NH3-N concentrations on different feature sets on the test dataset, along with the corresponding R2 values: (a) prediction results of combination 1; (b) prediction results of combination 2; (c) prediction results of combination 3; (d) prediction results of combination 4; (e) prediction results of combination 5; (f) prediction results of combination 6; (g) prediction results of combination 7; (h) prediction results of combination 8; (i) prediction results of combination 9.
Table 1. Summary statistics of water quality parameters.
Variables | Units | Max | Min | Mean | Std
NH3-N | mg/L | 3.095 | 0.025 | 0.462 | 0.462
Temp | °C | 32.630 | 13.400 | 24.617 | 4.754
pH | - | 9.445 | 6.406 | 7.610 | 0.572
DO | mg/L | 28.241 | 1.010 | 8.254 | 3.889
KMnO4 | mg/L | 10.470 | 1.120 | 3.781 | 1.626
TP | mg/L | 0.319 | 0.031 | 0.111 | 0.050
TN | mg/L | 5.755 | 1.945 | 3.276 | 0.685
Cond | μs/cm | 2988.800 | 214.301 | 1261.352 | 977.574
Turb | NTU | 268.377 | 5.302 | 53.190 | 34.524
Table 2. Pearson correlation coefficient table.
Variables | NH3-N | Temp | pH | DO | KMnO4 | TP | TN | Cond | Turb
NH3-N | 1 | | | | | | | |
Temp | 0.026 | 1 | | | | | | |
pH | −0.420 | −0.331 | 1 | | | | | |
DO | −0.394 | −0.568 | 0.790 | 1 | | | | |
KMnO4 | 0.209 | −0.326 | 0.527 | 0.511 | 1 | | | |
TP | 0.613 | 0.426 | −0.409 | −0.485 | 0.140 | 1 | | |
TN | 0.447 | −0.199 | 0.199 | 0.057 | 0.547 | 0.466 | 1 | |
Cond | −0.038 | −0.617 | 0.627 | 0.628 | 0.760 | −0.243 | 0.420 | 1 |
Turb | −0.022 | 0.362 | −0.345 | −0.354 | −0.203 | 0.333 | −0.149 | −0.418 | 1
Table 3. Summary statistics of feature sets and their corresponding evaluation metric values.
Indicator Combination | R2 | MSE | RMSE | MAE
Combination 1 | 0.79 | 0.0045 | 0.0674 | 0.0453
Combination 2 | 0.74 | 0.0053 | 0.0730 | 0.0503
Combination 3 | 0.73 | 0.0048 | 0.0695 | 0.0518
Combination 4 | 0.75 | 0.0052 | 0.0722 | 0.0543
Combination 5 | 0.75 | 0.0067 | 0.0816 | 0.0629
Combination 6 | 0.82 | 0.0047 | 0.0655 | 0.0460
Combination 7 | 0.75 | 0.0066 | 0.0814 | 0.0630
Combination 8 | 0.72 | 0.0081 | 0.0900 | 0.0695
Combination 9 | 0.78 | 0.0057 | 0.0752 | 0.0576
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Citation
Wang, X.; Qiao, M.; Li, Y.; Tavares, A.; Qiao, Q.; Liang, Y. Deep-Learning-Based Water Quality Monitoring and Early Warning Methods: A Case Study of Ammonia Nitrogen Prediction in Rivers. Electronics 2023, 12, 4645. https://doi.org/10.3390/electronics12224645
