Deep Reinforcement Learning for Multi-Objective Real-Time Pump Operation in Rainwater Pumping Stations
Abstract
1. Introduction
- We define a multi-objective optimization problem that considers both the minimization of retention basin water levels and the reduction of maintenance costs due to pump switching. To address this, we develop a Double Deep Q-Network (DDQN) model.
- By incorporating a time-series-aware agent and designing an effective reward function (a minimal sketch of such a reward follows this list), our experimental results demonstrate that the proposed model maintains lower water levels in the retention basin than rule-based pump policies while simultaneously minimizing maintenance costs.
- We accurately modeled the pump and retention basin environment of the Gasan pumping station in Seoul and compared the DDQN-based approach with the rule-based method currently in use, providing insights for potential operational improvements.
- We developed a control system that can respond effectively to rainfall fluctuations resulting from climate change and rapid urbanization. The system was tested on synthetic extreme rainfall scenarios rather than typical rainfall to ensure robust performance under severe weather conditions.
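The reward formulation itself is detailed in Section 2.4.7. As a minimal, hedged sketch of how a multi-objective reward for this problem could be structured (the function name, the normalization constants, and the default weights wₑ and wₚ are illustrative assumptions, not the paper's exact formulation):

```python
# Illustrative multi-objective reward: penalize high basin water levels and
# frequent pump switching. All constants and names here are assumptions made
# for demonstration, not the exact reward used in the paper.
def reward(water_level_m, prev_action, action, w_e=1.0, w_p=2.0, max_level_m=10.0):
    level_penalty = -water_level_m / max_level_m            # 0 (empty) to -1 (crest)
    switches = sum(a != b for a, b in zip(prev_action, action))
    switch_penalty = -switches / len(action)                 # fraction of pumps toggled
    return w_e * level_penalty + w_p * switch_penalty
```

The weight pair (wₑ = 1, wₚ = 2) corresponds to one of the configurations compared against wₑ = 1 alone in the results tables below.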
2. Materials and Methods
2.1. Modeling the Pumping Station
2.2. Rainfall Data
2.3. Problem Formulation
- [Multi-Objective Pump Combination Selection Problem]
2.4. Double Deep Q-Network for Pumping Systems
2.4.1. Reinforcement Learning
2.4.2. Double Deep Q-Network
2.4.3. Gated Recurrent Unit
2.4.4. Model Configuration
2.4.5. States
2.4.6. Actions
2.4.7. Reward Function
3. Results and Discussion
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Models | Advantages | Disadvantages |
|---|---|---|
| Rule-Based Algorithms | | |
| Metaheuristics | | |
| Model Predictive Control | | |
| Deep Reinforcement Learning | | |
| Height (m) | Water Volume (m³) |
|---|---|
| 4.7 | 0.0 |
| 5.8 | 1000.0 |
| 6.3 | 2407.0 |
| 7.0 | 4452.0 |
| 8.0 | 8105.0 |
| 8.2 | 1008.0 |
| 10.0 | 16000.0 |
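The pairs above define the basin's stage-storage relationship. A minimal sketch of a stage-storage lookup follows; the function name and the choice of linear interpolation are assumptions, and the 8.2 m row is reproduced exactly as printed.

```python
import numpy as np

# Height-volume pairs from the table above (the 8.2 m row is copied as printed;
# its volume is not monotonic with the neighbouring rows, so it may be a
# transcription artifact and could be excluded in practice).
heights_m  = np.array([4.7, 5.8, 6.3, 7.0, 8.0, 8.2, 10.0])
volumes_m3 = np.array([0.0, 1000.0, 2407.0, 4452.0, 8105.0, 1008.0, 16000.0])

def storage_volume(level_m: float) -> float:
    """Linearly interpolate the stored volume (m^3) for a given water level (m)."""
    return float(np.interp(level_m, heights_m, volumes_m3))

print(storage_volume(6.0))  # level between the 5.8 m and 6.3 m rows
```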
| Pumping Operation Type | Representation |
|---|---|
| Action 1: no pumping | [0, 0, 0, 0, 0] |
| Action 2: one 100 m³/min pump in operation | [1, 0, 0, 0, 0] |
| Action 3: two 100 m³/min pumps in operation | [1, 1, 0, 0, 0] |
| Action 4: three 100 m³/min pumps in operation | [1, 1, 1, 0, 0] |
| Action 5: four pumps in operation (three 100 m³/min + one 170 m³/min) | [1, 1, 1, 1, 0] |
| Action 6: all pumps operating | [1, 1, 1, 1, 1] |
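As a small sketch of this discrete action space (the capacity list and the helper name are assumptions inferred from the action descriptions above):

```python
# Pump capacities (m^3/min): three 100 m^3/min pumps and two 170 m^3/min pumps,
# as implied by the action descriptions above.
PUMP_CAPACITIES = [100.0, 100.0, 100.0, 170.0, 170.0]

# Discrete action space: index -> on/off state of each pump.
ACTIONS = [
    [0, 0, 0, 0, 0],  # Action 1: no pumping
    [1, 0, 0, 0, 0],  # Action 2
    [1, 1, 0, 0, 0],  # Action 3
    [1, 1, 1, 0, 0],  # Action 4
    [1, 1, 1, 1, 0],  # Action 5
    [1, 1, 1, 1, 1],  # Action 6: all pumps operating
]

def total_discharge(action_index: int) -> float:
    """Combined pumping rate (m^3/min) for a given 0-based action index."""
    return sum(c * s for c, s in zip(PUMP_CAPACITIES, ACTIONS[action_index]))

print(total_discharge(4))  # Action 5 -> 470.0 m^3/min
```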
| Water Level (m) When Rising | Pumping Operation Type | Water Level (m) When Dropping | Pumping Operation Type |
|---|---|---|---|
| 6.2 | Action 1 | 5.9 | Action 4 |
| 6.3 | Action 2 | 5.8 | Action 3 |
| 6.4 | Action 3 | 5.7 | Action 2 |
| 6.5 | Action 4 | 5.6 | Action 1 |
| 6.6 | Action 5 | 5.5 | Action 0 |
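A minimal sketch of such a hysteresis-style rule, using the thresholds printed above; the interpretation of "Action 0" as no pumping, the tie-breaking logic, and the function name are assumptions.

```python
# Threshold-to-operation pairs taken from the rule table above.
RISING_RULES   = [(6.2, 1), (6.3, 2), (6.4, 3), (6.5, 4), (6.6, 5)]
DROPPING_RULES = [(5.9, 4), (5.8, 3), (5.7, 2), (5.6, 1), (5.5, 0)]

def rule_based_action(level_m: float, prev_level_m: float, prev_action: int) -> int:
    """Return the pumping operation type for the current water level."""
    if level_m >= prev_level_m:  # water level rising (or steady)
        exceeded = [a for threshold, a in RISING_RULES if level_m >= threshold]
        return max(exceeded) if exceeded else prev_action
    # water level dropping
    undershot = [a for threshold, a in DROPPING_RULES if level_m <= threshold]
    return min(undershot) if undershot else prev_action

print(rule_based_action(6.45, 6.30, 2))  # rising past 6.4 m -> operation type 3
```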
Duration-specific probability rainfall (mm):

| Frequency (Years) | 60 min | 120 min | 180 min | 240 min | 360 min | 540 min | 720 min | 1080 min | 1440 min |
|---|---|---|---|---|---|---|---|---|---|
| 10 | 64.8 | 87.1 | 101.4 | 114.8 | 137.0 | 158.4 | 169.1 | 186.6 | 198.8 |
| 20 | 73.6 | 99.3 | 115.6 | 131.1 | 157.0 | 181.9 | 193.3 | 213.1 | 226.9 |
| 30 | 78.6 | 106.3 | 123.8 | 140.5 | 168.5 | 195.5 | 207.2 | 228.4 | 243.0 |
| 50 | 84.9 | 115.1 | 134.0 | 152.2 | 183.0 | 212.4 | 224.6 | 247.4 | 263.2 |
| 80 | 90.6 | 123.1 | 143.3 | 163.0 | 196.1 | 227.8 | 240.5 | 264.8 | 281.6 |
| 100 | 93.3 | 126.9 | 147.8 | 168.1 | 202.4 | 235.2 | 248.0 | 273.1 | 290.4 |
| Huff Quartile | c₀ | c₁ | c₂ | c₃ | c₄ | c₅ |
|---|---|---|---|---|---|---|
| First quartile | 0.5462 | 0.1414 | −0.005158 | 7.948 × 10⁻⁵ | −5.774 × 10⁻⁷ | 1.615 × 10⁻⁹ |
| Second quartile | 0.4219 | −0.03800 | 0.004340 | −1.041 × 10⁻⁴ | 9.786 × 10⁻⁷ | −3.269 × 10⁻⁹ |
| Third quartile | −0.1844 | 0.08131 | −0.004237 | 1.042 × 10⁻⁴ | −1.082 × 10⁻⁶ | 3.941 × 10⁻⁹ |
| Fourth quartile | 0.4736 | −0.04096 | 0.002784 | −6.970 × 10⁻⁵ | 7.689 × 10⁻⁷ | −3.041 × 10⁻⁹ |
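These coefficients appear to be polynomial fits of the Huff quartile rainfall distributions. The sketch below evaluates such a fit, assuming the columns are the constant (c₀) through fifth-order (c₅) terms of a polynomial in elapsed-time percent (0 to 100) that returns the cumulative rainfall fraction; this ordering and scaling are assumptions, as are the names.

```python
# Huff-quartile polynomial coefficients from the table above. Assumption: the
# columns are c0..c5 of a fit evaluated at the elapsed-time percent (0-100),
# returning the cumulative rainfall fraction.
HUFF_COEFFS = {
    "first":  [0.5462, 0.1414, -0.005158, 7.948e-5, -5.774e-7, 1.615e-9],
    "second": [0.4219, -0.03800, 0.004340, -1.041e-4, 9.786e-7, -3.269e-9],
    "third":  [-0.1844, 0.08131, -0.004237, 1.042e-4, -1.082e-6, 3.941e-9],
    "fourth": [0.4736, -0.04096, 0.002784, -6.970e-5, 7.689e-7, -3.041e-9],
}

def huff_cumulative(t_percent: float, quartile: str = "second") -> float:
    """Evaluate the fitted distribution at t_percent of the storm duration."""
    return sum(c * t_percent ** i for i, c in enumerate(HUFF_COEFFS[quartile]))

print(huff_cumulative(100.0, "first"))  # close to 1.0 under the assumed reading
```

Differencing consecutive evaluations and scaling by a design depth from the frequency table (for example, 93.3 mm for the 100-year, 60-min event) would then yield a synthetic hyetograph for the simulation.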
| Parameters | Values |
|---|---|
| Loss function | Mean squared error |
| Learning rate | 1 × 10⁻³ |
| Optimizer | Adam |
| Discount factor (γ) | 0.4 |
| Epsilon (ε) | Initial value 1.0, final value 0.2; decreases by 1/(number of current samples) |
| Replay memory size | 100 |
| Batch size | 20 |
| Target network parameter update period | 10 samples (episodes) |
| Training epochs | 2552 (the number of training samples) |
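Below is a minimal PyTorch sketch of a Double DQN update step wired to the hyperparameters above. The small feed-forward network stands in for the paper's GRU-based architecture, and the state dimension, the replay-tuple layout, and the reading of the epsilon schedule are assumptions.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Hyperparameters from the table above; the architecture and names are placeholders.
GAMMA = 0.4
LEARNING_RATE = 1e-3
REPLAY_MEMORY_SIZE = 100
BATCH_SIZE = 20
TARGET_UPDATE_PERIOD = 10  # episodes

n_state_features, n_actions = 4, 6  # assumed sizes for illustration
online_net = nn.Sequential(nn.Linear(n_state_features, 32), nn.ReLU(),
                           nn.Linear(32, n_actions))
target_net = nn.Sequential(nn.Linear(n_state_features, 32), nn.ReLU(),
                           nn.Linear(32, n_actions))
target_net.load_state_dict(online_net.state_dict())

optimizer = torch.optim.Adam(online_net.parameters(), lr=LEARNING_RATE)
loss_fn = nn.MSELoss()
# replay_memory stores (state, action, reward, next_state, done) tuples
replay_memory = deque(maxlen=REPLAY_MEMORY_SIZE)

def epsilon(sample_count: int) -> float:
    # One reading of "decreases by 1/(number of current samples)", floored at 0.2.
    return max(0.2, 1.0 / max(1, sample_count))

def ddqn_update():
    """One Double DQN gradient step on a minibatch drawn from replay memory."""
    if len(replay_memory) < BATCH_SIZE:
        return
    batch = random.sample(replay_memory, BATCH_SIZE)
    states, actions, rewards, next_states, dones = map(
        lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch))
    actions = actions.long()
    with torch.no_grad():
        # Double DQN: online network selects the next action, target network evaluates it.
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + GAMMA * (1.0 - dones) * next_q
    q_values = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = loss_fn(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Every TARGET_UPDATE_PERIOD episodes:
#     target_net.load_state_dict(online_net.state_dict())
```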
Maximum water level (m) in the retention basin by rainfall duration and return period (years):

| Duration (min) | Model | 10 | 20 | 30 | 50 | 80 | 100 |
|---|---|---|---|---|---|---|---|
| 60 | DDQN (wₑ = 1) | 6.74 | 7.31 | 7.84 | 8.11 | 8.33 | 8.49 |
| | DDQN (wₑ = 1, wₚ = 2) | 6.85 | 7.45 | 7.81 | 8.10 | 8.30 | 8.52 |
| | Rule-based | 7.57 | 8.05 | 8.34 | 8.77 | 9.27 | 9.37 |
| 120 | DDQN (wₑ = 1) | 6.96 | 7.77 | 8.19 | 8.89 | 9.55 | 9.79 |
| | DDQN (wₑ = 1, wₚ = 2) | 7.08 | 7.83 | 8.16 | 8.82 | 9.43 | 9.81 |
| | Rule-based | 8.06 | 8.78 | 9.08 | 9.57 | 9.95 | 10.00 |
| 180 | DDQN (wₑ = 1) | 6.72 | 7.80 | 7.99 | 8.97 | 9.65 | 9.83 |
| | DDQN (wₑ = 1, wₚ = 2) | 6.47 | 7.73 | 8.15 | 8.83 | 9.66 | 9.75 |
| | Rule-based | 7.57 | 8.51 | 9.04 | 9.68 | 9.90 | 10.00 |
| 240 | DDQN (wₑ = 1) | 6.36 | 7.70 | 7.93 | 8.94 | 9.74 | 9.86 |
| | DDQN (wₑ = 1, wₚ = 2) | 6.28 | 7.46 | 8.01 | 8.95 | 9.43 | 9.83 |
| | Rule-based | 7.29 | 8.22 | 9.07 | 9.49 | 9.90 | 10.00 |
| 360 | DDQN (wₑ = 1) | 5.32 | 6.64 | 7.69 | 8.50 | 9.19 | 9.58 |
| | DDQN (wₑ = 1, wₚ = 2) | 5.27 | 6.74 | 7.57 | 8.40 | 9.35 | 9.17 |
| | Rule-based | 6.73 | 7.56 | 8.38 | 9.36 | 9.86 | 9.94 |
| 540 | DDQN (wₑ = 1) | 4.70 | 5.00 | 5.79 | 6.62 | 7.62 | 8.20 |
| | DDQN (wₑ = 1, wₚ = 2) | 4.70 | 4.91 | 5.51 | 6.84 | 7.67 | 8.18 |
| | Rule-based | 6.59 | 6.81 | 7.14 | 7.43 | 8.32 | 8.55 |
| 720 | DDQN (wₑ = 1) | 4.70 | 4.70 | 4.70 | 4.83 | 5.07 | 5.24 |
| | DDQN (wₑ = 1, wₚ = 2) | 4.70 | 4.70 | 4.70 | 4.96 | 4.83 | 5.52 |
| | Rule-based | 6.51 | 6.56 | 6.61 | 6.62 | 6.67 | 6.76 |
| 1080 | DDQN (wₑ = 1) | 4.70 | 4.70 | 4.70 | 4.70 | 4.70 | 4.70 |
| | DDQN (wₑ = 1, wₚ = 2) | 4.70 | 4.70 | 4.70 | 4.70 | 4.70 | 4.70 |
| | Rule-based | 6.46 | 6.51 | 6.52 | 6.55 | 6.55 | 6.58 |
| 1440 | DDQN (wₑ = 1) | 4.70 | 4.70 | 4.70 | 4.70 | 4.70 | 4.70 |
| | DDQN (wₑ = 1, wₚ = 2) | 4.70 | 4.70 | 4.70 | 4.70 | 4.70 | 4.70 |
| | Rule-based | 6.41 | 6.41 | 6.42 | 6.51 | 6.51 | 6.52 |
Number of pump changes by rainfall duration and return period (years):

| Duration (min) | Model | 10 | 20 | 30 | 50 | 80 | 100 |
|---|---|---|---|---|---|---|---|
| 60 | DDQN (wₑ = 1) | 14.77 | 13.54 | 14.15 | 13.85 | 12.39 | 10.92 |
| | DDQN (wₑ = 1, wₚ = 2) | 7.15 | 7.15 | 7.92 | 8.39 | 8.54 | 7.77 |
| | Rule-based | 9.92 | 9.92 | 9.39 | 7.77 | 5.31 | 5.69 |
| 120 | DDQN (wₑ = 1) | 14.92 | 12.92 | 12.15 | 10.77 | 10.00 | 8.46 |
| | DDQN (wₑ = 1, wₚ = 2) | 8.69 | 8.77 | 8.85 | 9.62 | 8.69 | 8.39 |
| | Rule-based | 9.92 | 8.08 | 9.08 | 8.85 | 6.15 | 6.92 |
| 180 | DDQN (wₑ = 1) | 15.39 | 14.54 | 14.23 | 13.31 | 12.77 | 13.62 |
| | DDQN (wₑ = 1, wₚ = 2) | 8.69 | 8.23 | 8.54 | 9.15 | 9.00 | 9.15 |
| | Rule-based | 10.00 | 9.31 | 9.15 | 8.85 | 8.08 | 8.08 |
| 240 | DDQN (wₑ = 1) | 18.00 | 17.08 | 16.39 | 14.15 | 13.15 | 15.15 |
| | DDQN (wₑ = 1, wₚ = 2) | 8.85 | 9.00 | 8.54 | 8.54 | 8.39 | 10.54 |
| | Rule-based | 10.46 | 10.31 | 8.46 | 9.23 | 8.77 | 8.31 |
| 360 | DDQN (wₑ = 1) | 17.46 | 16.46 | 17.69 | 20.00 | 15.46 | 17.00 |
| | DDQN (wₑ = 1, wₚ = 2) | 9.31 | 9.00 | 9.62 | 9.54 | 9.46 | 8.69 |
| | Rule-based | 10.31 | 10.23 | 10.54 | 8.54 | 8.46 | 7.31 |
| 540 | DDQN (wₑ = 1) | 16.69 | 16.39 | 17.00 | 17.46 | 17.92 | 17.77 |
| | DDQN (wₑ = 1, wₚ = 2) | 10.85 | 10.08 | 9.85 | 10.39 | 10.85 | 11.00 |
| | Rule-based | 10.77 | 12.31 | 11.23 | 11.62 | 10.77 | 12.62 |
| 720 | DDQN (wₑ = 1) | 17.15 | 17.31 | 18.54 | 17.62 | 18.69 | 17.92 |
| | DDQN (wₑ = 1, wₚ = 2) | 11.85 | 11.23 | 11.69 | 11.39 | 10.54 | 10.46 |
| | Rule-based | 12.00 | 10.54 | 11.62 | 12.92 | 13.39 | 13.23 |
| 1080 | DDQN (wₑ = 1) | 17.31 | 17.31 | 18.08 | 17.15 | 18.85 | 17.46 |
| | DDQN (wₑ = 1, wₚ = 2) | 14.15 | 12.69 | 12.46 | 13.08 | 11.62 | 11.62 |
| | Rule-based | 12.15 | 14.00 | 14.15 | 14.69 | 13.85 | 14.62 |
| 1440 | DDQN (wₑ = 1) | 16.39 | 18.08 | 17.77 | 17.31 | 17.46 | 17.92 |
| | DDQN (wₑ = 1, wₚ = 2) | 13.69 | 13.46 | 14.92 | 13.85 | 13.77 | 14.15 |
| | Rule-based | 18.77 | 17.23 | 14.69 | 14.69 | 17.69 | 19.00 |
| Models | Average Maximum Water Level (m) | Average Number of Pump Changes |
|---|---|---|
| DDQN (wₑ = 1) | 6.85 | 15.78 |
| DDQN (wₑ = 1, wₚ = 2) | 6.83 | 10.22 |
| Rule-based | 7.96 | 10.93 |
Share and Cite
Joo, J.-G.; Jeong, I.-S.; Kang, S.-H. Deep Reinforcement Learning for Multi-Objective Real-Time Pump Operation in Rainwater Pumping Stations. Water 2024, 16, 3398. https://doi.org/10.3390/w16233398