Article

A Transformer-Based Approach to Leakage Detection in Water Distribution Networks

1 Computer Science and Electronic Engineering, Hunan University, Changsha 410082, China
2 Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore 138632, Singapore
3 School of Internet of Things Engineering, Jiangnan University, Wuxi 214000, China
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(19), 6294; https://doi.org/10.3390/s24196294
Submission received: 5 September 2024 / Revised: 22 September 2024 / Accepted: 24 September 2024 / Published: 28 September 2024
(This article belongs to the Section Electronic Sensors)

Abstract

The efficient detection of leakages in water distribution networks (WDNs) is crucial to ensuring municipal water supply safety and improving urban operations. Traditionally, machine learning methods such as Convolutional Neural Networks (CNNs) and Autoencoders (AEs) have been used for leakage detection. However, these methods rely heavily on local pressure information and often fail to capture long-term dependencies in pressure series. In this paper, we propose a transformer-based model for detecting leakages in WDNs. The transformer incorporates an attention mechanism to learn data distributions and account for correlations between historical pressure data and data from the same time on different days, thereby emphasizing long-term dependencies in pressure series. Additionally, we normalize the pressure data within each leakage scenario and concatenate position embeddings with the pressure data in the transformer model so that the positional information does not distort the pressure features. The performance of the proposed method is evaluated using detection accuracy and F1-score. Experimental studies conducted on simulated pressure datasets from three different WDNs demonstrate that the transformer-based model significantly outperforms traditional CNN methods.

1. Introduction

Water distribution pipes in a city can span hundreds of kilometers and connect thousands of nodes, creating a complex network that facilitates urban water supply. These pipes are usually buried underground, and many have been in service for decades. Pipe aging and external forces are the main risk factors, causing significant damage and leakages in WDNs. It is reported that nearly one-third of drinking water is wasted annually due to pipe leakages [1]. Additionally, pipe leakages can cause drops in in-pipe water pressure, resulting in the intrusion of contaminants and pathogens, which can lead to serious public health issues [2]. Detecting these leakages is challenging because leak points are usually not observable and local water loss is difficult to distinguish from highly fluctuating daily demand data. However, advancements in hydraulic sensors and communication technologies have enabled the deployment of comprehensive data acquisition systems. These systems collect large volumes of time-series data, such as pressure and flow, to profile the state of hydraulic dynamics and detect pipe bursts and leakages [3,4], as shown in Figure 1. Solar energy or mains electricity [5] is typically used to power the remote terminal units to ensure continuous operation.
Based on sensor type, current solutions for leakage detection in WDNs can be divided into two main categories: (i) flow data-based methods [6] and (ii) pressure data-based methods [7]. For the former, WDNs in cities are often divided into zones to construct district metering areas (DMAs) [8,9]. Flow meters record the flow into and out of each zone to give the total water utilization of a given DMA, while the real water consumption is calculated by summing all the readings from consumer meters. By analyzing the difference between these two sets of data, it is possible to determine the volume of non-revenue water, indicating potential leakages in the WDN [10]. However, using DMAs to detect leakage requires an effective network partition scheme [11]. Determining the inlet and outlet pipes is complex in real-world operations, and optimizing DMAs remains a significant challenge. Additionally, synchronizing readings from WDN and customer meters is difficult, leading to significant errors in non-revenue water estimation. Pressure data-based methods rely on WDN topology and pressure measurements. Leakages are detected either by comparing the measured pressure with predictions computed from hydraulic models or by estimating pressure drops and identifying the corresponding patterns [12].
In the past, various data-driven approaches have been proposed to detect leakages in WDNs. According to data usage, these algorithms can be broadly categorized into signal prediction methods and classification methods. The idea behind prediction methods is to estimate future water demand, consumption, flow, or pressure based on historical data [13]. These predictions are then compared with actual measurements to detect discrepancies indicating potential anomalies. Typical algorithms employed include the Kalman filter (KF) [14] and its variants [15] and expectation-maximization (EM) algorithms [16]. These methods typically assume that the hydraulics are accurately modeled and that system parameters, such as pipe roughness coefficients, are known a priori. In practice, developing a hydraulic model that accurately matches a real WDN is challenging, and the system parameters can only be roughly estimated and tend to change drastically as pipes age. In classification methods, different feature sets are extracted from the collected data, and classifiers are trained to identify the unique characteristics of different event types, so that incoming measurements can be categorized as normal or anomalous [17,18,19]. Recently, popular machine learning methods, such as AEs and CNNs, have been employed for leakage detection in WDNs. In essence, features are extracted by an AE or a CNN, and a loss function is defined to determine whether incoming data represent a leakage pattern. The advantage of classification methods is that they are data-driven and do not require in-depth knowledge of WDN topology and the corresponding parameters; they only need to learn leak and no-leak patterns from historical data [20].
In this paper, we propose a transformer-based leakage detection model that learns the distribution of normal pressure data. Transformers have recently proven to be powerful deep learning models in applications such as speech recognition, natural language processing, and computer vision [21,22,23]. Here, we introduce such a model to capture the detailed changes and identify similar patterns in pressure data series. We split the pressure data into sequences, with each sequence representing one week of data (e.g., if one pressure datum is received every half hour, we have 336 data points per week in total). The position of each data point within the sequence is encoded as position embeddings, which, together with the pressure data, are fed to the model. An encoder is then applied to learn the contextual features of the pressure data and map them into a latent vector representation. Given the periodic nature of daily water demand and the corresponding pipe pressure variations, the transformer's attention mechanism relates different positions of similar features to compute a representation of the data sequence. The decoder then transforms these latent vectors back into a new sequence by sampling from the latent space. Leakages are identified by calculating the residuals between the generated sequence and the input data.
One advantage of the proposed method is that it can perform classification by using only pressure data, eliminating the need for network partitioning and pipeline parameters. Additionally, it incorporates global information due to the long-term dependencies in the pressure data. Our experimental results show that the proposed method significantly improves leakage detection performance. Figure 2 provides an overview of the architecture of the proposed model. The main contribution of this study is the development of a transformer-based model for detecting leakages in WDNs. We comprehensively study the features of pressure drops and apply data normalization across each leakage scenario to better capture leakage patterns. Furthermore, we concatenate the position embeddings with the pressure data, rather than adding them together within the model, to better capture the correlation between historical and daily interval pressure data.
The structure of the paper is organized as follows: Section 2 introduces related work on leakage detection in WDNs. Section 3 describes the problem of leakage detection, including its challenges and difficulties. Section 4 provides an overview of the proposed model and detailed components. In Section 5, we describe the datasets used in our experiments and the result analysis of this work. Conclusions and some potential future research directions are given in Section 6.

2. Related Work

Traditional leakage detection techniques predominantly depend on manual procedures and external auxiliary equipment, such as electromagnetic scanners [24], infrared thermography [25], and ground-penetrating radar [26]. However, these techniques typically prove inefficient due to extensive time requirements, labor costs, and limited coverage. Recent advancements in sensor technology and intelligent algorithms have significantly propelled the innovation of leakage detection approaches within WDNs. More recent detection methods approach the problem by treating leak and no-leak data as a binary classification issue. Popular machine learning algorithms, such as Principal Component Analysis (PCA) [27], Support Vector Machines (SVMs) [28], K-Nearest Neighbors (KNN) [29], and CNNs [30,31], have been introduced for this purpose. Additionally, Deep Neural Networks (DNNs) have proven to be effective tools for data classification and have been employed for leakage detection [32].
In a study by Leonzio et al. [33], an AE was designed to encode input pressure data into compressed features and then decode the features back to reconstruct a new data sequence. This approach identifies the absence of leaks by ensuring that the output of the trained network closely resembles the input pressure data, flagging any significant deviations as potential leaks. One notable advantage of AEs is their reliance solely on pressure-series data acquired in the absence of leaks. Another study [34] integrated Long Short-Term Memory (LSTM) neural networks into a Recurrent Neural Network (RNN) architecture to discern patterns in measurement data associated with different types of leakages. In a separate investigation [35], an Artificial Neural Network (ANN) was employed to predict water flow and pressure, and a leakage warning was activated when the variance between the actual and predicted data surpassed a predefined threshold. However, these approaches often depend on local pressure information and fail to learn the long-term dependencies within pressure-series data. To address these limitations, we propose a transformer-based model capable of capturing long-term dependencies within pressure data.

3. Problem Definition

In WDNs, leakages can manifest as either incipient or abrupt, as discussed in [36]. Incipient leaks typically develop gradually and persist over time, with relatively small leakage volumes. These leaks may gradually escalate until their detection prompts intervention measures. Conversely, abrupt leaks, such as pipe bursts, release large quantities of water within a short timeframe. Figure 3 shows pressure and demand data in various leakage scenarios. The left column of the figure, comprising Figure 3(a1–d1), represents demand data assumed to remain constant during a leak. In the middle column, Figure 3(a2–d2) depict pressure data without leaks, exhibiting periodic variations corresponding to changes in demand. Figure 3(a3–d3) represent pressure data during leak events. Notably, small leaks may not cause a discernible pressure drop, complicating identification, as seen in Figure 3(a3,d3). Conversely, large leaks typically result in significant pressure drops, facilitating detection, as illustrated in Figure 3(b3,c3).
We receive pressure data every half hour, and each timestamped sequence of $T$ data points is processed. A sequence of size $T$ is structured as

$$X = \{x_1, x_2, \ldots, x_T\},$$

where $X \in \mathbb{R}^{T \times 1}$, $T$ is the length of the sequence, and $x_t$, for $t = 1, 2, \ldots, T$, denotes the pressure datum with index $t$ in the pressure series. With a total of 48 data points per day, we employ a window approach for analysis. At time step $t$, we select a window containing the last $L$ pressure data with a stride of 1. Each analyzed window consists of $L - 1$ historical pressure data and one current pressure reading. The analyzed window at time step $t$ is organized as

$$\mathbf{x}_t = \{x_{t-L}, x_{t-L+1}, \ldots, x_t\},$$

where $\mathbf{x}_t \in \mathbb{R}^{L \times 1}$, $L$ is the window size, $x_t$ denotes the current pressure, and $x_{t-L}, x_{t-L+1}, \ldots, x_{t-1}$ denote the historical pressure data. Additionally, $\{\ldots, x_{t-48 \times 2}, x_{t-48}, x_t\}$ represents the daily interval pressure data, enabling the comparison of pressure data at the same time across different days. The objective is to train a model capable of distinguishing pressure data patterns during normal operation from those indicative of various leakage scenarios.
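As a concrete illustration, the sliding-window construction can be sketched as follows; the function name and the use of NumPy are our own choices rather than details from the paper.

```python
import numpy as np

def make_windows(pressure: np.ndarray, L: int = 336, stride: int = 1) -> np.ndarray:
    """Slice a pressure series into overlapping analysis windows.

    With half-hourly readings (48 per day), L = 336 corresponds to one week
    of data; each window holds L - 1 historical readings plus the current
    one, and consecutive windows advance by `stride`.
    """
    windows = [pressure[t - L:t] for t in range(L, len(pressure) + 1, stride)]
    return np.stack(windows)  # shape: (num_windows, L)

# One year of half-hourly data (17,520 points) yields 17,185 windows.
series = np.random.rand(17_520)
print(make_windows(series).shape)  # (17185, 336)
```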

4. Proposed Method

In this work, we developed a transformer-based model for leakage detection in WDNs, as depicted in Figure 2. The architecture comprises an encoder module responsible for processing the input sequence and extracting latent feature representations, while the decoder module generates a new pressure sequence by dynamically attending to these latent features. To capture the dependencies in the pressure series, an attention mechanism is employed, and a sigmoid function serves as the activation function in the transformer. Similar to the approach outlined in [33], we train the model by using no-leak pressure data to reconstruct input pressure without anomalies. Given the varying operating pressures across different WDN scenarios, we normalize the pressure data of each scenario to the range $[0, 1]$. This normalization ensures that all features share the same scale, making the model more robust. The normalization is defined as

$$x_t \leftarrow \frac{x_t - \min(X)}{\max(X) - \min(X) + \epsilon},$$

where $\min(X)$ and $\max(X)$ are the minimum and maximum values in the time series of each scenario, and $\epsilon$ is an extremely small constant to prevent division by zero. Many data mining methods learn under the assumption that the training data are independent and identically distributed (i.i.d.) [37]. We use the Kolmogorov–Smirnov (KS) statistic [38] to evaluate whether the sequences are i.i.d. Two sequences can be assumed to have the same distribution if the probabilistic value (p-value) of the KS test is larger than 0.025 [39]. In Figure 4a,d, the data from the two scenarios are not i.i.d., as the p-value equals 0. In Figure 4b,c, the data from the two scenarios are i.i.d., as the p-value equals 0.178. This demonstrates that after normalization the data are i.i.d., ensuring that the datasets have the same distribution before applying the proposed model.
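A minimal sketch of this preprocessing step, assuming scipy's two-sample KS test stands in for the KS statistic used in the paper and using illustrative data:

```python
import numpy as np
from scipy.stats import ks_2samp

def normalize_scenario(x: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Min-max normalize one scenario's pressure series to [0, 1]."""
    return (x - x.min()) / (x.max() - x.min() + eps)

# Two scenarios with different operating pressures (placeholder data);
# after per-scenario normalization, a KS p-value above 0.025 is read as
# the two sequences sharing the same distribution.
a = normalize_scenario(20 + 50 * np.random.rand(1_000))
b = normalize_scenario(10 + 80 * np.random.rand(1_000))
stat, p = ks_2samp(a, b)
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}, same distribution: {p > 0.025}")
```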

4.1. Transformer Model

Each transformer stack in Figure 2 consists of a self-attention network, a normalization layer, a feed-forward network, and residual connections. The pressure data, concatenated with the position embeddings, serve as the input to the first self-attention network. Subsequently, the input and output of the first self-attention network are aggregated via the normalization layer and a residual connection, and this process is repeated for the other transformer stacks. We describe the main components of the transformer as follows.

4.1.1. Position Embeddings

In the transformer model, position embeddings play a crucial role in capturing the positional information of trends or patterns within the data. Following the approach outlined in [40], we generate position embeddings by using sine and cosine functions, expressed as
$$\mathrm{PE}_{(pos,\,2k)} = \sin\!\left(pos / 10000^{2k/d_{\text{model}}}\right),$$

$$\mathrm{PE}_{(pos,\,2k+1)} = \cos\!\left(pos / 10000^{2k/d_{\text{model}}}\right),$$

where $pos$ is the data index in a sequence, $k$ indexes the embedding dimension, and $d_{\text{model}}$ is the dimension of the output embeddings. Each dimension of the positional encoding corresponds to a sinusoid. To preserve the original values of the input pressure data, we concatenate the position embeddings with the pressure data $\mathbf{x}_t$ to form the input to the encoder, defined as

$$\mathbf{x}_t^{cat} = \left[\mathbf{x}_t \,\|\, \mathbf{pe}_t\right] = \begin{bmatrix} x_{t-L} & x_{t-L+1} & \cdots & x_t \\ pe_{t-L} & pe_{t-L+1} & \cdots & pe_t \end{bmatrix}^{\!\top},$$

where $\mathbf{x}_t^{cat} \in \mathbb{R}^{L \times 2}$ is the input to the encoder and $\mathbf{pe}_t \in \mathbb{R}^{L \times 1}$ represents the position embeddings of $\mathbf{x}_t$. In Figure 5, we compare the original pressure data with data incorporating the position embeddings. The top panel clearly shows a distinct pressure drop indicative of a leakage, which is easily identifiable. Conversely, the bottom panel displays the input data after adding the position embeddings to the pressure data. Despite significant pressure changes, the leakage point appears blurred by the position embeddings, making detection challenging.
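The embedding-and-concatenation step might look as follows in PyTorch. This is a sketch under the paper's definitions; in particular, the single embedding channel (d_model = 1), which gives the encoder input its (L, 2) shape, is our reading of the equations above.

```python
import torch

def sinusoidal_pe(L: int, d_model: int) -> torch.Tensor:
    """Sinusoidal embeddings: even dimensions use sin, odd dimensions use cos."""
    pos = torch.arange(L, dtype=torch.float32).unsqueeze(1)   # (L, 1)
    two_k = torch.arange(0, d_model, 2, dtype=torch.float32)  # even dim indices
    div = torch.pow(10_000.0, two_k / d_model)                # 10000^(2k/d_model)
    pe = torch.zeros(L, d_model)
    pe[:, 0::2] = torch.sin(pos / div)
    if d_model > 1:
        pe[:, 1::2] = torch.cos(pos / div[: d_model // 2])
    return pe

# Concatenate (rather than add) the embeddings with a one-week pressure
# window, giving the encoder input of shape (L, 2) as in the equation above.
L = 336
x = torch.rand(L, 1)              # normalized pressure window (placeholder)
pe = sinusoidal_pe(L, d_model=1)  # one embedding channel per time step
x_cat = torch.cat([x, pe], dim=-1)
print(x_cat.shape)                # torch.Size([336, 2])
```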

4.1.2. Attention

The attention mechanism serves as the pivotal component within a transformer architecture, enabling each token in a sequence to glean insights from others. This mechanism weighs the importance of different tokens to capture contextual information and long-term dependencies effectively. Mathematically, the attention operation for a d-dimensional input feature is defined as
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,$$

where $Q$, $K$, and $V$ are the query, key, and value matrices, respectively. The operation involves calculating the similarity between $Q$ and $K$ by using a scaled dot product, with the similarity divided by $\sqrt{d_k}$ to stabilize the weights. The resulting similarity matrix is then multiplied by the value matrix $V$ to obtain the weighted attention feature. In the encoder module, the input pressure sequence $\mathbf{x}_t^{cat}$ is utilized to generate the matrices $Q$, $K$, and $V$ for the attention operation. Subsequently, the encoding serves as keys and values for attention operations in the decoder module. Here, the query matrix comprises pressure data with the same timestamps on different days. Figure 6 illustrates the local attention layers, wherein $Q$, $K$, and $V$ represent linear transformations of the input sequence. To efficiently handle the input pressure $\mathbf{x}_t \in \mathbb{R}^{336 \times 1}$, we convert it into a matrix of size $48 \times 7$, where 48 denotes the number of pressure data points in a day and 7 the number of days in a week. The pressure data at corresponding timestamps across different days are utilized as input for the decoder module.
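A minimal PyTorch sketch of the scaled dot-product attention above; treating each of the 48 daily timestamps as a token whose features are the seven day-values is our reading of the reshaping described in the text.

```python
import torch
import torch.nn.functional as F

def attention(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # query-key similarity
    return F.softmax(scores, dim=-1) @ V           # weighted sum of values

# One week of half-hourly pressure (336 points) reshaped so that each of
# the 48 daily timestamps becomes a token with 7 day-values as features.
x = torch.rand(336, 1).reshape(48, 7)
out = attention(x, x, x)  # self-attention over timestamps
print(out.shape)          # torch.Size([48, 7])
```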

4.1.3. Feed-Forward Network

Let us assume that x is the output of the attention layer. Each stack within our model incorporates a fully connected feed-forward network (FFN), serving as a position-wise function. The FFN comprises two linear transformations, and the first one is followed by a Rectified Linear Unit (ReLU) activation function, defined as
$$\mathbf{y}_t^{D} = \mathrm{ReLU}(\mathbf{x}W_1 + b_1)\,W_2 + b_2,$$

where $W_*$ and $b_*$, with $*$ denoting index 1 or 2, represent the weight and bias parameters, respectively.
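As a sketch, the position-wise FFN is two linear layers with a ReLU in between; the widths d_model and d_ff below are assumed hyperparameters, as the paper does not state them.

```python
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    """Position-wise FFN: y = ReLU(x W1 + b1) W2 + b2."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.linear1 = nn.Linear(d_model, d_ff)   # W1, b1
        self.linear2 = nn.Linear(d_ff, d_model)   # W2, b2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear2(torch.relu(self.linear1(x)))

ffn = FeedForward(d_model=64, d_ff=256)  # both widths are assumptions
print(ffn(torch.rand(336, 64)).shape)    # torch.Size([336, 64])
```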
To showcase the capability of the proposed model to learn the long-term dependencies in pressure series, we use the t-distributed stochastic neighbor embedding (t-SNE) tool [41] to visualize the latent features on the Hanoi network after the encoder module. As shown in Figure 7, each point represents a latent feature at a time step, and the color of the points varies according to the time step. Figure 7a displays the t-SNE embeddings of the latent features for one week of pressure data. It reveals that adjacent latent features are closely situated, indicating a strong correlation between nearby time steps, and vice versa. Figure 7b illustrates the t-SNE embeddings of the latent features at the same time on different days. Subplots A and B depict the embeddings of pressure data at 24:00 and 6:00, respectively, from Monday to Friday over a two-month period, and subplots C and D give the embeddings of pressure data at 24:00 and 6:00 on the corresponding weekends. The embeddings in each subplot can easily be clustered; hence, the daily periodicity of the data plays an important role in feature learning. In addition, the embedding patterns differ significantly between weekdays and weekends due to variations in water demand. Finally, Figure 7c compares the t-SNE embeddings of pressure data during normal operation and with leakages, denoted by points in green and orange, respectively. The distribution of latent features from leakage data closely resembles that of normal pressure data, potentially leading to a significant loss when the model attempts to reconstruct pressure from leakage data.

4.2. Leakage Detection

After the transformer decoder, we utilize a linear transformation to reconstruct the pressure sequence, given by
$$\mathbf{y}_t = \mathrm{Sigmoid}(\mathbf{y}_t^{D} W_3 + b_3),$$

where the output of the decoder $\mathbf{y}_t^{D}$ is used as the input of the linear transformation. Subsequently, the mean square error (MSE) loss function is employed to identify the leakages in WDNs. During the training stage, the output signal is generated by using the data from the no-leak scenario, and we compute the peak value of the MSE loss for each window. A leak is declared according to

$$\text{leak} = \begin{cases} 1, & \text{if } \|\mathbf{y}_t - \mathbf{x}_t\|^2 \geq A \\ 0, & \text{otherwise} \end{cases}$$

and

$$\|\mathbf{y}_t - \mathbf{x}_t\|^2 = \frac{1}{n}\sum_{t=1}^{n}(y_t - x_t)^2,$$

where $\mathbf{y}_t$ and $\mathbf{x}_t$ indicate the output and input of the decoder module, respectively. We use $A$ as the threshold, set to the maximum value of the loss observed during training in the no-leak scenario. In Figure 8, we visualize the normalized pressure and the test loss of one scenario for the proposed model trained on the Hanoi network, showing the pressure and loss at each time step. The loss exhibits a strong correlation with peaks in pressure, and the loss is highly correlated across different time steps. This characteristic allows the model to effectively detect leakages in each scenario after being trained on a no-leak scenario.
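The detection rule therefore reduces to a threshold test on the per-window reconstruction error. A sketch, with placeholder arrays standing in for the model's inputs and reconstructions:

```python
import numpy as np

def detect_leaks(y: np.ndarray, x: np.ndarray, A: float) -> np.ndarray:
    """Flag a window as a leak (1) when its reconstruction MSE reaches the threshold A."""
    mse = np.mean((y - x) ** 2, axis=-1)  # per-window MSE
    return (mse >= A).astype(int)

# A is set to the peak MSE observed on no-leak training windows.
rng = np.random.default_rng(0)
train_residuals = rng.random((100, 336)) * 0.01  # placeholder training residuals
A = np.mean(train_residuals ** 2, axis=-1).max()

y_test = rng.random((5, 336))  # placeholder reconstructions
x_test = rng.random((5, 336))  # placeholder inputs
print(detect_leaks(y_test, x_test, A))  # 1 = leak, 0 = normal, per window
```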

4.3. Evaluation Metrics

In this work, we utilize detection accuracy and F1-score, which are defined below, to evaluate the performance of the proposed model.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN},$$

$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},$$

and

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN},$$

where TP (true positives) is the number of leakages correctly detected, FP (false positives) is the number of normal pressure data erroneously flagged as leakages, TN (true negatives) is the number of normal pressure data correctly predicted, and FN (false negatives) is the number of leakages erroneously classified as normal. Accuracy measures the proportion of correct decisions out of all samples, providing an overall assessment of performance. The F1-score, on the other hand, considers both false positives and false negatives, providing a balanced evaluation of precision and recall.
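Computed directly from the confusion counts, for instance (the counts below are purely illustrative):

```python
def accuracy_f1(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Detection accuracy and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, f1

# Illustrative counts only.
print(accuracy_f1(tp=90, fp=5, tn=100, fn=10))  # (0.9268..., 0.9230...)
```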

5. Experiment and Results

5.1. Datasets

Three network topologies, Hanoi, Net1, and Anytown, were employed to generate the demand and pressure data in our experiments [42]. An EPANET-compatible Python package named "wntr" [43] was used for simulating and analyzing the WDNs; the pipeline network for each dataset is provided as an EPANET INP file. The three network topologies are depicted in Figure 9. In these diagrams, the connection points and water sources correspond to junctions in the network. Each junction is equipped with a Pressure Transducer (PT) and a Flow Transducer (FT), which generate pressure and flow data, and all junctions are interconnected via pipelines. It is important to note that the model parameters (such as pipe length, diameter, and roughness) vary for each network. Each scenario was generated based on the pressure-driven demand model (PDD). This approach allowed us to accurately simulate and analyze the behavior of WDNs under different conditions.
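A sketch of such a simulation with wntr follows; the INP file name, leak node, and leak area are placeholders, and exact option names may differ across wntr versions.

```python
import wntr

wn = wntr.network.WaterNetworkModel('Hanoi.inp')  # placeholder INP path
wn.options.hydraulic.demand_model = 'PDD'         # pressure-driven demand
wn.options.time.duration = 365 * 24 * 3600        # one year
wn.options.time.hydraulic_timestep = 30 * 60      # 30 min steps
wn.options.time.report_timestep = 30 * 60

# Inject an abrupt leak at a junction (node ID and area are assumptions).
leak_node = wn.get_node('12')
leak_node.add_leak(wn, area=0.05,
                   start_time=100 * 24 * 3600,
                   end_time=130 * 24 * 3600)

sim = wntr.sim.WNTRSimulator(wn)     # PDD-capable simulator
results = sim.run_sim()
pressure = results.node['pressure']  # DataFrame: time steps x junctions
```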
At the same time, the periodicity of daily demand was approximated based on the Fourier series of the real historical water demand of Water Supply Enterprise [42]. Figure 10 illustrates the presence of weekly periodic and seasonal trends in the data. The variance attributed to daily usage was modeled by incorporating random noise. The daily periodicity captures the fluctuations in water demand within a week, while the seasonal trend accounts for variations in water demand due to changing seasons. The inclusion of random noise accounts for fluctuations caused by unpredictable factors, such as human activity (e.g., the opening and closing of valves) and pipeline maintenance activities. If a leakage occurs in the pipeline, we observe a drop in pressure first at the junction closest to the leakage point. This pressure transient then propagates throughout the network, reflecting the transient response to the leakage event.
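To illustrate, a demand multiplier combining daily periodicity, a seasonal trend, and random noise could be composed as below; the amplitudes are illustrative, not the values used to generate the datasets.

```python
import numpy as np

t = np.arange(17_520)                                  # half-hour steps, one year
daily = 1.0 + 0.3 * np.sin(2 * np.pi * t / 48)         # within-day periodicity
seasonal = 1.0 + 0.1 * np.sin(2 * np.pi * t / 17_520)  # yearly trend
noise = 0.05 * np.random.default_rng(0).standard_normal(t.size)  # random usage
demand_multiplier = daily * seasonal + noise           # scales the base node demand
```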
We generated datasets for 10 different scenarios in each network, one of which contained no-leak data and was used as our training dataset. In the operation of water supply networks, pressure data are typically uploaded to the central station every 30 min, or occasionally hourly. In this work, all datasets were generated at 30 min intervals over one year, i.e., 17,520 data points per dataset. Table 1 provides a detailed description of the leakage scenarios simulated on each network.

5.2. Implementations

Based on the aforementioned datasets, we conducted a study to evaluate the performance of the proposed model for leakage detection. We utilized detection accuracy and F1-score (abbreviated as Acc and F1 in the result tables) as performance evaluation metrics. For comparison, we implemented the Autoencoder method described in [33] and the basic self-attention model described in [40]. The AE method has previously been applied to several datasets on the Hanoi network, achieving a detection accuracy of 89%, as reported in [33]. The basic self-attention model was implemented to showcase the advantages of the attention mechanism over the AE method.
The training parameters used in this work include MSE as the loss function and the Adam optimizer with a learning rate of 0.001. We applied an Early Stopping strategy with a patience of 10. Each dataset was trained for 15 epochs with a batch size of 8. All experiments were conducted on a single NVIDIA RTX 3090 GPU.
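With those settings, the training loop can be sketched as follows; the model and data loader are placeholders, and the exact early-stopping criterion (epoch-loss improvement here) is an assumption.

```python
import torch

def train(model, loader, epochs: int = 15, lr: float = 1e-3, patience: int = 10):
    """Reconstruction training on no-leak windows: MSE loss, Adam, early stopping."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.MSELoss()
    best_loss, wait = float('inf'), 0
    for epoch in range(epochs):
        total = 0.0
        for x, target in loader:  # batches of size 8, as in the paper
            optimizer.zero_grad()
            loss = criterion(model(x), target)
            loss.backward()
            optimizer.step()
            total += loss.item()
        if total < best_loss - 1e-6:  # improvement check for early stopping
            best_loss, wait = total, 0
        elif (wait := wait + 1) >= patience:
            break
    return model
```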
Regarding our model, we implemented two variants to illustrate the performance under different position-embedding fusion schemes: one that directly adds the position embeddings to the pressure data, denoted Propose-1, and one that concatenates the position embeddings with the pressure data, denoted Propose-2.

5.3. Results

The performance of four methods, namely, AE, Attention, Propose-1, and Propose-2, in different leakage scenarios on three networks (Hanoi, Net1, and Anytown) is shown in Table 2. Overall, the attention-based methods (Attention, Propose-1, and Propose-2) performed better than the AE. Both the accuracy and F1-score of the proposed methods were higher than those of the AE and basic attention methods. Notably, the proposed method that concatenates the position embeddings with the pressure data (Propose-2) outperformed the method that simply adds them together (Propose-1).
Table 3 provides the average performance of the different methods across the three network topologies. The proposed transformer model that concatenates the position embeddings with the pressure data achieved the best performance in all metrics on both the Hanoi and Anytown networks. On the Net1 network, the F1-score of Propose-1 was slightly higher than that of Propose-2. Overall, the average performance of the proposed methods significantly surpassed that of the AE method [33]. These results underscore the robustness of the proposed methods in detecting both small and large leakages across different network topologies. Furthermore, the proposed methods exhibited better detection accuracy and lower false alarm and missed detection rates compared with the AE method.
In addition to evaluating different methods, we analyzed the impact of window size on the leak detection results by using a random selection of datasets from the Hanoi network. Figure 11a–c show the performance for window sizes of one day, two days, and one week over various numbers of epochs. It is evident that larger window sizes tended to yield better detection performance, with the best results observed when utilizing one week of data. This can be attributed to the richer contextual information available in one week of data, which enhances the capability of the model to detect anomalies. Notably, the performance achieved with a window size of two days was comparable to that of one week when longer training was used. In Figure 11d–f, we compare the results for pressure data collected at the same hour on different days with window sizes of two days, one week, and one month over various numbers of epochs. Increased data utilization led to improved detection performance: as more data were incorporated, the model became better equipped to discern anomalies within the dataset.

6. Conclusions

In this paper, we introduce a transformer-based leakage detection model for WDNs, capitalizing on the strengths of attention architectures to capture long-term dependencies and contemporaneous information in pressure time series. Unlike simply adding the position embeddings and pressure data together, we concatenate them to further enhance detection performance. We evaluated our model on three different WDNs, and the experimental results demonstrate its superiority over the AE method in terms of detection accuracy. Additionally, the F1-score metric was used to validate the model, according to which our model again outperformed the AE method. In our future work, we aim to assess the performance of the proposed method on real data acquired from urban cities. Additionally, we plan to explore the fusion of other types of modality information, such as hydrophone signals, as an avenue for further enhancement.

Author Contributions

Conceptualization, J.L. and X.Z.; Methodology, J.L., J.Y. and X.Z.; Software, J.L. and J.Y.; Validation, J.L., C.W. and X.Z.; Formal analysis, J.L., C.W. and X.Z.; Investigation, J.L. and C.W.; Resources, J.L., C.W. and X.Z.; Data curation, J.L. and J.Y.; Writing—original draft, J.L.; Writing—review & editing, C.W., J.Y. and X.Z.; Visualization, J.L., J.Y. and X.Z.; Supervision, X.Z.; Project administration, X.Z.; Funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China under Grant 61971186, and the 2019 Hunan High-level Talent Aggregation Project—Innovative Talents of grant number Hunan Science People [2019] No. 7.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Berardi, L.; Giustolisi, O. Calibration of design models for leakage management of water distribution networks. Water Resour. Manag. 2021, 35, 2537–2551. [Google Scholar] [CrossRef]
  2. Mora-Rodríguez, J.; Amparo Lopez-Jimenez, P.; Ramos, H.M. Intrusion and leakage in drinking systems induced by pressure variation. J. Water Suppl. Res. Technol.—AQUA 2012, 61, 387–402. [Google Scholar] [CrossRef]
  3. Tornyeviadzi, H.M.; Seidu, R. Leakage detection in water distribution networks via 1D CNN deep autoencoder for multivariate SCADA data. Eng. Appl. Artif. Intell. 2023, 122, 106062. [Google Scholar] [CrossRef]
  4. Soldevila, A.; Boracchi, G.; Roveri, M.; Tornil-Sin, S.; Puig, V. Leak detection and localization in water distribution networks by combining expert knowledge and data-driven models. Neural Comput. Appl. 2022, 34, 4759–4779. [Google Scholar] [CrossRef]
  5. Gui, Y.; Wang, Y.; He, S.; Yang, J. Self-powered smart agriculture real-time sensing device based on hybrid wind energy harvesting triboelectric-electromagnetic nanogenerator. Energy Conv. Manag. 2022, 269, 116098. [Google Scholar] [CrossRef]
  6. Rahmat, R.F.; Satria, I.S.; Siregar, B.; Budiarto, R. Water pipeline monitoring and leak detection using flow liquid meter sensor. IOP Conf. Ser. Mater. Sci. Eng. 2017, 190, 012036. [Google Scholar] [CrossRef]
  7. Alves, D.; Blesa, J.; Duviella, E.; Rajaoarisoa, L. Data-driven leak localization in WDN using pressure sensor and hydraulic information. IFAC-PapersOnLine 2022, 55, 96–101. [Google Scholar] [CrossRef]
  8. Charalambous, B. Experiences in DMA redesign at the Water Board of Lemesos, Cyprus. In Proceedings of the IWA Specialized Conference Leakage, Halifax, NS, Canada, 12–14 September 2005. [Google Scholar]
  9. Mamade, A.; Loureiro, D.; Covas, D.; Coelho, S.T.; Amado, C. Spatial and temporal forecasting of water consumption at the DMA level using extensive measurements. Procedia Eng. 2014, 70, 1063–1073. [Google Scholar] [CrossRef]
  10. Adu-Manu, K.S.; Adjetey, C.; Apea, N.Y.O. Leakage Detection and Automatic Billing in Water Distribution Systems Using Smart Sensors. In Digital Transformation for Sustainability: ICT-Supported Environmental Socio-Economic Development; Springer: Cham, Switzerland, 2022; pp. 251–270. [Google Scholar]
  11. Zhang, T.; Yao, H.; Chu, S.; Yu, T.; Shao, Y. Optimized DMA partition to reduce background leakage rate in water distribution networks. J. Water Resour. Plan. Manag. 2021, 147, 04021071. [Google Scholar] [CrossRef]
  12. Mohammed, E.G.; Zeleke, E.B.; Abebe, S.L. Water leakage detection and localization using hydraulic modeling and classification. J. Hydroinform. 2021, 23, 782–794. [Google Scholar] [CrossRef]
  13. Chan, T.K.; Chin, C.S.; Zhong, X. Review of current technologies and proposed intelligent methodologies for water distributed network leakage detection. IEEE Access 2018, 6, 78846–78867. [Google Scholar] [CrossRef]
  14. Jung, D.; Lansey, K. Water distribution system burst detection using a nonlinear Kalman filter. J. Water Resour. Plan. Manag. 2015, 141, 04014070. [Google Scholar] [CrossRef]
  15. Karray, F.; Garcia-Ortiz, A.; Jmal, M.W.; Obeid, A.M.; Abid, M. Earnpipe: A testbed for smart water pipeline monitoring using wireless sensor network. Procedia Comput. Sci. 2016, 96, 285–294. [Google Scholar] [CrossRef]
  16. Ye, G.; Fenner, R.A. Weighted least squares with expectation-maximization algorithm for burst detection in UK water distribution systems. J. Water Resour. Plan. Manag. 2014, 140, 417–424. [Google Scholar] [CrossRef]
  17. Kang, J.; Park, Y.J.; Lee, J.; Wang, S.H.; Eom, D.S. Novel leakage detection by ensemble CNN-SVM and graph-based localization in water distribution systems. IEEE Trans. Ind. Electron. 2017, 65, 4279–4289. [Google Scholar] [CrossRef]
  18. Srirangarajan, S.; Allen, M.; Preis, A.; Iqbal, M.; Lim, H.B.; Whittle, A.J. Wavelet-based burst event detection and localization in water distribution systems. J. Signal Process. Syst. 2013, 72, 1–16. [Google Scholar] [CrossRef]
  19. Mounce, S.R.; Mounce, R.B.; Jackson, T.; Austin, J.; Boxall, J.B. Pattern matching and associative artificial neural networks for water distribution system time series data analysis. J. Hydroinform. 2016, 16, 617–632. [Google Scholar] [CrossRef]
  20. Romano, M.; Kapelan, Z.; Savić, D.A. Automated detection of pipe bursts and other events in water distribution systems. J. Water Resour. Plan. Manag. 2014, 140, 457–467. [Google Scholar] [CrossRef]
  21. Rahali, A.; Akhloufi, M.A. End-to-End Transformer-Based Models in Textual-Based NLP. AI 2023, 4, 54–110. [Google Scholar] [CrossRef]
  22. You, J.; Korhonen, J. Transformer for image quality assessment. In Proceedings of the 2021 IEEE International Conference on Image Processing, Anchorage, AK, USA, 19–22 September 2021; pp. 1389–1393. [Google Scholar]
  23. Wang, Y.; Mohamed, A.; Le, D.; Liu, C.; Xiao, A.; Mahadeokar, J.; Seltzer, M.L. Transformer-based acoustic modeling for hybrid speech recognition. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing, Virtual, 4–9 May 2020; pp. 6874–6878. [Google Scholar]
  24. Galili, I.; Kaplan, D.; Lehavi, Y. Teaching Faraday’s law of electromagnetic induction in an introductory physics course. Am. J. Phys. 2006, 74, 337–343. [Google Scholar] [CrossRef]
  25. Bach, P.M.; Kodikara, J.K. Reliability of infrared thermography in detecting leaks in buried water reticulation pipes. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2017, 10, 4210–4224. [Google Scholar] [CrossRef]
  26. Chandrasekar, V.; Chen, H.; Tan, H. Rainfall estimation from ground radar and TRMM Precipitation Radar using hybrid deep neural networks. Geophys. Res. Lett. 2019, 46, 10669–10678. [Google Scholar]
  27. Duzinkiewicz, K.; Borowa, A.; Mazur, K.; Grochowski, M.; Brdys, M.A.; Jezior, K. Leakage detection and localisation in drinking water distribution networks by multiregional PCA. Stud. Inform. Control 2008, 17, 135. [Google Scholar]
  28. Ravichandran, T.; Gavahi, K.; Ponnambalam, K.; Burtea, V.; Mousavi, S.J. Ensemble-based machine learning approach for improved leak detection in water mains. J. Hydroinform. 2021, 23, 307–323. [Google Scholar] [CrossRef]
  29. Yu, T.; Chen, X.; Yan, W.; Xu, Z.; Ye, M. Leak detection in water distribution systems by classifying vibration signals. Mech. Syst. Signal Process. 2023, 185, 109810. [Google Scholar] [CrossRef]
  30. Mei, P.; Li, M.; Zhang, Q.; Li, G. Prediction model of drinking water source quality with potential industrial-agricultural pollution based on CNN-GRU-Attention. J. Hydrol. 2022, 610, 127934. [Google Scholar] [CrossRef]
  31. Zhou, M.; Pan, Z.; Liu, Y.; Zhang, Q.; Cai, Y.; Pan, H. Leak detection and location based on ISLMD and CNN in a pipeline. IEEE Access 2019, 7, 30457–30464. [Google Scholar] [CrossRef]
  32. Javadiha, M.; Blesa, J.; Soldevila, A.; Puig, V. Leak localization in water distribution networks using deep learning. In Proceedings of the 2019 6th International Conference on Control, Decision and Information Technologies (CoDIT), Paris, France, 23–26 April 2019; pp. 1426–1431. [Google Scholar]
  33. Leonzio, D.U.; Bestagini, P.; Marcon, M.; Quarta, G.P.; Tubaro, S. Water Leak Detection and Localization using Convolutional Autoencoders. In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing, Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar]
  34. Kim, H.; Park, M.; Kim, C.W.; Shin, D. Source localization for hazardous material release in an outdoor chemical plant via a combination of LSTM-RNN and CFD simulation. Comput. Chem. Eng. 2019, 125, 476–489. [Google Scholar] [CrossRef]
  35. Wu, Y.; Liu, S.; Wu, X.; Liu, Y.; Guan, Y. Burst detection in district metering areas using a data driven clustering algorithm. Water Res. 2016, 100, 28–37. [Google Scholar] [CrossRef]
  36. Punukollu, H.; Vasan, A.; Srinivasa Raju, K. Leak detection in water distribution networks using deep learning. ISH J. Hydraul. Eng. 2023, 29, 674–682. [Google Scholar] [CrossRef]
  37. Dundar, M.; Krishnapuram, B.; Bi, J.; Rao, R.B. Learning classifiers when the training data is not IID. In Proceedings of the IJCAI, Hyderabad, India, 6–12 January 2007; pp. 756–761. [Google Scholar]
  38. Wilcox, R.R. Some practical reasons for reconsidering the Kolmogorov-Smirnov test. Br. J. Math. Stat. Psychol. 1997, 50, 9–20. [Google Scholar] [CrossRef]
  39. Huang, X.; Shang, H.L.; Siu, T.K. A nonlinearity and model specification test for functional time series. arXiv 2023, arXiv:2304.01558. [Google Scholar]
  40. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.; Polosukhin, I. Attention is all you need. In Proceedings of the NIPS, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  41. Van Der Maaten, L. Accelerating t-SNE using tree-based algorithms. J. Mach. Learn. Res. 2014, 15, 3221–3245. [Google Scholar]
  42. Vrachimis, S.G.; Kyriakou, M.S. LeakDB: A benchmark dataset for leakage diagnosis in water distribution networks. In Proceedings of the WDSA/CCWI Joint Conference Proceedings, Kingston, ON, Canada, 23–25 July 2018; Volume 1. [Google Scholar]
  43. Klise, K.A.; Murray, R.; Haxton, T. An Overview of the Water Network Tool for Resilience (WNTR). In Proceedings of the WDSA/CCWI Joint Conference Proceedings, Kingston, ON, Canada, 23–25 July 2018. [Google Scholar]
Figure 1. Various sensors are mounted on pipelines to acquire hydraulic data and transmit them to data centers to detect leakages.
Figure 2. The overall architecture of the proposed model. (1) Instead of performing the direct addition of position embeddings and inputs as in most attention mechanisms, we concatenate position embeddings with pressure data. (2) We consider the dependency of contemporaneous pressure data and take them as the input to the decoder. (3) We use the MSE loss function to evaluate the reconstruction residuals.
Figure 3. A comparison of pressure data between normal operation and leakage of the pipe network in several scenarios. The left column (a1,b1,c1,d1) and middle column (a2,b2,c2,d2) show the water demands (in m3) and pressure (in m) under normal operation. The right column (a3,b3,c3,d3) shows pressure changes in different leakage scenarios.
Figure 4. Comparison of pressure (in m, normalized) before and after normalization. Lines with different colors represent different scenarios. (a) Concatenated pressure data in different scenarios without normalization. (b) Concatenated pressure data in different scenarios with normalization. (c) Pressure data with the same distribution according to KS statistics after normalization for each scenario. (d) Pressure data with different distributions according to KS statistics.
Figure 5. A comparison of pressure data and data with the addition of position embeddings. The top figure illustrates the pressure data (in m, normalized) without the addition of the position embeddings, and the bottom one presents the corresponding pressure data after the addition of the position embeddings. The leakage point is indicated by the area shaded in orange.
Figure 6. Local attention layers of encoder and decoder. In the attention layer of the encoder, Q, K, and V are obtained from the historical pressure data. In the first attention layer of the decoder, Q, K, and V are obtained from the pressure data at the same time on different days. In the second attention layer of the decoder, Q is obtained from pressure data at the same time on different days, and K and V are obtained from the historical pressure data.
Figure 7. Latent features from normal pressure and leakage data. Each point represents a latent feature at a time step, and the color of the points varies according to the change in the time steps. (a) The t-SNE embeddings of the latent features with one week of pressure data; (b) the t-SNE embeddings of the latent features at the same hour of different days; (c) the comparison of the t-SNE embeddings of the pressure data under normal operation and the data for leakages, denoted by points in green and orange, respectively.
Figure 8. Visualization of leakage detection. The green line denotes pressure data, the yellow line denotes MSE loss between input pressure and reconstructed pressure (in m, normalized), and the orange shade denotes leakages.
Figure 9. The network topologies of Anytown, Hanoi, and Net1. Each point represents a junction with Pressure Transducer and Flow Transducer.
Figure 10. The components of the demand data. (a) The fundamental periodicity to simulate the seasonal trends; (b) random noise used to generate the variance in daily usage and operations on the pipelines; (c) daily consumption-based demand periodicity.
Figure 11. Performance comparison of historical pressure data (ac) and pressure data at the same hour on different days (df) for different window sizes. Various training epochs were considered. We use the green color to indicate the performance of “F1-score” and blue to indicate the performance of “Accuracy”. The window size was set to [1 day, 2 days, and 7 days] for the historical pressure data, and to [2 days, 7 days, 30 days] for the pressure data at the same hour on different days.
Table 1. Leakage details for the simulation with Hanoi, Net1, and Anytown networks. “S-m” denotes the mth scenario. “A” denotes “Abrupt Leakage”. “I” denotes “Incipient Leakage”. The first row represents various scenarios under different network topologies, and the second row represents the leakage type and leakage duration in the corresponding scenario. There may be different leakages at different times within a scenario in one year.
Scenario | Leakage Details
Hanoi
S-1 | A: 4 October 09:00∼9 December 16:00
S-2 | I: 15 November 10:00∼14 December 01:30
S-3 | I: 21 February 11:30∼11 June 01:00
S-4 | I: 18 October 06:30∼2 December 17:00
S-5 | A: 18 May 15:00∼15 December 06:30
S-6 | I: 8 October 15:00∼31 December 05:00
S-7 | A: 7 February 18:30∼9 August 06:00
S-8 | A: 23 August 16:30∼21 December 05:30
S-9 | A: 31 January 23:00∼29 December 07:30
Net1
S-1 | A: 21 April 13:30 08∼17 April 16:30; A: 30 April 11:00∼9 May 13:30
S-2 | A: 11 December 21:00∼30 December 07:00
S-3 | A: 30 March 20:00∼17 July 09:30; A: 11 December 17:30∼18 December 16:00
S-4 | I: 26 July 17:00∼8 September 06:30; I: 10 May 08:00∼8 August 23:00
S-5 | A: 11 November 22:30∼3 December 09:30
S-6 | A: 16 August 14:30∼18 October 05:30; A: 9 November 04:30∼7 December 22:30
S-7 | A: 19 March 07:30∼3 September 12:30; I: 17 June 17:00∼1 September 16:30
S-8 | I: 2 October 23:30∼28 November 15:30; I: 29 December 02:30∼30 December 07:30
S-9 | A: 12 February 04:00∼31 March 12:30; I: 29 January 04:00∼3 June 03:30
Anytown
S-1 | A: 12 December 07:30∼30 December 16:30; A: 28 February 00:00∼7 November 02:30
S-2 | I: 25 September 14:30∼27 September 13:30; I: 11 March 00:00∼16 June 04:30
S-3 | I: 24 April 19:00∼23 June 11:30; A: 30 June 17:00∼29 October 02:30
S-4 | I: 17 May 08:30∼2 December 19:30; I: 11 November 10:00∼14 December 03:30
S-5 | I: 17 May 14:30∼28 October 10:00; A: 3 August 03:00∼31 October 02:30
S-6 | I: 23 January 18:30∼10 December 09:30; A: 1 February 20:00∼3 October 19:30
S-7 | A: 28 November 17:30∼31 December 11:30; I: 10 May 06:30∼7 December 16:00
S-8 | I: 24 February 16:00∼29 August 10:30; A: 31 May 23:30∼7 October 02:30
S-9 | I: 19 July 06:30∼18 November 19:00; I: 12 December 23:30∼25 December 09:00
Table 2. Results of our model on various datasets. “S-m” denotes the mth scenario. “Attention” denotes the simple multi-head self-attention model, which only has the encoder module, and “−1” and “−2” denote the methods where the position embeddings are added to and concatenated with the pressure data.
Dataset | Scenario | Autoencoder [33] Acc | F1 | Attention [40] Acc | F1 | Propose-1 Acc | F1 | Propose-2 Acc | F1
Hanoi | S-1 | 27.94 | 41.93 | 90.11 | 94.80 | 94.87 | 97.74 | 95.61 | 97.77
Hanoi | S-2 | 79.92 | 88.44 | 81.71 | 89.93 | 86.16 | 92.56 | 87.58 | 93.38
Hanoi | S-3 | 84.99 | 89.08 | 88.50 | 89.24 | 94.29 | 97.06 | 94.23 | 97.02
Hanoi | S-4 | 92.20 | 26.85 | 96.12 | 98.02 | 92.02 | 95.84 | 91.83 | 95.74
Hanoi | S-5 | 98.18 | 98.87 | 98.34 | 99.16 | 98.27 | 99.13 | 98.20 | 99.09
Hanoi | S-6 | 90.58 | 77.58 | 92.72 | 96.22 | 96.95 | 98.45 | 97.16 | 98.74
Hanoi | S-7 | 89.91 | 94.37 | 90.17 | 94.83 | 95.07 | 97.47 | 94.85 | 97.35
Hanoi | S-8 | 93.17 | 96.32 | 94.20 | 97.01 | 97.46 | 98.71 | 97.85 | 98.91
Hanoi | S-9 | 92.55 | 96.13 | 99.09 | 99.54 | 99.46 | 99.73 | 99.44 | 99.72
Net1 | S-1 | 71.88 | 83.51 | 79.55 | 88.61 | 94.81 | 97.33 | 94.84 | 97.35
Net1 | S-2 | 72.16 | 82.86 | 79.52 | 88.59 | 93.09 | 95.48 | 93.21 | 95.48
Net1 | S-3 | 32.31 | 48.84 | 82.16 | 90.21 | 98.54 | 99.26 | 99.07 | 99.53
Net1 | S-4 | 31.12 | 46.05 | 75.86 | 86.27 | 95.10 | 97.48 | 94.86 | 97.36
Net1 | S-5 | 61.09 | 75.38 | 67.46 | 80.57 | 90.97 | 95.27 | 95.32 | 91.49
Net1 | S-6 | 67.39 | 67.62 | 69.81 | 82.22 | 91.56 | 78.39 | 92.83 | 78.85
Net1 | S-7 | 83.40 | 89.72 | 87.32 | 93.23 | 96.93 | 98.44 | 97.06 | 98.51
Net1 | S-8 | 68.74 | 25.72 | 54.89 | 70.88 | 88.22 | 92.80 | 88.05 | 92.70
Net1 | S-9 | 77.92 | 74.96 | 89.47 | 94.44 | 95.29 | 97.59 | 95.72 | 97.81
Anytown | S-1 | 81.07 | 47.90 | 83.37 | 90.93 | 82.18 | 90.22 | 93.20 | 96.48
Anytown | S-2 | 27.31 | 36.44 | 81.74 | 89.95 | 81.70 | 89.93 | 81.70 | 89.93
Anytown | S-3 | 43.67 | 60.79 | 82.38 | 90.33 | 92.07 | 95.87 | 92.08 | 95.87
Anytown | S-4 | 64.38 | 74.21 | 97.03 | 98.49 | 97.00 | 98.48 | 97.00 | 98.48
Anytown | S-5 | 82.07 | 84.20 | 83.42 | 90.96 | 90.92 | 95.24 | 90.93 | 95.25
Anytown | S-6 | 58.51 | 73.40 | 95.08 | 97.47 | 95.63 | 97.76 | 98.29 | 99.13
Anytown | S-7 | 50.55 | 67.16 | 78.38 | 87.88 | 89.73 | 94.59 | 95.57 | 97.73
Anytown | S-8 | 34.21 | 56.43 | 82.75 | 90.56 | 92.74 | 96.23 | 95.00 | 97.44
Anytown | S-9 | 48.14 | 65.00 | 93.11 | 96.43 | 87.13 | 93.12 | 93.08 | 96.41
Table 3. Results of our method compared with the Autoencoder on various datasets. “Attention” denotes the simple multi-head self-attention model that only has the encoder module, and “−1” and “−2” denote the methods where the position embeddings are added to and concatenated with the pressure data. The score is the average of all scenarios shown in Table 2.
Method | Metric | Hanoi | Net1 | Anytown
Autoencoder [33] | Accuracy | 89.00 | 62.89 | 54.43
Autoencoder [33] | F1-score | 78.84 | 66.07 | 62.83
Attention [40] | Accuracy | 93.44 | 76.22 | 86.36
Attention [40] | F1-score | 96.52 | 86.12 | 92.56
Propose-1 | Accuracy | 94.95 | 93.83 | 89.90
Propose-1 | F1-score | 97.41 | 94.67 | 94.60
Propose-2 | Accuracy | 95.19 | 94.55 | 92.98
Propose-2 | F1-score | 97.52 | 94.34 | 96.30