**Approaches to Failure Risk Analysis of the Water Distribution Network with Regard to the Safety of Consumers**

#### **Katarzyna Pietrucha-Urbanik \* and Barbara Tchórzewska-Cieślak**

Department of Water Supply and Sewerage Systems, Faculty of Civil, Environmental Engineering and Architecture, Rzeszow University of Technology, Al. Powstancow Warszawy 6, 35-959 Rzeszow, Poland; cbarbara@prz.edu.pl

**\*** Correspondence: kpiet@prz.edu.pl; Tel.: +48-17-865-1703

Received: 22 September 2018; Accepted: 14 November 2018; Published: 17 November 2018

**Abstract:** Contemporary risk assessment makes reference to current world trends, whereby there is increased emphasis on safety. This paper has thus sought mainly to present new approaches to failure risk assessment where the functioning of a water distribution network (WDN) is concerned. The framework for the research involved here has comprised: (a) an analysis of WDN failure in regard to an urban agglomeration in south-east Poland; (b) failure rate analysis, taking account of the type of water pipe (mains, distribution, service connections (SC)) and the months of the year, with an assessment of results in terms of criterion values for failure rate; (c) a determination—by reference to analyses performed previously—of the compatibility of experts' assessments in terms of standards of failure and obtained results, through rank analysis; and (d) the proposal of a modified Multi-Criteria Decision Analysis implementing an Analytic Hierarchy Process, to allow failure risk assessment for the WDN to be performed on the basis of the calculated additive value of risk. The analysis in question was based on real operating data, as collected from the water distribution company. It deals with failures in the WDN over a period of 13 years in operation, from 2005 to 2017.

**Keywords:** safety of water supply consumers; risk; water supply system; failure risk analysis; decision making model; risk assessment methodology

#### **1. Introduction**

The second half of the twentieth century brought many major accidents and disasters relating to the functioning of public water supply systems (WSS) in urban and industrial agglomerations. In this regard, there can be no doubt that WSS operations are subject to risk [1–6], hence the key importance of risk analysis that determines the location and size of such risk, as well as the actions to be taken with a view to its reduction or elimination [7]. Activities allowing these relations to be studied fall under the heading of risk management, which should be organized and comprehensive, in relation to both the entire WSS and elements associated with it [8]. The risk function is a measure of security loss [9]. There is always the possibility of a domino effect, whereby events lead to risk escalation [7]. In this regard, the specialist scientific literature makes it clear that methods of quantitative analysis and risk assessment form the basis for safety management in regard to WSS [8,9]. Hybrid risk-based decision making has been implemented by reference to pipe design parameters, with the authors of [10] prioritizing pipes in line with their rehabilitation needs. Comparisons of the use of a vulnerability assessment model based on Bayesian Belief Networks and the related uncertainty assessment model were in turn performed in [11], with respect to the emergency management of systems supplying drinking water.

Research on risk in technical systems includes the class of cognitive and practical methods of analysis and assessment, which constitute important elements of comprehensive research into their safety [12]. The following trends in safety research can be distinguished: risk analysis, where assessment models form the basis for building decision models; and risk engineering, where the assessment of design options in terms of safety is the basis for choosing the best solution [13–15]. A water supply system's loss of safety may result directly from the failure of its individual subsystems or elements, such as water intakes, pumping stations, the water distribution network (WDN) or its utilities [16–20]; from the failure of other technical systems (e.g., sewerage, energy, water structures) [21]; from undesirable extreme natural phenomena like floods and droughts; or from the incidental pollution of water sources [22,23]. Risk analysis of the WSS should be preceded by analysis of the reliability of all subsystems in terms of interruptions to water supply [24,25], as well as of failures to meet quality requirements that pose health threats to consumers of water [26].

In water supply companies, records of breaks should be gathered, as these come to represent the basic source of information for reliability and safety analyses. Water supply companies should thus have guidelines about the way they collect information, also in the form of expert opinions, so that these can be used in the decision-making model [27–30]. The registers concerned should not only relate to the number of undesirable events, but should also contain precise data on the locations and causes of failure, their duration, repair time, possible consequences and other information required if the basic indicators used in the analysis of the safety of the WSS are to be determined [31–34], with these inter alia including the numbers of failures over the examined period of operation of the WSS, average working time without failure, average repair time, etc. [35–37]. Failures may be due to mistakes in design, construction or operation [38]. On the other hand, the causes of failure can be classified on the basis of the phenomena giving rise to them [39]. The formation of failures in a WDN is in fact a complex problem and each time requires a detailed analysis of the root causes [40–42]. Failures of the WDN can arise from errors directly attributable to activities of the designer, the contractor or the operator of the water supply system, through incorrect assumptions in the design and shortcomings at the stage of performance, mismatched material or improper service [43,44]. Also, the failure of a WDN may be caused by factory defective materials, defective sealing or anti-corrosive coatings [45,46]. There are also environmental causes reflecting meteorological conditions, including sudden changes in temperature and landslides, as well as causes resulting from the functioning of the WDN in combination with improper monitoring [47,48].

The process of risk analysis in regard to the operational safety of the WDN takes place via steps comprising: determination of the WDN type, limiting values for rates of failure of the water supply, and the nuisance of the repairs performed. These are followed by determination of the types of protection related to the functioning of the WDN, as well as of risk levels, where these are assigned to the categories of tolerated, controlled and unacceptable risks [3].

The literature review shows that analysis based on failure rate is suggested as a proper tool for decision support when it comes to managing failures in a WDN. The main objective of the work detailed here has thus been to propose methods by which to identify and assess the risks associated with the functioning of a WSS in Poland, in relation to the different approaches implemented. The results of the research allowed for the identification of areas and procedures able to mitigate the risk of failure effectively.

#### **2. Legal Regulations Regarding Safety of the Water Supply for Consumers**

It is important to underline that the main task of water supply companies is to provide consumers with water of adequate quantity and quality which meets the requirements of the Drinking Water Directive (Council Directive 98/83/EC of 3 November 1998 on the quality of water intended for human consumption), with its latest amendments including Commission Directive (EU) 2015/1787 of 6 October 2015. In turn, in 2004, the World Health Organization (WHO) published guidelines for the application of water safety plans in which proposed analysis took in the chain of water supply intended for human consumption, inter alia, the water distribution network.

Analyses and assessments of WSS failure have been conducted in many countries since the second half of the last century [49–57], with obtained conclusions used to improve the programming, design, implementation and operation of the WSS; to inspire improvement and amendment of technical regulations, design standards, guidelines and instructions for the performance and acceptance of facilities; and to raise levels of technical knowledge and professional qualifications among designers, contractors and operators [58]. A very important issue is determination of the criterion value of risk, as a risk may be considered controlled when expected losses resulting from failure are acceptable to both a water company and the consumers of its water [2].

Risk acceptance criteria can be used in the decision-making process regarding the operation of such a system [3]. These criteria should take into account requirements relating to the reliability of subsystem operation, in terms of both quantity and quality, in accordance with applicable legal regulations, as well as social and economic conditions [59–61].

The management of the WSS should be improved constantly, through the pursuit of risk analysis and assessment and risk management in accordance with the recommendations of the WHO and existing regulations. The offer of water supply companies in terms of reliability of operation and water supply system safety should likewise improve constantly, notwithstanding changing economic and legal conditions, and not least in the circumstances of increasing pressure imposed by environmentalists and tougher quality standards relating to water for consumption [62,63].

For the purposes of Poland's Act on Crisis Management, a water supply system falls within the category of critical infrastructure, representing a system of key importance to the functioning of society and the state. As improper functioning of the above systems may pose a threat to human health or life, a high level of reliability and safety must be assured. The Crisis Management Act regulates the principles for the development of crisis management plans aimed at preventing crisis situations from arising, but also reacting should emergencies arise and ensuring the removal of consequences. Steps in the critical infrastructure protection process include risk assessment and indications as to priority activity, with a hierarchy being developed in line with the results of risk assessment carried out. The subject matter under consideration is thus of key importance in ensuring the proper functioning of a WSS.

#### **3. Materials and Methods**

#### *3.1. The Framework of Failure Risk Approaches and Data Source*

The factors used in analyzing the failure rate of the WDN were:

• the type of water pipe (mains, distribution pipes, service connections (SC));
• the month of the year in which the failures occurred.
The analysis presented represents a starting point for analysis of failure in a WDN. With this aim in mind, the research started with failure-rate analysis in respect of the aforementioned factors, with analysis of failure rates in line with risk acceptance criteria. To compare different criteria in the assessment of failure rates, and to establish the compatibility of expert assessments, a rank analysis was applied. After this, a proposal for modified Multi-Criteria Decision Analysis as regards assessment of the risk of failure in a WDN was presented.

Data regarding failures of the WDN cover a period of 13 years' operation, from 2005 to 2017. The analysis was carried out with Statsoft software [64]. Operational data were provided from the functioning of a real water supply system controlled by a municipal water company. Technical documentation and information obtained from the managers and operational services were also used.

#### *3.2. Description of the Study Area*

The distinguished water supply system is located in Poland's Subcarpathian province. The supplied city is located on the right bank of the Vistula, in south-eastern Poland, where it covers a total area of 86 km<sup>2</sup> and is located at 50°34′ N and 21°40′ E (Figure 1). The city has a population density of 555 inhabitants/km<sup>2</sup> and lies at 160 m above sea level. The area within the city boundary is dominated by arable land, which accounts for some 5500 ha or as much as 65% of the total. About 3000 ha (36% of the area) is genuinely urban in character. The most limited form of cover (accounting for just 140 ha) in turn involves industrial areas, which thus represent no more than about 2% of the city. Climatically, this city is in the zone of lowlands and foothills. This denotes hot summers, relatively small amounts of rainfall (circa 600 mm) and winters that are not especially severe.

**Figure 1.** Location of the examined water supply system in Poland's Subcarpathian Province.

#### *3.3. Characteristics of the Research Object*

The urban water supply under study currently serves the entire population of the distinguished city of some 51,000 people, as well as some 3000 people living in other localities. Coverage by this water supply is close to 100%, with only single houses not being covered by the collective municipal water supply. In total, the length of the WDN is 309.95 km.

The transmission of water from the clean water tanks takes place via two mains of diameters 400 and 500 mm. As the sections of mains connect twice in connection chambers, shut-offs are possible in case of emergency. The water intake consists of two pumping intakes: the first comprises five drilled wells with a capacity of *Q<sub>emax</sub>* = 183 m<sup>3</sup>/h, and the second 22 drilled wells with a capacity of *Q<sub>emax</sub>* = 715 m<sup>3</sup>/h. Expressed per day, this gives a value of 17,160 m<sup>3</sup>/day. The volume of water taken from the intake is 2.92 million m<sup>3</sup>/year. The Water Treatment Plant (WTP) is fed from a Quaternary aquifer with a free water table at a depth of about 15 m below ground level (BGL). The designed maximum capacity of the WTP is 715 m<sup>3</sup>/h. The treated water flows from a clean-water tank to the two water mains by means of three pumps. Two of these have a capacity of 440 m<sup>3</sup>/h and the third of 280 m<sup>3</sup>/h. Their lifting capacity is 58 m.

The city's WDN consists of 16.8 km of mains and 191.7 km of distribution network. The water household connections network consists of 5000 connections and has a length of 100.85 km. The WDN is characterized by a mixed structure. The city center has a network with many loops, such that, when a failure occurs, residents can continue to be supplied with water thanks to a change in the direction of water flow and inflow from another water line. The areas of suburban housing estates, in which there are single-family houses, are mostly supplied from a branched network. Investment activities in the area of construction and modernization of the WDN have been engaged in over the last few years, thanks to which more and more settlements are characterized by a looped system.

The age structure of the WDN is as follows: about 30.09% of the network consists of pipes less than 10 years old (approximately 90.2 km), the largest share of about 42.7% is of pipes 11–20 years old, and the remaining 26.4% is more than 20 years old; at present, no pipes forming the WDN are more than 40 years old. The material structure of the WDN is as follows: grey cast iron (37.7%), polyvinyl chloride (24.7%), galvanized steel (19.7%) and polyethylene (17.9%). The majority of house connections are made of cast iron and steel; it is only recently that pipes made of PVC (polyvinyl chloride) and PE (polyethylene) have been used.

#### *3.4. Methods*

#### 3.4.1. Failure Rate Analysis

The analysis of failure rate in the WDN was performed by reference to the following formula [65,66]:

$$
\lambda\_j = \frac{n\_j(t, t + \Delta t)}{N\_j \times \Delta t} \tag{1}
$$

where *nj*(*t*, *t* + Δ*t*) is the number of all failures in the time interval Δ*t* for the *j*th type of network; *Nj* is the length of the WDN of the *j*th type of network (km) and *j* is the type of WDN (e.g., *M*—mains, *D*—distribution pipes, *SC*—service connections).

Standards as regards failure rate were proposed in the works [6,67]. The former standards [6] provide that failure rates should not exceed the following criterion values: for mains, *λ<sub>M,crit</sub>* < 0.3 failures/(km·year); for distribution pipes, *λ<sub>D,crit</sub>* < 0.5 failures/(km·year); and for service connections, *λ<sub>SC,crit</sub>* ≤ 1.0 failures/(km·year). In [67], a classification of criterion values of failure rate for the whole WDN was proposed in terms of reliability, with a high failure rate denoting low reliability when *λ<sub>lr,crit</sub>* ≥ 0.5 failures/(km·year), high reliability when *λ<sub>hr,crit</sub>* ≤ 0.1 failures/(km·year), and average reliability between these criterion values, 0.1 < *λ<sub>ar,crit</sub>* < 0.5 failures/(km·year).
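As a minimal illustration (not taken from the paper; the example failure count and pipe length are invented), the failure rate of Equation (1) and its comparison against the criterion values of [6] and the reliability classes of [67] can be sketched in Python as:

```python
# Failure rate per Equation (1): lambda_j = n_j / (N_j * dt),
# expressed in failures/(km*year).
def failure_rate(n_failures: int, length_km: float, years: float = 1.0) -> float:
    return n_failures / (length_km * years)

# Criterion values after [6]: mains 0.3, distribution pipes 0.5, SC 1.0
CRITERIA = {"M": 0.3, "D": 0.5, "SC": 1.0}

def exceeds_criterion(pipe_type: str, rate: float) -> bool:
    return rate > CRITERIA[pipe_type]

# Reliability classes after [67], for the network as a whole
def reliability_class(rate: float) -> str:
    if rate >= 0.5:
        return "low"
    if rate <= 0.1:
        return "high"
    return "average"

# Hypothetical example: 12 failures on 16.8 km of mains in one year
rate_m = failure_rate(12, 16.8)   # ~0.714 failures/(km*year)
print(round(rate_m, 3), exceeds_criterion("M", rate_m), reliability_class(rate_m))
```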

#### 3.4.2. A Rank Analysis of Failure-Rate Criteria

To check the correlation between levels of compliance for the individual criteria used to assess monthly failure rates, a rank analysis was applied, with its task being to determine correlations between two failure-rate criteria. Prior to the correlation calculation, the data were sorted according to the value of the specific statistical characteristic and expressed on ordinal scales, in line with the limit failure criteria given in Section 3.4.1.

The Spearman correlation coefficient is calculated as follows [1,68]:

$$r\_S = 1 - \frac{6 \times \sum d\_i^2}{n(n^2 - 1)}\tag{2}$$

where *di* is the difference between the two ranks of each observation and *n* the number of observations.

The failure rates for each month were ordered from the lowest to the highest for each year, with ranks assigned such that a rank of one denotes the lowest failure rate and a non-exceeded failure-rate criterion. The two separate assessments accorded with the criteria presented in [27] and [67], respectively.
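The ranking and correlation steps above can be sketched as follows; this is a simplified illustration with invented data values, assuming no tied ranks (ties would require the tie-corrected formula):

```python
def ranks(values):
    """Assign ranks from 1 (lowest value) to n (highest value)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation per Equation (2): r_s = 1 - 6*sum(d_i^2)/(n(n^2-1))."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical monthly failure rates ranked under two criteria schemes
a = [0.42, 0.61, 0.35, 0.80, 0.55]
b = [0.40, 0.65, 0.30, 0.90, 0.50]
print(round(spearman(a, b), 3))  # 1.0 here, since the two orderings agree
```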

#### 3.4.3. Seasonal Analysis of Failure Rates

To assess variability due to the action of the factor of season, seasonal analysis was carried out after [69], by reference to:

• the seasonal index *Si* given by:


$$S\_i = \frac{\overline{y\_i} \times d}{\sum\_{i=1}^d \overline{y\_i}},\tag{3}$$

where the arithmetic mean of the failure rate in homonymous sub-periods (all Januaries, all Februaries, ..., all Decembers) (*yi*) is:

$$\overline{y\_i} = \frac{\sum\_{j=1}^{t} k\_{ij}}{t},\tag{4}$$

where *d* is the annual cycle of fluctuations within which the monthly sub-periods were distinguished (*d* = 12, i.e., January, February, etc.), *t* the number of years of observation, and *kij* the failure rate in sub-period *i* of year *j*.

• absolute levels of seasonal fluctuations for individual sub-periods, calculated using:

$$
\mathbf{g}\_i = \mathbf{S}\_i \times \overline{\mathbf{y}} - \overline{\mathbf{y}} \tag{5}
$$

where *gi* is the absolute level of seasonal fluctuation and *y* the average monthly failure rate over the whole period of observation, given by:

$$
\overline{y} = \frac{\sum\_{i=1}^{d} \overline{y\_i}}{d} \tag{6}
$$

• the standard deviation (SD) characterizing absolute levels of seasonal variation, with these representing an assessment of variability due to the factor of season over a period of 13 years in operation:

$$SD(\mathcal{g}\_i) = \sqrt{\frac{\sum\_{i=1}^d \mathcal{g}\_i^2}{d}}.\tag{7}$$

With these dependency relationships taken account of, seasonal fluctuations in the occurrence of failures in the WDN were determined.
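The seasonal analysis of Equations (3)–(7) can be sketched as below; this is a minimal illustration, assuming the monthly failure rates are arranged as a year-by-month table (the data layout and function name are our own):

```python
import math

d = 12  # monthly sub-periods per annual cycle (January ... December)

def seasonal_stats(rates):
    """rates: list of t yearly lists, each holding 12 monthly failure rates."""
    t = len(rates)
    # Eq. (4): mean failure rate for each homonymous sub-period (all Januaries, ...)
    y_i = [sum(rates[j][i] for j in range(t)) / t for i in range(d)]
    y_bar = sum(y_i) / d                          # Eq. (6): overall mean
    S = [yi * d / sum(y_i) for yi in y_i]         # Eq. (3): seasonal indices
    g = [Si * y_bar - y_bar for Si in S]          # Eq. (5): absolute fluctuations
    sd = math.sqrt(sum(gi ** 2 for gi in g) / d)  # Eq. (7): SD of fluctuations
    return S, g, sd
```

For constant monthly rates the function returns seasonal indices of 1.0 and zero absolute fluctuations, which is a quick sanity check on the implementation.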

#### 3.4.4. A Proposed Modified Multi-Criteria Decision Analysis Implementing an Analytic Hierarchy Process for Risk Assessment as Regards Failures in a WDN

Modified Multi-Criteria Decision Analysis (mMCDA) entails a choice of criteria influencing the risk of failure in a WDN, and the future occurrence thereof. The method suggested is based on the grouping of risk criteria as regards failure in a WDN, with assessment then carried out by reference to point values determined under the Analytic Hierarchy Process (AHP) method [1,40,70,71]. It was assumed that risk denotes a measure by which to assess a hazard or threat resulting either from probable events beyond our control or from the possible consequences of a decision. Impacts are distinguished through the additive value of risk, which includes the criterion category of the size of possible financial losses resulting where failures arise, with the evaluation criteria weighted accordingly. Risk is thus interpreted here in terms of expected losses.

In this way, the appropriate risk measure is calculated using Equation (8).

$$
r\_A = \sum\_{j=1}^{m} w\_j \times F\_j(A), \tag{8}
$$

where *rA* is the additive value of risk for alternative *A*, *wj* the point weight for each subcategory criterion *j* relating to design, performance or operation, or the social, financial or environmental surroundings, where *j* = {1, 2, ... , *m*}, and *Fj*(*A*) the category preference for the alternatives taken into account by the method considered.
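Equation (8) reduces to a weighted sum, as in the following minimal sketch; the weights and hazard scores shown are hypothetical placeholders (in the method, the weights come from the AHP and the scores from the criteria subcategories):

```python
def additive_risk(weights, scores):
    """Equation (8): r_A = sum_j w_j * F_j(A)."""
    assert len(weights) == len(scores)
    return sum(w * f for w, f in zip(weights, scores))

# Hypothetical example: four subcategory criteria scored on the 1-3 hazard scale
w = [0.40, 0.30, 0.20, 0.10]   # AHP-derived point weights (sum to 1)
f = [3, 2, 1, 2]               # hazard scores for the assessed pipe
print(round(additive_risk(w, f), 3))  # 2.2
```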

Depending on its influence on the risk index, each category was assigned a percentage weight that took account of importance in line with the Analytic Hierarchy Process (AHP), introduced by Thomas L. Saaty [72]. The procedure for using the AHP involves definition and analysis of the decision problem and goal-setting decisions. A set of criteria that have to be comparable are then established. The method also accepts criteria expressed quantitatively and qualitatively. Another important step in the procedure overall is the selection of relevant alternatives and the determination of implications of variations in the defined criteria. It is then possible to structure the hierarchical model. Input data are subject to comparisons of selected elements in pairs, with the advantage of one element over another determined in this way, in line with a nine-point pairwise comparison scale. The final decision is based on a synthesis of partial evaluation and selection of the best variant, through the creation of overall assessment scales using criteria and partial evaluation of alternatives. This relative preference is determined on a linear scale and verbally, taking into account the same significance or the strength of any advantage. Individual preferences and specific degrees of advantage in line with the Saaty scale are as shown in Table 1 [72,73].

**Table 1.** Saaty's scale of relative preferences in pairwise comparison [72,73].

Giving a relative preference means that the selection and evaluation of individual parameters are closer to reality, and an expert's opinion is used to increase the substantive correctness of results. In the first place, the pairwise comparison of criteria allows them to be ordered qualitatively; the comparison matrix is then constructed quantitatively from values ranging from 1/9 to 9, with seventeen possible evaluations thus provided for.

In pairwise comparison by reference to an *n* by *n* matrix (*A*), so-called reciprocal pairwise comparisons are made, as already mentioned, and can be written as [72,73]:

$$A = \begin{bmatrix} a\_{ij} \end{bmatrix} \text{ for } i, j = 1, \dots, n \tag{9}$$

where

$$a\_{ij} = \frac{1}{a\_{ji}},\tag{10}$$

and

$$a\_{ii} = 1.\tag{11}$$

The matrix is consistent if:

$$a\_{ij} = \frac{w\_i}{w\_j} \tag{12}$$

The matrix takes the following form:

$$A = \begin{bmatrix} 1 & a\_{12} & \cdots & a\_{1n} \\ \frac{1}{a\_{12}} & 1 & \cdots & a\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{a\_{1n}} & \frac{1}{a\_{2n}} & \cdots & 1 \end{bmatrix} \tag{13}$$

As a result of calculations of the comparison matrix in pairs, vectors of the priorities *w* = (*w1*, ..., *wn*), concerning the significance of elements, are obtained.

The obtained priority vectors can be presented in the form of a matrix of normalized grades, using Saaty's fundamental comparison scale:

$$Aw = \begin{bmatrix} \frac{w\_1}{w\_1} & \frac{w\_1}{w\_2} & \cdots & \frac{w\_1}{w\_n} \\ \frac{w\_2}{w\_1} & \frac{w\_2}{w\_2} & \cdots & \frac{w\_2}{w\_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{w\_n}{w\_1} & \frac{w\_n}{w\_2} & \cdots & \frac{w\_n}{w\_n} \end{bmatrix} \times \begin{bmatrix} w\_1 \\ w\_2 \\ \vdots \\ w\_n \end{bmatrix} = n \times \begin{bmatrix} w\_1 \\ w\_2 \\ \vdots \\ w\_n \end{bmatrix} \tag{14}$$

hence equality occurs:

$$Aw = nw\tag{15}$$

To check whether the matrix is consistent, the principal eigenvalue *λmax*, i.e., the largest eigenvalue of the matrix, should be calculated in line with Equation (16):

$$
\lambda\_{\text{max}} = \frac{1}{w\_i} \sum\_{j=1}^{n} a\_{ij} w\_j \tag{16}
$$

The matrix of pairwise comparisons *A* = (*aij*) is said to be consistent when its principal eigenvalue is close to *n*. To assess the compatibility of deviations, a Consistency Index (*CI*) is determined, in line with the dependence relationship:

$$CI = \frac{\lambda\_{\text{max}} - n}{n - 1},\tag{17}$$

Next, a Consistency Ratio (*CR*) is determined, defining the degree of inconsistency present in the comparisons of the importance of the characteristics. This is given by:

$$CR = \frac{CI}{RI},\tag{18}$$

where *RI* is the Random Index, depending on the number of comparable items, according to the matrix dimensions listed in Table 2.

**Table 2.** *RI* values corresponding to matrix dimensions.

| *n* | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| *RI* | 0.00 | 0.00 | 0.58 | 0.90 | 1.12 | 1.24 | 1.32 | 1.41 | 1.45 | 1.49 |

To calculate weighting factors and consistency indexes, with a view to calculations performed being checked for their accuracy, use was made of methods involving the arithmetic and geometric means, as well as matrix multiplication. Depending on the designated consistency ratio, preferential information can be adopted or rejected. If the value of *CR* is greater than the permitted 0.1, an inconsistent matrix is present, and preferential information will need to be verified by decision makers. In such a case, actions will need to be taken to reduce low-quality observations and inconsistencies relating to them. It is very important that a decision problem taken up should not be too difficult and should be in line with assumptions adopted.
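The weight and consistency calculations of Equations (13)–(18) can be sketched as below; this is an illustrative implementation using the geometric-mean approximation of the priority vector, with an invented 3 × 3 judgment matrix (not taken from the paper):

```python
import math

# Random Index values for matrix dimension n (standard Saaty values)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def ahp_weights(A):
    """Priority weights via the geometric-mean approximation, then
    lambda_max (Eq. 16), CI (Eq. 17) and CR (Eq. 18)."""
    n = len(A)
    gm = [math.prod(row) ** (1.0 / n) for row in A]  # geometric mean of each row
    total = sum(gm)
    w = [g / total for g in gm]                      # normalized priority vector
    # lambda_max per Eq. (16), averaged over the rows i
    lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)                         # Consistency Index, Eq. (17)
    cr = ci / RI[n] if RI[n] else 0.0                # Consistency Ratio, Eq. (18)
    return w, lam, ci, cr

# Invented judgment matrix: criterion 1 moderately/strongly preferred over 2 and 3
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, lam, ci, cr = ahp_weights(A)
print([round(x, 3) for x in w], round(cr, 4))  # CR < 0.1, so the matrix is consistent
```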

What is described here can combine with the experience and knowledge of experts to form a basis for expert opinions as regards the risk of failure. Indeed, the preparation of the categories and subcategories of criteria presented in Table 3 involved cooperation with designers, contractors and operators of the studied WDN. The key variables were also based on the relevant literature cited in the text. Expert opinion provides an opportunity for the experience of experts in a given field to be brought together, such that all the most important factors affecting the risk values associated with failures in a WDN may then be taken into account. Values for the assessment subcategories of criteria are adopted in line with the importance of the damaged pipe, using a scale whereby a low hazard scores 1, a moderate hazard 2 and a severe hazard 3. If a given factor is not present in an analysis, a value of 1 for the subcategory is assumed.

Table 3 sets out proposed categories and subcategories of criteria for the analysis and identification of areas at risk of WDN failure [6,10,23,71].

For the mMCDA method, a three-step scale for the additive value of the risk category was adopted, risk assessment thus being provided for via implementation of the AHP in line with a scale comprising tolerable risk (2.567–3.06), controlled risk (3.06–6.02) and unacceptable risk (6.02–7.701). The proposed scale was based on the opinions of experts, who assessed the risk categories on the basis of their knowledge, experience and data in litteris. Assessment of the obtained additive value of risk supports the taking of decisions relating to the operation or modernization of the system. Should a value indicating tolerable risk be obtained, the system is running in a proper and reliable way and no extra or preventive actions are required. Controlled risk means that the system is permitted to function, but on condition that modernization or renovation commences. If an unacceptable level is found to have arisen, immediate action should be undertaken to reduce the additive value of the risk category. It should be noted that the consequences of failure in the water distribution subsystem are periodic breaks in the delivery of water, or the supply of water of inadequate quality.
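The three-step scale can be expressed as a simple classification, sketched below; the band boundaries are those proposed above, with boundary values assigned to the lower band here (an assumption on our part, as the paper does not state how ties are resolved):

```python
def risk_category(r: float) -> str:
    """Map an additive risk value onto the mMCDA three-step scale:
    tolerable (2.567-3.06), controlled (3.06-6.02), unacceptable (6.02-7.701)."""
    if r <= 3.06:
        return "tolerable"       # no extra or preventive actions required
    if r <= 6.02:
        return "controlled"      # operation allowed; modernization must commence
    return "unacceptable"        # immediate risk-reduction measures

print(risk_category(2.8), risk_category(4.5), risk_category(6.5))
```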

**Table 3.** Categories and subcategories of criteria for the analysis and identification of areas at risk of WDN failure [6,10,23,71].

Notes: Above-standard network monitoring: full monitoring of the WDN through measurement of water pressure and flow rate, possession of specialized apparatus to detect water leaks by acoustic methods, unrestricted communication with the public via a phone line active 24 h a day, monitoring of water quality in a WDN by means of a protection and warning system. Standard: simplified monitoring of WDN by way of pressure measurement, inability to respond to minor leaks, water quality tests in WDN conducted. None: lack of monitoring of network and water quality.

#### **4. Results and Discussion**

#### *4.1. Failure Rate Analysis*

The failure risk analysis is found to contribute significantly to a reduction in failures overall, and a limiting of their consequences. The analysis in question was based on operational data obtained over 13 years of observation. The *λWN* values for failure rates over the network as a whole (Figure 2), and for mains *λ<sup>M</sup>* (Figure 3), distribution pipes *λ<sup>D</sup>* (Figure 4), and SCs *λSC* (Figure 5), were determined in line with Equation (1). The failure rate for WDN pipes in the subsequent months of the analyzed 2005–2017 period was then determined. To show seasonal fluctuations, Figure 6 presents basic statistical characteristics for the monthly values of failure rates. Results of the seasonal calculations are shown in Figure 7.

**Figure 2.** The failure rate for the whole network over the distinguished 2005–2017 time period.

**Figure 3.** The failure rate for mains over the distinguished 2005–2017 period.

**Figure 4.** The failure rate for distribution pipes over the distinguished 2005–2017 period.

**Figure 5.** The failure rate for SCs over the distinguished 2005–2017 period.

**Figure 6.** A box-whisker plot of the mean values for failure rates along the WDN in each month of the year and in line with the following operational data for the 2005–2017 period.

**Figure 7.** Relative seasonal fluctuations of failure rates in WDN.

Over the analyzed period of time, the total number of failures amounted to 1862 (on average 143 failures per year of observation), while the corresponding average failure rate is 0.522 failures/(km·year). Before the analysis was commenced, verification of operational data was performed, in respect of the classification of data on the failure rates of individual types of pipes and changes resulting from repairs and modernizations. The determined SD for the whole network was 0.171 failures/(km·year), as compared with 0.178 failures/(km·year) for the mains, 0.113 failures/(km·year) for distribution pipes, and 0.483 failures/(km·year) for SCs. Where the whole network is taken into account (in line with the data presented in Figure 2), the highest failure rates were found to have occurred in 2010 and 2012, with 199 and 260 failures respectively. The lowest failure rates occurred in 2008 and 2015, with 78 and 85 failures respectively. The high failure rate in 2010 may be related to flooding, which took place twice that year, with the remediation work that followed causing the number of recorded failures to increase. Detailed analysis in turn indicated that the high failure rate in 2012 reflected low temperatures, which caused the freezing of SCs. A dependence is observed in the distribution of failure rates across the months of the analyzed years, and this is primarily due to changes in ground temperature in the winter-spring period. In 2012, about 24% of all failures (about 63 in absolute terms) occurred in February, mainly reflecting changes in ground temperature during the winter season. An increased number of failures also occurs in July, probably owing to heat-driven drying of soil leaving pipes more prone to cracking. Freezing failures, in turn, arise where pipes are laid at shallow depth, with low water flow at night leaving the water in such pipes more prone to freezing in periods of frost.
Over the last few years, the length of distribution pipes and SCs has increased significantly: since 2005, about 22.2 km of water-supply connections and 45.8 km of distribution pipes have been constructed. It is worth underlining that failures of the main network do not cause a break in water supply, because the mains consist of two parallel pipelines. According to the data presented (Figures 3 and 5), water supply connections (*λSCavg* = 1.012 failures/(km·year)) are damaged most often, while the main network is damaged least often (*λMavg* = 0.22 failures/(km·year)). The greater number of failures in distribution pipes and SCs is related to the age and type of material from which they are made. Over the years, the failure rate of distribution pipes has declined, attesting to modernization work on the WDN in the city. According to the data presented in Figure 4, in recent years there has been a tendency for the failure rate along distribution pipes to decrease, with the significant reduction occurring after 2010, when modernization of the WDN began. The distribution network meets the criterion that the failure rate should be less than 0.5 failures/(km·year), except in 2010, when the criterion value was slightly exceeded, by about 12%. In the case of SCs there was a decrease in the failure rate after 2012; since that period SCs have met the standard (*λSC* ≤ 1.0 failure/(km·year)), and, as was mentioned above, this reflects modernization and a change in the material from which the network is made, given the use of PVC and PE pipes, which are less prone to failure and characterized by lower failure rates than pipes made from steel or cast iron.
Taking into account the criteria presented in work [67] and the mean failure rate, the network as a whole is characterized by low reliability, with *λ* ≥ 0.5 failures/(km·year), while distribution pipes and SCs are of average reliability, with mean failure rates between 0.1 and 0.5 failures/(km·year). The results of calculations show that, in every year of the analyzed period, due to seasonal fluctuations (as Figure 6 shows), the total number of failures in February was higher than the monthly average (100%) by approximately 47.6%, in January by 25.3%, and in July by 22.2%; in April, in turn, the value is lower than the average by 31.7%, and so on (Figure 7). Such tendencies are also present in other systems [44]. The absolute seasonal fluctuation *gi* indicates that, in January, the total number of failures is higher than in an average month (at *y* = 0.569 failures/(km·year)) by 0.271 failures/(km·year), while in April the figure is lower than the average by 0.18 failures/(km·year) (Figure 8). The SD of the absolute seasonal variation over the 13 years in operation is at the level of 0.123 failures/(km·year).
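The relative and absolute seasonal fluctuations used above can be sketched in a few lines of Python; the monthly failure counts below are illustrative placeholders, not the operational data of the studied WDN.

```python
# Illustration of the seasonal-fluctuation measures used above: the relative
# fluctuation (monthly count as a percentage of the monthly average) and the
# absolute fluctuation g_i (deviation from the monthly average). The counts
# below are placeholders, not the operational data of the studied WDN.
monthly_failures = {"Jan": 18, "Feb": 21, "Mar": 14, "Apr": 10,
                    "May": 12, "Jun": 13, "Jul": 17, "Aug": 14,
                    "Sep": 12, "Oct": 13, "Nov": 14, "Dec": 16}

avg = sum(monthly_failures.values()) / 12        # monthly average (= 100%)

relative = {m: 100.0 * n / avg for m, n in monthly_failures.items()}  # in %
absolute = {m: n - avg for m, n in monthly_failures.items()}          # g_i

print(f"February: {relative['Feb']:.1f}% of the monthly average")
print(f"April: g_i = {absolute['Apr']:+.1f} failures vs. the average month")
```

A winter peak such as the February one in the study would show up here as a relative value well above 100% and a positive *gi*.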

The box-whisker plot of mean failure rates allowed for an overall assessment: the most marked variation was shown for the winter month of January, characterized by an SD of 0.75, while the most limited differentiation occurs in the summer month of July, for which the SD is at the level of 0.39. Detailed analysis of the last year of observation indicates that the highest failure rate characterizes pipes made of steel, at 0.23 failures/(km·year), as compared with the lowest failure rate, determined for PE, at about 0.09 failures/(km·year).

Spearman's rank correlation coefficient was used to compare the assessments of the different criteria against the monthly failure rates. When the obtained coefficients were compared, a high level of agreement regarding the assessment and classification of failure rate was found only in the case of the mains (at the level of 0.589, and hence significant at *p* < 0.05). In the case of service connections, the average value achieved for the relationship was 0.395. The lack of failure rates below 0.1 failures/(km·year) resulted in the same assessment according to the two criteria for both the whole network and the distribution pipes, with a functional dependence also noted between these two assessments.
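For illustration, the rank comparison described above can be sketched as follows; Spearman's coefficient is implemented from scratch (with average ranks for ties), and the two monthly series are placeholders, not the study's operational records.

```python
# Sketch: Spearman's rank correlation between two monthly failure-rate series.
def rank(values):
    # 1-based ranks; tied values receive the average of their rank positions
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative monthly rates (Jan..Dec), failures/(km*year) -- not study data
mains_rates = [0.31, 0.45, 0.28, 0.22, 0.25, 0.24,
               0.35, 0.27, 0.23, 0.26, 0.29, 0.33]
sc_rates    = [1.10, 1.35, 0.95, 0.80, 0.85, 0.90,
               1.05, 0.92, 0.84, 0.88, 0.97, 1.08]

rho = spearman(mains_rates, sc_rates)
print(f"Spearman rho = {rho:.3f}")
```

A value near +1 indicates that the two assessments rank the months in nearly the same order, which is the kind of agreement reported above for the mains.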

**Figure 8.** Absolute seasonal fluctuations of failure rates in WDN, in failures/(km·year).

#### *4.2. Results of Failure Risk Assessment for WDN*

The performed analysis focused on a section of the distribution network along which traffic difficulties caused by repairs of the WDN arise very often. The examined section has a total length of 2.4 km and forms part of the WDN of the city under consideration, supplying 50,000 inhabitants. The expert analysis revolved around six main criteria, which were compared in pairs in line with the scale developed, as presented in Table 4.


**Table 4.** Matrix construction and weighting calculation for categories associated with the failure risk assessment.

As the determined *CR* is acceptable (<0.1), the results obtained may be adopted. The preference vectors determined on this basis are shown in Table 5 [72].


**Table 5.** A characterisation of the network.

According to Table 2, the value of *r*(*A*) was 4.72, which corresponds to the category of a controlled risk level. The obtained value indicates that some improvement in the work of the WDN, or repair of certain sections of it, should be considered to minimize the risk of failure. Directions proposed for the WDN operator responsible for monitoring proper functioning included the introduction of preventive measures, with a view to ensuring that conditions leading directly to a WDN threat do not arise. At this stage, the possibilities of undesirable events occurring should be analyzed from the point of view of the reliability of the water supply system, through water reserving and the monitoring of failures around the WDN, as can be achieved by introducing repair brigades that are properly organized and fully ready 24 h a day. Introduction of a satisfactory level of protection, with a view to protecting the consumers of drinking water using the public WSS, also seeks to minimize possible losses. Minimizing the effects of unintentional events is also advocated, with water taken from alternative sources if necessary and the process of informing the public about interruptions in water supply checked for its effectiveness.
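The AHP steps referred to above can be sketched numerically: weights from the principal eigenvector of a pairwise comparison matrix, the consistency ratio *CR* = *CI*/*RI*, and an additive risk value *r*(*A*) as a weighted sum of criterion scores. The 3 × 3 matrix and the scores below are illustrative only, not the six-criterion data of Tables 4 and 5.

```python
# Hedged AHP sketch: power iteration for weights, Saaty's consistency check,
# and an additive risk value. Matrix and scores are illustrative.
def ahp_weights(M, iters=200):
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):                       # power iteration
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # estimate of the principal eigenvalue lambda_max
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    return w, lam

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}        # Saaty's random indices

M = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]                # pairwise comparisons

w, lam = ahp_weights(M)
CI = (lam - len(M)) / (len(M) - 1)               # consistency index
CR = CI / RI[len(M)]                             # consistency ratio

scores = [5.0, 4.0, 6.0]                         # illustrative criterion scores
r_A = sum(wi * si for wi, si in zip(w, scores))  # additive risk value
print(f"weights = {[round(x, 3) for x in w]}, CR = {CR:.3f}, r(A) = {r_A:.2f}")
```

As in the text, the weights are usable only if *CR* < 0.1; otherwise the pairwise comparison matrix should be revised by the experts before the additive risk value is computed.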

#### **5. Conclusions and Perspectives**

The issues presented may prove of great interest to authorities managing the operation of a WSS in water supply companies, as well as to the scientific community, doctoral students and students of technical and agricultural universities, employees of design companies, and people who deal with water treatment technologies and the operation of WDNs and internal installations.

The methodology in question can gain direct application in relation to other water supply systems, given the relative universality of the approach presented here. Moreover, the presented method of failure analysis should be a major element of any comprehensive management. If numerous failures arise in a WDN, the water company is obliged to modernize and renovate, so as to minimize pipeline failures. In its initial stages, the managing of WDN safety entails creating a database of weak points of system functioning, with particular emphasis placed on frequency of occurrence and associated negative effects. In safety management, decisions are made as regards the selection of risk protection measures, their implementation in operational practice, and the checks needing to be carried out on the effectiveness of applied solutions.

Further work in this area will focus on researching exposure to failure using operational data, field investigations and expert analyses, facilitating the identification of particular areas with high and unacceptable failure rates and thereby achieving a classification of network sections, risk-mapped as in need of renovation. A further important aspect will entail analysis of failure assessment in the context of determining acceptance criteria for failure. The suggested method of failure risk assessment for WDNs is among the expert methods that may be deployed, and may form part of the decision-making process as regards plans for the modernization of a WSS. It can also be used where operational data prove insufficient, combined with the experience and knowledge of experts as a basis for opinions regarding the risk of failure. Furthermore, this type of analysis is very helpful in classifying those segments of a WDN that need repair. Multi-criteria methods in failure risk assessment involve cooperation between designers, contractors and operators of the WDN, and so offer an opportunity to combine the experience of experts in given fields. The methods proposed here, based on operational data and expert knowledge, can form the basis of a comprehensive risk management program forming part of the water safety plans recommended by the WHO, which will become obligatory given the need for modern standards regarding the safety of drinking water to be met.

**Author Contributions:** All authors equally contributed to the development of this manuscript.

**Funding:** This research was funded by Faculty of Civil and Environmental Engineering and Architecture, Rzeszow University of Technology, 35-959 Rzeszow, Poland.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Flow Velocity Distribution Towards Flowmeter Accuracy: CFD, UDV, and Field Tests**

#### **Mariana Simão 1,\*, Mohsen Besharat 1, Armando Carravetta <sup>2</sup> and Helena M. Ramos <sup>1</sup>**


Received: 22 October 2018; Accepted: 6 December 2018; Published: 8 December 2018

**Abstract:** Inconsistencies regarding flow measurements in real hydraulic circuits have been detected. Intensive studies have indicated that these errors are mostly associated with flowmeters, and that the low accuracy is connected to the perturbations induced by the system layout. In order to verify the source of this problem, and to assess the hypotheses drawn by expert operators, a computational fluid dynamics (CFD) model, COMSOL Multiphysics 4.3.b, was used. To validate the results provided by the numerical model, intensive experimental campaigns were developed using ultrasonic Doppler velocimetry (UDV) for calibration, and a pumping station was simulated using the values measured in situ as boundary conditions. After calibration and validation, a new layout/geometry was proposed in order to mitigate the observed perturbations.

**Keywords:** experiments; ultrasonic Doppler velocimetry (UDV); flowmeters; computational fluid dynamics (CFD); pipe system efficiency

#### **1. Introduction**

Worldwide, water companies use several flowmeters to measure the amount of water distributed. This equipment is vital for the management of water companies, since important improvements are made according to the data provided by the available measurements. The data provided by these devices also influence several performance indicators regarding the management of the system, such as non-revenue water (NRW) and water balances. NRW is the volume of treated water that is not purchased, while water balances are important tools for detecting leaks throughout the supply and distribution processes. Both tools require accurate measurements [1].

Flow measurements can be correlated with system efficiency. Usually, the systems are driven partly by gravity and partly by pressure differences, which require a pumping station. If measurement accuracy is guaranteed, a higher energy efficiency level can be achieved, making it possible to plan working periods within the lower energy tariffs, depending on the regularization ability and the water needs downstream [2,3].

Thus, correct measurement using flowmeters is an extremely important factor in terms of hydraulic system accuracy and management. Amongst other domains, the measured flow values are one of the key inputs for detecting leaks in pipe systems. Therefore, for any hydraulic system manager, accuracy in flow measurements has significant impacts on both planning and investment decisions. Even when all the constraints imposed by manufacturers are followed, incongruences can be identified in the field [4]. These irregularities have many times suggested the occurrence of leaks in pipe systems, a conclusion that raises a new problem that has not yet been well identified [5].

Hence, this research has the objective of analyzing different variables, namely the pipe layout and the way it can disturb the flow and flowmeter accuracy. The purpose is to assess how vertical or horizontal curves, expansions and reductions, among other geometries, can influence the flow and, consequently, the measurements. To fulfil this goal, the influence of several perturbations was identified, using an electromagnetic flowmeter and ultrasonic Doppler velocimetry (UDV), and compared with the computed velocity profiles, using CFD models. The numerical model was calibrated and validated using the same conditions as the experimental facility. The numerical simulations showed good agreement with the velocity measurements for two different geometries. To evaluate the accuracy of the numerical results, several experimental tests using two different geometries were first developed, identifying perturbations in the flow measurements, followed by analysis of a real case study.

#### **2. Electromagnetic Flowmeters**

Flowmeters are among the most important and most used devices for measuring the volume flow rate accurately [6]. Nowadays there are several types of flowmeters, but the most important for flow measurement are the electromagnetic and ultrasonic ones, characterized by their high accuracy and self-monitoring [7]. According to the authors of [1], electromagnetic flowmeters are only disturbed by existing particles that may change the magnetic properties of the fluid; properties such as temperature, viscosity and fluid density do not affect the measurements. However, as mentioned in [8], a steady regime is a necessary condition to guarantee accurate measurements.

These flowmeters have two different elements: the primary element and the convertor (Figure 1). The first corresponds to a hollow circular pipe with coils along its length and is set in the pipeline [9,10]. The flow passing through this section moves through an electromagnetic field and induces a signal proportional to the volume flow rate. The convertor is the brain of the device: it creates the magnetic field, reads the voltage, displays the data, and generates outputs, showing the volume flow rate and the total volume passed through.

**Figure 1.** Electromagnetic flowmeters components: (**a**) primary element; (**b**) convertor (adapted from [1]).

According to several manufacturers, electromagnetic flowmeters ensure an accuracy of 0.2%, as long as the flow velocities are higher than 1 m/s and the installation requirements are fulfilled. The accuracy of electromagnetic flowmeters depends on the velocity, according to Equation (1) [1] and as represented in Figure 2.

$$\text{error} = \begin{cases} -40U + 6\% & U < 0.1 \text{ m/s}\\ 8\left(U - 0.55\right)^2 + 0.38\% & 0.1 \text{ m/s} \le U \le 0.5 \text{ m/s}\\ -0.41U + 0.6\% & 0.5 \text{ m/s} < U < 1 \text{ m/s}\\ 0.2\% & U \ge 1 \text{ m/s}\end{cases} \tag{1}$$
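Equation (1) can be sketched as a piecewise Python function returning the error in percent for a given mean velocity *U*; the coefficients are those of Equation (1), and the branches join (approximately) continuously at 0.1, 0.5 and 1 m/s.

```python
# Sketch of Equation (1): electromagnetic flowmeter error (in %) as a
# function of the mean flow velocity U (m/s).
def flowmeter_accuracy(U):
    if U < 0.1:
        return -40.0 * U + 6.0
    if U <= 0.5:
        return 8.0 * (U - 0.55) ** 2 + 0.38
    if U < 1.0:
        return -0.41 * U + 0.6
    return 0.2

for U in (0.05, 0.1, 0.5, 1.0, 2.0):
    print(f"U = {U:4.2f} m/s -> error {flowmeter_accuracy(U):.2f}%")
```

Evaluating the function shows the behavior of Figure 2: errors of several percent below 0.1 m/s, falling to the nominal 0.2% once the velocity exceeds 1 m/s.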

**Figure 2.** Electromagnetic flowmeter accuracy curve [1].

In order to measure the volume flow rate, the Faraday principle is applied in electromagnetic flowmeters. In 1831, Michael Faraday discovered that if an electric conductor moves in a magnetic field, perpendicular to the direction of motion, a voltage is induced that is proportional to the strength of the magnetic field as well as to the velocity. If the conductor is water, the flow passing through a magnetic field induces a voltage proportional to the flow velocity. Figure 3 represents the operating principle, which is important for understanding the developments that follow.

**Figure 3.** Operating principle of an electromagnetic flow meter [1].

Through mathematical manipulation, the voltage *UE* induced by the flow passage is directly proportional to the volume flow rate, *Q*, i.e., *UE* ∼ *Q*. According to [1], the signal induced by the flow is taken into account only in the cross section defined by the electrodes, perpendicular to the flow. In other words, only the component of the velocity parallel to the pipe axis is relevant for the volume flow rate measurement [11], which means the three-dimensional nature of the flow is disregarded.

The flow velocity profile is not the same throughout the entire cross section. For that reason, to prevent over or under-evaluations of the flow velocity, the suppliers of this equipment use a weighting factor, *W*. Figure 4 represents the weighting factor distribution in a cross section.

**Figure 4.** Weighting factor distribution *W* in the electrode plane [1].

In Figure 4, each point in the cross section has a different weighting factor associated with it. The sum of the products of the velocity and the respective weighting factor corresponds to the induced signal, which is proportional to the volume flow rate. Although this is a good method to determine the volume flow rate in a homogeneous constant magnetic field for symmetric velocity profiles, the formulation does not provide good results for non-symmetric velocity profiles: in these cases, it would over-evaluate some values and under-evaluate others [1], leading to an inaccurate volume flow rate. To avoid this, the suppliers of the equipment consider a magnetic induction field, *B*, inversely proportional to the weighting factor *W*, Equation (2):

$$W \times B = \text{const} \tag{2}$$

According to Equation (2), for a cross section region in which the weighting factor is small, the magnetic induction field is increased, and vice versa. This action ensures good results even for non-symmetrical velocity profiles [2].
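The compensation in Equation (2) can be sketched numerically: with the magnetic induction chosen so that *W* × *B* stays constant, every point of the electrode plane contributes to the signal in proportion to its velocity alone, which is what makes the measurement robust to non-symmetric profiles. All values below are illustrative, not manufacturer data.

```python
# Numerical sketch of Equation (2): B inversely proportional to W keeps
# W*B constant, so the profile shape drops out of the summed signal.
velocities = [0.8, 1.2, 1.0, 0.9]     # point velocities in the plane (m/s)
weights    = [1.3, 0.8, 1.0, 0.9]     # weighting factors W at those points

const = 1.0                            # W * B = const (Equation (2))
B = [const / w for w in weights]       # magnetic induction at each point

signal = sum(v * w * b for v, w, b in zip(velocities, weights, B))
print(signal)   # equals const * sum(velocities)
```

Because each term reduces to *v* × const, points in high-weight and low-weight regions are treated symmetrically, as the text describes.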

#### **3. Equipment and Layout**

#### *3.1. Flow Meter Installation*

According to several authors [1,2,4], electromagnetic flowmeters (Figure 5) are only disturbed by the existence of particles that might change the magnetic properties of the fluid. Thus, it is necessary to guarantee the existence of linear flow paths to ensure measurement accuracy. It is therefore necessary to meet the minimum installation requirements for each geometry, as specified by each manufacturer. Figure 5 presents the installation requirements imposed by the suppliers for the most common geometries.

**Figure 5.** *Cont.*

**Figure 5.** Installation requirements for different pipe layouts.

#### *3.2. UDV Installation*

The UDV operating principle is the MET-FLOW approach—an ultrasonic probe is placed near the pipe wall with a certain slope. The ultrasound is emitted and travels across the pipe cross section. When the ultrasound hits a fluid particle, some energy of the ultrasound disperses and produces an echo, which reaches the probe. Then, by a mathematical manipulation, the equipment delivers a velocity value. Figure 6 presents the UDV operating principle.

**Figure 6.** Ultrasonic Doppler velocimetry (UDV) operating principle (adapted from [12]).

The UDV uses an ultrasonic wave in order to provide the velocity profile. At a certain time (e.g., *t*1), a burst is emitted. This burst propagates inside the liquid. At time *t*2, the burst touches the particle. If the size of the particle is much smaller than the wavelength, only a very small echo is generated (scattering effect). This echo goes back in the direction of the transducer, while the main energy continues its propagation [11,13]. At time *t*3, the echo reaches the transducer. The depth of the particle, *Depth* = *C*(*t*3 − *t*1)/2, can be determined from the traveling time (*t*3 − *t*1), where *C* is the velocity of the acoustic wave in the liquid. Following each emission, the echo signal is sampled at a fixed delay after the emission; this delay defines the depth. However, if the particle moves between successive emissions, the sampled values taken at time *ts* will change over time [14]. Depending on the shape of the emitted signal, these values may form a sinusoidal signal. The frequency, *Fd*, of this sinusoidal signal, named the Doppler frequency, is directly connected to the velocity of the particle, as given by the Doppler equation [14]:

$$V = \frac{F\_d C}{2F\_e \cos\theta} \tag{3}$$

where *Fe* is the frequency of the emitted burst and *θ* is the angle between the ultrasonic beam and the flow direction.

UDV offers a complete velocity profile almost instantaneously. Unfortunately, as the information is available only periodically, a maximum measurable velocity (*Vmax*) exists for each pulse repetition frequency (*Fprf*) [11]:

$$V\_{\text{max}} = \frac{F\_{prf} C}{4F\_e \cos\theta} \tag{4}$$

Thus, the maximum measurable depth (*Pmax*) is also defined by the pulse repetition frequency, Equation (5); consequently, the product of *Pmax* and *Vmax* is constant, as given by Equation (6) [13,14]:

$$P\_{\text{max}} = \frac{C}{2F\_{prf}} \tag{5}$$

$$P\_{\text{max}} \times V\_{\text{max}} = \frac{C^2}{8F\_e \cos\theta} \tag{6}$$
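Equations (3)–(6) can be sketched with the symbols used in the text: *C* the sound speed in the liquid, *Fe* the emitted frequency, *Fprf* the pulse repetition frequency, and *θ* the probe angle. The numerical values below are illustrative, not the study's settings.

```python
# Sketch of the UDV relations in Equations (3)-(6). Values are illustrative.
import math

C     = 1480.0                 # m/s, sound speed in water
Fe    = 4.0e6                  # Hz, emitted burst frequency
Fprf  = 1.0e4                  # Hz, pulse repetition frequency
theta = math.radians(70.0)     # probe angle

def doppler_velocity(Fd):                            # Equation (3)
    return Fd * C / (2.0 * Fe * math.cos(theta))

V_max = Fprf * C / (4.0 * Fe * math.cos(theta))      # Equation (4)
P_max = C / (2.0 * Fprf)                             # Equation (5)

# Equation (6): the product P_max * V_max is fixed by C, Fe and theta alone
product = C ** 2 / (8.0 * Fe * math.cos(theta))
print(P_max * V_max, product)
```

The last two lines make the trade-off explicit: raising *Fprf* to measure faster flows shrinks the measurable depth, since their product is fixed by *C*, *Fe* and *θ*.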

In the presence of air, the equipment is not able to read the signal; therefore, the probe needs to be well installed. Thus, the probe is set in a specific probe holder (Figure 6), a plastic rectangle with several holes, each of which has a certain angle associated with it. This probe holder has two functions: to guarantee the stability of the probe, since it makes it possible to attach the probe to the pipe; and to make sure that there is no air between the pipe and the probe. The latter is accomplished by inserting a gel into the hole where the probe is installed, always ensuring contact with it.

Moreover, since the UDV is very sensitive equipment, especially to electromagnetic noise, its readings may be compromised by the electromagnetic field induced by the flowmeters.

#### *3.3. Experimental Facility*

In order to assess the computational results provided by the CFD model, intensive campaigns were developed in an experimental facility, schematically represented in Figure 7, and adapted to two continuous geometries and operating conditions. Regarding this pipe system layout, UDV measurements were made in section A (Figure 7a). Two different volume flow rates, 100 and 12 m3/h, were tested. In each UDV measurement, the velocity was captured at 100 different points, in a total of 100 profiles.

**Figure 7.** *Cont.*

**Figure 7.** System layouts 1 and 2: (**a**) facility scheme; (**b**) lab installation, with flow direction identified by the blue arrow.

The results obtained are presented in Table 1 for section A (Figure 7). The flowmeter error decreases as the velocity increases, as happens for *Q* = 100 m3/h in both layouts.



The velocity profiles measured in section A, using the UDV, are presented in Figure 8 for 100 m3/h and 12 m3/h, respectively.


**Figure 8.** *Cont.*

**Figure 8.** UDV profiles related to geometries 1 and 2: (**a**) for 100 m3/h; (**b**) for 12 m3/h.

#### **4. CFD Model**

#### *4.1. Governing Equations*

Computational fluid dynamics (CFD) provides qualitative and quantitative predictions of fluid flows by means of mathematical and numerical methods. These simulation tools represent an important technological advance towards a detailed understanding of the flow, allowing theoretical considerations regarding the physical behavior of the flow, with mathematical formulations for three-dimensional modelling analyses [7]. These models make it possible not only to study the behavior of turbulent and laminar flows, but also the multiple forms of energy exchange, flow phases, vorticity and turbulence levels [9].

The CFD model used in this work was COMSOL Multiphysics 4.3.b, which presents accurate results for several fluid flow problems [10]. COMSOL is a finite element method (FEM) software, which uses the mass conservation and the RANS (Reynolds averaged Navier–Stokes) equations as governing flow equations:

$$\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} + \frac{\partial (\rho v)}{\partial y} + \frac{\partial (\rho w)}{\partial z} = 0 \tag{7}$$

$$\rho \overline{u}\_j \frac{\partial \overline{u}\_i}{\partial x\_j} = \rho \overline{g}\_i + \frac{\partial}{\partial x\_j} \left[ -\overline{p} \delta\_{ij} + \mu \left( \frac{\partial \overline{u}\_i}{\partial x\_j} + \frac{\partial \overline{u}\_j}{\partial x\_i} \right) - \rho \overline{u'\_i u'\_j} \right] \tag{8}$$

FEM is a computational method that divides an object into smaller elements. Each element is assigned a set of characteristic equations that are then solved as a set of simultaneous equations to estimate the behavior of the object [15]. From the available turbulence models, the *k*-*ε* model was selected. The *k*-*ε* models [16] are the most common and most used models worldwide, mostly for industrial applications, due to their good convergence rate and relatively low memory requirements. This model solves for two variables: *k*, the turbulence kinetic energy, and *ε*, the rate of dissipation of turbulence kinetic energy. The turbulence model relies on several assumptions, the most important of which is that the Reynolds number is high enough. It is also important that the turbulence is in equilibrium in boundary layers, meaning that production equals dissipation. These assumptions limit the accuracy of the model because they do not always hold: the model does not respond correctly to flows with adverse pressure gradients, which can result in under-predicting the spatial extent of recirculation zones [17]. Furthermore, in the description of rotating flows, the model often shows poor agreement with experimental data [18]. In most cases, though, the limited accuracy is a fair trade-off for the computational resources saved compared to more complex turbulence models.

While it is possible to modify the *k*-*ε* model so that it describes the flow in wall regions, this is not always desirable because of the very high resolution requirements. Instead, analytical expressions are used to describe the flow at the walls [15]. These formulations are known as wall functions. Wall functions ignore the flow field in the buffer region and analytically compute a nonzero fluid velocity at the wall [17–19].

Thus, when using wall functions, the wall lift-off in viscous units, *δ*<sup>+</sup>*<sup>w</sup>*, needs to be checked. This value indicates whether the mesh at the wall is fine enough and should be 11.06 everywhere. If the mesh resolution in the direction normal to the wall is too coarse, this value will be greater than 11.06, and a finer boundary layer mesh needs to be applied in those regions. The second variable that should be checked when using wall functions is the wall lift-off *δ<sup>w</sup>* (in length units) [18]. This variable is related to the assumed thickness of the viscous layer and should be small relative to the surrounding dimensions of the geometry; if it is not, the mesh in those regions must be refined as well.

The wall functions in COMSOL are such that the computational domain is assumed to start a distance *δ<sup>w</sup>* from the wall (Figure 9).

**Figure 9.** Computational domain starts a distance *δw* from the wall [13].

In all simulations the fluid was water, with constant density and kinematic viscosity equal to 999.62 kg/m<sup>3</sup> and 1.0097 × 10<sup>−6</sup> m<sup>2</sup>/s, respectively.

In the CFD model, three types of boundary conditions were assigned: inlet, outlet and solid walls. For the inlet boundary condition the pressure was set, while for the outlet condition the average velocity was set. The no-slip condition was applied at the walls, which were considered impermeable. The boundary conditions used to define the inlet are governed by the following set of equations [16]:

$$\begin{aligned} p &= p\_0\\ \left[ \left( \mu + \mu\_T \right) \left( \nabla \mathbf{u} + \left( \nabla \mathbf{u} \right)^T \right) - \frac{2}{3} \rho k \mathbf{I} \right] \mathbf{n} &= \mathbf{0}\\ k &= \frac{3}{2} \left( U\_{\text{ref}} I\_T \right)^2 \qquad \varepsilon = C\_{\mu}^{3/4} \frac{k^{3/2}}{L\_T} \end{aligned} \tag{9}$$

where *p*<sup>0</sup> is the input value (of pressure), *IT* is the turbulent intensity, *LT* corresponds to the turbulence length scale, and *Uref* is the reference velocity scale.
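The inlet turbulence quantities in Equation (9) can be evaluated directly; the sketch below assumes the standard *k*-*ε* constant *Cμ* = 0.09 and uses illustrative values for *Uref*, *IT* and *LT*, not the study's boundary conditions.

```python
# Sketch of the inlet turbulence quantities in Equation (9), assuming the
# standard k-epsilon constant C_mu = 0.09. Input values are illustrative.
C_mu  = 0.09
U_ref = 1.0       # m/s, reference velocity scale
I_T   = 0.05      # turbulent intensity (5%)
L_T   = 0.01      # m, turbulence length scale

k   = 1.5 * (U_ref * I_T) ** 2             # k = (3/2) (U_ref * I_T)^2
eps = C_mu ** 0.75 * k ** 1.5 / L_T        # eps = C_mu^(3/4) k^(3/2) / L_T

print(k, eps)
```

In practice these two values are what a CFD user actually types into the inlet settings when specifying turbulence via intensity and length scale.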

For the outlet condition, the boundary conditions are expressed by Equation (10), where *U*<sup>0</sup> corresponds to the average velocity (input value) and the first equation sets the normal outflow velocity magnitude:

$$\begin{aligned} \mathbf{u} &= U\_0 \mathbf{n} \\ \nabla k \cdot \mathbf{n} &= 0 \qquad \nabla \varepsilon \cdot \mathbf{n} = 0 \end{aligned} \tag{10}$$

#### *4.2. Mesh Definition and Solution Convergence*

As for the mesh definition, the geometry was discretized into smaller units, called mesh elements. Mesh resolution and element quality are important aspects to take into account when validating the model, since decreasing the resolution can produce low-accuracy results [21], while low mesh element quality can lead to convergence issues [22–24].

All calculations were performed on a PC (Intel 5, CPU 3.90 GHz, RAM 8 GB) with 4 cores and threads running in parallel. By default, the model uses a physics-controlled mesh, which automatically defines the size attributes and operation sequences necessary to create a mesh adapted to the problem. This mesh is automatically created and adapted to the model's physics settings. The default physics-controlled meshing sequences create meshes that consist of different element types and size features, which can be used as a starting point to add, move, disable, and delete meshing operations. Each meshing operation is built in the order it appears in the meshing sequence to produce the final mesh [23]. Customizing the meshing sequence helps to reduce memory requirements by controlling the number, type, and quality of elements, thereby creating an efficient and accurate simulation [25].

For the fluid-flow model, since the mesh is adapted to the physics settings, the mesh is finer than the default one, with a boundary layer mesh (Figure 10a) in order to resolve the thin layer near the solid walls where the gradients of the flow variables are high [26–29].

**Figure 10.** Physics-controlled mesh used: (**a**) Boundary layer mesh; (**b**) Refinement near a curve.

To ensure proper and acceptable accuracy of the results, COMSOL uses an invariant form of the damped Newton method. Starting from *Z*0, the linearized model (solved with MUMPS) gives the Newton step (*δZ*) [16]. Afterwards, a new iterate is calculated according to Equation (11), where *λ* is the damping factor:

$$\begin{aligned} Z\_1 &= Z\_0 + \lambda \delta Z \\ 0 &< \lambda \le 1 \end{aligned} \tag{11}$$

The model estimates the error of the new iterate and, if the error of the current iteration is bigger than that of the previous one, the code decreases the damping factor and restarts the iteration. This procedure is repeated until either the error is smaller than the error calculated in the previous iteration or the damping factor reaches its minimum value (1 × 10<sup>−4</sup>). When a successful step is reached, the algorithm computes the next iterate. The iteration process finishes when the relative tolerance exceeds the computed relative error: the model stops iterating when the relative error is smaller than 1 × 10<sup>−3</sup> and the damping factor is equal to 1; otherwise the solution has not converged and the iteration continues. Figure 11 presents the convergence solution reached.
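The damped Newton strategy described above can be sketched on a one-dimensional problem *f*(*Z*) = 0: the damping factor is halved while the new residual is not smaller than the old one, and the step is then accepted. This mirrors only the described logic, not COMSOL's actual implementation.

```python
# Hedged sketch of a damped Newton iteration (1-D), following the logic
# described in the text; not COMSOL's implementation.
def damped_newton(f, df, z0, tol=1e-3, lam_min=1e-4, max_iter=50):
    z = z0
    for _ in range(max_iter):
        if abs(f(z)) < tol:                 # converged
            return z
        step = -f(z) / df(z)                # Newton step dZ
        lam = 1.0                           # damping factor
        while lam > lam_min and abs(f(z + lam * step)) >= abs(f(z)):
            lam *= 0.5                      # error grew: decrease damping
        z += lam * step
    return z

# Example: cube root of 2 from f(Z) = Z^3 - 2
root = damped_newton(lambda z: z ** 3 - 2.0, lambda z: 3.0 * z ** 2, z0=2.0)
print(root)
```

The same accept-or-damp loop, applied to the full nonlinear system, is what produces the stepwise decrease of the residual seen in a convergence plot such as Figure 11.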

**Figure 11.** Convergence solution.
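The damping logic described above can be sketched as follows. This is an illustrative sketch, not COMSOL's actual implementation: `residual` and `jacobian` are user-supplied stand-ins for the assembled nonlinear system, and the halving of the damping factor is a simplification of the solver's internal strategy.

```python
import numpy as np

def damped_newton(residual, jacobian, z0, rtol=1e-3, lambda_min=1e-4, max_iter=50):
    """Damped Newton iteration (simplified sketch of the scheme in the text)."""
    z = np.asarray(z0, dtype=float)
    err_prev = np.inf
    for _ in range(max_iter):
        # Solve the linearized system J(z) * dz = -F(z) for the Newton step.
        dz = np.linalg.solve(jacobian(z), -residual(z))
        lam = 1.0
        while lam >= lambda_min:
            z_new = z + lam * dz          # Equation (11): Z1 = Z0 + lambda * dZ
            err = np.linalg.norm(residual(z_new))
            if err < err_prev:            # error decreased: accept the step
                break
            lam /= 2.0                    # otherwise damp more and retry
        z, err_prev = z_new, err
        # Stop once the error is below the tolerance at full damping (lambda = 1).
        if err < rtol and lam == 1.0:
            return z
    return z

# Usage: solve x^2 = 2, written as a one-dimensional vector problem.
root = damped_newton(lambda z: np.array([z[0]**2 - 2.0]),
                     lambda z: np.array([[2.0 * z[0]]]),
                     [1.0])
```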

#### *4.3. Calibration and Validation*

According to the input data and the respective geometries, presented in Table 2 and in Figure 12, numerical simulations were made and compared with the experimental tests.


**Table 2.** Boundary, mesh and study conditions.

**Figure 12.** Model configuration: (**a**) geometry 1; (**b**) geometry 2.

Figure 13 presents the velocity profiles obtained in the CFD model and in the experimental tests, for the same section of the UDV measurements.

**Figure 13.** Comparison of velocity distribution profiles obtained through CFD model and experiments: (**a**) for 100 m³/h; (**b**) for 12 m³/h.

The UDV velocity profiles in Figure 13 present some spikes. This behavior is explained by fluctuations in the velocity and the presence of vortices, both characteristic of deviations in the flow direction. Despite this, the experimental data and the CFD results show good agreement. Figure 14 shows the velocity contours in the pipe cross section for the two volume flow rates.

**Figure 14.** Velocity contours in the pipe cross section: (**a**) for 100 m³/h; (**b**) for 12 m³/h.
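One way to quantify the agreement between the measured and simulated profiles is a root-mean-square difference over matching sample positions. The sketch below uses hypothetical velocity arrays, not the actual UDV or CFD data.

```python
import numpy as np

# Hypothetical velocity samples (m/s) at matching radial positions;
# real values would come from the UDV measurements and the CFD export.
u_udv = np.array([0.00, 0.82, 1.10, 1.21, 1.24, 1.20, 1.08, 0.80, 0.00])
u_cfd = np.array([0.00, 0.85, 1.12, 1.22, 1.25, 1.21, 1.10, 0.83, 0.00])

# Root-mean-square difference between the two profiles.
rmse = np.sqrt(np.mean((u_udv - u_cfd) ** 2))

# Normalizing by the maximum measured velocity gives a relative figure.
rel_rmse = rmse / u_udv.max()
print(f"RMSE = {rmse:.4f} m/s, relative RMSE = {100 * rel_rmse:.2f}%")
```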

In order to assess the error of the volume flow rate indicated by the flowmeter, assuming the velocity distribution of the numerical model, several automatic procedures were implemented. The first step is the definition of the limits presented in Figure 4. The relevant data for this analysis are those associated with the electrode cross section of the flowmeter. The data extracted from the model consist of the three coordinates x, y and z and the average velocity U, so that each pair of coordinates is associated with a velocity, and each limit with a certain weight. The velocity at points located near the limits (with a maximum deviation of 0.5%) was therefore multiplied by the corresponding weighting factor (Figure 15).

**Figure 15.** Weighting-function distribution of a uniform-magnetic-field, point-electrode electromagnetic flowmeter: calculated limits (black continuous lines), analogous to Figure 4; data provided by the model represented by the blue ticks.
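The assignment of weighting factors to the exported points could be sketched as below. The limit levels, the weights attached to them, and the matching rule are illustrative assumptions, not the values actually used in the study.

```python
import numpy as np

# Hypothetical limit levels (the contour values drawn in Figure 15) and the
# weighting factor attached to each limit; real values come from the analysis.
limit_levels = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
limit_weights = np.array([1.40, 1.15, 1.00, 0.90, 0.85])

def point_weight(level, rel_tol=0.005):
    """Weight for a single model point.

    If the point's level is within 0.5% of a limit, that limit's weight is
    used directly; otherwise the weight is interpolated between the two
    enclosing limits (the procedure described in the text, simplified).
    """
    near = np.isclose(level, limit_levels, rtol=rel_tol)
    if near.any():
        return float(limit_weights[near.argmax()])
    return float(np.interp(level, limit_levels, limit_weights))

w = point_weight(0.5)   # falls between the 0.4 and 0.6 limits
```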

The subsequent step is the interpolation of the points between limits, which were multiplied by the corresponding weighting factor. This procedure was applied to all limits except those closest to the electrodes, which were difficult to assess because the weight in that region is not entirely known. Thus, another assumption was made: the weighting factor there was calculated according to Figure 16. This function was defined and validated against the available experimental data; the variable x is the total number of data points minus the number of points in the region near the electrode section.

**Figure 16.** The function used to calculate the weighting factor for the region near the electrodes, fitted to the experiments (blue marks represent the experimental data used).

Knowing the remaining factors, the velocity was obtained as the sum of the velocity at each point multiplied by its weighting factor, divided by the sum of the weighting factors. The error was then determined (Table 3) by applying Equation (12).

$$\mathrm{Error}\,[\%] = \frac{U_{\text{calculated}} - U_{\text{boundary condition}}}{U_{\text{boundary condition}}} \times 100 \tag{12}$$

The theoretical error of the flowmeter increases as the volume flow rate decreases; accordingly, the errors associated with 12 m³/h are larger than those at 100 m³/h (Table 3). For the volume flow rate of 100 m³/h, the error associated with geometry 1 is the smallest. Of the two experimental tests, geometry 2 corresponds to the worst scenario, since the pipe is not long enough to dissipate the flow perturbations caused by the vertical curves of the profile.
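The weighted-average velocity and the error of Equation (12) reduce to a few lines of code. The point velocities, weights and boundary-condition velocity below are hypothetical placeholders for the model export, not data from the study.

```python
import numpy as np

# Hypothetical point velocities (m/s) from the electrode cross section and
# the weighting factors assigned to them (real values come from the model).
u_points = np.array([0.95, 1.10, 1.22, 1.18, 1.05])
weights  = np.array([0.90, 1.00, 1.15, 1.05, 0.95])

# Velocity "seen" by the flowmeter: weighted mean over the cross section.
u_calculated = np.sum(u_points * weights) / np.sum(weights)

# Equation (12): deviation from the velocity imposed as boundary condition.
u_boundary = 1.15
error_pct = (u_calculated - u_boundary) / u_boundary * 100.0
print(f"Error = {error_pct:.2f}%")
```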

The difference between the CFD and the laboratory tests has two causes: minor installation problems, such as the position of the gasket inserted between flanges obstructing the reading, and the level of detail of the CFD model.


**Table 3.** Summary of the errors calculated with the volumetric method for both geometries.

The error associated with the position of the gasket is significant and, in several situations, avoidable. In these experiments it could not be assumed that such errors were prevented; in practice, however, since the number of gaskets is much smaller, these errors can be disregarded if good engineering practices are followed.

#### **5. Case Study**

#### *5.1. Geometry Layout*

The pumping station layout is presented in Figure 17; almost 90% of the total water volume is pumped through this station. The flow arrives by the drive pipe, i.e., from the right to the left side of Figure 17c, and passes through the pump (represented by the green section) via a vertical pipe at the top of the drive pipe. Afterwards, after passing two 90° curves, the flow reaches the delivery pipe sideways at a 30° angle.


**Figure 17.** Pumping station layout (**a**); photographs of the different parts of the hydraulic system (**b**); modelling geometry—pump location identified by the green pipe section, flowmeter identified by the red line, flow direction identified by the blue arrow (plan view on the left and 3D view on the right) (**c**).

#### *5.2. Simulations*

The problems detected in situ were assumed to be correlated with the flowmeters in the pumping station. As the flow passes through the pump, disturbances from upstream can be disregarded, i.e., only the perturbations caused by the pump are considered. Thus, since the pressure is measured just downstream of the pump, the geometry used to simulate the disturbances is presented in Figure 18, and the related characteristics of the pipe and the flowmeter are given in Table 4.

**Figure 18.** System layout modelling geometry—with the flowmeter section identified by the red arrow and flow direction identified by the blue arrow.

**Table 4.** Characteristics of the main pipe and flowmeter.


The simulations were performed by first defining the geometry, after which it was necessary to identify the characteristics of the fluid (i.e., water) and the boundary conditions. Lastly, the mesh settings were chosen. The boundary conditions and other important features are described in Table 5.

**Table 5.** System layout existing situation: simulation input values and characteristics.


Figure 19 presents the streamline velocities and the velocity distribution in the cross section of the flowmeter, respectively. In the flowmeter section, the streamlines appear to be parallel to each other.

**Figure 19.** Streamlines simulation along the hydraulic circuit for the pipe branch system layout (flowmeter section identified by the red arrow), in m/s (**a**); Velocity distribution in the cross section of the flow meter (**b**).

The velocity changes markedly within the cross section, from zero near the wall to over 1.7 m/s in the lower right side, where it attains its highest value. This behavior was not expected: because the flow passes through two 90° curves after the pump, it would be reasonable, according to the manufacturers' experience, to anticipate that the induced perturbations would in practice be negligible. Nevertheless, the second curve introduces a rotation about the z-axis; the non-symmetric velocity is therefore due to the existing geometry (Figure 19b).

For each point provided by the numerical model, the corresponding weighting factor was estimated. The velocity measured by the flowmeter was calculated by dividing the sum of the point velocities multiplied by their weighting factors by the sum of all the weighting factors. The error associated with the flowmeter was about 0.71%, determined according to Equation (12). The error calculated by the manufacturer, through water balances, was close to −1%. Since the errors are alike, the model is considered validated and the results accurate.

#### *5.3. Solution*

In order to achieve a solution with a lower error, three different procedures can be adopted: (1) changing the geometry while maintaining the values of volume flow rate and pressure; (2) changing the inlet and outlet conditions while maintaining the geometry; or (3) combining both. The first procedure was chosen, since the volume flow rate and pressure demanded downstream remain the same.

The previous results showed that the existing layout is not sufficient to guarantee the dissipation of disturbances and flow uniformity. For that reason, a new 20 m long pipe was added to make complete dissipation of the disturbances possible. With the flowmeter in the same location as in the previous simulation (i.e., between 24 m of straight pipe upstream and 4 m downstream), the new proposed geometry corresponds to Figure 20.

**Figure 20.** System layout proposed modelling geometry—the flowmeter section represented by the red arrow; flow direction identified by the blue arrow (added pipe in red).

For the same boundary conditions and mesh features, presented in Table 5, the streamlines obtained are presented in Figure 21a as well as the velocity distribution across the electrodes plan (Figure 21b).

**Figure 21.** Streamlines simulation along the hydraulic circuit for the proposed system layout (flowmeter section identified by the red arrow), in m/s (**a**); velocity distribution in the electrodes cross section for the proposed geometry (**b**).

This profile presents minor perturbations compared with the one in Figure 19b. However, the velocity distribution is still not exactly symmetric: the perturbation remains noticeable several meters downstream of the last singularity. For this volume flow rate and pressure, the error associated with this simulation is 0.37%, closer to the reference value that flowmeter suppliers guarantee for flow velocities higher than 1 m/s (i.e., 0.2%).

#### **6. Conclusions**

Measurement problems significantly influence the correct management of the technical, hydraulic and economic efficiency of water companies. Since hydraulic circuits normally have two flowmeters, a water balance is possible. These balances are important tools to detect leaks throughout the supply and distribution processes; if the measurement is not accurate enough, the balances do not match and these tools lose their usefulness.

The procedure used to analyze the associated errors proved adequate, showing good results both for the experiments and for the case study. Nevertheless, more scenarios need to be studied in order to assess whether the developed procedure can be applied in every case.

Regarding the simulation results, the model presented an error very similar to that verified in the experimental tests developed in [30]. According to the results achieved, the calculated errors are very significant for equipment of such high accuracy. Consequently, errors associated with the installation and the geometry are important issues that are not always taken into account by the engineers responsible for the design, nor by the teams responsible for the installation of this equipment. If these factors are not addressed properly, the flow measurement is not accurate.

Regarding the proposed solution, changing the geometry to minimize the installation errors is not feasible, since the facility is already built according to the manufacturer's conditions. This could be overcome if well-defined automatic tools were applied to estimate the uncertainty of flow measurements: using CFD models, a fast error approximation for a given geometry, volume flow rate, and pressure could be obtained.

Moreover, this research shows that the installation requirements proposed by the manufacturers are not always sufficient to dissipate flow disturbances. It is thus reasonable to emphasize the importance of accurate flow measurement for water companies, towards more efficient and rational water use and management.

**Author Contributions:** H.M.R. contributed the idea, revised the document and supervised the whole research. M.S. and M.B. developed the experimental tests, the CFD modelling and the analysis of the results between CFD and experiments. H.M.R. and A.C. revised the final document.

**Funding:** This research was funded by REDAWN, EAPA\_198/2016.

**Acknowledgments:** The authors wish to thank to the project REDAWN (Reducing Energy Dependency in Atlantic Area Water Networks) EAPA\_198/2016 from INTERREG ATLANTIC AREA PROGRAMME 2014–2020 and CERIS-IST and the Hydraulic Laboratory, for the support in the conceptual developments and experiments.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
