**Use of Artificial Intelligence to Improve Resilience and Preparedness Against Adverse Flood Events**

#### **Sara Saravi <sup>1,\*</sup>, Roy Kalawsky <sup>1</sup> , Demetrios Joannou <sup>1</sup> , Mónica Rivas Casado <sup>2</sup> , Guangtao Fu <sup>3</sup> and Fanlin Meng <sup>3</sup>**


Received: 12 February 2019; Accepted: 6 May 2019; Published: 9 May 2019

**Abstract:** The main focus of this paper is the novel use of Artificial Intelligence (AI) in natural disasters, more specifically flooding, to improve flood resilience and preparedness. Different types of flood have varying consequences and follow specific patterns. For example, a flash flood can result from snow or ice melt and tends to occur in specific geographic places and certain seasons. The motivation behind this research arises from the Building Resilience into Risk Management (BRIM) project, which looks at resilience in water systems. This research applies state-of-the-art techniques, i.e., AI and more specifically Machine Learning (ML) approaches, to big data collected from previous flood events, learning from the past to extract patterns and information and to understand flood behaviours in order to improve resilience, prevent damage, and save lives. In this paper, various ML models have been developed and evaluated for classifying floods, i.e., flash flood, lakeshore flood, etc., using current information, i.e., the weather forecast, in different locations. The analytical results show that the Random Forest technique provides the highest classification accuracy, followed by the J48 decision tree and Lazy methods. The classification results can lead to better decision-making on what measures can be taken for prevention and preparedness and thus improve flood resilience.

**Keywords:** Artificial Intelligence; machine learning; flood; preparedness; resilience; flood resilience

#### **1. Introduction**

Climate change is expected to increase the frequency and intensity of extreme events, including flooding. Across the world, flooding has an enormous economic impact and has cost millions of lives. The number of large-scale natural disasters has increased significantly in the past few years, with considerable impact on human lives, the environment and buildings, and substantial damage to societies. During these disasters, vast quantities of data are collected on the characteristics of the event by governmental bodies, society (e.g., citizen science), emergency responders, loss adjusters and social media, amongst others. However, there is a lack of research on how these data can be used to inform how different stakeholders are, or can be, directly or indirectly affected by large-scale natural disasters in pre-, during- and post-event disaster management decisions. There is a growing need for Artificial Intelligence (AI) techniques [1] that bring large-scale natural disaster data into real practice and provide suitable tools for natural disaster forecasting, impact assessment, and societal resilience. This in turn will inform resource allocation, which can lead to better preparedness and prevention for a natural disaster, save lives, minimize economic impact, provide better emergency response, and make communities stronger and more resilient.

The majority of the work on AI in flooding has focused on the use of social media [2] (e.g., Facebook, Twitter or Instagram), where status updates, comments and shared photos have been mined to improve flood modelling and risk management [3–5]. The author of [6] used Artificial Neural Networks (ANN) for flash flood prediction with data on soil moisture and rainfall volume. Further research [7] focused on the use of the Bursty Keyword technique combined with the Group Burst algorithm to retrieve co-occurring keywords and derive valuable information for flood emergency response. AI has also been applied to images provided by citizens affected by flooding to give emergency responders situational awareness. In [8], the authors explored the use of algorithms based on ground photography shared within social networks. The use of specific algorithms on satellite or aerial imagery [9] to detect flood extent has also been explored. Within this context, the resolution of the imagery collected is of key relevance for detecting features of interest, given the complexity of imagery acquired in urban areas [10]. Some studies have focused on high-resolution, real-time data processing to derive flood information [11,12].

Overall, the majority of disaster-monitoring methods are based on change detection algorithms, where the affected area is identified through complex processing of pre- and post-event images. Change detection can be applied to the amplitude or intensity, or to filtered or elaborated versions of the amplitude [2,13,14]. For example, in [15], a technique based on change detection applied to quantities related to the fractal parameters of the observed surface was developed. In [16], information extracted from images taken and shared on social media by people in flooded regions was combined with the embedded metadata to detect flood patterns. In that study, a convolutional inception network with weights pre-trained on ImageNet was applied to extract rich visual information from the social media imagery. A word embedding was used to represent the textual metadata continuously and feed it to a bidirectional Recurrent Neural Network (RNN). The word embedding was initialized using GloVe vectors, and finally the image and text features were concatenated to estimate the probability that a sample contains flood-related information. Similarly, in [17], an AI system was designed to retrieve social media images containing direct evidence of flooding events and to derive visual properties of the images and the related metadata via a multimodal approach. For that purpose, image pre-processing including cropping, test-set pre-filtering based on image colour or textual metadata, and ranking for fusion was implemented. In [18,19], Convolutional Neural Networks and Relation Networks were used for end-to-end learning for disaster image retrieval and flood detection from satellite images.

#### **2. Methodology**

Flood management strategies and emergency response depend upon the type of area affected (e.g., agricultural or urban) as well as on the flood type (e.g., fluvial, pluvial or coastal). Resilience measures are generally deployed by governmental agencies to reduce the impact of flooding. The use of AI to derive flood information for specific events is well documented in the scientific literature. However, little is known about how AI could inform future global patterns of flood impact and associated resilience needs.

The main focus of this paper is on the use of AI, and more explicitly Machine Learning (ML), applied to natural disasters involving flooding, to estimate the flood type from the weather forecast, location, event duration in days, begin/end location, begin/end latitude and longitude, direct/indirect injuries, direct/indirect deaths, and property and crop damage.

The proposed method uses historical information collected from 1994 to 2018 to learn the patterns and changes in the behaviour of various parameters in flood events and to draw inferences about future events. This paper focuses only on providing an insight into how floods behave differently in terms of damage. Using the historic data, the models developed adapt to changes over time by learning from past information and can provide high classification accuracy. The proposed technique is highly adjustable and can be used to estimate any other desired parameter, provided a detailed set of historic data is available. Combined with other techniques proposed in the literature, such as satellite imagery and social media information, it can provide a very powerful tool for gaining insight into flood events and help with preparedness, impact reduction, and better decision-making.

The flood pathways and key variables are first described; the data sourcing and ML techniques used in this study are then explained; finally, the model evaluation metrics are presented.

#### *2.1. Flood Pathways and Key Variables*

An important step in the process is to create influence maps as visual aids to illustrate how related variables interact and affect each other. Figure 1 shows an overall causal loop diagram for a full flooding scenario. This map includes all the stakeholders and their interactions, i.e., natural climate change, man-made facilities, businesses, public and governmental sectors, and social media.

**Figure 1.** An overall causal loop of flood pathways. Green refers to normal conditions, amber refers to caution for a probability of flooding, and red refers to a very high risk or event of flooding.

The season, temperature, location of the area (highland/inland, coastal/urban), and rain/snowfall can affect the levels of the sea, rivers and reservoirs. Water usage by energy suppliers and human/farming water demand can change the balance of reservoir and river water levels and indicate a warning sign for flooding. The use of social media and public awareness can help tackle the risks of a flooding event. When flooding occurs, many sectors are affected, i.e., road/railway damage, gas/water pipe damage, power cuts, farming damage, etc. Emergency response and access to food and local amenities are restricted; grocery prices spike due to lack of supply; and businesses are affected by physical building damage or a lack of human resources. In this loop, public awareness, emergency responses (local/public), early release of the sewerage system, and shelters can help save lives.

The diagram in Figure 1 distinguishes three states: green for normal conditions, amber for caution where there is a probability of flooding, and red for a very high risk or an actual flooding event.

#### *2.2. Data Collation and Preparation*

One of the most important requirements for this research was a detailed and inclusive historic data set, which was acquired from the Federal Emergency Management Agency (FEMA) [20], the National Oceanic and Atmospheric Administration (NOAA) [21] and the National Climatic Data Centre (NCDC) [21]. The data used in this study cover the period from 1950 to 2018; however, flood event data are recorded from 1994 onwards and are inclusive of all event types, i.e., heavy snow, thunderstorm, fog, hail, flood, high wind, etc. Table 1 summarises the different attributes used to build models within the ML-based framework.

**Table 1.** Description of the attributes used to build models within the ML based framework to inform flood resilience and resistance actions (Source: NCDC-NOAA [21] and FEMA [20]).


The data sets collated were inspected for outliers and extreme values, missing data and redundant information via a bespoke MATLAB application, the Flood Data Aggregation Tool (FDAT), developed for this purpose by the authors. FDAT removes all existing outliers and missing data, re-orders the data based on specific categories chosen for the implementation of the ML techniques, and converts alphanumeric and alphabetic data to numeric data using one-hot encoding.
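FDAT itself is not publicly documented, but the one-hot encoding step it performs is standard. The following is a minimal sketch of that transformation in Python (the function name and example values are illustrative, not FDAT's actual interface): each categorical value is replaced by a binary indicator vector with a single 1 in the position of its category.

```python
def one_hot_encode(values):
    """Map each categorical value to a binary indicator vector."""
    categories = sorted(set(values))                  # fixed category order
    index = {c: i for i, c in enumerate(categories)}
    encoded = []
    for v in values:
        vec = [0] * len(categories)                   # all zeros ...
        vec[index[v]] = 1                             # ... except this category
        encoded.append(vec)
    return categories, encoded

# Example: encoding an "event type" column
cats, enc = one_hot_encode(["Flash Flood", "Coastal Flood", "Flash Flood"])
# cats -> ["Coastal Flood", "Flash Flood"]
# enc  -> [[0, 1], [1, 0], [0, 1]]
```

In practice the category order must be fixed from the training data and reused unchanged when encoding test data, so that train and test vectors remain comparable.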

The processed dataset is then divided into training and testing data sets. The training data set is used to develop the model, whereas the testing data set is used to quantify the accuracy of the model built. A larger portion of the data is set aside for training and the remainder is used for testing and validation, to ensure the accuracy of the classification model and software performance. Figure 2 shows an overview of the overall analytical process employed in this study. The raw data collected is fed to the FDAT tool for data cleaning, normalisation, aggregation and other pre-processing steps. The output data is divided into training and testing data and passed through the ML/data mining tool; the patterns are extracted and the model is built, followed by analysis to verify its quality.

**Figure 2.** Work flow summarising the analytical steps followed. "Data" includes both data collation and extraction.

*Water* **2019**, *11*, 973

Figure 3 illustrates a sample of input data prepared for training, which is an output of the FDAT application. All the attributes were defined prior to training; the detailed attribute descriptions can be found in Table 1.

**Figure 3.** A sample of input data prepared for training.

#### *2.3. Machine Learning—Model Development*

AI is human intelligence demonstrated by machines, and ML is one approach to achieving AI. In this study, the focus is on supervised ML to learn from historic data, find clusters in the data, and build a classification model for future events. This type of ML works particularly well in combination with historic data for which the outcomes are known. For this purpose, a number of data mining tools, namely Weka [22], MATLAB [23] and Orange [24], were deployed. The reason for using two software packages (Weka and Orange) for the classification experiments was to test more ML techniques with various training and testing dataset sizes. The data is divided into two parts: the first is used for training and generating the model, and the second for testing and verification.

Several models were developed using different ML techniques in order to measure and compare their performance and accuracy and choose the best. These techniques included Random Forest (RF), Lazy, J48 tree, Artificial Neural Network (ANN), Naïve Bayes (NB), and Logistic Regression (LR). The class for the model in all cases was set as "event type" (Table 1), which included flash floods, coastal floods, lakeshore floods and other kinds of floods. The independent attributes in all models were weather forecast, location, injuries direct, injuries indirect, death direct, death indirect, property damage (\$) and crop damage (\$) (Table 1). Two of these ML methods, i.e., RF and NB, were tested in both software packages (Weka and Orange) to verify the consistency of the results.

#### 2.3.1. Random Forest (RF)

RF [25] is an ensemble learning technique. It combines the Bagging algorithm with the random subspace method and deploys decision trees as the base classifiers. Each tree is built from a bootstrap sample of the original dataset. A key point is that the trees are not pruned, allowing them to partly overfit to their own sample of the data. When growing a tree, the decision of which feature to split on at each branch is limited to a random subset of the full feature set, and this random subset is chosen afresh at each branching point.
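The two randomisation ingredients named above, bootstrap sampling and random feature subsets, can be sketched in a few lines of Python. This is an illustration of the mechanism only (the data values are invented), not the RF implementation used in Weka or Orange:

```python
import random

def bootstrap_sample(data, rng):
    """Bagging ingredient: n draws with replacement from the original data,
    giving each tree its own slightly different training set."""
    return [rng.choice(data) for _ in data]

def random_feature_subset(n_features, k, rng):
    """Random subspace ingredient: at each branching point, only a random
    subset of k features is considered for the split."""
    return rng.sample(range(n_features), k)

rng = random.Random(42)
data = [("rain", "urban", "Flash Flood"), ("surge", "coastal", "Coastal Flood"),
        ("rain", "inland", "Flood")]
sample = bootstrap_sample(data, rng)          # training set for one tree
features = random_feature_subset(8, 3, rng)   # features tried at one branch
```

A full forest repeats this for every tree and classifies new samples by majority vote over the trees.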

#### 2.3.2. Lazy

Lazy [25] learning is an ML approach in which learning is delayed until testing time. The calculations within a learning system can be divided into two separate phases: training and testing (consultation). Testing time is the interval between when an object is presented to the system for an action to be taken and when that action is accomplished. Training time precedes testing time; during it, the system learns from the training data in preparation for testing. Lazy learning refers to any ML process that postpones the majority of computation until testing time. It can improve estimation precision by allowing the system to concentrate on deriving the best possible decision for the exact points of the instance space for which estimations are to be made. However, lazy learning must store the entire training set for use in classification; in contrast, eager learning needs only to store a model, which may be more compact than the original data.
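A nearest-neighbour classifier is the archetypal lazy learner: nothing is computed at training time, and at testing time the label of the closest stored instance is returned. The following minimal sketch (with invented feature vectors) illustrates the idea; Weka's Lazy classifiers are more sophisticated but follow the same principle:

```python
def predict_1nn(train, query):
    """Lazy classification: no model is built in advance; at testing time
    the nearest stored training instance supplies the label."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(train, key=lambda item: sq_dist(item[0], query))
    return nearest[1]

# The whole training set must be stored -- the storage cost of lazy learning.
train = [((0.0, 0.0), "Flood"), ((5.0, 5.0), "Flash Flood")]
label = predict_1nn(train, (4.0, 6.0))   # nearest neighbour is (5, 5)
```

All the distance computations happen at consultation time, which is exactly the "postponed computation" described above.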

#### 2.3.3. J48 Decision Tree

A decision tree is an analytical machine-learning model that estimates the target value of a new test sample based on several characteristic values of the training data. The nodes within a decision tree represent the attributes, the branches between the nodes represent the possible values that the attributes may take in the training data, and the terminal nodes represent the final classification value of the attribute to be estimated.

J48 is an open-source Java implementation of the C4.5 algorithm in the Weka data mining tool. In order to classify a new item, the J48 decision tree [26] first has to generate a decision tree from the attributes of the training data. To do so, at each node it selects the attribute that separates the different samples most clearly, i.e., the attribute that tells us the most about the data instances and therefore has the highest information gain.

Amongst the possible features, if there is any value for which there is no uncertainty, i.e., the data instances falling within its category all have the same value for the target variable, then that branch terminates and is assigned the target value obtained.
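Information gain, the split criterion described above, is the reduction in entropy achieved by splitting on an attribute. The sketch below computes it for a toy two-attribute example (C4.5/J48 additionally normalises this into a gain ratio, which is omitted here for clarity; the data values are invented):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label distribution, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Entropy before the split minus the weighted entropy of the subsets
    produced by splitting on one attribute."""
    total = entropy(labels)
    n = len(labels)
    remainder = 0.0
    for value in set(r[feature] for r in rows):
        subset = [lab for r, lab in zip(rows, labels) if r[feature] == value]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

rows = [("rain",), ("rain",), ("surge",), ("surge",)]
labels = ["Flash Flood", "Flash Flood", "Coastal Flood", "Coastal Flood"]
gain = information_gain(rows, labels, 0)   # perfect split -> gain = 1.0
```

A branch with zero remaining entropy, as in this example, is exactly the "no uncertainty" terminal case described above.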

#### 2.3.4. Artificial Neural Networks (ANN)

An ANN [25] is a data processing system inspired by the way neurons in biological brains process information, which enables a computer to learn from the information provided. The crucial component of this system is a large number of highly interconnected processing elements (neurons) working in unison to solve problems. An ANN system is developed without any precise logic: it adapts and changes its configuration based on the patterns in the information that flows through the network during the learning phase and, much like a human being, learns by example. An ANN is typically trained with a large amount of data; training involves feeding in input data and stating what the output should be. ANNs draw on numerous principles, including gradient-based training, fuzzy logic, genetic algorithms, and Bayesian methods.

ANNs are designed to identify patterns in the given information, in particular for classification tasks (classifying data into pre-defined classes), clustering tasks (grouping data into distinct, undefined groups), and estimation tasks (using past events to estimate future ones).

One of the challenges of using ANNs is the time it takes to train the networks, which can be computationally expensive for more complex tasks. Another is that an ANN acts as a black box: the user can feed in information and receive a built model, and can modify that model, but has no access to the exact decision-making process.
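The basic processing element described above, a weighted sum passed through an activation function, can be sketched as follows. This is a forward pass only, with hand-picked weights for illustration; in training, gradient-based methods would adjust these weights, and this is not the network architecture used in Orange:

```python
from math import exp

def neuron(inputs, weights, bias):
    """One processing element: weighted sum of inputs through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + exp(-z))

def forward(inputs, hidden_layer, output_neuron):
    """Forward pass of a single-hidden-layer network: each hidden neuron
    sees all inputs, and the output neuron sees all hidden activations."""
    hidden = [neuron(inputs, w, b) for w, b in hidden_layer]
    return neuron(hidden, *output_neuron)

# Illustrative weights: (weight vector, bias) per neuron
hidden_layer = [([0.5, -0.4], 0.1), ([0.3, 0.8], -0.2)]
output_neuron = ([1.2, -0.7], 0.05)
score = forward([0.9, 0.1], hidden_layer, output_neuron)  # value in (0, 1)
```

The "black box" character of ANNs comes from the fact that, once many such layers are stacked and trained, the individual weights no longer have an interpretable meaning.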

#### 2.3.5. Naïve Bayes (NB)

NB [25] is a simple learning algorithm that applies Bayes' rule together with the assumption that the features are conditionally independent given the class. Although this independence assumption is usually violated in practice, NB often delivers competitive classification accuracy. NB is widely used because of its computational efficiency and many other desirable properties, such as low variance, incremental learning, direct prediction of posterior probabilities, and robustness to noise and missing values.

NB provides a way to use the information in the training data to estimate the probability of each class y given an object x. These estimates can be used for classification or other decision support applications.
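For categorical attributes, the NB estimate is simply the class prior multiplied by the per-feature conditional probabilities, all read off from counts in the training data. The sketch below shows this on an invented two-feature example (Laplace smoothing, which real implementations add to avoid zero counts, is omitted for brevity):

```python
from collections import Counter, defaultdict

def train_nb(samples, labels):
    """Estimate P(class) and P(feature value | class) from counts."""
    priors = Counter(labels)
    cond = defaultdict(Counter)     # (class, feature index) -> value counts
    for row, y in zip(samples, labels):
        for i, v in enumerate(row):
            cond[(y, i)][v] += 1
    return priors, cond, len(labels)

def predict_nb(model, row):
    """Pick the class maximising P(class) * prod_i P(value_i | class),
    i.e., applying the conditional-independence assumption."""
    priors, cond, n = model
    best, best_p = None, -1.0
    for y, count in priors.items():
        p = count / n
        for i, v in enumerate(row):
            p *= cond[(y, i)][v] / count
        if p > best_p:
            best, best_p = y, p
    return best

samples = [("rain", "urban"), ("rain", "urban"), ("surge", "coastal")]
labels = ["Flash Flood", "Flash Flood", "Coastal Flood"]
model = train_nb(samples, labels)
```

Training is a single counting pass over the data, which is the source of NB's computational efficiency and incremental-learning property mentioned above.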

#### 2.3.6. Logistic Regression (LR)

LR [25] is a mathematical model for estimating the probability of an event occurring given the input data. It provides a way of applying linear regression methods to classification problems and is used when the target variable is categorical. Linear regression models the data by fitting a straight-line equation to the data points; LR, by contrast, does not model the relationship between the variables as a straight line. Instead, it uses the natural logarithm: the log-odds are modelled as a linear function of the variables, and the training data are used to find the coefficients. The fitted function can then estimate future outcomes by inserting these coefficients into the logistic equation. LR is built on the concept of odds, defined as the ratio of the probability of an event happening to the probability of it not happening.
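The relationship between the linear predictor, the odds and the probability can be made concrete in a few lines. The coefficient values below are invented for illustration; in practice they would be fitted to the training data:

```python
from math import exp, log

def logistic_probability(x, coef, intercept):
    """Probability from the logistic equation: the log-odds are modelled
    as a linear function of the input."""
    log_odds = intercept + coef * x          # the linear predictor
    return 1.0 / (1.0 + exp(-log_odds))     # logistic (sigmoid) function

def odds(p):
    """Odds: probability of the event happening over it not happening."""
    return p / (1.0 - p)

p = logistic_probability(2.0, coef=1.5, intercept=-1.0)
# Taking log(odds(p)) recovers the linear predictor: -1.0 + 1.5 * 2.0 = 2.0
```

This round trip, linear predictor to probability and back via the log-odds, is exactly what lets linear-regression machinery be reused for a categorical target.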

#### *2.4. Model Evaluation Metrics*

The system was trained with several different attribute combinations; the final system uses the attributes selected by the classifier attribute evaluation of an ML tool. All ML models developed were validated using evaluation criteria, i.e., the confusion matrix [25], Mean Absolute Error (MAE) [25] and Root Mean Squared Error (RMSE) [25]. These metrics are used to summarise and assess the quality of an ML model.

A confusion matrix summarises classifier performance on the test data. It is a two-dimensional matrix, indexed in one dimension by the actual class of an object and in the other by the class that the classifier assigns; its cells count the true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN) of a classification. Multiple measures of accuracy are derived from the confusion matrix, i.e., specificity (SP), sensitivity (SS), positive predictive value (PPV) and negative predictive value (NPV). These are calculated as follows:

$$SP = TN / (TN + FP) \tag{1}$$

$$SS = TP / (TP + FN) \tag{2}$$

$$PPV = TP / (TP + FP) \tag{3}$$

$$NPV = TN / (TN + FN) \tag{4}$$
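Equations (1)–(4) translate directly into code. The sketch below computes all four measures from the counts of a (binary) confusion matrix; the counts used in the example are invented:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Accuracy measures derived from the confusion matrix counts,
    following Equations (1)-(4)."""
    return {
        "SP":  tn / (tn + fp),   # specificity, Eq. (1)
        "SS":  tp / (tp + fn),   # sensitivity, Eq. (2)
        "PPV": tp / (tp + fp),   # positive predictive value, Eq. (3)
        "NPV": tn / (tn + fn),   # negative predictive value, Eq. (4)
    }

# Illustrative counts: 40 true positives, 10 false positives,
# 90 true negatives, 20 false negatives
m = confusion_metrics(tp=40, fp=10, tn=90, fn=20)
```

For a multi-class problem such as the flood-type classification in this study, these measures are computed per class by treating that class as "positive" and all others as "negative".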

The MAE is the mean of the absolute value of the error per instance over all samples in the test data. Each estimation error is the difference between the true value and the estimated value for the sample. MAE is calculated as follows:

$$MAE = \frac{\sum\_{i=1}^{n} |y\_{est,i} - y\_i|}{n} \tag{5}$$

where *yi* is the true target value for test sample *i*, *yest*,*<sup>i</sup>* is the estimated target value for test sample *i*, and *n* is the number of test samples.

The RMSE of a model with respect to a test data is the square root of the mean of the squared estimation errors over all samples in the test data. The estimation error is the difference between the true value and the estimated value for a sample. RMSE is calculated as follows:

$$RMSE = \sqrt{\frac{\sum\_{i=1}^{n} \left(y\_{\text{est},i} - y\_i\right)^2}{n}} \tag{6}$$

where *yi* is the true target value for test sample *i*, *yest*,*<sup>i</sup>* is the estimated target value for test sample *i*, and *n* is the number of test samples.
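Equations (5) and (6) are straightforward to implement. The sketch below computes both metrics over a list of true and estimated values (the example values are invented):

```python
from math import sqrt

def mae(y_est, y_true):
    """Mean Absolute Error over all test samples, Equation (5)."""
    return sum(abs(e, ) if False else abs(e - t) for e, t in zip(y_est, y_true)) / len(y_true)

def rmse(y_est, y_true):
    """Root Mean Squared Error over all test samples, Equation (6)."""
    return sqrt(sum((e - t) ** 2 for e, t in zip(y_est, y_true)) / len(y_true))

y_true = [1.0, 2.0, 3.0]
y_est  = [1.5, 2.0, 2.0]
# mae  -> (0.5 + 0.0 + 1.0) / 3 = 0.5
# rmse -> sqrt((0.25 + 0.0 + 1.0) / 3)
```

Because RMSE squares each error before averaging, it penalises large individual errors more heavily than MAE, which is why the two are reported together in Section 3.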

#### **3. Model Training and Testing Results**

The original data consisted of 126,315 samples. After removing outliers and filtering with the FDAT application, 69,558 instances remained for learning. The data was then divided into two parts: a larger section (data from 1994 to 2017) for training and a smaller section (data from 2018) for testing. A scatter plot of the training data by event type can be seen in Figure 4.

**Figure 4.** Visual distribution of flood types in training data.

The test data for the Orange software consists of 164 instances with the target feature "Event-Type", of which 44 instances are coastal flood, 58 flash flood, 53 flood and 9 lakeshore flood. The test data for the Weka software consists of 3478 instances, of which 100 are coastal flood, 2104 flash flood, 1266 flood and 8 lakeshore flood. The testing dataset size can vary depending on user preference and the performance of the ML software and hardware. Four ML techniques in Orange and four in Weka were tested and evaluated in order to choose the best performing one; the techniques tested were RF, Lazy, J48 tree, ANN, NB, and LR. An overview of the model training and testing process, as implemented in Orange, is illustrated in Figure 5. First, the training data is passed through the different classification techniques (i.e., NN, LR, RF and NB) to build the classification models; the models are then tested using the test data. Finally, the evaluation results are produced and can be analysed and/or visualised (i.e., confusion matrix and scatter plot).

**Figure 5.** Visual overview of the model training and testing process in Orange.

The results of the models and their performance are discussed below.

Based on the confusion matrix, the RF model in Orange correctly classified 7 out of 9 instances of Lakeshore Flood, 49 out of 53 of Flood, 32 out of 58 of Flash Flood and 44 out of 44 of Coastal Flood. The correctly classified instances totalled 132 (80.49%). On the proportion of correct classifications on the test data, RF was ahead of all other techniques. Figure 6 shows the evaluation results and confusion matrix for the RF model based on the supplied test set. Based on the confusion matrix, the RF model in Weka correctly classified 1850 out of 2104 instances of Flash Flood, 820 out of 1266 of Flood, 100 out of 100 of Coastal Flood and 6 out of 8 of Lakeshore Flood. The correctly classified instances totalled 2776 (79.83%), with an MAE of 0.13 and RMSE of 0.27. The RF technique provides the best results of the techniques tested in Weka. Figure 7 shows the visual classifier error for the RF model: the distribution of instances in coloured clusters, where the larger clusters (shown as crosses) are the correctly classified instances and the smaller clusters (shown as small squares) are the misclassified instances.


**Figure 6.** Evaluation Results for Random Forest model in Weka.

**Figure 7.** Visual classifier error for Random Forest model in Weka. The crosses indicate correctly classified instances and squares refer to misclassified instances.

The confusion matrix result (Figure 8) indicates that the Lazy model (Weka) correctly classified 1858 out of 2104 instances of Flash Flood, 809 out of 1266 of Flood, 99 out of 100 of Coastal Flood and 6 out of 8 of Lakeshore Flood. The correctly classified instances totalled 2772 (79.70%), with an MAE of 0.13 and RMSE of 0.27. The Lazy technique provides better results than Naïve Bayes.


**Figure 8.** Evaluation Results for Lazy model in Weka.

The confusion matrix (Figure 9) shows that the J48 model (Weka) correctly classified 1882 out of 2104 instances of Flash Flood, 791 out of 1266 of Flood, 100 out of 100 of Coastal Flood and 0 out of 8 of Lakeshore Flood. The correctly classified instances totalled 2773 (a 79.73% success rate), with an MAE of 0.14 and RMSE of 0.27; the J48 technique provides very similar results to Lazy.


**Figure 9.** Evaluation Results for J48 model in Weka.

Based on the confusion matrix result (Figure 10) for the ANN model (Orange), the successful classifications are 7 out of 9 instances of Lakeshore Flood, 49 out of 53 of Flood, 27 out of 58 of Flash Flood and 44 out of 44 of Coastal Flood. The total of correctly classified instances is 127 (a 77.44% success rate).


**Figure 10.** Evaluation Results for Neural Network model in Orange.

The NB model in Orange correctly classified 9 out of 9 instances of Lakeshore Flood, 45 out of 53 of Flood, 27 out of 58 of Flash Flood and 44 out of 44 of Coastal Flood, according to the resulting confusion matrix (Figure 10). The total of correctly classified instances was 125 (76.22%). The number of instances confused between Flash Flood and Flood is slightly higher than for the ANN, based on the 164 instances considered for validation. Correspondingly, the confusion matrix for the NB model built in Weka (Figure 11) shows 1614 out of 2104 instances correctly classified as Flash Flood, 853 out of 1266 as Flood, 89 out of 100 as Coastal Flood and 7 out of 8 as Lakeshore Flood. The correctly classified instances totalled 2563 (73.69%), with an MAE of 0.18 and RMSE of 0.29. This outcome indicates that the classification accuracy of NB is consistent whether it is trained and tested on a small or a large data set, as both software packages produced similar results.



**Figure 11.** Evaluation Results for Naïve Bayes model in Weka.

The LR model (Orange) is disregarded entirely, as it correctly classified only 27.44% of instances (Figure 12).


**Figure 12.** Evaluation Results for Logistic Regression model in Orange.

Table 2 shows the predictive models' performance using the MAE and RMSE evaluation metrics.


**Table 2.** Summary of evaluation metrics of all classification models' performance in Orange and Weka.

#### **4. Discussion**

#### *4.1. Model Performance*

This paper takes advantage of historic data (23 years) collected from flood events to classify the type of flood that is likely to happen in the future. The data was filtered to remove outliers, correct missing values, order the data and more, using the FDAT application. The 69,558 instances output by FDAT (from 1994 to 2017) were used for training, and the remaining 3642 instances (from 2018) were used for testing: 164 instances as test data in Orange and 3478 instances as test data in Weka, with the class "Event-Type". Six ML techniques were tested and evaluated in order to choose the best performing one: RF, Lazy, J48 tree, ANN, NB, and LR. Based on the evaluation metrics, the best performing technique in Orange (based on the 164 test cases) is RF, with an RMSE of 2.06, MAE of 0.19 and a correctly classified instance rate of 80.49%; followed by ANN (RMSE 2.45, MAE 0.23, 77.44%), NB (RMSE 2.52, MAE 0.24, 76.22%) and LR (RMSE 5.52, MAE 0.73, 27.44%). The best performing technique in Weka (based on the 3478 test cases) is again RF, with an RMSE of 0.27, MAE of 0.1345 and a correctly classified instance rate of 79.83%; followed by J48 (RMSE 0.27, MAE 0.14, 79.73%), Lazy (RMSE 0.27, MAE 0.13, 79.70%) and NB (RMSE 0.29, MAE 0.17, 73.69%).

The comparison of the evaluation metrics from the models built using both software tools with different test data sets indicates that RF performs best amongst all the techniques tested, followed by ANN in Orange and J48 in Weka.

Note that the generated model can be used to provide insight into the number of flooding events. For example, using weather forecast estimates for the next 10 years from an environment agency, the built model can provide an understanding of the patterns, the number of flooding events, and the type of flood to be expected over that period.

#### *4.2. Awareness, Preparedness and Resilience*

The results of most flooding scenarios indicate that the event is a significant threat to people's lives. Each of the emergency response elements shown in the influence map (Figure 1), such as the emergency services, health services, community awareness and media coverage, serves as a centre for the organisation of assistance supplies, involving people trained to perform rescue tasks, including those for flood incidents. Flooding may result in the loss of a logistics centre and delays in rescue operations. In this scenario, other elements must be used for coordination, i.e., there may need to be a collaborative effort amongst response agencies from neighbouring districts and regions to help restore normality to the affected areas. Sharing of resources and equipment to deal with a flood may be required if the local agencies are operating at their absolute capacity. In the UK, an existing protocol governs interagency and inter-regional collaboration in times of crisis.

Flooding of infrastructure such as roads and bridges dramatically affects the ability of road users to travel from A to B. Not only will people be unable to access these routes during a flood, but response agencies will also be restricted in reaching certain positions, which is clearly problematic when urgent responses are needed. Even once the water withdraws, the residual deposits block the roadways and special equipment must be used to clear them. Flooding of businesses and schools could cause disruption, preventing employees and students, respectively, from attending.

#### 4.2.1. Flood Type Awareness and Classification

Based on the results of this study, which classifies flood type, emergency responders can be alerted to the type of flood that is likely to occur in an area. If emergency responders are not local to the area, they may not be aware of the consequences different flood types can have and what kind of assistance is needed for that specific flood type. For example, they might only envision damage to infrastructure and overlook building damage. Therefore, improvement measures can be taken to train more qualified staff to reduce the impact flooding may have on homes and other infrastructure.

#### 4.2.2. Preparedness Planning

Depending on the flood type that frequently occurs in an area, plans can be set to reduce the threat and impact of flooding at the local level. This plan, called the preparedness plan, categorises the roles and responsibilities of each stakeholder, i.e., emergency responders, firefighters, police, the community, etc., that must take action in an emergency. By identifying the flood type that is likely to occur, the situation can be monitored and, if it reaches an alarming level, an evacuation warning can be issued to people in flood-prone areas. The name of the closest shelter and the evacuation route can also be provided during an emergency.

#### 4.2.3. Resilience

Bringing human knowledge and AI together is an important way to build resilience. The advantage of this research is that it helps comprehend, prioritise, and respond to the potential impact of flooding based on flood type, protecting the community and the environment. Flood type classification based on weather forecasts will allow key decision-makers, such as local councils and emergency response agencies, to put mitigation measures in place to decrease the potential impact of an oncoming event.

This is achieved through a better understanding of flood types and a long-term strategic plan that prioritises investment based on flood type risk and consequence, to reduce the impact on lives, infrastructure, finances, etc. Solutions that are resilient to a variety of flood types can be designed, mitigation measures can be implemented, and locations at higher risk can be prioritised and kept under surveillance in the lead-up to an anticipated flood.

#### **5. Conclusions**

This paper describes a robust evaluation of state-of-the-art ML techniques to classify flood type based on weather forecast, location, event duration (days), begin/end location (place name), begin/end latitude and longitude, direct/indirect injuries, direct/indirect deaths, and property and crop damage. To the authors' knowledge, this study is the first to use ML on historic data for flood type classification. Extensive historic data has been filtered and used for training and testing. Several models were built and compared using evaluation metrics, i.e., RMSE, MAE and the confusion matrix. The comparison suggests that the RF technique outperforms the others in terms of RMSE, MAE and confusion matrix (accuracy of 80.49%), followed by ANN (accuracy of 77.44%). One benefit of this work is that the same tools and techniques can be used to classify and estimate many other parameters present in the training set, e.g., location, potential financial damage, etc.

This study has focused on flooding as a sub-branch of natural disasters. Nevertheless, there are many other possibilities to apply AI to natural disasters and help build resilience. Data mining can be applied to help insurance companies, to estimate compensation levels, damage to crops and buildings, and the number of injuries and deaths in specific areas. By estimating more accurately and learning from past events, many lessons can be applied in building resilience against natural disasters. This will also improve public awareness and preparedness, and save lives, when faced with adverse natural conditions.

A constraint in this study is restricted access to comprehensive data. Most big data sets are in the hands of private companies, and there are no established principles for data sharing. Accessing and collecting these data is difficult and expensive.

The results from this study show for the first time that ML can be used to analyse datasets from historical disaster events to reveal the events most likely to occur, should similar conditions be experienced in the future. From the literature review, to the best of the authors' knowledge, there is no UK dataset equivalent to the NCDC NOAA data. It would be advantageous for the UK environmental agency to provide detailed historic data from past natural disasters, similar to the NCDC NOAA data. This work has demonstrated the applicability of ML; if such data were made available for the UK, the method could be applied, and further advances made, not only for flooding but for any type of natural disaster (depending on the data provided), to help preparedness, raise awareness and build resilience in disaster management, especially in areas more prone to natural disasters.

The results are highly dependent on data quality and precision. If the data is unreliable or "bad", the ML model is trained on incorrect information and the results will be misleading. Missing information or parameter limitations can also adversely affect the model.

For further study, building a predictive model for future events will be considered. Furthermore, the use of AI in other natural disaster domains, and for improving resilience in disaster management, especially in the UK, is strongly suggested.

**Author Contributions:** Conceptualization, S.S. and R.K.; Data curation, S.S. and D.J.; Formal analysis S.S. and D.J.; Funding acquisition, R.K.; Investigation, S.S.; Methodology, S.S.; Project administration, R.K. and G.F.; Resources, S.S.; Software, S.S.; Supervision, R.K.; Validation, S.S. and D.J.; Visualization, S.S.; Writing–original draft, S.S. and D.J.; Writing–review & editing, S.S.; R.K.; M.R.C.; G.F. and F.M.

**Funding:** This research was funded by the EPSRC under the BRIM (Building Resilience Into Risk Management) project, Ref: EP/N010329/1.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Communication* **Achieving Urban Flood Resilience in an Uncertain Future**

#### **Richard Fenner <sup>1,\*</sup>, Emily O'Donnell <sup>2</sup>, Sangaralingam Ahilan <sup>3</sup>, David Dawson <sup>4</sup>, Leon Kapetas <sup>1</sup>, Vladimir Krivtsov <sup>5</sup>, Sikhululekile Ncube <sup>5</sup> and Kim Vercruysse <sup>4</sup>**


Received: 26 April 2019; Accepted: 22 May 2019; Published: 24 May 2019

**Abstract:** Preliminary results of the UK Urban Flood Resilience research consortium are presented and discussed, with the work being conducted against a background of future uncertainties with respect to changing climate and increasing urbanization. Adopting a whole systems approach, key themes include developing adaptive approaches for flexible engineering design of coupled grey and blue-green flood management assets; exploiting the resource potential of urban stormwater through rainwater harvesting, urban metabolism modelling and interoperability; and investigating the interactions between planners, developers, engineers and communities at multiple scales in managing flood risk. The work is producing new modelling tools and an extensive evidence base to support the case for multifunctional infrastructure that delivers multiple environmental, societal and economic benefits, while enhancing urban flood resilience by bringing stormwater management and green infrastructure together.

**Keywords:** blue-green infrastructure; flood risk management; sustainable drainage systems; resilience; systems

#### **1. Introduction**

Achieving Urban Flood Resilience requires solving a number of interrelated engineering, environmental and socio-political challenges to achieve the transformative change needed in urban stormwater and flood risk management. This will involve dealing with the future uncertainties associated with extreme weather events driven by global warming, and the consequences of increasingly rapid urbanisation. Recognising these constraints, new approaches are urgently needed, which are based on adaptive and flexible designs of a range of traditional grey infrastructure (e.g., underground pipes, detention tanks and lined drainage channels) and innovative blue-green solutions (e.g., swales, rain gardens, wetlands, green roofs and restored urban streams). In moving towards source control techniques which provide infiltration, attenuation and storage that mimic the predevelopment hydrology of an area, the paradigm of urban flood management can be switched from threat to opportunity. This emphasises the resource potential of urban stormwater (for example for water supply and energy generation), as part of a wider "system of systems" of urban infrastructure which can be managed interoperably. However, to deliver these changes, planning and adoption barriers must be overcome and solutions must be embraced by the communities they serve.

The Urban Flood Resilience research project was launched in 2016 and comprises a consortium of nine UK universities supported by government agencies, engineering consultants and planning authorities responsible for managing urban water and flood risk [1]. The aim is to provide the methodologies and tools needed to make transformative change possible through the adoption of a whole systems approach to urban flood and water management. This is being done through the development of the next generation of hydrosystems models [2] that bridge the interfaces between urban/rural and engineering/natural hydrological systems. Flexible adaptation pathways are being assessed using a multiple benefits approach [3] to determine the most effective mix of blue-green and grey systems for any given location and time. Capturing the resource potential of stormwater through rainwater harvesting [4] and local energy recovery using micro-hydropower [5] is being examined as part of a multifunctional systems approach to urban flood management [6]. The consortium is also considering the importance of the interfaces between planners, developers, engineers and beneficiary communities by investigating citizens' interactions with blue-green infrastructure [7]. The results of the research are being applied and demonstrated in a series of case study locations, including Newcastle and Ebbsfleet, supported by effective Learning Action Alliances which are helping translate these findings into practice [8].

#### **2. Preliminary Results**

While the work is ongoing, this short communication presents a summary of some of the new research outputs not yet reported elsewhere.

#### *2.1. Contribution of Blue-Green Infrastructure to Improving Natural Capital*

The impact of future urban intensification on Natural Capital is being investigated by Heriot-Watt, Cambridge and Newcastle Universities, focusing on the London Borough of Sutton. Natural Capital refers to the stock of natural features/assets, e.g., freshwater, land, soil, minerals, air, seas, habitats, biodiversity and processes, which together provide the foundation for the flows of ecosystem services [9]. The work calculates the extent to which blue-green infrastructure systems such as rain gardens, swales and green roofs can mitigate the natural resource depletion associated with new development. This is done using the Natural Capital Planning Tool [10] and a geographic information system (GIS) analysis to assess natural capital indicators, such as flood risk regulation, at different spatiotemporal scales.

Research focuses on the residential area of Carshalton, where recent development has led to the existing drainage network reaching its capacity and extreme storm events have caused local flooding incidents. The local authority is planning to mitigate additional flood risk associated with a plan for developing 3000 new homes with new blue-green infrastructure while also providing a "natural capital uplift". Two development approaches to stormwater management have been tested consisting of a "grey" pipe-based approach and a blue-green approach including green roofs, rainwater harvesting, rain gardens and street swales. The effectiveness of each approach is assessed using the CityCat hydrodynamics model [2].

The Natural Capital Planning Tool calculates the impact score for 10 selected ecosystem services together with an overall aggregated development impact score. The calculated impact scores are based on a range of indicator data such as population density, soil drainage class, size of green space sites and spatial land use information for the pre- and post-development state of an area. This information is automatically translated into impact scores using an expert-informed quantification model embedded in the tool. The tool also calculates theoretical minimum and maximum possible scores, which show the potential of the site to lose or gain natural capital and the associated multiple benefits.
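The aggregation step can be illustrated with toy numbers. The sketch below is not the Natural Capital Planning Tool's expert-informed quantification model; the service names, scores, and the symmetric per-service range are invented purely to show how per-service impact scores combine into an overall development impact score bounded by theoretical minimum and maximum values:

```python
# Hypothetical per-ecosystem-service impact scores for one development scenario
# (negative = loss of natural capital relative to the pre-development state).
scores = {
    "air quality": -3.1,
    "local climate regulation": -2.8,
    "flood risk regulation": -1.2,
    "aesthetic value": 0.4,
}

# Aggregated development impact score: the sum over all assessed services.
overall = sum(scores.values())

# Theoretical bounds, assuming an invented symmetric +/-4 range per service,
# indicating how much natural capital the site could lose or gain in total.
theoretical_min = -4.0 * len(scores)
theoretical_max = 4.0 * len(scores)

print(f"overall impact score: {overall:+.2f} "
      f"(theoretical range {theoretical_min:+.1f} to {theoretical_max:+.1f})")
```

Comparing the overall score against the theoretical bounds, as done for Carshalton (−17.35 against a minimum of −21.09), shows how close a scenario comes to the worst-case loss of natural capital.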

Results show an overall negative development impact score of −17.35 resulting from the introduction of housing infrastructure on green field sites in Carshalton, with air quality and local climate regulation particularly affected. Compared to the theoretical minimum possible score of −21.09, this implies that the "grey" pipe-based approach will result in loss of several multiple benefits as this approach replaces most natural capital in the area. With the introduction of different blue-green approaches based on the adoption of sustainable drainage systems (SuDS) the overall development impact score, although still negative, is significantly reduced to −0.12. This is because some blue-green options such as road swales will enhance natural capital in some parts of the study area while other options such as green roofs will negatively impact on predevelopment land uses which have more potential for multiple benefit delivery compared to the green roofs option. A summary of the nature of the ecosystem service impact scores for each individual blue-green/SuDS intervention on the predevelopment land use are shown in Table 1. All blue-green approaches had a positive impact on aesthetic value and local climate regulation, with swales having the most positive impact overall.


**Table 1.** Natural capital impact scores of blue-green infrastructure options on ecosystem services.

\* n/a is not applicable.

For the blue-green/SuDS interventions, hydrodynamic modelling showed an increase in water depth in the swales and a decrease in water depths over the northern part of the area. The interventions in the southern area were found to be less useful, as it has less connectivity with the downstream parts of the catchment and the outfalls into the Wandle River. Overall, the number of floodable properties was reduced by over 65%. The results highlight the trade-offs and synergies in multiple benefits associated with different combinations of blue-green options. Grey development options were found to increase the flood risk downstream, compared to a threefold reduction using blue-green options. Overall, such Natural Capital assessments can help practitioners, local authorities, planning agencies and developers understand the interdependencies between the natural and urban environments, highlight the different environmental and social benefits that strategies may deliver, and provide insight into the relative performance of a range of flood risk management measures.

#### *2.2. Adaptation Pathways for Drainage Infrastructure Planning*

Again using Carshalton to demonstrate new assessment procedures, Cambridge University has developed a roadmap for urban drainage adaptation over the next 40 years, shown conceptually in relation to changing climate inputs in Figure 1.

#### *Water* **2019**, *11*, 1082

**Figure 1.** Assessment of adaptation pathways to meet uncertainties in future climate.

In comparing each of these possible future pathways, additional criteria are used to complement conventional cost–benefit analysis, such as indicators of adaptiveness, flexibility and ease of implementation, and a monetised and spatial evaluation of the wider multiple benefits that can be delivered by SuDS and blue-green options. The procedure uses the SWMM dynamic rainfall-runoff-routing simulation model (developed by the US Environmental Protection Agency (EPA)) to identify when service thresholds are exceeded, triggering the need for further interventions. The integration of a variety of appraisal techniques offers a new perspective to help inform the timing and placement of blue-green interventions, while acknowledging future uncertainties. As an envelope of possible climatic and urbanisation rates is considered, a viable planning horizon becomes evident. The approach provides a pragmatic response to the changing drivers of urban flooding, allowing real options evaluation techniques to help determine the scale of interventions required and when they should be implemented.
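The pathway logic — track a growing driver against the capacity provided by the current set of interventions, and trigger the next intervention when a service threshold is exceeded — can be sketched abstractly. The growth rate and capacity values below are invented for illustration; this is a toy model, not the SWMM-based procedure itself:

```python
def adaptation_timeline(demand_growth, capacities, start_demand=1.0, years=40):
    """Toy pathway simulation: each year demand (e.g., peak runoff) grows by a
    fixed rate; when it exceeds the current capacity, the next intervention in
    the pathway is implemented. Returns the years in which interventions occur."""
    demand = start_demand
    pathway = list(capacities)         # ordered capacities after each intervention
    capacity = pathway.pop(0)          # baseline capacity of the existing system
    triggers = []
    for year in range(1, years + 1):
        demand *= 1 + demand_growth
        if demand > capacity and pathway:
            capacity = pathway.pop(0)  # service threshold exceeded: intervene
            triggers.append(year)
    return triggers

# Invented numbers: 2% annual growth, a grey baseline then two blue-green upgrades.
print(adaptation_timeline(0.02, [1.2, 1.5, 2.0]))  # → [10, 21]
```

Running the same simulation over an envelope of growth rates shows how the timing of each intervention shifts with the assumed climate and urbanisation scenario, which is the essence of the pathway comparison described above.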

#### *2.3. Interoperability*

Leeds University have introduced the concept of "interoperability" to guide the transition from local multifunctionality to city-scale multisystem flood management, through actively managing connections between infrastructure systems to convey, divert and store flood water. They define interoperability as "The ability of any water management system to redirect water and make use of other system(s) to maintain or enhance its performance function during exceedance events" [6].

The work is focusing on achieving a better understanding of how flood-prone areas are linked to flood source areas within urban catchments and how opportunities for capturing or transferring stormwater along these pathways can be identified. A source-to-impact analysis uses a transferable modelling approach based on the CityCAT hydrodynamic model to systematically identify the locations contributing most to flood hazard within a catchment. This is reported using a spatial analysis framework by synthesising and combining spatial data on (i) flood hazard, (ii) intervention efficiency and (iii) opportunities for interoperability at the catchment scale. Figure 2 illustrates the application of these ideas in the case study city of Newcastle.

**Figure 2.** Geographic information system (GIS) analysis of flood impacts across Newcastle.

The first image (flood hazard) depicts the potential flood hazard from a 1-in-50-year event, mostly in the middle to lower part of the city centre. The second image (intervention efficiency) shows the flood source areas, which are located mostly in the upper and lower parts of the catchment. The third image (interoperability) illustrates the infrastructure systems that can potentially act as water management assets (green) and the systems where additional flood water should be avoided.

This approach can help identify different types of flood source areas and, when combined with information on infrastructure systems, can guide the selection of appropriate flood management solutions from a catchment perspective. Linking flood hazard to flood source areas provides insights into the hydrological processes and spatial interactions within the urban catchment, and can help prioritise locations for flood management intervention.

#### *2.4. Uncovering Implicit Perceptions of Blue-Green Interventions for Flood Resilience*

The Universities of Nottingham and the West of England are working to better understand attitudes and preferences towards blue-green and grey infrastructure in public open space, moving beyond stated preference studies, where attributes are openly articulated (e.g., through questionnaires), to develop new Implicit Association Tests (IATs) that measure hidden perceptions and subconscious attitudes.

Explicit and implicit preferences for SuDS in public greenspace were investigated using, respectively, a Likert scale test and an IAT based on the method presented by Greenwald, McGhee, and Schwartz [11], using the FreeIAT software (Adam W. Meade, North Carolina State University, Raleigh, NC, USA) [12]. In the IAT, participants responded to photographs (target concepts) illustrating public greenspace with and without SuDS and to positive and negative words representing evaluative attributes. Implicit preferences were calculated based on reaction times. Key findings from a trial with 44 participants in Bristol revealed significant differences between implicit and explicit preferences for public greenspace with and without SuDS. Overall, respondents tended to explicitly prefer greenspace without SuDS (70%, compared with 7% who explicitly preferred greenspace with SuDS) (Figure 3). In contrast, more respondents implicitly preferred public greenspace with SuDS (41%, compared with 30% who implicitly preferred greenspace without SuDS). This suggests that respondents' subconscious attitudes are more favourable towards greenspace with SuDS than towards greenspace alone. The combined explicit–implicit tests provide insight into the perceptions of attractiveness, safety and tidiness associated with SuDS and public greenspace, which may help blue-green infrastructure be designed in ways that improve public acceptability of its features.

**Figure 3.** Attitudes towards sustainable drainage systems (SuDS) in public greenspace evaluated via an Implicit Association Tests (IAT) and Likert scale test (explicit measure). Sample population consists of 44 respondents in Bristol, UK, who completed the tests in May–July 2018.
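IAT responses are typically summarised with a D-score: the difference in mean response latency between the two pairing conditions divided by the pooled standard deviation, following the basic scoring scheme of Greenwald and colleagues. The reaction times below are invented, and the study's exact scoring procedure may differ from this simplification; it is only a sketch of the general approach:

```python
import statistics

def iat_d_score(rt_compatible, rt_incompatible):
    """Basic IAT D-score: (mean incompatible RT - mean compatible RT) divided
    by the standard deviation of all latencies pooled across both conditions.
    A positive score indicates faster responses in the 'compatible' condition,
    i.e., an implicit preference for that pairing."""
    pooled_sd = statistics.stdev(rt_compatible + rt_incompatible)
    return (statistics.mean(rt_incompatible)
            - statistics.mean(rt_compatible)) / pooled_sd

# Invented reaction times (ms) for one respondent, where 'compatible' means
# greenspace-with-SuDS photographs paired with positive evaluative words.
compatible = [620, 580, 650, 600, 590]
incompatible = [780, 820, 760, 800, 790]
print(f"D = {iat_d_score(compatible, incompatible):.2f}")
```

Aggregating per-respondent D-scores across a sample is what yields population-level implicit preference shares such as the 41% versus 30% split reported above.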

In related work, the Heriot-Watt University team is investigating the potential for SuDS retrofitting at the Houston Industrial Estate, Scotland, assessing public awareness of SuDS technology, relevant regulations and barriers to retrofit using a questionnaire survey. Most companies were unaware of the Scottish Environment Protection Agency's General Binding Rules, which provide statutory controls over certain low-risk activities that may affect the water environment in Scotland (e.g., diffuse pollution), although most claimed familiarity with some SuDS techniques such as permeable paving and gravel filter drains. Many of the potential plot-scale techniques, such as detention basins and larger rain gardens, were unfamiliar to most companies and there was considerable confusion in the understanding of SuDS features. A site survey confirmed the presence or absence of these features on individual plots and compared them to the claims of the site owners. Approximately 40 SuDS features, including detention basins, swales, filter strips and gravel filter drains (and more), were claimed by premises on the industrial estate yet were not identified as present when the research team conducted their survey. Conversely, permeable block paving was ubiquitous across the development but not always recognised as present by the respondents. This work suggests a clear need for sustained engagement and education to raise awareness of SuDS technology and overcome barriers to its adoption in areas where retrofitting is needed.

#### *2.5. Urban Metabolism Modelling in Ebbsfleet Garden City*

Urban metabolism refers to the combination of the technical and socioeconomic processes that occur in cities and result in growth, production of energy and elimination of waste. The metabolism-based modelling approach led by Exeter University overcomes issues commonly encountered in the independent modelling and management of urban water systems (water supply, wastewater and surface water collection) by providing an integrated approach that considers the interconnections and interdependencies of all urban water subsystems. This approach increases resilience to extreme events, from floods to droughts, by enabling the integrated modelling of future water management intervention strategies, such as water harvesting and grey water recycling, and their impact on the performance of downstream infrastructure such as stormwater systems.

The study is using WaterMet2, a mass-balance conceptual urban metabolism modelling tool [13] to evaluate the sustainability performance of the urban water systems in the Ebbsfleet Garden City, over a predefined long-term planning horizon and for a range of future possible intervention strategies. The integrated evaluation of the water and wastewater management master plans of the local water companies (Thames and Southern Water) through WaterMet2 simulations will aid long-term decision making by illustrating the impact of different sustainability strategies, including options for wastewater treatment, on the urban water system.

The ultimate aim is to couple this work with a semi-quantitative system dynamics model (currently being coproduced by the Ebbsfleet Water Forum) to further assess sustainable water management options for the Ebbsfleet Garden City. The Ebbsfleet Water Forum, established by the Urban Flood Resilience consortium in 2017 and based on a 'Learning and Action Alliance' framework [8], is coordinated by Open University team members and the Ebbsfleet Development Corporation. The vision of the Forum is to incorporate blue-green infrastructure into development design from the outset, and to champion urban flood resilience so as to encourage the realistic delivery of sustainable urban water management. The system dynamics model investigates sustainable water management options for the Garden City, including the reduction of residential potable water use, increased blue-green space and SuDS, and rainwater harvesting. An early causal loop diagram is presented in Figure 4. The model is being codeveloped iteratively by the project team and Ebbsfleet stakeholders, and will be run under a range of future climate and socioeconomic conditions, including a range of policy incentives, to enhance the capacity of local stakeholders to influence policy in a more sustainable direction.

**Figure 4.** Early version of the causal loop diagram created by the project team and members of the Ebbsfleet Water Forum illustrating variables and linkages in response to a discussion of sustainable water management options for the Ebbsfleet Garden City.

#### *2.6. Further Research*

Further research of the consortium not reported here includes work by Newcastle University on developing a new comprehensive model of urban hydrosystems, a study of suspended particulate matter and water quality by Heriot-Watt University illustrating ecosystem functioning of SuDS ponds, challenges associated with implementing SuDS through the strengthened English planning system led by the Open University, effective approaches to blue-green infrastructure community engagement by the University of the West of England and Exeter University's development of Rainwater Management Systems that concurrently reduce stormwater discharges and potable water consumption.

#### **3. Conclusions**

Achieving urban flood resilience is a multifaceted problem which requires integrated solutions across a range of disciplines: from advances in modelling the hydrodynamic performance of combined grey and blue-green interventions, and flexible engineering design, to social insights into public perceptions and community acceptability. The work reported in this paper has shown that adopting blue-green approaches which rely on storage and infiltration through vegetated surfaces can contribute to wider Natural Capital in areas subject to rapid urbanisation. Key outputs to date include adaptive design pathways that help identify the right balance between grey pipe-based approaches and sustainable drainage systems, and how these can be managed interoperably across urban catchments, in addition to research into SuDS perceptions and sustainable water management options through system dynamics modelling, which generates insight into how the public and businesses understand the value of blue-green systems.

**Author Contributions:** Conceptualization, Writing—Review & Editing: R.F. and E.O.; Section 2.1: S.N., Table 1, Figure 1 and Section 2.2: L.K. and R.F., Section 2.3 and Figure 2: K.V. and D.D., Section 2.4: E.O. and V.K., Section 2.5: S.A.

**Funding:** This research was performed as part of an interdisciplinary project undertaken by the Urban Flood Resilience Research consortium (www.urbanfloodresilience.ac.uk). This work was supported by the Engineering and Physical Sciences Research Council (grant numbers EP/P004180/1, EP/P003982/1, EP/P004210/1, EP/P004237/1, EP/P004261/1, EP/P004296/1, EP/P004318/1, EP/P004334/1 and EP/P004431/1).

**Acknowledgments:** The authors acknowledge the contributions of the following people in developing the above work: Colin Thorne, Scott Arthur, Stephen Birkinshaw, David Butler, Brian D'Arcy, Glyn Everett, Vassilis Glenis, Chris Kilsby, Jessica Lamond, Greg O'Donnell, Karen Potter, Tudor Vilcan and Nigel Wright.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A Conceptual Time-Varying Flood Resilience Index for Urban Areas: Munich City**

#### **Kai-Feng Chen \* and Jorge Leandro**

Chair of Hydrology and River Basin Management, Department of Civil, Geo and Environmental Engineering, Technical University of Munich, Arcisstrasse 21, 80333 Munich, Germany; jorge.leandro@tum.de

**\*** Correspondence: kaifeng.chen@tum.de; Tel.: +886 927-211-971

Received: 9 March 2019; Accepted: 16 April 2019; Published: 19 April 2019

**Abstract:** In response to the increased frequency and severity of urban flooding events, flood management strategies are moving away from flood proofing towards flood resilience. The term 'flood resilience' has been applied with different definitions. In this paper, it is referred to as the capacity to withstand adverse effects following flooding events and the ability to quickly recover to the original system performance before the event. This paper introduces a novel time-varying Flood Resilience Index (FRI) to quantify the resilience level of households. The introduced FRI includes: (a) Physical indicators from inundation modelling for considering the adverse effects during flooding events, and (b) social and economic indicators for estimating the recovery capacity of the district in returning to the original performance level. The district of Maxvorstadt in Munich city is used for demonstrating the FRI. The time-varying FRI provides a novel insight into indicator-based quantification methods of flood resilience for households in urban areas. It enables a timeline visualization of how a system responds during and after a flooding event.

**Keywords:** flood resilience index; flood resilience analysis; urban floods; flood risk assessment; flood inundation modelling

#### **1. Introduction**

According to worldwide evidence of the last decades, the frequency and severity of extreme flooding events in urban areas are increasing [1–3]. The characteristics of an urban environment, such as the high portion of impervious area and increased population density, raise the vulnerability to flooding [4,5]. Traditional engineering measures face great challenges in providing sufficient flood protection when facing a more severe and frequent flooding condition [6,7]. In response, current flood protection strategies move away from measures to increase flood proofing towards flood resilience [8]. Various approaches to improve resilience to urban flooding have been proposed recently across different continents, such as the Best Management Practices (BMPs), Low Impact Development (LID), or Sustainable Urban Drainage Systems (SUDS). Furthermore, policies for improving public awareness of flood risk, advocating flood insurance, automated warning systems, etc. have also been advocated [9]. These approaches aim to mitigate the flooding impacts in cities by maintaining a high level of system performance during flooding events and facilitating the recovery stage of the system after flooding, i.e., its resilience. This study aims to develop a novel methodology to assess the flood resilience level of households within an urban area with time during and after a flooding event by incorporating physical, social, and economic factors.

There are numerous studies evaluating the benefits of flood resilience-enhancing strategies. However, many of them focus on flood impact reduction instead of resilience. Indeed, various flood impact assessment techniques have been formulated according to a wide diversity of research purposes, availability of data, and accessibility of resources [10]. By contrast, the assessment of flood resilience faces many challenges, including its definition, the dimensions used (e.g., social, economic, or physical aspects), and the methods of quantification [11,12]. Nevertheless, a growing number of research projects and studies aim at quantifying flood resilience using integrated [13] or multi-criteria [14] approaches, assessing climate variability [15] or the impact of infrastructure [16,17] while considering socioeconomic aspects [18]. Governance strategies for improving flood resilience have also been studied [19].

The study of resilience originated in the field of ecology [20], where Holling defined it as the measure of the ability of an ecosystem to absorb changes and persist [21]. Since then, variations of the resilience concept have emerged in different research fields. In the context of flood risk and flood management, various definitions have been introduced recently [22–27]. According to the literature, the definitions of flood resilience differ from each other. However, they generally comprise two major elements: 1. The coping capacity in the face of flooding, and 2. the recovery capacity after flooding. In this paper, these two major elements are adopted. Flood resilience is thus defined as "the capacity to withstand adverse effects following flooding events and the ability to quickly recover to the original system performance before the event".

Resilience assessment can be used to evaluate flood risk management strategies at a city scale [28–30]. However, there still exists no consensus on how to measure flood resilience [31]. One commonly applied approach to quantify resilience is to utilize indicators that measure the characteristics of a system facing urban flooding. De Bruijn defined a set of indicators for flood resilience quantification, which covers three aspects: The amplitude of reaction, the graduality of the increase of the reaction with increasingly severe flood waves, and the recovery rate [32]. These three aspects describe the state of system performance when facing flooding events. In addition, the value of indicators reflects the physical, social, and economic factors regarding flood risk management. Batica and Gourbesville developed an urban flood vulnerability and resilience assessment tool with indicators providing a comprehensive overview of vulnerability and resilience of a city and its community [33]. An index is proposed to describe resilience level by assigning grades (0 to 5) to different indicators according to the availability levels to various urban services when facing a 100-year flooding event. Mugume et al. quantified the resilience of urban drainage systems in the UK by applying the utility performance function combined with the depth–damage data for residential properties that relates the overall performance of a drainage system to flood depths [25]. Analogously, Lee and Kim proposed a resilience index for urban drainage systems in Korea based on flooding damage that resulted from damage functions calculated by multi-dimensional flood damage analysis [34]. However, both studies on urban drainage systems lack socioeconomic aspects when estimating resilience. 
Bertilsson and Wiklund developed a spatialized index to measure and visualize flood resilience changes in an urban area of Rio [35], incorporating five dimensions: Flood level, exposed population, susceptibility, material recovery, and flood duration.

Despite the already existing studies on flood resilience quantification, there is a lack of methods for assessing how a system's resilience level is affected during and after flooding. As discussed, most existing studies are not time-dependent. Therefore, the aim of this study is to propose a time-dependent method for quantifying flood resilience of households in urban areas.

Section 2 introduces the study area of Maxvorstadt in Munich city. In Section 3, the structure of the Flood Resilience Index (FRI) and the computation of each parameter are explained in detail. In Sections 4 and 5, the inundation and FRI modelling results for Maxvorstadt as well as the sensitivity analysis of the applied reference parameters are provided and discussed. Finally, in Section 6, the conclusion highlights the main advantage and limitation of the proposed FRI method and consideration for future work.

#### **2. Study Area and Data**

The study area of Maxvorstadt is one of the 25 boroughs within Munich city, located at the city center. The borough covers an area of 429.79 ha and is composed of 69% buildings, 7% recreation area, and 24% road surface [36]. The geographical range of the study site is trimmed alongside the roads at the boundary of the administrative area of Maxvorstadt to exclude buildings crossing over multiple boroughs. Maxvorstadt consists of nine districts, which are Königsplatz, Augustenstraße, St. Benno, Marsfeld, Josephsplatz, Am alten nördlichen Friedhof, Universität, Schönfeldvorstadt, and Maßmannbergl (see Figure 1a). Table 1 shows the number of buildings and area within each district. Figure 1b illustrates the population and age distribution of each district. The population of Maxvorstadt lies mainly between 20 to 30 years old [36]. Like other urban areas, the majority of the surface area within Maxvorstadt is sealed. However, there are parks, cemeteries, and lawns composing 7% of the total area as green spaces. Furthermore, some buildings are constructed with green roofs or roof-top gardens, making up more green surface areas. Figure 1c shows the land use map of the study site.

The average yearly precipitation from 1981 to 2010 of Munich City was 944 mm [37]. Regarding rainfall events, the German Meteorological Office (Deutscher Wetterdienst, DWD) provides a dataset storing grids of return periods of heavy rainfall over Germany (Koordinierte Starkniederschlagsregionalisierung und -auswertung des DWD, KOSTRA-DWD). The dataset contains statistical rainfall intensity values as a function of the duration and return period. It is often applied to assess damages caused by severe design rainfalls with regard to their return period [38]. This paper applies the latest version of the dataset, the KOSTRA-DWD-2010R, which encompasses the time period from 1951 to 2010 and focuses on the 15 min duration rainfall events for various return periods (see Figure 1d).

**Figure 1.** (**a**) Location of the nine districts and buildings within Maxvorstadt; and (**b**) demographic structure with age distribution of Maxvorstadt. The size of the age pie chart is proportional to the population amount. (**c**) Land use map of Maxvorstadt; and (**d**) Rainfall Intensity Duration Frequency curve of Maxvorstadt. Data retrieved from KOSTRA-DWD-2010R database (Grid no. 92049) containing information from 1951 to 2010. This paper applies the 15 min duration rainfall events for various return periods.


**Table 1.** The amount of buildings and area for each district within Maxvorstadt.

<sup>1</sup> Am alten nördlichen Friedhof. <sup>2</sup> Number of buildings. <sup>3</sup> Building density [building/ha].

#### **3. Methods**

#### *3.1. Time-Varying Flood Resilience Index: FRI*

#### 3.1.1. Structure of the Flood Resilience Index

A time-varying Flood Resilience Index (FRI) is developed to quantify the resilience level of households in Maxvorstadt, ranging from 0 to 1 as the minimum and maximum value, respectively. The FRI quantifies the capacity to withstand the adverse effects during flooding and the ability to quickly recover from them at each timestep. Indicators reflecting physical, social, and economic dimensions are considered for computing the FRI.

The evaluation of the FRI is split into two phases: The event phase and the recovery phase, depending on the indoor water depth (see Figure 2). In the event phase, physical indicators from flood modelling, i.e., water depth, accumulated water depth, flooding duration, and water accumulation rate are incorporated to assess the flooding impacts. It is assumed that after each flooding event, when the indoor water depth recedes to zero, the recovery phase is initiated. Aside from the physical indicators, social indicators (i.e., percentage of households with children and percentage of elderly population) and economic indicators (i.e., household income) are considered to evaluate the recovery capacity, which facilitates the system to bounce back to the original performance level before flooding (FRI = 1). A description of each indicator and its computation will be introduced in the following section.

**Figure 2.** Illustration of the Flood Resilience Index (FRI) structure. t\* stands for the timestep when the indoor water depth returns to zero, which separates the event and recovery phases.

#### 3.1.2. Event Phase Indicators

When the indoor water depth is larger than zero, it is considered as an event phase. In this case, four physical indicators are considered for calculating the FRI: Water depth (*Ih*(*t*)), accumulated water depth (*IAWD*(*t*)), flooding duration (*ID*(*t*)), and water accumulation rate (*IWAR*(*t*)).

The water depth indicator reflects the severity of flooding at each timestep. The higher the water depth, the more the household, its occupants, and its contents are affected, and thus the less resilient the system becomes. A reference parameter is assigned, indicating the maximum water depth that the building can withstand (*h*<sub>ref</sub> [m]). The resilience level decreases as the indoor water depth rises, and once the indoor water depth exceeds the reference parameter, the water depth indicator becomes zero. Equation (1) computes the water depth indicator (*Ih*(*t*)), where the variable *hin*(*t*) [m] is the indoor water depth at time *t*. The reference parameter of water depth *h*<sub>ref</sub> [m] is assigned a value of 0.5 m.

$$I\_h(t) = \begin{cases} 1 - \frac{h\_{\text{in}}(t)}{h\_{\text{ref}}}, & \text{if } h\_{\text{ref}} \ge h\_{\text{in}}(t) \\ 0, & \text{otherwise} \end{cases} \tag{1}$$

The water depth indicator shows the severity of the flooding at a certain timestep. However, it is also important to investigate the full scope of the impact that the flooding event has caused. Hence, the accumulated water depth indicator is developed. A reference parameter is introduced, stating the maximum accumulated water depth that the building can withstand (*AWD*<sub>ref</sub> [m]). Equation (2) calculates the accumulated water depth indicator (*IAWD*(*t*)) at every timestep (every 10 s), where *ts* [s] is the starting time of the flooding event. *AWD*<sub>ref</sub> [m] is assigned a value of 3 m.

$$I\_{AWD}(t) = \begin{cases} 1 - \frac{\sum\_{t\_s}^{t} h\_{in}(t)}{AWD\_{ref}}, & \text{if } AWD\_{ref} \ge \sum\_{t\_s}^{t} h\_{in}(t) \\ 0, & \text{otherwise} \end{cases} \tag{2}$$

The duration of the flooding event plays an important role in evaluating the FRI. The longer the flood lasts, the greater the damage it causes. Young points out several impacts that long-lasting floods can have on human health, including toxic chemical exposure, growing mold causing respiratory problems, and mosquitos carrying a variety of diseases [39]. In addition, financial damage, social losses, and reduced well-being, such as the breakdown of factories and transport systems, can have a large impact on society, leading to a lower resilience level [40–42]. These adverse effects become more significant as the flood duration increases. The flooding duration indicator (*ID*(*t*)) is calculated by Equation (3), where *D*(*t*) [min] stands for the flooding duration until time *t*. The reference parameter *D*<sub>ref</sub> [min] represents the maximum flooding duration that a household can withstand, which is assigned a value of 800 min.

$$I\_D(t) = \begin{cases} 1 - \frac{D(t)}{D\_{ref}}, & \text{if } D\_{ref} \ge D(t) \\ 0, & \text{otherwise} \end{cases} \tag{3}$$

The rising rate of the floodwater is one of the most influential factors determining the damage magnitude caused by flooding events. For instance, the evacuation procedure must be executed within a limited time span. If the rising rate of the floodwater is high, the evacuation might be incomplete or executed with reduced efficiency. Facilities with higher vulnerability to fast-rising water, such as schools and nursing homes, will then have a much lower resilience level. In this paper, a water accumulation rate indicator is considered at the rising stage of a flood. Equation (4) computes the water accumulation rate indicator (*IWAR*(*t*)), where *rrise*(*t*) [cm/min] stands for the water rising rate between times *t* − 1 and *t*. The reference parameter *WAR*<sub>ref</sub> [cm/min] represents the highest water rising rate that can be tolerated, which is assigned a value of 5 cm/min.

$$I\_{WAR}(t) = \begin{cases} 1 - \frac{r\_{rise}(t)}{WAR\_{ref}}, & \text{if } WAR\_{ref} \ge r\_{rise}(t) \\ 0, & \text{otherwise} \end{cases} \tag{4}$$
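The four event-phase indicators of Equations (1)–(4) share the same clipped linear form: each falls from 1 to 0 as its load approaches the corresponding reference parameter. A minimal Python sketch (all function and variable names are illustrative, not part of the original model):

```python
# Event-phase indicators of Eqs. (1)-(4), using the reference parameters
# stated in the text: h_ref = 0.5 m, AWD_ref = 3 m, D_ref = 800 min,
# WAR_ref = 5 cm/min.
H_REF, AWD_REF, D_REF, WAR_REF = 0.5, 3.0, 800.0, 5.0

def indicator(value, reference):
    """Shared linear form: 1 at zero load, 0 at or above the reference."""
    return 1.0 - value / reference if value <= reference else 0.0

def event_phase_indicators(h_in, awd, duration, rise_rate):
    """h_in [m], awd: accumulated depth [m], duration [min], rise_rate [cm/min]."""
    return {
        "I_h": indicator(h_in, H_REF),           # Eq. (1)
        "I_AWD": indicator(awd, AWD_REF),        # Eq. (2)
        "I_D": indicator(duration, D_REF),       # Eq. (3)
        "I_WAR": indicator(rise_rate, WAR_REF),  # Eq. (4)
    }

# Every load at half its reference value gives an indicator of 0.5:
ind = event_phase_indicators(h_in=0.25, awd=1.5, duration=400.0, rise_rate=2.5)
```

Because all four indicators use the same form, only the reference parameters and the weighting factors introduced later distinguish their influence on the FRI.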

#### 3.1.3. Recovery Phase Indicators

When the indoor water depth recedes to zero, the recovery phase is initiated. In this case, not only the physical indicators but also the social and economic ones are applied for calculating a recovery factor in order to enhance the FRI after flooding. The four physical indicators are flood severity (*Ifs*), total flooding depth (*ITFD*), total flooding time (*ITFT*), and maximum water accumulation rate (*IWARmax*). The two social indicators are households with children (*IC*) and elderly population (*IE*). The economic indicator is household income (*II*). These seven indicators comprise the recovery factor, which is a product of seven exponential terms.

The concepts of the physical indicators in the recovery phase are similar to those in the event phase. However, there is a slight difference in the evaluation time frame: instead of taking values for the numerators at the current timestep, the maximum or the accumulated values during the previous event phase are considered. For the flood severity (*Ifs*) and maximum water accumulation rate (*IWARmax*) indicators, the maximum values of the indoor water depth and the water accumulation rate within the previous event phase are considered, respectively. For the total flooding depth (*ITFD*) and total flooding time (*ITFT*) indicators, the cumulative indoor water depth and the total flooding duration within the previous event phase are considered. Equations (5)–(8) show the calculation of the four physical indicators, respectively. Variables *ts* and *te* represent the starting and ending timesteps of flooding in the previous event phase.

$$I\_{fs} = \begin{cases} e^{\left(1 - \frac{\max\_{t \in [t\_s, t\_e]} h\_{in}(t)}{h\_{ref}}\right)}, & \text{if } h\_{ref} \ge \max\_{t \in [t\_s, t\_e]} h\_{in}(t) \\ 1, & \text{otherwise} \end{cases} \tag{5}$$

$$I\_{TFD} = \begin{cases} e^{\left(1 - \frac{\sum\_{t\_s}^{t\_e} h\_{in}(t)}{AWD\_{ref}}\right)}, & \text{if } AWD\_{ref} \ge \sum\_{t\_s}^{t\_e} h\_{in}(t) \\ 1, & \text{otherwise} \end{cases} \tag{6}$$

$$I\_{TFT} = \begin{cases} e^{\left(1 - \frac{D(t\_e)}{D\_{ref}}\right)}, & \text{if } D\_{ref} \ge D(t\_e) \\ 1, & \text{otherwise} \end{cases} \tag{7}$$

$$I\_{WARmax} = \begin{cases} e^{\left(1 - \frac{\max\_{t \in [t\_s, t\_e]} r\_{rise}(t)}{WAR\_{ref}}\right)}, & \text{if } WAR\_{ref} \ge \max\_{t \in [t\_s, t\_e]} r\_{rise}(t) \\ 1, & \text{otherwise} \end{cases} \tag{8}$$

Social and economic indicators are assigned to evaluate the recovery capacity from flooding for each household according to different districts within Maxvorstadt. The demographic and social–economic characteristics, such as race, gender, age, and income are principal drivers of a population's ability to recover from damaging flooding events [43–45]. The more children and elderly people within a district, the higher vulnerability to flooding and lower recovery strength the community has. Equations (9) and (10) show the calculation for the indicators of households with children (*IC*) and elderly population (*IE*), respectively. Furthermore, household income straightforwardly reflects the recovery strength from a flooding event. The more a household earns, the easier and faster it can recover from flooding by repairing or replacing the damaged goods. Equation (11) computes the income indicator (*II*).

$$I\_{C} = \begin{cases} e^{\left(1 - \frac{C}{C\_{ref}}\right)}, & \text{if } C\_{ref} \ge C \\ 1, & \text{otherwise} \end{cases} \tag{9}$$

$$I\_{E} = \begin{cases} e^{\left(1 - \frac{E}{E\_{ref}}\right)}, & \text{if } E\_{ref} \ge E \\ 1, & \text{otherwise} \end{cases} \tag{10}$$

$$I\_{I} = \begin{cases} e^{\frac{I}{I\_{ref}}}, & \text{if } I\_{ref} \ge I \\ e^{1}, & \text{otherwise} \end{cases} \tag{11}$$

*C* [%] and *E* [%] stand for the percentage of households with children and the elderly population, respectively, in the district in which the household is located. The reference parameters *C*<sub>ref</sub> [%] and *E*<sub>ref</sub> [%] are assigned values of 20% and 12%, respectively, and provide the thresholds up to which the recovery capacity decreases as *C* and *E* increase. *I* [€] represents the annual household income, and the reference parameter *I*<sub>ref</sub> [€], assigned a value of €80,000, represents the threshold up to which the recovery capacity increases as *I* increases.
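Under the reference values just stated (*C*<sub>ref</sub> = 20%, *E*<sub>ref</sub> = 12%, *I*<sub>ref</sub> = €80,000), Equations (9)–(11) can be sketched in Python as follows (illustrative names, not part of the original study):

```python
import math

C_REF, E_REF, I_REF = 20.0, 12.0, 80_000.0  # reference values from the text

def capped_exp(value, reference):
    """Eqs. (9)-(10): e^(1 - value/reference) if value <= reference, else 1."""
    return math.exp(1.0 - value / reference) if value <= reference else 1.0

def income_indicator(income):
    """Eq. (11): e^(income/I_ref), capped at e for incomes above I_ref."""
    return math.exp(min(income, I_REF) / I_REF)

# A district with 10% child households, a 6% elderly population, and an
# annual household income of 40,000 EUR yields e^0.5 for all three indicators:
i_c = capped_exp(10.0, C_REF)
i_e = capped_exp(6.0, E_REF)
i_i = income_indicator(40_000.0)
```

Note that all three indicators are at least 1, so they can only speed up, never reverse, the multiplicative recovery of Equation (14).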

#### 3.1.4. Time Series of the Flood Resilience Index

Once the indicators for evaluating FRI in the event phase and the recovery factor in the recovery phase are calculated, the time series of FRI can be computed. Like the calculation of the indicators, the computation of the FRI time series should be divided into event and recovery phases.

In the event phase, the calculated indicators of water depth (*Ih*(*t*)), accumulated water depth (*IAWD*(*t*)), flooding duration (*ID*(*t*)), and water accumulation rate (*IWAR*(*t*)) are applied to evaluate the FRI at time *t* following Equation (12). *WF* stands for the weighting factor for each indicator, which determines the relative level of significance among the indicators. *WFh*, *WFAWD*, *WFD*, and *WFWAR* are assigned values of 3, 1, 3, and 2, respectively. *ts* and *te* represent the starting and ending timesteps of flooding in the event phase.

$$FRI(t) = \frac{WF\_{h}I\_{h}(t) + WF\_{AWD}I\_{AWD}(t) + WF\_{D}I\_{D}(t) + WF\_{WAR}I\_{WAR}(t)}{WF\_{h} + WF\_{AWD} + WF\_{D} + WF\_{WAR}}, \text{ if } t \in [t\_s, t\_e] \tag{12}$$

In the recovery phase, a recovery factor (*RF*) is calculated based on the physical characteristics of flooding in the previous event phase, and the social and economic indicator values of a household and its corresponding district. Equation (13) computes the recovery factor, applying the indicators of flood severity (*Ifs*), total flooding depth (*ITFD*), total flooding time (*ITFT*), maximum water accumulation rate (*IWARmax*), households with children (*IC*), elderly population (*IE*), and household income (*II*). The weighting factors *WFfs*, *WFTFD*, *WFTFT*, *WFWARmax*, *WFC*, *WFE* and *WFI* are assigned values of 3, 1, 2, 1, 1, 2, and 3, respectively. A value of 0.001 is assigned as a scaling constant. Finally, the FRI at time *t* is computed as the product of the recovery factor and the FRI at the previous timestep *t* − 1 (see Equation (14)). Note that the recovery phase lasts until the FRI value reaches 1.

$$RF = \left[ \prod\_{x} (I\_x)^{WF\_x} \right]^{\frac{0.001}{\sum\_{x} WF\_x}}, \quad x \in \{fs,\ TFD,\ TFT,\ WARmax,\ C,\ E,\ I\} \tag{13}$$

$$FRI(t) = FRI(t-1) \times RF, \text{ if } t \notin [t\_s, t\_e] \tag{14}$$
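The event- and recovery-phase updates of Equations (12)–(14) can be sketched as follows. The placement of the scaling constant as part of the exponent of the recovery factor is a reconstruction from the text, and all names are illustrative:

```python
import math

WF_EVENT = {"h": 3, "AWD": 1, "D": 3, "WAR": 2}         # weights in Eq. (12)
WF_RECOVERY = {"fs": 3, "TFD": 1, "TFT": 2, "WARmax": 1,
               "C": 1, "E": 2, "I": 3}                   # weights in Eq. (13)
SCALING = 0.001                                          # scaling constant

def fri_event(ind):
    """Eq. (12): weighted average of the four event-phase indicators."""
    return sum(WF_EVENT[k] * ind[k] for k in WF_EVENT) / sum(WF_EVENT.values())

def recovery_factor(ind):
    """Eq. (13) (reconstructed): RF = [prod I_x^WF_x]^(SCALING / sum WF_x)."""
    product = 1.0
    for key, wf in WF_RECOVERY.items():
        product *= ind[key] ** wf
    return product ** (SCALING / sum(WF_RECOVERY.values()))

def fri_recovery_step(fri_prev, rf):
    """Eq. (14): multiplicative recovery, capped once the FRI reaches 1."""
    return min(1.0, fri_prev * rf)

# With every recovery indicator at its maximum value e, RF = e^0.001, so the
# FRI climbs back towards 1 gradually over many 10 s timesteps.
rf = recovery_factor({k: math.e for k in WF_RECOVERY})
```

The small scaling constant keeps *RF* only slightly above 1, which is what makes the recovery gradual rather than instantaneous.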

#### *3.2. Indoor Water Depth Modelling*

The indoor water depth modelling is conducted via a one-way coupling computation, given the inundation modelling results from the 2D surface runoff model Parallel Diffusive Wave (P-DWave). It is assumed that floodwater flows into the buildings through doors at known locations, following the fluid mechanics of discharge over a rectangular weir. Furthermore, the width of the door of every building is assumed to be 75 cm.

Figure 3 illustrates the flow dynamics of water entering the building. The flow is analyzed in terms of its upper and lower portions. For the upper portion of the flow, the incoming discharge is calculated by Equation (15), which describes a free discharge under a head of water equal to (*hout* − *hin*). For the lower portion of the flow, the incoming discharge is calculated by Equation (16), which describes a submerged discharge under a head of water equal to *hin*. *Qu* [m³/s] and *Ql* [m³/s] represent the upper and lower portions of the discharge, respectively. *Cd* stands for the discharge coefficient, which in this case is assigned a value of 1. *L* [m] is the width of the door, assumed to be 0.75 m. Variables *hout* [m] and *hin* [m] represent the outdoor and indoor water levels, respectively.

$$Q\_u = \frac{2}{3} C\_d \times L \times \sqrt{2g} \times (h\_{out} - h\_{in})^{\frac{3}{2}} \tag{15}$$

$$Q\_l = \ C\_d \times L \times h\_{in} \times \sqrt{2g \times (h\_{out} - h\_{in})} \tag{16}$$

**Figure 3.** One-way coupling computation of the flow dynamics of water entering the building. The upper portion of the flow, above the indoor water level, is marked as 1, and the lower portion, below the indoor water level, is marked as 2. h\_outside and h\_inside stand for the water depths outside and inside the building, respectively. When the indoor water depth is higher than that of the outdoor surroundings, the computation remains the same, but the discharge becomes negative as the flow reverses direction.

The total discharge, *Qt* [m³/s], is calculated by summing up *Qu* and *Ql*. When the outdoor water level recedes, *hout* becomes lower than *hin*; hence, *Qu* and *Ql* become negative. From that point, the water flows outwards, shown as a negative value of *Qt*. After *Qt*(*t*) is calculated at time *t*, the water volume entering or exiting the building at time *t*, *Vflux*(*t*) [m³], can be calculated by Equation (17). Then, the water volume inside the house at time *t*, *V*(*t*) [m³], can be computed by Equation (18). Finally, the indoor water depth at time *t*, *hin*(*t*) [m], is computed by Equation (19). Δ*t* [s] represents the computation time interval, which in this case is 10 s. *Area* [m²] stands for the building area.

$$V\_{flux}(t) = Q\_t(t) \cdot \Delta t \tag{17}$$

$$V(t) = V(t-1) + V\_{flux}(t) \tag{18}$$

$$h\_{\rm in}(t) = \frac{V(t)}{Area} \tag{19}$$
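The one-way coupling of Equations (15)–(19) amounts to the following time-marching loop. Outdoor depths would come from the P-DWave results; the handling of reverse (outward) flow is simplified here, and all names are illustrative:

```python
import math

G = 9.81       # gravitational acceleration [m/s^2]
CD = 1.0       # discharge coefficient, as assigned in the text
L_DOOR = 0.75  # assumed door width [m]
DT = 10.0      # computation time interval [s]

def weir_discharge(h_out, h_in):
    """Q_t = Q_u + Q_l (Eqs. (15)-(16)); negative values mean outward flow."""
    head = h_out - h_in
    sign = 1.0 if head >= 0.0 else -1.0
    q_u = (2.0 / 3.0) * CD * L_DOOR * math.sqrt(2.0 * G) * abs(head) ** 1.5
    q_l = CD * L_DOOR * min(h_in, h_out) * math.sqrt(2.0 * G * abs(head))
    return sign * (q_u + q_l)

def indoor_depth_series(h_out_series, area):
    """March Eqs. (17)-(19) through time for one building of given area [m^2]."""
    volume, depths = 0.0, []
    for h_out in h_out_series:
        h_in = volume / area                         # Eq. (19)
        volume += weir_discharge(h_out, h_in) * DT   # Eqs. (17) and (18)
        volume = max(volume, 0.0)                    # depth cannot go negative
        depths.append(volume / area)
    return depths
```

With a constant outdoor depth, the indoor depth rises towards it and the discharge tends to zero as the two levels equalize.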

#### *3.3. Parallel Di*ff*usive Wave Model: P-DWave*

The Parallel Diffusive Wave Model, P-DWave, is the surface runoff model applied for flood inundation modelling in this study. It is a first-order finite volume explicit discretization scheme that solves the conservative form of the 2D Shallow Water Equations while neglecting the inertial terms (see Equations (20) and (21)). *h* is the water depth [m] and *t* is the time [s]. **u** = [*ux*, *uy*]<sup>T</sup> stands for the depth-averaged flow velocity vector, in which *ux* is the flow velocity in the *x* direction [m/s] and *uy* is the flow velocity in the *y* direction [m/s], with *u*<sup>2</sup> = *ux*<sup>2</sup> + *uy*<sup>2</sup>. *R* represents the source/sink term (e.g., rainfall, inflow, surcharge, drainage) [m/s]. *z* is the bed elevation [m]. The bed friction is approximated by Manning's formula (Equation (22)), in which **S***<sub>f</sub>* = [*Sfx*, *Sfy*]<sup>T</sup> stands for the bed friction vector [-]. *Sfx* is the bed friction slope in the *x* direction [-] and *Sfy* is the bed friction slope in the *y* direction [-]. *n* is the Manning's roughness coefficient [s/m<sup>1/3</sup>]. The modulus of the depth-averaged flow velocity vector is given by Equation (23), where *Swx* = *d*(*h* + *z*)/*dx* is the water level gradient in the *x* direction [-] and *Swy* = *d*(*h* + *z*)/*dy* is the water level gradient in the *y* direction [-]. Further details of the P-DWave model can be found elsewhere [46]. Note that neither the sewer system nor the infiltration processes are considered in this study. As such, not all water could be drained from the surface and the event phase could not be considered complete. Therefore, in order to enable the start of the recovery phase, we assume that the outdoor water depths return automatically to zero after the end of the 60 min simulation time. The one-way coupling indoor water depth simulation for each building is then conducted following this assumption.

$$\frac{\partial h}{\partial t} + \nabla \cdot (\mathbf{u}h) = R \tag{20}$$

$$g\,\nabla(h+z) = -g\,\mathbf{S}\_f \tag{21}$$

$$\begin{bmatrix} S\_{fx} \\ S\_{fy} \end{bmatrix} = \begin{bmatrix} \frac{n^2 \|\mathbf{u}\| u\_x}{h^{4/3}} \\ \frac{n^2 \|\mathbf{u}\| u\_y}{h^{4/3}} \end{bmatrix} \tag{22}$$

$$\|\mathbf{u}\| = \frac{h^{2/3} \left( S\_{wx}^2 + S\_{wy}^2 \right)^{1/4}}{n} \tag{23}$$
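Per cell, Equations (22) and (23) reduce to the following computation (an illustrative sketch, not the parallel P-DWave implementation; the Manning coefficient value is an assumption):

```python
import math

def diffusive_wave_velocity(h, s_wx, s_wy, n=0.02):
    """Depth-averaged velocity components from Eqs. (22)-(23).

    h: water depth [m]; s_wx, s_wy: water-level gradients d(h+z)/dx and
    d(h+z)/dy [-]; n: Manning roughness [s/m^(1/3)] (assumed value).
    """
    grad = (s_wx ** 2 + s_wy ** 2) ** 0.25   # (S_wx^2 + S_wy^2)^(1/4), Eq. (23)
    if grad == 0.0:
        return 0.0, 0.0                      # flat water surface: no flow
    speed = h ** (2.0 / 3.0) * grad / n      # modulus |u| of the velocity
    mag = math.hypot(s_wx, s_wy)
    # Flow is directed down the water-level gradient.
    return -speed * s_wx / mag, -speed * s_wy / mag
```

Substituting Equation (23) back into Equation (22) recovers the friction slopes, which is what closes the momentum balance of Equation (21).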

#### **4. Results**

#### *4.1. Flood Inundation Modelling*

Maximum inundation modelling of the 15 min rainfall events from the KOSTRA-DWD-2010R database is conducted for various return periods, identifying the locations most likely to encounter severe flooding in Maxvorstadt (see Figure 4). Table 2 shows the average inundation depth on streets for the different return periods. By applying the one-way coupling indoor water depth computation, the maximum indoor water depth for each building is obtained (see Figure 5), together with the percentage of buildings facing different levels of indoor flooding.


**Figure 4.** Maximum inundation map in Maxvorstadt with various return periods, applying rainfall data from the KOSTRA-DWD-2010R database. Luisen Street, located at the center of Maxvorstadt, faces the most extreme flooding condition with the range crossing over two blocks and maximum water depth over 50 cm in the case of a 100-year flooding event.


**Figure 5.** Maximum indoor water depth modelling for flooding events with various return periods, applying the one-way coupling computation. Indoor water depth in buildings colored green: 0 cm; yellow: 0–5 cm; orange: 5–10 cm; and red: above 10 cm. Percentages indicate the share of buildings in each level of indoor flooding.

#### *4.2. Parameter Sensitivity Analysis*

A sensitivity analysis of the reference parameters is conducted for the building that encounters the most severe indoor inundation caused by a 100-year flood in the district of Königsplatz. The alteration of each indicator by changing its corresponding reference parameter is examined by comparing the cases of adding and subtracting 50% from the original value of the reference parameter. The differences can be detected by comparing the case applying the original set of reference parameters (shown as blue dashed lines) with the edges of the shaded areas. The results of the sensitivity analysis in the event phase (see Figure 6) and the recovery phase (see Figure 7) are shown below.

**Figure 6.** Sensitivity analysis of the four reference parameters for each physical indicator in the event phase by comparing the case of adding and subtracting 50% from the original reference parameters. Blue dashed line stands for the case when applying the original set of reference parameters. Annotation in each graph illustrates which reference parameter is examined by either adding or subtracting 50% from its original value.

**Figure 7.** Sensitivity analysis of the seven reference parameters for each physical, social, and economic indicator in the recovery phase by comparing the case of adding and subtracting 50% from the original reference parameters. Blue dashed line stands for the case when applying the original set of reference parameters. Annotation in each graph illustrates which reference parameter is examined by either adding or subtracting 50% from its original value.

#### *4.3. Flood Resilience Index*

Figure 8 shows the mean FRI curves as an aggregated result for every household in Maxvorstadt according to different reference parameters (either by adding or subtracting 50% from their original values) considered in the sensitivity analysis. In addition, the standard deviation curves are provided to illustrate the level of dispersion of the FRI at each timestep.

**Figure 8.** Mean FRI curves as aggregated results for every household in Maxvorstadt (blue lines) and standard deviation curves (red lines) showing the level of dispersion of the FRI at each timestep in face of a 100-year flooding. The simulation considers different multiplication factors of the reference parameters applied in the sensitivity analysis. Solid lines represent the considered case (either with a 50% increment or decrement of the original reference parameter) and the dashed lines correspond to the case applying the original set of reference parameters. Black dashed lines at *t* = 1 h in the zoom-in graphs (right-hand side) illustrate the end of the outdoor inundation modelling, for which the outdoor water depth is set to zero to enable the start of the recovery phase.

#### **5. Discussion**

#### *5.1. Flood Inundation Modelling*

The results of the maximum inundation modelling (see Figure 4) show that the scenarios with higher return periods, and thus higher rainfall intensities, return larger inundation areas and higher water depths. The average water depth on streets also shows an increasing trend as the return period increases (see Table 2), which corresponds to the shape of the considered 15 min rainfall intensity–frequency curve in Figure 1d. In the maximum inundation maps, there are several spots showing small areas of higher water depths. These are areas enclosed by buildings or tunnels, which have a lower elevation than their surroundings. The surface runoff modelling applying the P-DWave model only includes the overland surface routing of rainwater; hence, it is not possible to model the underground drainage of water accumulated in the low-lying areas. According to the maximum indoor water depth results (see Figure 5), the percentage of buildings facing more severe indoor flooding rises as the return period increases.

#### *5.2. Parameter Sensitivity Analysis*

According to the results of the parameter sensitivity analysis, the reference parameters affect both the corresponding FRI indicators and the total FRI. In the event phase (see Figure 6), the higher the four physical reference parameters are, the higher the indicator and FRI values will be. However, the sensitivity to changing a given reference parameter depends on its assigned weighting factor and original value. Not surprisingly, the reference parameters of water depth and flooding duration, which are assigned the greatest weighting factors, have the highest impact. The water depth reference parameter controls the water depth indicator when the building is facing indoor flooding. As shown in Figure 6, the sensitivity to altering this parameter is most apparent within the timeframe from 0 to 100 min, in which the building encounters the peak water depth. Increasing this reference parameter raises the water depth indicator by 0.1 at the lowest point of the indicator curve; decreasing it causes the indicator to drop to zero during the peak water depth. The sensitivity to altering the reference parameter of flooding duration is evident at the tail end of the event phase. Like the water depth indicator, the flooding duration indicator rises when the reference parameter increases and drops to zero at the 400 min timestep when the reference parameter decreases. Note that changing the reference parameter of flooding duration leads to different endpoints of the FRI in the event phase, and hence different starting points for the recovery phase. The alteration of the water accumulation rate indicator only appears at the front end of the event phase, corresponding to the rising limb of the indoor hydrograph, and is more evident when the reference parameter decreases.
However, once the indoor water is receding, the water accumulation rate no longer plays a role, so changing this reference parameter does not affect the water accumulation rate indicator. Finally, the reference parameter of accumulated water depth is set to an extreme value (3 m) in this study to show that, whether it is increased or decreased by 50%, the indicator still falls to zero; the two cases differ only in the timestep at which the indicator drops to zero. According to Figure 6, the difference between increasing and decreasing this reference parameter is not significant in this case study.
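The qualitative behaviour described above — the indicator dropping to zero once the reference depth falls below the peak water depth — can be illustrated with a simple linear indicator form, I = max(0, 1 − h/h_ref). Both this functional form and the numbers below are assumptions for illustration only, not the paper's exact formulation:

```python
def water_depth_indicator(depth_m, ref_depth_m):
    """Illustrative linear indicator: 1 when dry, 0 at or above the reference depth."""
    return max(0.0, 1.0 - depth_m / ref_depth_m)

peak_depth = 0.5  # hypothetical indoor peak water depth [m]
ref = 0.6         # hypothetical original reference parameter [m]

# Mimic the sensitivity analysis: scale the reference parameter by +/-50%.
for factor, label in [(1.5, "+50%"), (1.0, "original"), (0.5, "-50%")]:
    ind = water_depth_indicator(peak_depth, ref * factor)
    print(f"{label:>8}: indicator = {ind:.2f}")
```

With these assumed values, raising the reference parameter lifts the indicator at the peak, while lowering it by 50% pushes the reference below the peak depth and the indicator collapses to zero — the same pattern reported for the water depth indicator in Figure 6.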

The seven indicators included in the recovery phase constitute the recovery factor, which is responsible for returning the system to its original state (FRI = 1). As with the indicators in the event phase, the effect of the reference parameters in the recovery phase depends on their assigned weighting factors and original values (see Figure 7). All indicators, with the exception of the total flooding depth and income indicators, increase as their reference parameters increase. Below is a short description of the impact of each reference parameter/indicator on the recovery phase:

1. Flood severity indicator—when its reference parameter decreases, the maximum water depth in the event phase exceeds the reference parameter, and thus the indicator no longer contributes to the recovery factor and does not appear in the sensitivity analysis graph for the recovery phase. In this case, the system requires a longer recovery time (approximately 300 min longer) due to the smaller recovery factor. Conversely, when the reference parameter of flood severity increases, the indicator's contribution to the recovery factor increases and the system recovers faster (approximately 200 min).
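The drop-out mechanism of the flood severity indicator can be sketched as a thresholded weighted contribution. The functional form, weighting factor, and depths below are hypothetical placeholders chosen only to reproduce the described behaviour:

```python
def severity_contribution(max_depth_m, ref_depth_m, weight):
    # The indicator contributes only while the event's maximum water depth
    # stays below its reference parameter; otherwise it drops out entirely,
    # shrinking the recovery factor and lengthening the recovery time.
    if max_depth_m >= ref_depth_m:
        return 0.0
    return weight * (1.0 - max_depth_m / ref_depth_m)

max_depth = 0.8  # hypothetical maximum indoor water depth in the event phase [m]
base_ref = 1.0   # hypothetical original reference parameter [m]

for factor in (0.5, 1.0, 1.5):
    c = severity_contribution(max_depth, base_ref * factor, weight=0.2)
    print(f"reference x{factor}: contribution = {c:.3f}")
```

Scaling the reference parameter down by 50% pushes it below the event's maximum depth, the contribution vanishes, and recovery slows; scaling it up increases the contribution, consistent with the faster recovery noted above.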


The sensitivity analysis of the reference parameters provides detailed information on the composition of the FRI, giving decision makers a powerful tool for identifying, through visual comparison of the indicators, which aspect requires immediate improvement. The results also allow a better understanding of how external influencing factors affect the FRI. For instance, when a house is equipped with water-proof furniture, the reference parameter of water depth should be raised, representing a higher resilience to flooding depth (i.e., a higher FRI).

#### *5.3. Flood Resilience Index*

According to the results of the FRI simulation (see Figure 8), the mean FRI curve drops at the beginning while the standard deviation curve rises. Within the one-hour simulation duration, buildings within Maxvorstadt experience indoor flooding, and thus the mean FRI decreases in the event phase. The indoor flooding hydrograph differs quite significantly from building to building; hence, the standard deviation curve climbs, indicating a higher level of dispersion of the FRI values. At the timestep of 1 h, the outdoor water depth is set to recede to zero (see the assumptions in Section 3.3) and the indoor water depth of each building drops significantly according to the one-way coupling computation, leading to a sudden increase of the mean FRI curve and a sharp drop of the standard deviation curve. Note that the recovery phase does not start at the timestep of 1 h; it starts at the timestep when the indoor water depth recedes to zero, which differs for each building (see Figure 2). After the timestep of 1 h, the mean FRI curve climbs following the recession of the indoor water depth and gradually returns to one during the recovery phase. The standard deviation curve then returns to zero as the FRI values reach one for every building. The impact of each reference parameter on the FRI is summarized below:


The summary table (see Table 3) provides the FRI duration (the time from the system being hit by flooding to full recovery, including both the event and recovery phases) and the minimum value of the mean FRI under the altered reference parameters of each indicator. The alteration of the income reference parameter has the greatest impact on the FRI duration: a 7.2 h difference between increasing and decreasing the parameter by 50%. The minimum value of the mean FRI is caused by the physical impacts of flooding and thus lies within the event phase; hence, the reference parameters of the households-with-children, elderly-population, and income indicators, which are only considered in the recovery phase, cannot affect it. Among the four physical indicators, the alteration of the water depth reference parameter has the largest effect on the minimum mean FRI.
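Both summary statistics in Table 3 can be extracted directly from a mean FRI curve: the duration spans from the first timestep where the curve falls below one until it returns to one, and the minimum is simply the lowest value. A minimal sketch with an invented curve (the sampling interval and values are assumptions, not the Table 3 results):

```python
import numpy as np

def fri_duration_and_minimum(mean_fri, dt_h, tol=1e-9):
    """Duration from the first drop below 1 until the curve is back at 1, plus the minimum."""
    below = np.where(mean_fri < 1.0 - tol)[0]
    if below.size == 0:
        return 0.0, 1.0  # system never affected
    duration_steps = (below[-1] + 1) - below[0]  # last below-1 step up to full recovery
    return duration_steps * dt_h, float(mean_fri.min())

# Hypothetical mean FRI curve sampled every 0.5 h (not the Maxvorstadt results)
curve = np.array([1.0, 0.8, 0.5, 0.6, 0.9, 1.0, 1.0])
duration, minimum = fri_duration_and_minimum(curve, dt_h=0.5)
print(duration, minimum)  # -> 2.0 0.5
```

Comparing the durations obtained for the +50% and −50% parameter settings then yields the differences reported in the table (e.g., the 7.2 h spread for the income parameter).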

The aggregated FRI results describe the flood resilience level of the study area regarded as a whole system. Based on this information, it is possible to verify whether: (a) the severity of the flooding impact induces a significant drop of the FRI value, or (b) the system is undergoing a slow or a fast recovery process. Furthermore, the dispersion of the FRI curves indicates how homogeneously the urban components react to a given event. A high standard deviation means that the urban components react differently, and some districts will need more assistance during low-FRI periods than their neighboring zones with higher FRI values.
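Operationally, that reading of the dispersion could be automated: at any timestep, a high standard deviation paired with components below some assistance threshold flags where help should be directed. The snapshot values and the 0.5 threshold below are assumptions for illustration:

```python
import numpy as np

# Hypothetical FRI snapshot for five urban components at a single timestep
fri_snapshot = np.array([0.9, 0.4, 0.8, 0.3, 0.95])
threshold = 0.5  # assumed FRI level below which assistance is prioritised

dispersion = float(fri_snapshot.std())  # high value = heterogeneous reaction
needs_assistance = np.where(fri_snapshot < threshold)[0].tolist()

print(f"std = {dispersion:.2f}, components needing assistance: {needs_assistance}")
```

A near-zero standard deviation would instead indicate a homogeneous response, in which case district-level prioritisation offers little benefit.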


**Table 3.** Summary table for the FRI simulation considering different multiplication factors of the reference parameters.

<sup>1</sup> Reference parameter. <sup>2</sup> Total duration including the event and recovery phase.

#### **6. Conclusions**

In this paper, we developed an indicator-based flood resilience quantification method by introducing the time-varying Flood Resilience Index (FRI). The FRI quantifies the flood resilience level of households within an urban area, divided into an event and a recovery phase. The new FRI thus embodies the definition of flood resilience as the capacity to withstand the adverse effects of flooding events and the ability to quickly recover to the pre-event system performance. During the flooding event, the FRI is estimated based on physical indicators, namely the water depth, accumulated water depth, flooding duration, and water accumulation rate. During the recovery phase, the FRI is estimated based on social indicators, i.e., the percentages of households with children and of the elderly population, as well as an economic indicator, i.e., the annual household income.

The sensitivity analysis of the parameters (and indicators) provides a useful tool for better understanding how external influencing factors affect the FRI. The aggregated FRI results allow the identification of fragilities in urban households as parts of a system: it is easy to identify which households recover slowly or are hit severely by the event. Furthermore, the dispersion of the FRI curves indicates how homogeneously the urban components of the system react to a given event.

The time-varying FRI therefore provides a novel, indicator-based method for quantifying the flood resilience level of households in an urban area. Its time-dependent character advances the research field by enabling a quantifiable characterization and visualization of how a system responds during and after a flooding event. The FRI could therefore become a valuable tool for urban planning and public communication, and promote better flood risk management planning. Future work will include the sewer network and a possible extension beyond the urban area of Maxvorstadt, which is currently treated as isolated from the other boroughs of Munich.

**Author Contributions:** Conceptualization, K.-F.C. and J.L.; Data curation, K.-F.C.; Formal analysis, K.-F.C.; Investigation, K.-F.C.; Methodology, K.-F.C. and J.L.; Project administration, K.-F.C.; Resources, J.L.; Software, K.-F.C.; Supervision, J.L.; Visualization, K.-F.C.; Writing—original draft, K.-F.C.; Writing—review & editing, J.L.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors are grateful to Professor Stephan Pauleit from the Centre for Urban Ecology and Climate Adaptation, TUM for providing GIS data of Maxvorstadt applied in this study.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


46. Leandro, J.; Chen, A.; Schumann, A. A 2D parallel diffusive wave model for floodplain inundation with variable time step (P-DWave). *J. Hydrol.* **2014**, *517*, 250–259. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
