Article

Urban Sprawl and COVID-19 Impact Analysis by Integrating Deep Learning with Google Earth Engine

1 Remote Sensing and Telecommunication Laboratory, Engineering Department, University of Sannio, 82100 Benevento, Italy
2 Earth Observation Center, Remote Sensing Technology Institute, German Aerospace Center (DLR), 82234 Oberpfaffenhofen, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(9), 2038; https://doi.org/10.3390/rs14092038
Submission received: 8 March 2022 / Revised: 3 April 2022 / Accepted: 21 April 2022 / Published: 24 April 2022
(This article belongs to the Special Issue Data Fusion for Urban Applications)

Abstract
Timely information on land use, vegetation coverage, and air and water quality is crucial for monitoring and managing territories, especially areas undergoing dynamic urban expansion. However, obtaining accessible, accurate, and reliable information is not an easy task, since the significant increase in remote sensing data volume poses challenges for timely processing and analysis. From this perspective, classical methods for urban monitoring present some limitations, and more innovative technologies, such as artificial-intelligence-based algorithms, must be exploited, together with high-performance cloud platforms and ad hoc pre-processing steps. To this end, this paper presents an approach to the use of cloud-enabled deep-learning technology for urban sprawl detection and monitoring through the fusion of optical and synthetic aperture radar data, integrating the Google Earth Engine cloud platform with deep-learning techniques through the open-source TensorFlow library. The model, based on a U-Net architecture, was applied to evaluate urban changes in Phoenix, the second fastest-growing metropolitan area in the United States. The available ancillary information on newly built areas showed good agreement with the produced change detection maps. Moreover, the results were temporally related to the outbreak of the COVID-19 pandemic, caused by the SARS-CoV-2 virus, showing a decrease in urban expansion during the event. The proposed solution may be employed for the efficient management of dynamic urban areas, providing a decision support system to help policy makers measure changes in territories and monitor their impact on phenomena related to urbanization growth and density. The reference data were manually derived by the authors over an area of approximately 216 km2, referring to 2019, based on the visual interpretation of high resolution images, and are openly available.

1. Introduction

The need for shared decision-making and direction-setting for the creation and use of geospatial data was recognised by the United Nations, leading to the establishment of the United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM), an intergovernmental group whose primary goals are to collaborate with governments to enhance their policies, arrangements, and legal frameworks, to address global issues, and, as a community with common interests and concerns, to contribute to collective knowledge and to the development of effective strategies for building geospatial capacity in developing countries [1]. As reported by UN-GGIM, 2.5 quintillion bytes of data are generated every day, with most of this information currently assimilated by and managed in cloud platforms which are in constant evolution. Decision support systems (DSSs), which were initially developed for financial purposes, have been calibrated on these large datasets, particularly where they are freely available on cloud repositories, in order to implement decision criteria for environmental issue management [2].
Among other issues, urban sprawl has attracted much attention within recent decades, as governmental bodies and non-profit organizations have increasingly become interested in its monitoring and the related economic development and environmental risks.
Indeed, recent decades have seen unprecedented expansion in urbanization. In heavily urbanized areas lacking permeable surfaces, rainwater which is not absorbed by the soil can cause devastating floods [3,4,5]. More generally, the uncontrolled growth of urban areas has damaged our planet and biodiversity by reducing worldwide vegetation cover, resulting in changes in the climate and gradual desertification [6,7,8,9].
The analysis conducted in [10] makes it clear that, while researchers have worked extensively on capturing urban growth dynamics and modifications of the surrounding environment, additional effort must be applied to these areas. Twenty years later, critical analyses continue to highlight challenges, and best practices, for developing successful environmental decision support systems [11,12]. In particular, key requirements identified for the success of a DSS include the identification of key stakeholders, characterization of the model at macro and local levels, collection of heterogeneous information, and integration of science and management. Therefore, a compromise between data integration, timeliness, level of detail, accuracy, and user-friendliness is crucial to stakeholders and decision makers. This becomes especially important in difficult periods, such as the present one, with an ongoing struggle against the effects of COVID-19. In particular, urban sprawl and lockdown measures have been shown to be correlated, as discussed in [13], which highlights the socio-economic effects of COVID-19 in response to the global outbreak. Moreover, a comparative analysis of different actions against the pandemic has shown differences between countries in terms of their effectiveness, as described in [14,15,16].
In this context, remote sensing (RS) for Earth observation (EO) has played an increasingly important role in monitoring urban development and natural disasters, managing natural resources, and protecting the environment and population [17,18,19,20,21,22,23]. More specifically, feature extraction from satellite images enables timely quantitative and qualitative information to be obtained on land use, along with physical parameters, such as air and water quality [24,25,26]. Nevertheless, the increase in the number of EO satellites, along with their diverse characteristics, such as swath width and spectral, spatial, and temporal resolution, has significantly increased data volumes, posing challenges for their timely processing and analysis. From this perspective, classical methods for urban monitoring [17,20,27,28] have limitations, and emerging technologies, such as deep learning (DL), are being exploited, together with cloud platforms and specific pre-processing steps [18,24,29,30]. In particular, the use of Google Earth Engine (GEE) for multi-modal and large-scale RS applications has increased significantly in the last five years, due to its in-the-cloud computational capabilities, the several petabytes of accessible geo-referenced and harmonized data, and its built-in machine learning (ML) techniques [31].
This paper presents a general approach to the integration of data fusion, AI, cloud processing, and correlation with complementary information for analysis, making use of temporal information and providing helpful tools. In particular, the authors propose the application of cloud-enabled DL technologies at large scale to map urban sprawl through the fusion of harmonized optical and synthetic aperture radar (SAR) data, integrating in-the-cloud processing in GEE with TensorFlow (TF), an end-to-end open-source library for ML applications. The choice of the network and the case study design are tailored to the selected application, enabling DL to map urban dynamics using data fusion techniques [19,32,33].
Specifically, this study examines the metropolitan area of Phoenix, Arizona (USA), which has undergone rapid urban expansion in recent years and is therefore ideal for testing the proposed method for the monitoring of fast-growing urbanized areas. Several studies reflect the interest in this area: the spatio-temporal characteristics of the city's urban expansion are analyzed in [34], six types of land use are mapped using ASTER imagery and object-based analysis in [35], and further work addresses dynamic land-cover classification [36] and the detection of road and pavement types [37].
Three models based on a U-Net architecture using different datasets yielded promising results; ancillary information available on a subset of the metropolitan area of Phoenix showed good agreement with the produced change detection maps. The multitemporal data used in this study allow for the estimation of the effects of the pandemic on urban dynamics, with the obtained results consistent with government data on the slower expansion rate observed after the pandemic outbreak. The reference data were prepared entirely manually by the authors over an area of about 216 km2, with reference to 2019, based on visual interpretation of high resolution data provided by the National Agriculture Imagery Program (NAIP) [38]; the resulting urban layer is made freely available to the community for training and testing urban detection algorithms, with the described methodology as a possible benchmark. The proposed system aims to overcome the limitations of classical methods and offers a useful tool for governments and decision makers, which is also accessible to non-experts. In order to focus on different periods and map the desired aspect of urban dynamics, a user only needs to change the dates of the images of interest, keeping the workflow unaltered and avoiding a new training phase. It is worth emphasizing that the proposed method makes use of free open data only, which represents a huge advantage when funding sources are limited.
The remainder of the article is structured as follows: the methodology and tools are introduced in Section 2, where a brief description of the GEE platform, the TF library, and the chosen U-Net architecture is given. The proposed framework is presented in Section 3, describing the modular system, the detailed workflow, and the neural network setup. Section 4 presents the designed case study, describing the area of interest, the pre-processing steps for the selected Sentinel-2 optical and Sentinel-1 SAR data, along with their characteristics, and the reference data preparation. The results are presented in Section 5, including the analysis of the COVID-19 impact on urban sprawl. Section 6 concludes the article.

2. Tools

2.1. Google Earth Engine

GEE is a public cloud computing platform including a repository of a large variety of standard Earth science raster datasets. Petabytes of remote sensing data, such as Landsat, MODIS, Sentinel-1, -2, -3 and -5P, as well as Advanced Land Observing Satellite (ALOS) data, are stored on the system, covering a time span of more than 40 years [39]. These multi-modal data can be processed using the JavaScript programming language. Another advantage of using GEE is that any code produced in its framework can be disseminated through official Google channels, direct links, or repositories, and can thus be linked to geographic information system (GIS) services via an application programming interface (API), allowing it to be easily reused and adapted for different scales and situations [40]. Applications developed within GEE range from cultural heritage monitoring [40] to surface water analysis [41] and forest fire dynamics [42], with advantages for the long-term monitoring of urban areas, as demonstrated in [43]. The potential of combining high resolution satellite imagery, cloud computing, and DL technologies is also described in [44].
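As a minimal illustration of this in-the-cloud processing model, the following sketch uses the EE Python API (the same operations are available in the JavaScript Code Editor); the collection ID is taken from the GEE catalogue, while the chosen bands and index are only an example:

```python
# Minimal sketch: accessing the GEE catalogue through the EE Python API
# and running a simple computation entirely in the cloud.
import ee

ee.Authenticate()  # one-time, browser-based authentication
ee.Initialize()

# Load the Sentinel-2 surface reflectance collection from the catalogue.
s2 = ee.ImageCollection('COPERNICUS/S2_SR')

def add_ndvi(image):
    # Compute a per-image NDVI band server-side; nothing is downloaded.
    ndvi = image.normalizedDifference(['B8', 'B4']).rename('NDVI')
    return image.addBands(ndvi)

s2_ndvi = s2.map(add_ndvi)
print(s2_ndvi.first().bandNames().getInfo())
```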

2.2. TensorFlow

TF is a free, open-source, high-performance library for numerical computation [45] and, apart from being particularly popular for ML-based applications, represents a suitable option for complex models, large training datasets or inputs, or whenever long training times are expected [46,47]. In the proposed workflow, TF was included in order to increase the overall system performance. TF models are developed, trained, and deployed outside GEE [48], and the results are returned to the GEE framework if post-processing steps are required. In this case, access to GEE is provided through the Earth Engine (EE) Python API, as described in [49], running in Colab notebooks [50]. TF allows for the use of the Keras functional API, through which ML solutions can be developed more rapidly via essential abstractions and building blocks [51]. Being cross-platform, TF can run on a variety of hardware, including GPUs, CPUs, and mobile platforms [52]. A step-by-step description of how to integrate GEE and TF (version 2.0) is given in [53].
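As an illustration of the TF side of this integration, the following minimal sketch parses patches exported from GEE as TFRecords into a tf.data pipeline; the band names, label name, and file path are placeholder assumptions matching the setup described later:

```python
# Sketch: reading GEE-exported TFRecord patches into a tf.data pipeline.
import tensorflow as tf

# Assumed feature names: ten Sentinel-2 bands, two Sentinel-1 bands,
# and a rasterized reference band called 'urban'.
BANDS = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7',
         'B8', 'B8A', 'B11', 'B12', 'VV', 'VH']
LABEL = 'urban'
PATCH = 256

features = {name: tf.io.FixedLenFeature([PATCH, PATCH], tf.float32)
            for name in BANDS + [LABEL]}

def parse(example_proto):
    parsed = tf.io.parse_single_example(example_proto, features)
    image = tf.stack([parsed[b] for b in BANDS], axis=-1)
    label = tf.expand_dims(parsed[LABEL], axis=-1)
    return image, label

dataset = (tf.data.TFRecordDataset('gs://my-bucket/training.tfrecord.gz',
                                   compression_type='GZIP')  # placeholder path
           .map(parse)
           .batch(16))
```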

2.3. Neural Network

The proposed framework uses the Keras implementation of the U-Net model presented in Figure 1, a fully convolutional neural network (FCNN) originally developed for medical image segmentation [60] and used today in many other fields, such as EO, for instance, to map surface water [41], sugarcane areas [44], or crop types [54].
The U-Net was chosen for this task since it is suitable for classification and is powerful, stable, and less complex than similar solutions. Several studies have demonstrated its success when applied to multispectral satellite images for Earth surface mapping, especially in impervious environments, such as urban surfaces [55,56,57,58].
For a complete description of the U-Net characteristics and architecture, interested readers can refer to [56,59].

3. Framework for Urban Sprawl Analysis

3.1. Modular System

In this section, the general description of the modular system is given, while the detailed pipeline used for our research is described in the next section.
The main purpose of this study is to provide a system with the ability to extract data about a region of interest, fully implemented in a single cloud environment. The potential of cloud processing platforms, the levels reached by AI, and the wide availability of open satellite data can be combined to effectively create a practical means of monitoring the territory, which can then be integrated, for example, into a web platform and also used by non-experts. Regardless of the specific choice of images, network architecture, or tools used, the modules necessary for the acquisition of information are divided into a specific structure combining several concepts, as shown in Figure 2, as follows: input data acquisition, reference data creation, data processing, data fusion, preparation of the computational model, generation of classification maps, and combination of these maps for an analysis of changes over time.
The input data acquisition can be performed through the numerous repositories containing multi-modal open-source satellite imagery (e.g., Landviewer, Copernicus Open Access Hub, Sentinel Hub, NASA Earthdata Search, Google Earth). The input data can be optical or SAR, with high or low resolution, depending on the desired level of detail of the derived information, and on the area or period of interest.
Compared to the satellite data mentioned, reference data are more difficult to find and costly to acquire. Despite efforts to create labeled datasets, these often do not have global coverage or a sufficient level of detail. Also in this case, cloud platforms aid in creating a suitable dataset for a specific case study; in fact, several tools have been developed to create precise datasets to be used as reference information, such as Amazon SageMaker Ground Truth and GEE. If necessary, the input data can be pre-processed before use in order to ensure or enhance performance. Moreover, the fusion of multi-temporal and multi-modal data allows information to be obtained which goes beyond the content of single images, since the multi-modal information acquired by different satellites, at a variety of electromagnetic wavelengths, is complementary. The fusion of these features represents the input for the computational model. AI-based models learn from the input data and, based on this, recognize new image information. ML algorithms, in particular convolutional neural networks (CNNs), are suitable for generating classification maps but are computationally expensive. This could be a limitation if insufficient hardware resources are available for data processing, which can be tackled using cloud platforms and virtual machines designed for computationally intensive tasks, such as ML (Microsoft Azure, Google Cloud Platform, DigitalOcean, Amazon Web Services). Through appropriately trained models, it is possible to obtain classification maps, yielding information on land cover at a given time. The analysis of the information obtained from classification maps at different acquisition times allows for analysis of the changes that have occurred. Eventually, this information can be used to make decisions or fed back as input for further processing.
Following the general idea of the workflow, the required modules were selected. With respect to computational complexity and related issues (e.g., uniqueness of the observations, different approaches to data observation and recording, wide range of dimensionality), different public and private services for Web-based online processing were analyzed and compared in [61].
Among them, we selected the GEE cloud platform, the GEE cloud repository, and TF as the cloud processor, as shown in Figure 3. These services feature a high degree of interactivity and are usable without downloading or installing any software. However, it is important to point out that GEE and TF represent only one possible combination of elements in the process. GEE was chosen as it offers the versatility to exemplify opportunities in terms of data access/selection, visualization, and information fusion. TF was chosen as it offers a suitable, accessible, and broadly accepted environment in which to develop and adjust the deep learning model required for the segmentation task. As will be seen in more detail in the case study, the optical and SAR data were chosen as input data using the GEE Data Catalog, and the GEE Code Editor was used to generate the reference data. Complementary optical and SAR data in space and time were combined as input to a neural network for the task of urban area identification. A U-Net architecture was the model of choice for the classification, as it has been shown to deliver benchmark performance in semantic segmentation, handling large numbers of training samples and performing dense pixel-wise classification. Linked to different points in time, the different outputs were combined to generate change detection maps representing the evolution of urban growth over time. Finally, the results obtained from the change maps were analyzed to acquire knowledge about the impact of COVID-19 on urban growth.

3.2. Proposed System Workflow

The detailed pipeline used for our research is shown in Figure 3, and is described in this section. As already highlighted, among the possible solutions, we selected the GEE cloud platform, the GEE cloud repository, and TF as the cloud processor.
For easier interoperability between GEE and TF, the EE API provides methods for importing and exporting data in the TFRecord format. The first step consisted of gathering and setting up the imagery to be used as input to the neural network (different images should be chosen for training and prediction). After filtering the appropriate images among the available image collections, selecting the area and time period of interest, these were exported to Google Cloud Storage as TFRecords, which were then imported into a virtual machine (VM), as sketched below. Afterwards, the images selected for training and the reference data were stacked to create a single image from which individual samples could be accessed. The final multi-band stacked image was converted into a multidimensional array in which each new pixel stores a 256 × 256 patch of pixels for each band, from which training, validation, and testing datasets can be exported. In order to split the reference data with a balanced number of pixels for the classes of interest, pre-made geometries were used to sample the stack at strategic locations.
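A hedged sketch of this export step is given below; the asset ID, bucket name, and region coordinates are placeholders, while the TFRecord patch options follow the EE API:

```python
# Sketch: exporting a stacked multi-band image to Cloud Storage in
# TFRecord format, so that it can be consumed by TF.
import ee

ee.Initialize()

# Assumed to be the multi-band stack described above (optical bands +
# SAR bands + rasterized reference layer); placeholder asset ID.
stack = ee.Image('users/example/phoenix_stack')

# Approximate bounding box over the Phoenix metropolitan area.
region = ee.Geometry.Rectangle([-112.4, 33.2, -111.6, 33.9])

task = ee.batch.Export.image.toCloudStorage(
    image=stack,
    description='training_patches',
    bucket='my-bucket',                 # placeholder bucket
    fileNamePrefix='phoenix/training',
    region=region,
    scale=10,                           # 10 m ground sampling distance
    fileFormat='TFRecord',
    formatOptions={
        'patchDimensions': [256, 256],  # one sample per 256 x 256 patch
        'compressed': True,
    },
)
task.start()
```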
In order to give details useful for understanding the inner operations and their influence on processing, it must be taken into account that, even if a variety of GPUs are included in Colab (i.e., Nvidia T4s, K80s, or P4s and P100s), it is not possible to select which GPU to use at a specific time. Furthermore, the Colab notebook VM is sometimes not heavy-duty enough to complete an entire training job, especially for a very complex model or a large number of epochs. In these cases, it may be necessary to set up an alternative VM or to package the code for running large training jobs on GEE.
Finally, the trained model was used to make the predictions (in this case, the images were also in TFRecord format). The results were automatically saved in the cloud storage and were then available for any post-processing step; afterwards, the output of GEE can be directly embedded in different applications. In this study, as mentioned, the deployed model was employed in GEE to execute inferences for urban sprawl analysis on the area of interest, as described in the next section.

3.3. Neural Network Setup

In this study, the proposed U-Net model takes 256 × 256 pixel patches as inputs and outputs a per-pixel class probability. A mean squared error loss function on the sigmoidal output was used for optimization, since this task can be treated as a regression problem rather than a classification problem. Indeed, since the segmentation task is binary, a saturating activation function is suitable here.
Shallower networks were also considered; however, the best performance was achieved by the proposed architecture, composed of five encoder layers, five decoder layers, and one output layer producing a probabilistic confidence map for the urban and non-urban classes. Each encoder layer was composed of a linear stack of 2D convolution and batch normalization layers and an activation function (ReLU), followed by a max pooling operation that reduced the spatial resolution of the feature map by a factor of two. Each decoder layer was composed of concatenation, 2D convolution, and batch normalization layers and an activation function (ReLU). Lastly, a final convolutional layer performs a convolution along the channels for each individual pixel (kernel size of (1, 1)) and outputs the final segmentation mask.
The stochastic gradient descent (SGD) optimizer was used as the training algorithm [62], with a maximum of 50 epochs per training cycle and an initial learning rate of 0.01. Three different sets of images were considered: Sentinel-2 (S2) on its own, Sentinel-1 (S1) and S2, and pre-processed Sentinel-1 (S1_ARD) and S2.
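To make this setup concrete, the following minimal Keras sketch reflects the described configuration; the filter counts, the input depth of 12 bands, and the bottleneck block are assumptions not specified in the text, and the training dataset is assumed to come from a TFRecord pipeline such as the one sketched in Section 2.2:

```python
# Sketch of the described U-Net: five encoder blocks (Conv2D +
# BatchNorm + ReLU, then max pooling), five decoder blocks with skip
# connections, and a final 1x1 convolution with sigmoid output.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding='same')(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation('relu')(x)

def build_unet(input_shape=(256, 256, 12), depth=5, base_filters=16):
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []
    for d in range(depth):                        # encoder path
        x = conv_block(x, base_filters * 2 ** d)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base_filters * 2 ** depth)  # bottleneck (assumed)
    for d in reversed(range(depth)):              # decoder path
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skips[d]])
        x = conv_block(x, base_filters * 2 ** d)
    # Per-pixel probability of the urban class (sigmoid output).
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(x)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss='mean_squared_error')
# 'dataset' is the batched (image, label) tf.data pipeline.
model.fit(dataset, epochs=50)
```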

4. Case Study

This study was conducted on Phoenix (USA), in the central region of Arizona within the Sonoran Desert. It is the fifth biggest city in the US, and its area covers about 1,338 km2, with great heterogeneity in types of vegetation, surfaces, and soil characteristics (Figure 4). Since 1960, the city and metropolitan area have undergone major growth, with many of Phoenix's residential skyscrapers being built during that period [63,64,65,66,67,68]. In 2010, Phoenix became the sixth biggest city in the US, with a population of 1,445,632 and millions more citizens in nearby suburbs. In 2016, it was the second most rapidly growing metropolitan area in the US after Las Vegas. In 2020, according to the most recent Census, the population was 1,608,139, an increase of 11.2% in 10 years. More than four million additional residents moved to Arizona over the last four decades, forcing surrounding cities to expand over vast areas of fragile ecosystems, particularly the desert biomes close to Phoenix and Tucson, which were almost uninhabited areas characterized by a scarcity of rainfall and vegetation. For instance, outlying suburbs, such as Buckeye, grew by nearly 80 percent over the past 10 years, with new high-rise residential buildings and row-houses sprawling outward from the urban limits into the desert, as reported in The New York Times in August 2021 [66]. The uncontrolled expansion and population increase in areas not adequately equipped to handle large populations have created numerous problems, such as the provision of water for all the new residents and their construction sites, especially in the context of droughts and hot summers that drain rivers and reservoirs.
In this study, information retrieved from optical and SAR data was combined to improve the final change detection maps. In particular, Sentinel-2 and Sentinel-1 images of the Copernicus ESA mission were chosen. Both Sentinel-1 and Sentinel-2 data are open access and are used in a wide range of applications [69,70,71,72].

4.1. Sentinel-2 Data Description

The constellation of two Sentinel-2 satellites offers multispectral imagery in several spectral bands, with a global five-day revisit time and a ground sampling distance of up to 10 m, which was adequate for the case study. Among these bands, the short wave infrared region of the spectrum is very important for the detection of urban areas and their separation from bare soil.
Optical RS data are widely used for classification problems due to the spectral information they provide that allows discrimination between various materials at a high level of detail [73].
For this reason, Level-2A data from the Sentinel-2 MultiSpectral Instrument (MSI) was used as the primary data source; the data were retrieved from the GEE repository [74].
Level-2A refers to the ortho-rectified bottom-of-atmosphere (BOA) reflectance product; bands with a spatial resolution of 10 and 20 m (B2, B3, B4, B5, B6, B7, B8, B8A, B11, B12) were used for the analysis. Overall, a time series of Sentinel-2 composites was analyzed for the specified case study during the period 2018–2021. The first available images of the area of Phoenix were acquired in December 2018; for the years 2019, 2020 and 2021, the months chosen were March and September, in order to monitor urban growth every six months. Image composites were produced by averaging images with reduced cloud coverage for a given month, reducing the presence of noise and local anomalies, as in the sketch below.
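The following sketch shows one way to build such a monthly composite with the EE Python API; the point location is approximate and the cloud-cover threshold is an assumption:

```python
# Sketch: monthly Sentinel-2 Level-2A composite over the area of
# interest, averaging low-cloud scenes for a given month.
import ee

ee.Initialize()

aoi = ee.Geometry.Point([-112.07, 33.45])  # approximate Phoenix centre

composite = (ee.ImageCollection('COPERNICUS/S2_SR')
             .filterBounds(aoi)
             .filterDate('2019-03-01', '2019-04-01')
             # Keep only scenes with limited cloud cover (assumed 10%).
             .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10))
             .select(['B2', 'B3', 'B4', 'B5', 'B6', 'B7',
                      'B8', 'B8A', 'B11', 'B12'])
             .mean())
```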

4.2. Sentinel-1 Data Description

Complementary C-band Sentinel-1 SAR products, namely ground range detected (GRD) data [75], were used for the case study. Sentinel-1 satellites acquire SAR data with a global six-day revisit time under any weather conditions, thus providing day-and-night, all-weather coverage. In addition, by examining the radar signal amplitude and exploiting the polarization multiplicity, the main properties of built structures can be evaluated; for these reasons, SAR data are widely used in urban environments. The selected Sentinel-1 data cover the same time period as the Sentinel-2 data. The image polarizations were VV (vertical transmitting and vertical receiving) and VH (vertical transmitting and horizontal receiving), and the data were chosen from the interferometric wide swath (IW) acquisition mode with an ascending orbit. In this case, the high-resolution Level-1 GRD data had a 10 m spatial resolution. An 'angle' band was included in each scene, containing at every point the approximate incidence angle from the ellipsoid.
As already highlighted, the optical and SAR data combination was expected to result in feature fusion, since data with different inner characteristics were considered. In particular, while optical data bring information on the Earth surface composition and materials, the contribution of SAR data when polarized is mainly related to the geometry of the surface (i.e., flat, rough, tall, etc.). Therefore, the goal in taking multimodal data in the proposed model was to achieve competitive integration of optical and SAR data at a signal level by enhancing the overall final information on man-made structures, or, for instance, on vegetated areas [76,77].
The Sentinel-1 GRD data used in this study were retrieved, as already pointed out, from the GEE catalogue, and the data in linear scale under the 'Float' extension were retained (i.e., image collection ID: COPERNICUS/S1_GRD_FLOAT). The final terrain-corrected values were used without conversion to decibels. It is important to emphasize that this choice guarantees that the statistical properties of the data are preserved, a requirement that must, most often, be respected for meaningful outputs [75,78].
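The selection criteria above (linear-scale float collection, VV/VH polarizations, IW mode, ascending orbit) can be expressed with the EE Python API as in the following sketch; the location and dates are placeholders:

```python
# Sketch: selecting linear-scale Sentinel-1 GRD data over the area of
# interest, matching the polarizations, mode, and orbit described above.
import ee

ee.Initialize()

aoi = ee.Geometry.Point([-112.07, 33.45])  # approximate Phoenix centre

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT')
      .filterBounds(aoi)
      .filterDate('2019-03-01', '2019-04-01')
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))
      .select(['VV', 'VH', 'angle']))
```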
GEE developers provide Sentinel-1 data pre-processed with the following operations: orbit file correction, GRD border noise removal, thermal noise removal, radiometric calibration, and terrain correction using the SRTM 30 m DEM, or the ASTER DEM for areas above 60 degrees latitude where SRTM is not available [79,80]. Further pre-processing steps are often not applied directly to the archived data, in order to preserve the information content and user freedom; however, the authors also tested a framework for preparing Sentinel-1 analysis-ready data (S1_ARD), which applies additional border noise correction, speckle filtering, and radiometric terrain normalization, following [78].

4.3. Preparation of Reference Data

The preparation of the reference data for training and evaluation was performed manually; the data have been made available for open access [81,82].
The images used for this purpose come from the NAIP database, acquired at a one-meter ground sampling distance (GSD) and available in GEE. The spatial resolution of the data allows creation of reference data with a high level of detail by visual interpretation. The reference data were created by drawing polygons directly onto the NAIP optical images as imported in GEE, after applying a spatial filter on the region of interest, and a temporal filter for the time interval 1 January 2019–31 December 2019, and finally selecting the scenes with the smallest cloud coverage. The digitized reference data were finally transformed from vector to raster data with a spatial resolution of 10 m. As well as being imported into the GEE code editor, the data can be exported in different formats (CSV, SHP (shapefile), GeoJSON, KML, KMZ or TFRecord) and represent an urban layer which can be made freely available to the community for training and testing urban detection algorithms, using the described methodology as a possible benchmark. The reference data created in this study can be found at the links [81,82].
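As a minimal sketch of the vector-to-raster conversion, the digitized polygons can be painted onto a constant background image in EE and then sampled at the Sentinel scale; the asset ID is a placeholder:

```python
# Sketch: rasterizing the digitized reference polygons into a binary
# 'urban' band that can be stacked with the input imagery.
import ee

ee.Initialize()

polygons = ee.FeatureCollection('users/example/phoenix_reference')  # placeholder
urban = ee.Image(0).byte().paint(polygons, 1).rename('urban')
# 'urban' can now be stacked with the optical/SAR bands and exported
# at a 10 m scale, matching the Sentinel spatial resolution.
```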
Data collected over Phoenix covering an area of 207 km2 were split into three subsets used for training, validation, and testing. In order to create a more heterogeneous dataset for urban regions in Arizona, with the aim of easily expanding the area of potential application outside the Phoenix metropolitan area, additional reference data were collected for the city of Tucson (Figure 5) over an area of 9.15 km2.

5. Results and Discussion

In this section, the results of the proposed method are first presented, based on the chosen metrics. Then, a qualitative urban sprawl analysis is carried out over the Phoenix area, and a quantitative assessment is reported for a smaller region, Queen Creek. Finally, the obtained results are further inspected to assess the impact of the COVID-19 pandemic.

5.1. Classification Results and Accuracy

The metrics used for evaluating the proposed models were precision (1), recall (2), and F1 score (3), defined by the following equations:
\[ \mathrm{Precision} = \frac{TP}{TP + FP} \tag{1} \]
\[ \mathrm{Recall} = \frac{TP}{TP + FN} \tag{2} \]
\[ F1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \tag{3} \]
with TP, FP, TN and FN standing, respectively, for true positives, false positives, true negatives, and false negatives.
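As a small worked example of these formulas (the pixel counts below are illustrative only, not results from the paper):

```python
# Worked example: precision, recall, and F1 from pixel counts.
def f1_metrics(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts: 900 true positives, 100 false positives,
# 150 false negatives.
print(f1_metrics(tp=900, fp=100, fn=150))
# -> (0.9, 0.857..., 0.878...)
```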
Table 1 shows the validation results for the three different datasets. Standalone S2 provides adequate classification performance, with the fusion of SAR and optical data improving the results. As expected, the overall model performance improved when merging optical and SAR data, since their complementary nature allowed for the competitive integration of textural, spatial, spectral, and temporal characteristics.
When both S2 and S1_ARD were considered, the experiments did not yield superior results, in spite of a notable increase in recall. Considering the nature of SAR images of urban areas, the main explanation for this was assumed to be the impact of spatial speckle filtering, which disturbs salient signatures related to urban structures, such as building corners, balconies, or the bottom ends of facades. Signatures of this kind are, for example, the main source of information when estimating building deformations with persistent scatterer interferometry. Speckle filtering supports the analysis of extended surfaces with a diffuse signal response, e.g., crop fields; analyzing urban sprawl, however, favors focusing on salient and spatially concentrated signatures at the maximum available spatial resolution.
In the next section, the urban sprawl analysis is carried out with the model trained using S2 and S1 data.

5.2. Urban Sprawl Analysis

A multi-temporal analysis of the entire area of Phoenix (4,714 km2) was performed by employing the deployed model in GEE to draw inferences for urban detection. Specifically, urban detection maps were obtained for December 2018, March 2020, and September 2021. From these, change maps were derived (referring to the periods December 2018–March 2020 and March 2020–September 2021, respectively pre- and post-COVID-19 pandemic outbreak) and are represented in Figure 6. Changes reported in the maps are areas that were classified as: (1) urban in all three images (Urban permanent); (2) not urban in December 2018 and urban in March 2020, called New Urban 1 (2018–2020); (3) not urban in March 2020 and urban in September 2021, called New Urban 2 (2020–2021). A simple sketch of this combination is given below.
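The change classes can be derived by combining the three binary urban maps with server-side logical operators, as in the following sketch (asset IDs are placeholders):

```python
# Sketch: deriving the three change classes from binary urban maps.
import ee

ee.Initialize()

u18 = ee.Image('users/example/urban_2018_12')  # placeholder assets
u20 = ee.Image('users/example/urban_2020_03')
u21 = ee.Image('users/example/urban_2021_09')

urban_permanent = u18.And(u20).And(u21)  # urban in all three maps
new_urban_1 = u18.Not().And(u20)         # built between Dec 2018 and Mar 2020
new_urban_2 = u20.Not().And(u21)         # built between Mar 2020 and Sep 2021
```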
The choice of this time frame was made mainly to create a change map of the dynamic Phoenix metropolitan area over recent years and to investigate whether the health emergency linked to COVID-19 had affected the urbanization phenomenon that has characterized the city for decades. The lack of comprehensive reference data did not allow for a quantitative analysis of the results over the whole Phoenix area. Nevertheless, the availability of reference data on a smaller region enabled quantitative evaluation of the results, as reported in Section 5.3.
Regarding the whole Phoenix area, a visual comparison of the true color combinations of the Sentinel-2 data was carried out, with Figure 7 and Figure 8 reporting two subsets of interest for the two periods 2018–2020 and 2020–2021, showing the reliability of the method in identifying new built-up areas.
The visual interpretation of the change maps shows that areas exhibiting the fastest urban growth were located outside the Phoenix city boundaries, and included both industrial and residential areas.

5.3. Validation on Queen Creek

A limited quantitative assessment was carried out on Queen Creek ( Figure 9), a city southeast of Phoenix, as an official map indicating development areas could be compared to the change detection results ( Figure 10).
The document was released by the Queen Creek administration in 2018 and is available at the link in [83]. It reports areas of "Active home construction", "Completed", "Active development construction", and "Future development". For our purposes, only the "Active home construction" labels were considered. The map is downloadable in .png or .pdf format but is not georeferenced; therefore, the overlay on the results and the comparison of each polygon were carried out manually.
A visual inspection of each polygon in the mentioned class of interest was carried out on S2 images for the periods defined above (December 2018 and September 2021). A total of 17 areas corresponding to "Active home construction" were positively identified by the algorithm and reported as true positives in Figure 11. Furthermore, 17 polygons labeled as "Completed" in the official Queen Creek document were under construction in the time frame of interest and were correctly detected by the algorithm, as verified by visual inspection. The latter were reported as "true positive verified by visual inspection" in Figure 11, along with examples for the visual validation of changes occurring in the period December 2018–September 2021.
In conclusion, by visually inspecting each polygon, a total of 34 areas of change were correctly detected, with only two and four areas containing undetected changes and false alarms, respectively. The results demonstrated that the ancillary information available on a subset of the metropolitan area agreed well with the produced maps of detected changes, highlighting the effectiveness of the proposed method.

5.4. Consideration of COVID-19 Impact on Urban Growth Rate

As demonstrated in previous sections, the proposed change detection procedure was reliable, and can be adapted to any period of interest by simply changing the dates of the Copernicus data to be retrieved.
Results for Queen Creek, referring to the periods December 2018–March 2020 and March 2020–September 2021 and reported in Figure 10, were further inspected to assess the impact of the COVID-19 pandemic. The city growth was computed on a total urban area of 29.6 km2. In the time frame December 2018–March 2020 (15 months), the expansion of the city was 3.53 km2; therefore, an average growth of 9.53% per year was observed. In the period March 2020–September 2021 (18 months), the growth was measured as 2.72 km2, an average of 6.11% growth per year, as reproduced in the sketch below. The considered time spans refer to the periods before and after the COVID-19 outbreak, enabling a quantitative evaluation of the pandemic impact on Queen Creek, where urban expansion slowed down by 35%. Official data from the U.S. Census Bureau [84] reported a drop of 10% between May 2019 and May 2020 in the number of permits to build single family homes in Arizona. This was observable in our results, with a higher impact on dynamic areas, such as Queen Creek.
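The annualized growth rates above can be reproduced assuming a simple linear annualization over the total urban area; small differences with respect to the reported figures come from rounding:

```python
# Reproducing the reported growth rates under linear annualization.
def annual_growth(new_area_km2, total_km2, months):
    return new_area_km2 / total_km2 * 12 / months * 100  # % per year

print(annual_growth(3.53, 29.6, 15))  # ~9.5% per year (Dec 2018 to Mar 2020)
print(annual_growth(2.72, 29.6, 18))  # ~6.1% per year (Mar 2020 to Sep 2021)
print(1 - 6.11 / 9.53)                # ~0.36, i.e., the reported ~35% slowdown
```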
In general, for the whole selected area of Phoenix, a 22% decrease in the growth rate per year was observed by comparing the total surfaces of newly built areas in the two change detection maps, before and after the outbreak of the pandemic. This agrees well with data from the US Census Bureau, which reported a 25% decrease after the outbreak of the pandemic in the population growth of major metropolitan areas in the US, a quantity strongly correlated with urban growth [85].
The presented analysis is just one example of the use of the proposed method. The chosen time frames allow a straightforward analysis of the link between COVID-19 and urban sprawl evolution. Further studies and discussion should address the specific causes of this slowdown, but these are beyond the scope of this study.

6. Conclusions

In this study a general framework for the analysis of urban growth through ML techniques implemented on a cloud platform was presented. The advantages of using these powerful tools for monitoring territory have been extensively discussed. The availability of open satellite images with a temporal resolution of several days can readily support the mapping of changes for areas of interest.
In the proposed model, we selected GEE as the cloud platform, together with the TF library. This combination proved effective for the monitoring of complex urban dynamics over large, fast-growing areas. The choice of multimodal data, the particular network architecture, the different proposed datasets, and the selected area were intended to provide an example of how cloud computing can enable the integration of AI, data fusion, and change detection techniques in the design of a complex tool, useful for decision-making by policy makers in urban monitoring.
As a case study, the urban growth of Phoenix was analyzed, and the impact that the COVID-19 pandemic had on the growth of the area of Queen Creek was assessed. In order to focus on different periods of interest, or to derive a full multitemporal evolution of the area, the workflow can be kept unaltered by just changing the dates of the images of interest or adding new ones.
The easy usability of the selected platforms, specifically GEE and TF, and the available computing power contribute to the adaptability and flexibility of the method, enabling powerful and timely monitoring of the territory within the limited economic resources that public institutions often have available.
An area larger than 200 km2 was manually annotated using high-resolution data, with the resulting urban layer made freely available to the community for training and testing urban detection algorithms, using the described methodology as a possible benchmark.
Future work will explore possible extensions of the proposed model and seek to make further improvements. One idea is to consider other pre-processing steps in the data and to analyze their impact. Another objective will be to work on web platform creation to offer a turnkey tool which is ready to use.

Author Contributions

Conceptualization by C.Z., D.C., S.A., S.L.U. and P.R.; data curation by C.Z., D.C. and S.A.; formal analysis by C.Z., D.C. and S.A.; funding acquisition by C.Z. and S.L.U.; investigation by C.Z., D.C. and S.A.; methodology by C.Z., D.C., S.A., S.L.U. and P.R.; project administration by D.C., S.A., S.L.U. and P.R.; software by C.Z. and D.C.; supervision by D.C., S.A., S.L.U. and P.R.; validation by C.Z., D.C., S.A., S.L.U. and P.R.; writing—original draft by C.Z.; writing—review and editing by C.Z., D.C., S.A., S.L.U. and P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by POR Campania FSE 2014/2020—Obiettivo Specifico 14—Azione 10.4.5, http://porfesr.regione.campania.it/ (accessed on 1 January 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. United Nations. United Nations Committee of Experts on Global Geospatial Information Management. Available online: https://ggim.un.org/ (accessed on 1 January 2022).
  2. Kersten, G.E.; Mikolajuk, Z.; Gar-On Yeh, A. Decision Support Systems for Sustainable Development: A Resource Book of Methods and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2000; pp. 13–27. [Google Scholar] [CrossRef]
  3. Changnon, S.A. The Great Flood of 1993: Causes, Impacts, and Responses, 1st ed.; Routledge: Abingdon, UK; Taylor and Francis Group: Abingdon, UK, 1996. [Google Scholar] [CrossRef]
  4. Traver, R. Flood Risk Management: Call for a National Strategy; ASCE Library: Reston, VA, USA, 2014. [Google Scholar] [CrossRef] [Green Version]
  5. Konrad, C.P. U.S. Geological Survey Fact Sheet 076-03. Available online: https://pubs.usgs.gov/fs/fs07603/ (accessed on 29 November 2016).
  6. Rees, W.E. Ecological footprints and appropriated carrying capacity: What urban economics leaves out. Environ. Urban. 1992, 4, 121–130. [Google Scholar] [CrossRef]
  7. Jordan, Y.C.; Ghulam, A.; Chu, M.L. Assessing the Impacts of Future Urban Development Patterns and Climate Changes on Total Suspended Sediment Loading in Surface Waters Using Geoinformatics. J. Environ. Inform. 2014, 24, 65–79. [Google Scholar] [CrossRef] [Green Version]
  8. Pinto, F. Urban Planning and Climate Change: Adaptation and Mitigation Strategies. TeMA J. Land Use Mobil. Environ. 2014. [CrossRef]
  9. Cherlet, M.; Hutchinson, C.; Reynolds, J.; Hill, J.S.S.; von Maltitz, G. World Atlas of Desertification; Publication Office of the European Union: Luxembourg, 2018. [Google Scholar]
  10. Johnson, M.P. Environmental Impacts of Urban Sprawl: A Survey of the Literature and Proposed Research Agenda. Environ. Plan. A Econ. Space 2001, 33, 717–735. [Google Scholar] [CrossRef] [Green Version]
  11. Alaoui, A.; Ibara, B.O.; Ettaki, B.; Zerouaoui, J. Survey of Process of Data Discovery and Environmental Decision Support Systems. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 2021, 10, 46–50. [Google Scholar] [CrossRef]
  12. Walling, E.; Vaneeckhaute, C. Developing successful environmental decision support systems: Challenges and best practices. J. Environ. Manag. 2020, 264, 110513. [Google Scholar] [CrossRef]
  13. Nicola, M.; Alsafi, Z.; Sohrabi, C.; Kerwan, A.; Al-Jabir, A.; Iosifidis, C.; Agha, M.; Agha, R. The socio-economic implications of the coronavirus pandemic (COVID-19): A review. Int. J. Surg. 2020, 78, 185–193. [Google Scholar] [CrossRef]
  14. Loeffler-Wirth, H.; Schmidt, M.; Binder, H. Covid-19 Transmission Trajectories–Monitoring the Pandemic in the Worldwide Context. Viruses 2020, 12, 777. [Google Scholar] [CrossRef]
  15. Giordano, G.; Blanchini, F.; Bruno, R.; Colaneri, P.; Di Filippo, A.; Di Matteo, A.; Colaneri, M. Modelling the COVID-19 epidemic and implementation of population-wide interventions in Italy. Nat. Med. 2020, 26, 855–860. [Google Scholar] [CrossRef]
  16. Sebastianelli, A.; Mauro, F.; Di Cosmo, G.; Passarini, F.; Carminati, M.; Ullo, S.L. AIRSENSE-TO-ACT: A Concept Paper for COVID-19 Countermeasures Based on Artificial Intelligence Algorithms and Multi-Source Data Processing. ISPRS Int. J. Geo-Inf. 2021, 10, 34. [Google Scholar] [CrossRef]
  17. Ullo, S.L.; Zarro, C.; Wojtowicz, K.; Meoli, G.; Focareta, M. LiDAR-Based System and Optical VHR Data for Building Detection and Mapping. Sensors 2020, 20, 1285. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Rajendran, G.B.; Kumarasamy, U.M.; Zarro, C.; Divakarachari, P.B.; Ullo, S.L. Land-Use and Land-Cover Classification Using a Human Group-Based Particle Swarm Optimization Algorithm with an LSTM Classifier on Hybrid Pre-Processing Remote-Sensing Images. Remote Sens. 2020, 12, 4135. [Google Scholar] [CrossRef]
  19. Xu, Y.; Du, B.; Zhang, L.; Cerra, D.; Pato, M.; Carmona, E.; Prasad, S.; Yokoya, N.; Hänsch, R.; Le Saux, B. Advanced Multi-Sensor Optical Remote Sensing for Urban Land Use and Land Cover Classification: Outcome of the 2018 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1709–1724. [Google Scholar] [CrossRef]
  20. Weng, Q.E. Global Urban Monitoring and Assessment through Earth Observation, 1st ed.; CRC Press: Boca Raton, FL, USA, 2014. [Google Scholar] [CrossRef]
  21. Chien, S.; Tanpipat, V. Remote Sensingremote sensingof Natural Disastersremote sensingof natural disasters. In Encyclopedia of Sustainability Science and Technology; Meyers, R.A., Ed.; Springer: New York, NY, USA, 2012; pp. 8939–8952. [Google Scholar] [CrossRef]
  22. Pepe, M.; Costantino, D.; Alfio, V.S.; Vozza, G.; Cartellino, E. A Novel Method Based on Deep Learning, GIS and Geomatics Software for Building a 3D City Model from VHR Satellite Stereo Imagery. ISPRS Int. J. Geo-Inf. 2021, 10, 697. [Google Scholar] [CrossRef]
  23. Temitope Yekeen, S.; Balogun, A.; Wan Yusof, K.B. A novel deep learning instance segmentation model for automated marine oil spill detection. ISPRS J. Photogramm. Remote Sens. 2020, 167, 190–200. [Google Scholar] [CrossRef]
  24. Li, Q.; Shi, Y.; Auer, S.; Roschlaub, R.; Möst, K.; Schmitt, M.; Glock, C.; Zhu, X. Detection of Undocumented Building Constructions from Official Geodata Using a Convolutional Neural Network. Remote Sens. 2020, 12, 3537. [Google Scholar] [CrossRef]
  25. Orsomando, F.; Lombardo, P.; Zavagli, M.; Costantini, M. SAR and Optical Data Fusion for Change Detection. In Proceedings of the 2007 Urban Remote Sensing Joint Event, Paris, France, 11–13 April 2007; pp. 1–9. [Google Scholar] [CrossRef]
  26. Adriano, B.; Yokoya, N.; Xia, J.; Baier, G. Big Earth Observation Data Processing for Disaster Damage Mapping. In Handbook of Big Geospatial Data; Werner, M., Chiang, Y.Y., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 99–118. [Google Scholar] [CrossRef]
  27. Yang, X.; Lo, C. Using a time series of satellite imagery to detect land use and land cover changes in the Atlanta, Georgia metropolitan area. Int. J. Remote Sens. 2002, 23, 1775–1798. [Google Scholar] [CrossRef]
  28. Charbonneau, L.; Morin, D.; Royer, A. Analysis of different methods for monitoring the urbanization process. Geocarto Int. 1993, 8, 17–25. [Google Scholar] [CrossRef]
  29. Yang, X. Urban Remote Sensing: Monitoring, Synthesis and Modeling in the Urban Environment; 2nd ed.; Wiley-Blackwell: Hoboken, NJ, USA, 2021. [Google Scholar] [CrossRef]
  30. Beneke, C.; Schneider, A.; Sulla-Menashe, D.; Tatem, A.; Tan, B. Detecting change in urban areas at continental scales with MODIS data. Remote Sens. 2015, 158, 331–347. [Google Scholar] [CrossRef]
  31. Tamiminia, H.; Salehi, B.; Mahdianpari, M.; Quackenbush, L.; Adeli, S.; Brisco, B. Google Earth Engine for geo-big data applications: A meta-analysis and systematic review. ISPRS J. Photogramm. Remote Sens. 2020, 164, 152–170. [Google Scholar] [CrossRef]
  32. Schmitt, M.; Zhu, X. Data Fusion and Remote Sensing—An Ever-Growing Relationship. IEEE Geosci. Remote Sens. Mag. 2016, 4, 6–23. [Google Scholar] [CrossRef]
  33. Fatone, L.; Maponi, P.; Zirilli, F. Fusion of SAR/optical images to detect urban areas. In Proceedings of the IEEE/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas (Cat. No.01EX482), Rome, Italy, 8–9 November 2001; pp. 217–221. [Google Scholar] [CrossRef]
  34. Sha, M.; Tian, G. An analysis of spatiotemporal changes of urban landscape pattern in Phoenix metropolitan region. International Conference on Ecological Informatics and Ecosystem Conservation (ISEIS 2010). Procedia Environ. Sci. 2010, 2, 600–604. [Google Scholar] [CrossRef] [Green Version]
  35. Galletti, C.S.; Myint, S.W. Land-Use Mapping in a Mixed Urban-Agricultural Arid Landscape Using Object-Based Image Analysis: A Case Study from Maricopa, Arizona. Remote Sens. 2014, 6, 6089–6110. [Google Scholar] [CrossRef] [Green Version]
  36. Li, X.; Myint, S.; Zhang, Y.; Galletti, C.; Zhang, X.; II, B. Object-based land-cover classification for metropolitan Phoenix, Arizona, using aerial photography. Int. J. Appl. Earth Obs. Geoinf. 2014, 33, 321–330. [Google Scholar] [CrossRef]
  37. Yang, L.; Siddiqi, A.; de Weck, O.L. Urban Roads Network Detection from High Resolution Remote Sensing. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 7431–7434. [Google Scholar] [CrossRef]
  38. National Agriculture Imagery Program. Available online: https://developers.google.com/earth-engine/datasets/catalog/USDA_NAIP_DOQQ (accessed on 1 January 2022).
  39. Earth Engine Data Catalog. Available online: https://developers.google.com/earth-engine/datasets/catalog (accessed on 1 January 2022).
  40. Fattore, C.; Abate, N.; Faridani, F.; Masini, N.; Lasaponara, R. Google Earth Engine as Multi-Sensor Open-Source Tool for Supporting the Preservation of Archaeological Areas: The Case Study of Flood and Fire Mapping in Metaponto, Italy. Sensors 2021, 21, 1791. [Google Scholar] [CrossRef]
  41. Mayer, T.; Poortinga, A.; Bhandari, B.; Nicolau, A.P.; Markert, K.; Thwal, N.S.; Markert, A.; Haag, A.; Kilbride, J.; Chishtie, F.; et al. Deep learning approach for Sentinel-1 surface water mapping leveraging Google Earth Engine. ISPRS Open J. Photogramm. Remote Sens. 2021, 2, 100005. [Google Scholar] [CrossRef]
  42. Bar, S.; Parida, B.R.; Pandey, A.C. Landsat-8 and Sentinel-2 based Forest fire burn area mapping using machine learning algorithms on GEE cloud platform over Uttarakhand, Western Himalaya. Remote Sens. Appl. Soc. Environ. 2020, 18, 100324. [Google Scholar] [CrossRef]
  43. Amani, M.; Ghorbanian, A.; Ahmadi, S.A.; Kakooei, M.; Moghimi, A.; Mirmazloumi, S.M.; Moghaddam, S.H.A.; Mahdavi, S.; Ghahremanloo, M.; Parsian, S.; et al. Google Earth Engine Cloud Computing Platform for Remote Sensing Big Data Applications: A Comprehensive Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5326–5350. [Google Scholar] [CrossRef]
  44. Poortinga, A.; Thwal, N.S.; Khanal, N.; Mayer, T.; Bhandari, B.; Markert, K.; Nicolau, A.P.; Dilger, J.; Tenneson, K.; Clinton, N.; et al. Mapping sugarcane in Thailand using transfer learning, a lightweight convolutional neural network, NICFI high resolution satellite imagery and Google Earth Engine. ISPRS Open J. Photogramm. Remote Sens. 2021, 1, 100003. [Google Scholar] [CrossRef]
  45. Google. Tensorflow. Available online: https://www.tensorflow.org/ (accessed on 1 January 2022).
  46. Ertam, F.; Aydın, G. Data classification with deep learning using Tensorflow. In Proceedings of the 2017 International Conference on Computer Science and Engineering (UBMK), Antalya, Turkey, 5–8 October 2017; pp. 755–758. [Google Scholar] [CrossRef]
  47. Demirović, D.; Skejić, E.; Šerifović–Trbalić, A. Performance of Some Image Processing Algorithms in Tensorflow. In Proceedings of the 2018 25th International Conference on Systems, Signals and Image Processing (IWSSIP), Maribor, Slovenia, 20–22 June 2018; pp. 1–4. [Google Scholar] [CrossRef]
  48. Google Earth Engine and Tensorflow. Available online: https://developers.google.com/earth-engine/guides/tensorflow (accessed on 1 January 2022).
  49. Google Earth Engine and Tensorflow Examples. Available online: https://developers.google.com/earth-engine/guides/tf_examples (accessed on 1 January 2022).
  50. Colaboratory. Available online: https://colab.research.google.com/ (accessed on 1 January 2022).
  51. Ullo, S.; Del Rosso, M.P.; Sebastianelli, A.; Puglisi, E.; Bernardi, M.; Cimitile, M. How to Develop Your Network with Python and Keras; IET Publishing: London, UK, 2021; pp. 131–158. [Google Scholar] [CrossRef]
  52. Singhla, R.; Singh, P.; Madaan, R.; Panda, S. Image Classification Using Tensor Flow. In Proceedings of the 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), Coimbatore, India, 25–27 March 2021; pp. 398–401. [Google Scholar] [CrossRef]
  53. Csaybar Website. Available online: https://csaybar.github.io/blog/2019/06/21/eetf2/ (accessed on 1 January 2022).
  54. Adrian, J.; Sagan, V.; Maimaitijiang, M. Sentinel SAR-optical fusion for crop type mapping using deep learning and Google Earth Engine. ISPRS J. Photogramm. Remote Sens. 2021, 175, 215–235. [Google Scholar] [CrossRef]
  55. Asma, S.B.; Abdelhamid, D.; Youyou, L. U-Net Based Classification For Urban Areas In Algeria. In Proceedings of the 2020 Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS), Tunis, Tunisia, 9–11 March 2020; pp. 101–104. [Google Scholar] [CrossRef]
  56. Zhang, W.; Tang, P.; Zhao, L.; Huang, Q. A Comparative Study of U-Nets with Various Convolution Components for Building Extraction. In Proceedings of the 2019 Joint Urban Remote Sensing Event (JURSE), Vannes, France, 22–24 May 2019; pp. 1–4. [Google Scholar] [CrossRef]
  57. Duan, Y.; Sun, L. Buildings Extraction from Remote Sensing Data Using Deep Learning Method Based on Improved U-Net Network. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 3959–3961. [Google Scholar] [CrossRef]
  58. McGlinchy, J.; Johnson, B.; Muller, B.; Joseph, M.; Diaz, J. Application of UNet Fully Convolutional Neural Network to Impervious Surface Segmentation in Urban Environment from High Resolution Satellite Imagery. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 3915–3918. [Google Scholar] [CrossRef]
  59. Siddique, N.; Paheding, S.; Elkin, C.P.; Devabhaktuni, V. U-Net and Its Variants for Medical Image Segmentation: A Review of Theory and Applications. IEEE Access 2021, 9, 82031–82057. [Google Scholar] [CrossRef]
  60. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical image computing and computer-assisted intervention (MICCAI), Munich, Germany, 5–9 October 2015. [Google Scholar]
  61. Sudmanns, M.; Tiede, D.; Lang, S.; Bergstedt, H.; Trost, G.; Augustin, H.; Baraldi, A.; Blaschke, T. Big Earth data: Disruptive changes in Earth observation data management and analysis? Int. J. Digit. Earth 2020, 13, 832–850. [Google Scholar] [CrossRef] [PubMed]
  62. Sutskever, I.; Martens, J.; Dahl, G.; Hinton, G. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16 June–21 June 2013; pp. 1139–1147. [Google Scholar]
  63. Government Services. City of Phoenix. Available online: https://www.phoenix.gov/ (accessed on 1 January 2022).
  64. Phoenix, Arizona. Available online: https://en.wikipedia.org/wiki/Phoenix,_Arizona (accessed on 1 January 2022).
  65. Taking a Look at Census 2010. City and Town Population Totals: 2010–2019. Available online: https://www.census.gov/data/tables/time-series/demo/popest/2010s-total-cities-and-towns.html (accessed on 1 January 2022).
  66. Healy, J.; No Large City Grew Faster than Phoenix. The New York Times: Census Updates. Available online: https://www.nytimes.com/2021/08/12/us/phoenix-census-fastest-growing-city.html (accessed on 1 January 2022).
  67. Martinez, N. Urban Sprawl in Arizona, Commercial Growth and the Effects of It. Available online: https://storymaps.arcgis.com/stories/c51f38e57cc04c35b898e9b30a9dd0d5 (accessed on 12 February 2021).
  68. Kolankiewicz, L.; Roy Beck, E.A. Population Growth and the Diminishing Natural State of Arizona; NumbersUSA: Arlington County, VA, USA, 2020. [Google Scholar]
  69. Benedetti, A.; Picchiani, M.; Del Frate, F. Sentinel-1 and Sentinel-2 Data Fusion for Urban Change Detection. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 1962–1965. [Google Scholar] [CrossRef]
  70. Hafner, S.; Ban, Y.; Nascetti, A. Exploring the Fusion of Sentinel-1 SAR and Sentinel-2 MSI Data for Built-Up Area Mapping Using Deep Learning. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 4720–4723. [Google Scholar] [CrossRef]
  71. Notarnicola, C.; Asam, S.; Jacob, A.; Marin, C.; Rossi, M.; Stendardi, L. Mountain crop monitoring with multitemporal Sentinel-1 and Sentinel-2 imagery. In Proceedings of the 2017 9th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), Brugge, Belgium, 27–29 June 2017; pp. 1–4. [Google Scholar] [CrossRef]
  72. Yang, Q.; Wang, L.; Huang, J.; Lu, L.; Li, Y.; Du, Y.; Ling, F. Mapping Plant Diversity Based on Combined SENTINEL-1/2 Data—Opportunities for Subtropical Mountainous Forests. Remote Sens. 2022, 14, 492. [Google Scholar] [CrossRef]
  73. Gómez, C.; White, J.C.; Wulder, M.A. Optical remotely sensed time series data for land cover classification: A review. ISPRS J. Photogramm. Remote Sens. 2016, 116, 55–72. [Google Scholar] [CrossRef] [Green Version]
  74. Sentinel 2 Datasets. Available online: https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR (accessed on 1 January 2022).
  75. Sentinel 1 DATASETS. Available online: https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S1_GRD (accessed on 1 January 2022).
  76. Addabbo, P.; Focareta, M.; Marcuccio, S.; Votto, C.; Ullo, S. Land cover classification and monitoring through multisensor image and data combination. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 902–905. [Google Scholar] [CrossRef]
  77. Addabbo, P.; Focareta, M.; Marcuccio, S.; Votto, C.; Ullo, S.L. Contribution of Sentinel-2 data for applications in vegetation monitoring. Acta Imeko 2016, 5, 44. [Google Scholar] [CrossRef]
  78. Mullissa, A.; Vollrath, A.; Odongo-Braun, C.; Slagter, B.; Balling, J.; Gou, Y.; Gorelick, N.; Reiche, J. Sentinel-1 SAR Backscatter Analysis Ready Data Preparation in Google Earth Engine. Remote Sens. 2021, 13, 1954. [Google Scholar] [CrossRef]
  79. Ghorbanian, A.; Zaghian, S.; Asiyabi, R.M.; Amani, M.; Mohammadzadeh, A.; Jamali, S. Mangrove Ecosystem Mapping Using Sentinel-1 and Sentinel-2 Satellite Images and Random Forest Algorithm in Google Earth Engine. Remote Sens. 2021, 13, 2565. [Google Scholar] [CrossRef]
  80. Google Earth Engine and Sentinel pre-processing. Available online: https://developers.google.com/earth-engine/guides/sentinel1#sentinel-1-preprocessing (accessed on 1 January 2022).
  81. Zarro, C. Ground Truth of Phoenix. Available online: https://code.earthengine.google.com/?asset=users/chiarazarro/GroundTruthPHOENIX (accessed on 1 January 2022).
  82. Zarro, C. Ground Truth of Tucson. Available online: https://code.earthengine.google.com/?asset=users/chiarazarro/GroundTruthTUCSON (accessed on 1 January 2022).
  83. Administration, Q.C. Queen Creek Development Map. Available online: https://qcgis.maps.arcgis.com/apps/View/index.html?appid=69f33e00224d4ad78c462be9f412d628 (accessed on 29 November 2016.).
  84. National Association of Home Builders. Building Permits by State and Metro Area. Available online: https://www.nahb.org/news-and-economics/housing-economics/state-and-local-data/building-permits-by-state-and-metro-area (accessed on 17 February 2022).
  85. Bureau, U.S.C. Pandemic Population Change across Metro America: Accelerated Migration, Less Immigration, Fewer Births and More Deaths. Available online: https://www.brookings.edu/research/pandemic-population-change-across-metro-america-accelerated-migration-less-immigration-fewer-births-and-more-deaths/ (accessed on 20 May 2021).
Figure 1. The U-Net architecture: each blue box corresponds to a multi-channel feature map, with the number of channels indicated on top of the box and the x-y size at its lower left edge; white boxes represent copied feature maps [60].
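To make the structure in Figure 1 concrete, the following minimal TensorFlow/Keras sketch builds a small U-Net with two encoder/decoder levels. It only illustrates the encoder-decoder pattern with skip connections (the copied feature maps of Figure 1): the number of levels, the filter counts, and the four-band 256 × 256 input patches are assumptions made for this sketch, not the configuration used in the paper.

import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions per level, as in the original U-Net [60].
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 4), num_classes=1):
    # Assumed input: 256x256 patches with four bands (illustrative only).
    inputs = tf.keras.Input(shape=input_shape)
    # Contracting path: spatial size halves while the channel count grows.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)  # bottleneck
    # Expanding path: upsample and concatenate the copied feature maps
    # (the white boxes in Figure 1).
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    d2 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(d2)
    d1 = conv_block(layers.Concatenate()([u1, c1]), 32)
    # A 1x1 convolution maps the features to per-pixel class scores.
    outputs = layers.Conv2D(num_classes, 1, activation="sigmoid")(d1)
    return tf.keras.Model(inputs, outputs)

model = build_unet()

The sigmoid output corresponds to a single binary urban/non-urban mask; a softmax over several channels would be the analogous choice for a multi-class segmentation.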
Figure 2. Workflow of the proposed decision support system.
Figure 3. Workflow showing the core of the proposed system, illustrating the choices made for cloud repository and cloud computing.
Figure 4. The study area covering the city of Phoenix, the capital of Arizona, United States.
Figure 5. Reference data manually collected over Phoenix and Tucson through visual interpretation of high-resolution images.
Figure 6. Sentinel-2 image mosaic with overlaid change detection results: classes of Urban (permanent), New Urban 1 (2018–2020), and New Urban 2 (2020–2021) are shown for the area of interest.
Figure 7. Detail of the change map for the period December 2018–March 2020. The first column shows a subset of Phoenix in December 2018 (a) and March 2020 (b), respectively; the second column shows the change detection results (c) and a detail (d), where relevant changes are correctly detected (yellow areas).
Figure 8. Detail of the change map for the period March 2020–September 2021. The first column shows a subset of Phoenix in March 2020 (a) and September 2021 (b), respectively; the second column shows the change detection results (c) and a detail (d), where relevant changes are correctly detected (blue areas).
Figure 9. The town of Queen Creek, located to the SE of Phoenix. ©2022 CNES/Airbus, Landsat/Copernicus, Maxar Technologies, U.S. Geological Survey, USDA Farm Service Agency, © Google.
Figure 10. Sentinel-2 image and change detection results for Queen Creek: classes of Urban (permanent), New Urban 1 (2018–2020), and New Urban 2 (2020–2021) are shown for the area of interest within the black polygon.
Figure 11. Comparison between the change detection results and official Queen Creek development documents.
Table 1. Performance assessment of the algorithms.

            S2      S2 and S1   S2 and S1_ARD
Precision   0.703   0.746       0.705
Recall      0.811   0.806       0.823
F1 score    0.754   0.775       0.759
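As a quick consistency check, each F1 score in Table 1 is the harmonic mean of the corresponding precision and recall values, F1 = 2PR/(P + R). The few lines of Python below reproduce the last row of the table (the variable and function names are illustrative, not taken from the paper's code):

def f1(p, r):
    # F1 is the harmonic mean of precision (p) and recall (r).
    return 2 * p * r / (p + r)

# (precision, recall) pairs for the three input configurations in Table 1.
configs = {
    "S2": (0.703, 0.811),
    "S2 and S1": (0.746, 0.806),
    "S2 and S1_ARD": (0.705, 0.823),
}
for name, (p, r) in configs.items():
    print(f"{name}: F1 = {f1(p, r):.3f}")
# Prints 0.754, 0.775, and 0.759, matching the F1 row of Table 1.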