Article

Water Detection in Satellite Images Based on Fractal Dimension

by
Javier Del-Pozo-Velázquez
,
Pedro Chamorro-Posada
*,
Javier Manuel Aguiar-Pérez
,
María Ángeles Pérez-Juárez
and
Pablo Casaseca-De-La-Higuera
Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática, Universidad de Valladolid, ETSI Telecomunicación, Paseo de Belén 15, 47011 Valladolid, Spain
*
Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(11), 657; https://doi.org/10.3390/fractalfract6110657
Submission received: 28 September 2022 / Revised: 24 October 2022 / Accepted: 1 November 2022 / Published: 7 November 2022
(This article belongs to the Special Issue Methods for Estimation of Fractal Dimension Based on Digital Images)

Abstract

Identification and monitoring of existing surface water bodies on the Earth are important in many scientific disciplines and for different industrial uses. This can be performed with the help of high-resolution satellite images that are subsequently processed using data-driven techniques to obtain the desired information. The objective of this study is to establish and validate a method to distinguish efficiently between water and land zones, i.e., an efficient method for surface water detection. In the context of this work, the method used to process the high-resolution satellite images for surface water detection is based on image segmentation, using the Quadtree algorithm, and on the fractal dimension. The method was validated using high-resolution satellite images freely available at the OpenAerialMap website. The results show that, when the fractal dimensions of the tiles into which the image is divided after the segmentation phase are calculated, there is a clear threshold at which water and land can be distinguished. The proposed scheme is particularly simple and computationally efficient compared with heavy artificial-intelligence-based methods, and it imposes no special requirements on the source images. Moreover, the average accuracy obtained in the case study developed for surface water detection was 96.03%, which suggests that the adopted method based on the fractal dimension can detect surface water with a high level of accuracy.

1. Introduction

Identification and monitoring of the different types of surface water bodies on the Earth are necessary for numerous scientific disciplines and industrial uses and are of paramount importance in sustaining all forms of life and ecosystems. Surface water refers to water on the surface of the Earth, including wetlands, rivers, and lakes. This definition usually excludes oceans because of their enormous size and salty character, although smaller saline water bodies are normally included [1]. Potential applications of identifying and monitoring surface water include sustainable agriculture, monitoring of irrigation reservoirs, wetland inventory, watershed monitoring, river dynamics, climate and climate-change studies, environmental monitoring, and flood mapping [2,3,4,5,6]. In short, identification, monitoring, and assessment of present and future water resources are essential, as water information is crucial in many areas and necessary for human life.
One way to detect and monitor surface water on the Earth is through the use of high-resolution satellite images. These images can be captured with a variety of remote sensing technologies and devices, such as satellite sensors, aerial photography, and drones. The possibilities opened by remote sensing technologies and devices, especially drones, have attracted the interest of many researchers and practitioners [7,8].
It is important to remark that the trustworthiness of satellite images depends on their resolution; in other words, low-quality images cannot provide information with added value. After all, satellite images are often snapshots generated by different algorithms that determine how the raw data from the sensors must be interpreted and visualized. Through platforms such as Google Earth [9] and websites such as OpenAerialMap [10], satellite images are becoming increasingly ubiquitous and available to scholars and technicians. The availability of this high-resolution imagery was exploited in this research work.
High-resolution satellite images can be processed with the help of different data-driven techniques to obtain the desired information. Among the techniques most commonly used for satellite image processing are enhancement, feature extraction, segmentation, fusion, change detection, compression, classification, and feature detection. Researchers and practitioners often group these techniques into four broad categories: pre-processing, transformation, correction, and classification [11,12,13,14]. Although satellite image processing can be performed using a variety of computational methods, different methods produce different results across fields of application and use cases; consequently, a poor selection of techniques will degrade the results for a specific application or use case. Moreover, processing satellite images is computationally intensive and can be quite complex.
In the context of this research work, satellite images are used to detect surface water. To obtain the necessary information from the satellite images, a method based on image segmentation plus fractal dimension analysis is used. When the fractal dimensions of the tiles into which the image is divided after segmentation are calculated, a clear threshold emerges at which water and land can be distinguished.
Image segmentation is a process that breaks down a digital image into several image segments or regions of interest. This technique reduces the complexity of the image to make further analysis easier and has been broadly researched and exploited by scholars and practitioners [15,16,17,18]. During the segmentation process, the same label is assigned to all pixels belonging to the same category. When a segmentation technique is used, the input to subsequent processing can be a single region selected by a segmentation algorithm instead of the whole image, which reduces the processing time and computational effort. Image segmentation can be based either on detecting similarity between pixels to form a segment, using a threshold (similarity), or on discontinuities in the pixel intensity values of the image (discontinuity). In an image segmentation process, the objective is to identify the content of the image at the pixel level, which means that every pixel in the image belongs to a single class, contrary to, for example, image object detection, where the bounding boxes of objects can overlap [19].
A fractal is a structure where each part has the same statistical properties as a whole. In other words, fractals are complex patterns that are self-similar across different scales and, for this reason, can be thought of as never-ending patterns that can be created by repeating a simple process indefinitely. Fractals have attracted the interest of scholars and researchers [20,21,22]. One of the reasons for this is the interesting possibilities in using fractals in a variety of very different fields of application and use cases [23,24,25,26,27,28,29]. More specifically, one of the possible uses of fractals is the image compression field, where a block-based image compression technique, which detects and decodes the existing similarities between different image regions, called fractal image compression, is applied [30].
Artificial intelligence (AI) has evolved over recent decades thanks to progress in computer science and technology [31]. Moreover, information storage capacity has increased considerably, allowing the treatment of massive amounts of data, commonly known as Big Data. Nevertheless, a systematic literature review of the existing methods capable of detecting water in aerial images reveals several challenges. Delving into the world of AI, convolutional neural networks (CNNs) have proven to be an excellent tool in the fields of deep learning (DL) and machine learning (ML) [32]. CNNs are composed of a series of layers that extract features from the pixels, much as the human eye does. However, this type of network must be trained by supervised learning, which means that the objects to be detected must be annotated in the training images.
Moreover, AI techniques focused on the detection of objects in images can sometimes pose difficulties that vary depending on the field of application and the type of image or object to be detected [33]. In fact, to use AI techniques for the same objective as the method proposed in this paper, it would be necessary to have a set of metadata in attached files, apart from the image itself. An example is a shapefile, a vector format that stores the location of geographic elements and the attributes associated with them.
For example, in [34], a combination of clustering and an ML classifier to detect water in high-resolution aerial images, obtained from Sentinel-2, is presented. Nevertheless, the main difficulty in the use of this model is related to the data source, as it is necessary to use pictures with the associated geoinformation metadata.
Therefore, these requirements must be considered when preparing a dataset to train and evaluate a neural network. Likewise, there are some objects that, because of their heterogeneity, are difficult to detect using AI techniques [35].
The objective of this research work is to establish and validate a method to distinguish efficiently between water and land zones, i.e., an efficient method for surface water detection, based on image segmentation plus fractal dimension analysis. More specifically, this research work uses the computational method described in [36] to directly estimate the dimension of fractals stored as digital image files. To validate this method in the context of surface water detection, an experimental study was conducted using high-resolution satellite images freely available to researchers and practitioners at the OpenAerialMap website [10]. The proposed scheme is particularly simple and computationally efficient compared with heavy artificial-intelligence-based methods, and it imposes no special requirements on the source images.
The remainder of the paper is organized as follows. Section 2 introduces the proposed method for surface water detection based on image segmentation plus fractal dimension analysis. Section 3 describes the case study developed where high-quality satellite images, obtained from the freely accessible OpenAerialMap website [10], were used as the source to detect surface water. The obtained results after applying the method are also presented. Finally, in Section 4, these results are discussed and interpreted.

2. Water Detection Using Quadtree Segmentation and Fractal Dimension Analysis

The proposed method can be divided into three different phases: (1) image pre-processing, (2) image segmentation, and (3) fractal dimension analysis of the images’ tiles.
In the first phase, image quality is of paramount importance, as it is necessary to start with a high-resolution picture that contains a considerable quantity of information. For this reason, images taken from the open-access website OpenAerialMap [10] were used as the source. Although the website offers more than 12,000 images from 670 sensors and 963 providers, it is necessary to apply filters when selecting images. The first criterion is to obtain images containing water areas, as water detection is the main purpose of this work. As previously mentioned, this study requires high-quality images, and this requirement sets the second selection criterion. Fortunately, the source has a search engine that allows filtering by resolution level; however, size and resolution vary depending on the provider. Applying both criteria yielded a set of pictures with a resolution of 300 × 300 ppi and a file size of around 20 MB, depending on the image size. Because color information is three-dimensional, which makes processing bulkier and heavier [37], the first step consists of transforming the original RGB (red, green, and blue) pictures into grayscale ones.
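As a minimal illustration of this pre-processing step (a sketch only, not the authors' actual tooling), the RGB-to-grayscale conversion can be written in pure Python on a grid of pixel tuples, using the standard ITU-R BT.601 luminance weights:

```python
def to_grayscale(rgb_pixels):
    """Convert a 2-D grid of (R, G, B) tuples to grayscale intensities.

    Uses the standard ITU-R BT.601 luminance weights; image-processing
    libraries apply the same formula when converting RGB to grayscale.
    """
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_pixels]
```

In practice, a library call such as Pillow's `Image.convert("L")` performs this conversion directly on the image file.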
Secondly, once the images have been pre-processed, segmentation is performed to obtain several tiles for which the fractal dimension will be calculated. In this research work, the frequently used Quadtree algorithm was employed [38]. A quadtree is a tree data structure commonly used to partition a two-dimensional space by recursively subdividing it into four regions. Quadtree decomposition subdivides an image into regions that are more homogeneous than the image itself, thus revealing information about the structure of the image. In this work, a script was developed to apply the quadtree algorithm to an input image. First, the concept of a node was implemented, representing a spatial region defined by the top-left point of the image followed by its width and height. To decide whether to divide a node, the mean squared error of its pixels is used as a purity measure. Two parameters must then be set to stop the recursive subdivision: the contrast and the node size. On the one hand, a contrast threshold is defined: if a region has too much contrast, it must be divided further, whereas if it has little contrast, the recursion stops. On the other hand, the minimum node size depends on the size and the amount of information of the image, because a middle ground is sought: if the node size is too small, the tiles are overfitted, whereas if it is too big, the tiles provide the same information as the original image. Finally, it is necessary to store the coordinates of each region, specifically the upper-left and lower-right corners. This step is needed to crop the grayscale picture and obtain all of the regions for which the fractal dimension will be calculated.
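The recursive splitting described above can be sketched as follows. This is a simplified illustration rather than the authors' script: node purity is measured as the mean squared error of a region's pixels around their mean, `max_mse` plays the role of the contrast threshold, and `min_size` is the minimum node size; all names are illustrative.

```python
def mse(region):
    """Mean squared error of pixel values around their mean: the node 'purity'."""
    pixels = [p for row in region for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def quadtree(img, x, y, w, h, max_mse, min_size, tiles):
    """Recursively split the region (x, y, w, h) of a 2-D grayscale grid into
    four quadrants until it is homogeneous enough (MSE below max_mse) or too
    small to split further (min_size)."""
    region = [row[x:x + w] for row in img[y:y + h]]
    if mse(region) <= max_mse or w <= min_size or h <= min_size:
        tiles.append((x, y, x + w, y + h))  # upper-left and lower-right corners
        return tiles
    hw, hh = w // 2, h // 2
    quadtree(img, x,      y,      hw,     hh,     max_mse, min_size, tiles)
    quadtree(img, x + hw, y,      w - hw, hh,     max_mse, min_size, tiles)
    quadtree(img, x,      y + hh, hw,     h - hh, max_mse, min_size, tiles)
    quadtree(img, x + hw, y + hh, w - hw, h - hh, max_mse, min_size, tiles)
    return tiles
```

Each returned tuple stores the upper-left and lower-right coordinates of a tile, matching the cropping step described above.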
Thirdly, when the image pre-processing and segmentation have been completed, the compression fractal dimension [36] of the resulting tiles can be calculated; the greater the number of regions, the longer the processing time. Calculating the compression fractal dimension of each tile involves three tasks: (1) image tile resizing, (2) image tile compression, and (3) a sweep storing the compressed tile sizes. The first step consists of resizing each tile, sweeping the resizing percentage from 10% to 90% in increments of 10%; this corresponds to reducing the image to 10%, 20%, …, and 90% of its original size. The free library ImageMagick [39] was used, where the resizing percentage determines the resulting payload: the greater the resizing percentage, the larger the resulting file. The total number of resized images is therefore the number of tiles multiplied by nine. Once the regions have been resized, the crucial second step is compression, which is what makes it possible to estimate the fractal dimension. Again, the open-source ImageMagick library [39] was used, in this case to compress each of the tiles in GZIP format. The third step is a sweep in which the sizes of the compressed tiles are stored in rows: there is one file with nine rows for each tile, each row corresponding to one resizing percentage.
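The resize-and-compress sweep can be illustrated as follows. Note the substitutions: the paper uses ImageMagick [39] for resizing and GZIP compression, whereas this sketch uses nearest-neighbour resampling and the DEFLATE compressor from Python's zlib (the algorithm inside GZIP), operating on a 2-D grid of grayscale values.

```python
import zlib

def resize_nearest(tile, scale):
    """Nearest-neighbour resize of a 2-D grid of grayscale values to
    round(scale * size) pixels along each axis."""
    h, w = len(tile), len(tile[0])
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    return [[tile[int(r * h / nh)][int(c * w / nw)] for c in range(nw)]
            for r in range(nh)]

def compressed_sizes(tile, scales=(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)):
    """Size in bytes of the DEFLATE-compressed tile at each resizing percentage."""
    sizes = []
    for s in scales:
        resized = resize_nearest(tile, s)
        raw = bytes(p for row in resized for p in row)  # one byte per pixel
        sizes.append(len(zlib.compress(raw)))
    return sizes
```

The nine sizes returned per tile correspond to the nine rows stored per tile in the description above.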
To understand the next steps, it is necessary to first explain the mathematical basis of the compression fractal dimension calculation, described in detail in [36]. The initial image whose fractal dimension we want to estimate requires

$$N_i = n_x n_y \qquad (1)$$

symbols to be stored, one per pixel, where $N_i$ is the total number of pixels; $n_x$ and $n_y$ are the numbers of pixels along the x- and y-axes, respectively; and the subindex $i$ stands for "image". If a file containing this set of symbols is compressed, the minimum file size

$$S = N_i h \qquad (2)$$

required to store the information of the image is the entropy $S$ of the data file, where $h$ (bits/symbol) is the entropy rate.

We now address the effect of changing the image resolution on the file size and its entropy. As the scale $s$ used to represent the image is varied, the number of pixels used along each axis changes linearly with $s$, and the total number of pixels used to represent the image varies as

$$N_i(s) = n_x n_y = N_0 s^2 \qquad (3)$$

where $N_0$ is an arbitrary reference number. Therefore, the optimal compressed file size is

$$S(s) = N_0 s^2 h(s). \qquad (4)$$

At the same time, for a fractal body, the amount of information required to represent the object at the scaling level $s$ is [36]

$$N_f = 2^{H(s)} = N_1 s^D \qquad (5)$$

where $N_1$ is an integer, $D$ is the fractal dimension, and $H(s)$ is the number of bits required to represent the fractal. Therefore, the entropy of the picture file follows the asymptotic scaling

$$S(s) \sim 2^{H(s)} \sim s^D \qquad (6)$$

and we can estimate the fractal dimension $D$ from the sizes of the image files after compression at each resolution level $s$ by calculating the compression dimension

$$D_c = \lim_{s \to \infty} \frac{\log S(s)}{\log s}. \qquad (7)$$

The compression dimension calculation will only provide a valid estimate of the fractal dimension if the image file gives a truthful representation of the fractal object in full detail at all of the scaling levels considered, that is,

$$N_f(s) \le N_i(s) \qquad (8)$$

at all values of $s$ used in the calculation. Equation (8) sets a bound on the minimum usable resolution of the image file at the beginning of the calculation and the depth of the values of $s$ used in the estimation algorithm.
When the sizes of the compressed tiles have been stored in files, a linear regression is calculated for each file, sweeping the nine values. The slope is obtained by fitting the logarithm of the compressed sizes against the logarithm of the resizing scale. Although the resizing scale ideally ranges from 10% to 90% [36], an exhaustive study showed that, for the tiles used, there is a turning point where the linearity disappears, indicating a violation of Equation (8). Therefore, in this research work, the scale varies between 70% and 90%; the reduced information content of the scaled versions with resizing percentages between 10% and 60% yields meaningless data. Finally, the slope calculated for each tile corresponds to the fractal dimension of that tile.
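The slope estimation is an ordinary least-squares fit in log-log coordinates, which can be sketched as follows (the scale and size values in the usage are illustrative):

```python
import math

def compression_dimension(scales, sizes):
    """Least-squares slope of log(size) versus log(scale): the compression
    dimension estimate D_c of a tile."""
    xs = [math.log(s) for s in scales]
    ys = [math.log(z) for z in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

For an exact power law, size proportional to scale^D, the returned slope equals D; in practice, only the 70–90% range is used, as explained above.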
Finally, once the fractal dimensions of the tiles have been calculated, a clear threshold emerges at which surface water and land zones can be distinguished. This threshold depends on the scenario, defined as a collection of pictures captured under the same conditions, i.e., with the same resolution, pixel dimensions, bit depth, dynamic range, and so on. A threshold is calculated for each scenario by looping over a range of candidate thresholds and selecting the one that yields the best accuracy. This range spans the smallest to the largest fractal dimension obtained in the first picture of each scenario.
It is also necessary to clarify how the accuracy is calculated, for which each tile must be labeled to obtain a ground truth. The accuracy percentage is the number of correctly classified tiles divided by the total number of tiles, multiplied by one hundred. To validate the established threshold, it is then applied to another image of the same scenario and the resulting accuracy is calculated. As the threshold varies according to the scenario, two application scenarios were established, with two images each: the first image of each scenario is used to calculate the optimal threshold, while the second is used to validate it. This shows that it is possible to set clear thresholds for different scenarios at which surface water and land zones can be distinguished. To present the results, as explained, the tiles must have been labeled, so that the predictions based on the fractal dimension can be contrasted against the true labels.
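The threshold sweep and accuracy computation can be sketched as follows. This is an illustration only; in particular, the assumption that water tiles lie above the threshold is made here for concreteness, as the text only states that a threshold separates the two classes.

```python
def accuracy(dims, labels, threshold):
    """Percentage of tiles classified correctly. A tile whose fractal dimension
    exceeds the threshold is predicted as water (True), otherwise land (False);
    the direction of the comparison is an illustrative assumption."""
    correct = sum((d > threshold) == lab for d, lab in zip(dims, labels))
    return 100.0 * correct / len(dims)

def best_threshold(dims, labels, steps=100):
    """Sweep candidate thresholds between the smallest and largest fractal
    dimension and keep the one that maximises accuracy on the labeled tiles."""
    lo, hi = min(dims), max(dims)
    candidates = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    return max(candidates, key=lambda t: accuracy(dims, labels, t))
```

The threshold found on the first image of a scenario can then be passed to `accuracy` on the second image to validate it, as described above.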

3. Results

In this work, high-quality satellite images, obtained from OpenAerialMap [10], were used to detect surface water. The results show that it is possible to set a threshold in the fractal dimension that allows differentiation between surface water and land zones.
However, it is more important to determine the accuracy, which must be studied for each picture, as it determines the validity of the proposed method. Each image presents a specific accuracy; the average obtained in this work is 96.03%, which indicates good performance of the proposed method.
The first scenario has two different images, one to set the optimal threshold of this scenario (Figure 1a) and one to validate this threshold (Figure 1d). Next, image pre-processing was applied, converting RGB images into gray-scale ones (Figure 1b,e).
In Figure 1c,f, the segmentation performed by the Quadtree algorithm can be observed. The number of tiles depends on the size and the amount of information in the picture. In the first image, the minimum region size was set to 30 pixels, obtaining 757 tiles. In the second picture, the minimum region size was set to 60 pixels owing to its dimensions, obtaining 934 tiles.
In Figure 2, a histogram of the distribution of the fractal dimensions for the first scenario can be observed. The x-axis shows the range of fractal dimensions, and the y-axis shows their frequency.
Then, Figure 3a,b show the resizing evolution of a specific tile of the first and second images of the first scenario, respectively. The first resizing corresponds to 90% and the last one to 10%; therefore, each successive image contains less information.
Finally, Figure 4a,b show the results for the images of the first scenario. The accuracy for the first picture is 94.98%, whereas the second image reaches an accuracy of 96.79%. The threshold obtained for the first scenario is 1.86. As can be seen, applying the threshold obtained for image 1 to image 2 yields an even higher accuracy, validating that the water was clearly identified for this scenario.
The second scenario is also composed of two different images, one to establish the optimal threshold of this scenario (Figure 5a) and one to validate this threshold (Figure 5d). Again, image pre-processing was applied, converting the RGB images into grayscale ones (Figure 5b,e).
In Figure 5c,f, the segmentation performed by the Quadtree algorithm can be observed. In the first image, the minimum region size was set to 60 pixels, obtaining 1024 tiles. In the second picture, the minimum region size was set to 30 pixels, obtaining 505 tiles. Again, the differences in the number of tiles with respect to the first scenario are due to the amount of information contained in each image, as well as its size.
Similarly, Figure 6 shows a histogram of the distribution of the fractal dimensions for the second scenario. The x-axis shows the range of fractal dimensions, and the y-axis shows their frequency.
Again, Figure 7a,b show the resizing evolution of a specific tile of the first and second images of the second scenario, respectively. The first resizing corresponds to 90% and the last one to 10%; therefore, each successive image contains less information.
Finally, Figure 8a,b show the results for the images of the second scenario. The accuracy for the first picture is 95.51%, whereas that of the second picture is 96.83%. The threshold obtained for this scenario is 1.90. As in the previous scenario, applying the threshold obtained for image 1 to image 2 yields a higher accuracy, demonstrating once again that the water areas were clearly identified for this scenario too.

4. Discussion

The method proposed in this paper detects surface water bodies on Earth through image segmentation, performed with the Quadtree algorithm, followed by calculation of the compression fractal dimension of the tiles into which the image is divided after the segmentation phase. It is particularly simple, especially when compared with the complex techniques used in AI. It is also remarkable that this method has no special requirements regarding the data source, which is a great advantage: only the image itself is needed, and no geoinformation metadata are required. Moreover, from a computational point of view, the proposed method, based on the computational scheme described in [36] to directly estimate the dimension of fractals stored as digital image files, involves lighter tasks than the training and related tasks of artificial neural networks. The average accuracy obtained in the work developed for surface water detection was 96.03%, which suggests that the fractal dimension is quite useful for detecting surface water with a high level of accuracy.

Author Contributions

Conceptualization, P.C.-P. and J.M.A.-P.; formal analysis, J.D.-P.-V.; writing—review and editing, M.Á.P.-J.; supervision, P.C.-D.-L.-H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Ministerio de Ciencia e Innovación grant number PID2020-119418GB-I00.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Huang, C.; Chen, Y.; Zhang, S.; Wu, J. Detecting, extracting, and monitoring surface water from space using optical sensors: A review. Rev. Geophys. 2018, 56, 333–360.
2. Quang, D.N.; Linh, N.K.; Tam, H.S.; Viet, N.T. Remote sensing applications for reservoir water level monitoring, sustainable water surface management, and environmental risks in Quang Nam province, Vietnam. J. Water Clim. Change 2021, 12, 3045–3063.
3. Acharya, T.; Subedi, A.; Huang, H.; Lee, D. Application of water indices in surface water change detection using Landsat imagery in Nepal. Sens. Mater. 2019, 31, 1429.
4. Anusha, N.; Bharathi, B. Flood detection and flood mapping using multi-temporal synthetic aperture radar and optical data. Egypt. J. Remote Sens. Space Sci. 2020, 23, 207–219.
5. Bioresita, F.; Puissant, A.; Stumpf, A.; Malet, J.P. A method for automatic and rapid mapping of water surfaces from Sentinel-1 imagery. Remote Sens. 2018, 10, 217.
6. Li, J.; Ma, R.; Cao, Z.; Xue, K.; Xiong, J.; Hu, M.; Feng, X. Satellite Detection of Surface Water Extent: A Review of Methodology. Water 2022, 14, 1148.
7. Sibanda, M.; Mutanga, O.; Chimonyo, V.G.P.; Clulow, A.D.; Shoko, C.; Mazvimavi, D.; Dube, T.; Mabhaudhi, T. Application of drone technologies in surface water resources monitoring and assessment: A systematic review of progress, challenges, and opportunities in the global south. Drones 2021, 5, 84.
8. Acharya, B.S.; Bhandari, M.; Bandini, F.; Pizarro, A.; Perks, M.; Joshi, D.R.; Wang, S.; Dogwiler, T.; Ray, R.L.; Kharel, G.; et al. Unmanned aerial vehicles in hydrology and water management: Applications, challenges, and perspectives. Water Resour. Res. 2021, 57, e2021WR029925.
9. Google Earth. Available online: https://www.google.com/intl/es/earth/ (accessed on 5 September 2022).
10. OpenAerialMap. Available online: https://openaerialmap.org/ (accessed on 5 September 2022).
11. Asokan, A.; Anitha, J.; Ciobanu, M.; Gabor, A.; Naaji, A.; Hemanth, D.J. Image processing techniques for analysis of satellite images for historical maps classification—An overview. Appl. Sci. 2020, 10, 4207.
12. Sowmya, D.R.; Shenoy, P.D.; Venugopal, K.R. Remote sensing satellite image processing techniques for image classification: A comprehensive survey. Int. J. Comput. Appl. 2017, 161, 24–37.
13. Ablin, R.; Sulochana, C.H.; Prabin, G. An investigation in satellite images based on image enhancement techniques. Eur. J. Remote Sens. 2019, 53, 86–94.
14. Dhingra, S.; Kumar, D. A review of remotely sensed satellite image classification. Int. J. Electr. Comput. Eng. 2019, 9, 1720–1731.
15. Abdulateef, S.; Salman, M. A comprehensive review of image segmentation techniques. Iraqi J. Electr. Electron. Eng. 2021, 17, 166–175.
16. Zaitoun, N.M.; Aqel, M.J. Survey on image segmentation techniques. Procedia Comput. Sci. 2015, 65, 797–806.
17. Jeevitha, K.; Iyswariya, A.; RamKumar, V.; Basha, S.M.; Kumar, V.P. A review on various segmentation techniques in image processing. Eur. J. Mol. Clin. Med. 2020, 7, 1342–1348.
18. Sarma, R.; Gupta, Y. A comparative study of new and existing segmentation techniques. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1022, 012027.
19. Leonard, J.K. Image classification and object detection algorithm based on convolutional neural network. Sci. Insights 2019, 31, 85–100.
20. Garg, A.; Agrawal, A.; Negi, A. A review on natural phenomenon of fractal geometry. Int. J. Comput. Appl. 2014, 86, 975–8887.
21. Nurujjaman, M.; Hossain, A.; Ahmed, P. A review of fractals properties: Mathematical approach. Sci. J. Appl. Math. Stat. 2017, 5, 98–105.
22. Kolyukhin, D. Study the accuracy of the correlation fractal dimension estimation. Commun. Stat. Simul. Comput. 2021, 1–15.
23. Zhao, W.; Yan, J.; Hou, G.; Diwu, P.; Liu, T.; Hou, J.; Li, R. Research on a Fractal Dimension Calculation Method for a Nano-Polymer Microspheres Dispersed System. Front. Chem. 2021, 9, 732797.
24. Mwema, F.M.; Jen, T.-C.; Kaspar, P. Fractal Theory in Thin Films: Literature Review and Bibliometric Evidence on Applications and Trends. Fractal Fract. 2022, 6, 489.
25. Marquardt, T.; Momber, A.W. The determination of fractal dimensions of blast-cleaned steel substrates by means of comparative cross-section image analysis and contact stylus instrument measurements. J. Adhes. Sci. Technol. 2022, 1–20.
26. Naito, T.; Fukuda, Y. The universal relationship between sample dimensions and cooperative phenomena: Effects of fractal dimension on the electronic properties of high-TC cuprate observed using electron spin resonance. Phys. Chem. Chem. Phys. 2022, 24, 4147–4156.
27. Sánchez, J.; Martín-Landrove, M. Morphological and Fractal Properties of Brain Tumors. Front. Physiol. 2022, 13, 878391.
28. Hu, X.; Liu, H.; Tan, X.; Yi, C.; Niu, Z.; Li, J.; Li, J. Image Recognition–Based Identification of Multifractal Features of Faults. Front. Earth Sci. 2022, 10, 909166.
29. Porcaro, C.; Marino, M.; Carozzo, S.; Russo, M.; Ursino, M.; Ruggiero, V.; Ragno, C.; Proto, S.; Tonin, P. Fractal Dimension Feature as a Signature of Severity in Disorders of Consciousness: An EEG Study. Int. J. Neural Syst. 2022, 32, 2250030.
30. Khatun, S.; Iqbal, A. A review of image compression using fractal image compression with neural network. Int. J. Innov. Res. Comput. Sci. Technol. 2018, 6, 9–11.
31. Li, N. On the Chinese development of computer-assisted translation under the background of Artificial Intelligence. In Proceedings of the International Conference on Artificial Intelligence and Education, Tianjin, China, 26–28 June 2020.
32. Wu, J. Introduction to Convolutional Neural Networks; National Key Lab for Novel Software Technology, Nanjing University: Nanjing, China, 2017.
33. Kang, J.; Tariq, S.; Oh, H.; Woo, S.S. A survey of Deep Learning-based object detection methods and datasets for overhead imagery. IEEE Access 2022, 10, 20118–20134.
34. Cordeiro, M.C.R.; Martinez, J.M.; Peña-Luque, S. Automatic water detection from multidimensional hierarchical clustering for Sentinel-2 images and a comparison with Level 2A processors. Remote Sens. Environ. 2021, 253, 112209.
35. Ghahremani Nahr, J.; Nozari, H.; Sadeghi, M.E. Artificial intelligence and Machine Learning for Real-world problems (A survey). Int. J. Innov. Eng. 2021, 1, 38–47.
36. Chamorro-Posada, P. A simple method for estimating the fractal dimension from digital images: The compression dimension. Chaos Solitons Fractals 2016, 91, 562–572.
37. Kaler, P. Study of grayscale image in image processing. Int. J. Recent Innov. Trends Comput. Commun. 2016, 4, 309–311.
  38. Muhsin, Z.F.; Rehman, A.; Altameem, A.; Saba, T.; Uddin, M. Improved quadtree image segmentation approach to region information. Imaging Sci. J. 2014, 62, 56–62. [Google Scholar] [CrossRef]
  39. ImageMagick. Available online: https://imagemagick.org/index.php (accessed on 5 September 2022).
Figure 1. First scenario. (a) Original satellite RGB image in TIFF format (image 1). (b) Grayscale version of the original satellite image (image 1). (c) Segmented grayscale image (image 1). (d) Original satellite RGB image in TIFF format (image 2). (e) Grayscale version of the original satellite image (image 2). (f) Segmented grayscale image (image 2).
Figure 2. Histograms of the fractal dimensions of the first scenario. Results for images 1 and 2 are displayed in the left and right plots, respectively.
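The histograms above summarize per-tile fractal-dimension estimates, with water and land tiles clustering on opposite sides of a clear threshold. As an illustrative sketch only (the estimator used in the article may differ; the tile binarization step and the box sizes below are assumptions), a basic box-counting dimension for a binarized tile can be computed as the slope of log N(s) versus log(1/s):

```python
import numpy as np

def box_count(tile, size):
    """Count boxes of side `size` containing at least one foreground pixel."""
    h, w = tile.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if tile[i:i + size, j:j + size].any():
                count += 1
    return count

def fractal_dimension(tile, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a binary tile.

    Assumes the tile contains at least one foreground pixel, so that
    every box count is positive and its logarithm is defined.
    """
    counts = [box_count(tile, s) for s in sizes]
    # Least-squares fit: the slope of log N(s) vs. log(1/s) is the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A tile-level classifier in this spirit would then flag tiles whose estimated dimension falls on one side of an empirically chosen threshold as water and the rest as land; the threshold value itself must be calibrated on labeled tiles, as in the histograms shown.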
Figure 3. Resizing evolution of a specific tile in the first scenario. (a) Image 1. (b) Image 2.
Figure 4. Original satellite RGB image with blue dots over surface water regions and red dots over land regions in the first scenario. (a) Image 1. (b) Image 2.
Figure 5. Second scenario. (a) Original satellite RGB image in TIFF format (image 1). (b) Grayscale version of the original satellite image (image 1). (c) Segmented grayscale image (image 1). (d) Original satellite RGB image in TIFF format (image 2). (e) Grayscale version of the original satellite image (image 2). (f) Segmented grayscale image (image 2).
Figure 6. Histograms of the fractal dimensions of the second scenario. Results for images 1 and 2 are displayed in the left and right plots, respectively.
Figure 7. Resizing evolution of a specific tile in the second scenario. (a) Image 1. (b) Image 2.
Figure 8. Original satellite RGB image with blue dots over surface water regions and red dots over land regions in the second scenario. (a) Image 1. (b) Image 2.