Article

A Novel Approach of Monitoring Ulva pertusa Green Tide on the Basis of UAV and Deep Learning

1 CAS Key Laboratory of Coastal Environmental Processes and Ecological Remediation, Yantai Institute of Coastal Zone Research, Chinese Academy of Sciences, Yantai 264003, China
2 Shandong Key Laboratory of Coastal Environmental Processes, Yantai 264003, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 Marine College, Shandong University, Weihai 264209, China
* Author to whom correspondence should be addressed.
Water 2023, 15(17), 3080; https://doi.org/10.3390/w15173080
Submission received: 5 July 2023 / Revised: 20 August 2023 / Accepted: 22 August 2023 / Published: 28 August 2023
(This article belongs to the Special Issue Conservation and Monitoring of Marine Ecosystem)

Abstract

Ulva pertusa (U. pertusa) is a benthic macroalga found in submerged conditions, which makes it relatively difficult to monitor with the remote sensing approaches developed for floating macroalgae. In this work, a novel remote-sensing approach is proposed for monitoring the U. pertusa green tide, applying a deep learning method to high-resolution RGB images acquired with an unmanned aerial vehicle (UAV). The results of U. pertusa extraction from semi-simultaneous UAV, Landsat-8, and Gaofen-1 (GF-1) images demonstrate the superior accuracy of the deep learning method in extracting U. pertusa from UAV images, achieving an accuracy of 96.46%, a precision of 94.84%, a recall of 92.42%, and an F1 score of 0.92, surpassing the algae-index-based method. The deep learning method also performs well on satellite images, achieving an accuracy of 85.11%, a precision of 74.05%, a recall of 96.44%, and an F1 score of 0.83. In the cross-validation between the Landsat-8 and UAV results, the root mean square error (RMSE) of the portion of macroalgae (POM) model for U. pertusa is 0.15, and the mean relative difference (MRD) is 25.01%. The POM model reduces the MRD in U. pertusa area extraction from Landsat-8 imagery from 36.08% to 6%. Combining deep learning and UAV remote sensing enables automated, high-precision extraction of U. pertusa, overcoming the limitations of algae-index-based approaches; it can calibrate satellite-based monitoring results and improve the monitoring frequency when high-resolution satellite images are unavailable.

1. Introduction

Large-scale macroalgal blooms have been observed to increase in oceans worldwide over the past two decades [1,2]. The world's largest green tide, caused by blooms of the green macroalga Ulva prolifera (U. prolifera), has occurred every summer in the Yellow Sea since 2007 [3,4,5], and a small-scale green tide in the Yellow Sea can be dated back to the summer of 1999 through Landsat-5 satellite observations [6]. The world's largest golden tide, caused by the brown macroalga Sargassum, has increased in the Atlantic Ocean and its marginal seas [7,8,9]; its earliest satellite observations were in the summer of 2005 over the western portion of the Gulf of Mexico [7].
Excessive macroalgae may cause marine disasters by damaging marine ecosystems and causing economic losses. In the coastal and nearshore waters of China, the green tide of U. prolifera, which originates as biofouling in seaweed cultivation in the Yellow Sea, has received worldwide concern [3,10,11]. Additionally, floating Sargassum (Sargassum horneri) in the Yellow Sea and the East China Sea during the winter and spring seasons has also attracted attention since 2017 [12,13]. Different from the above-mentioned blooms caused by macroalgae floating in surface water, coastal benthic green macroalgae [14,15] are relatively hard to monitor with satellite remote sensing; however, expanding blooms of benthic green macroalgae also pose threats to the environment and cause pollution [6,16] due to their wide distribution. Similar large-scale algal bloom disasters are also increasing in other countries, such as the green algae in Brittany, France [17].
The use of satellite imagery allows for the wide-scale and synchronous monitoring of the distribution and status of macroalgae on the sea surface, providing essential prerequisites for the prevention and control of macroalgal blooms [18]. Microwave imagery has been used to identify floating macroalgae on the sea surface based on roughness information [19,20], and thermal infrared imagery can also be used to distinguish floating macroalgae from seawater based on their different thermal emissivity characteristics [21]. Macroalgae have spectral characteristics similar to those of terrestrial vegetation, exhibiting low reflectance in the visible range, absorption in the red and blue bands, and a reflection peak in the near-infrared range; accordingly, optical imagery is the most commonly used for monitoring macroalgae. Early remote sensing algorithms for identifying floating macroalgae include the difference vegetation index (DVI), ratio vegetation index (RVI), and normalized difference vegetation index (NDVI) [22], with NDVI being the most widely used for extracting green algae from optical imagery [23]. To reduce the impacts of sunglint, aerosols, and other disturbances, specific indices have been proposed for green algae extraction, such as the floating algae index (FAI) [10,24] and the virtual-baseline floating macroalgae height (VB-FAH) [6].
The lower resolution of satellite imagery and the limitation of index-based methods, which rely primarily on spectral information, have led to suboptimal performance in the fine extraction of benthic algae. In contrast, unmanned aerial vehicles (UAVs) equipped with imaging sensors offer a cost-effective and highly flexible solution, capable of providing imagery with resolutions as fine as centimeters [25,26], significantly higher than what satellite sensors can achieve and enabling more accurate mapping [27]. Meanwhile, deep learning techniques can more effectively utilize the spectral and textural information of images and offer a high degree of automation, demonstrating significant potential for the analysis of features in marine remote-sensing imagery [28]. These technologies have achieved remarkable success in the extraction of benthic algae species, delivering high precision and performance.
Ulva pertusa (U. pertusa), known as sea lettuce, is a common benthic green macroalga found primarily in the eutrophic intertidal zone and shallow seas. U. pertusa can be used as food and animal feed and can also be processed into pharmaceuticals and health products [29]. It can grow unattached, suspended in the water column or floating on the sea surface. It grows rapidly and may form green tides through outbreaks and excessive accumulation. The ecological effects and resource management issues associated with this macroalga cannot be ignored, and an efficient remote sensing method is essential.
Since U. pertusa is a benthic macroalga mostly found in submerged conditions, the remote sensing approaches designed for floating macroalgae [3,6,10] may no longer be suitable. In this study, with the aim of establishing a useful remote sensing approach for the U. pertusa green tide, a deep learning method is applied to high-resolution RGB imagery acquired with a UAV to extract U. pertusa, and the results are analyzed together with semi-simultaneous high-resolution satellite imagery from the Landsat-8 Operational Land Imager (OLI) and the Gaofen-1 (GF-1) Wide Field of View (WFV) sensor.

2. Study Area and Datasets

2.1. Study Area

The study area is a small bay in the southern Bohai Sea, located at 119.89° E, 37.30° N, surrounded by aquaculture ponds, with an approximate area of 2.07 km² (Figure 1). Massive green tides caused by U. pertusa have often occurred in this area [30].

2.2. Datasets

In view of the low cost, high spatial resolution, and larger swath of UAV RGB cameras compared to near-infrared cameras, this study employed two sets of quasi-synchronous UAV RGB and satellite images as remote sensing data sources. The first set comprises a UAV image (labeled #1) of the study area obtained with a DJI Mavic 2 drone equipped with an FC2220 camera on 24 October 2020 and a Landsat-8 OLI image from the same day. The second set comprises two UAV images (labeled #2 and #3) obtained with the same drone and camera on 11 November 2020 and a GF-1 image from the same day. During the photography sessions, the UAV maintained a flight altitude of 497 m. Table 1 provides detailed information about these images.
The GF-1 WFV L1A data are an orthorectified and geometrically corrected top-of-atmosphere (TOA) reflectance product, which reflects the reflectance and scattering characteristics of ground cover materials without accounting for atmospheric effects. Landsat-8 Collection 2 Level-2 data are atmospherically corrected surface reflectance images, which can be used directly in subsequent research. The orthoimages of the surveyed area were obtained by orthorectification and mosaicking of the overlapping UAV aerial images [31].

3. U. pertusa Extraction and Quantification Workflow

In this study, we address the restricted accuracy and limited real-time performance of the algae index method for identifying U. pertusa in satellite and UAV images. We propose an approach for the automated extraction and analysis of U. pertusa using the U-Net convolutional neural network. Specifically, the proposed method consists of data preprocessing, model training and prediction, and accuracy evaluation, as shown in Figure 2.

3.1. Data Preprocessing

This section describes the data preprocessing techniques and the construction of the datasets for training and testing the model. For the high-resolution satellite images (GF-1 and Landsat-8), orthorectification and atmospheric correction were performed, and four bands (near-infrared, blue, green, and red) were chosen as the input for model training and testing. For the high-resolution UAV images, orthorectification was applied to the three-band RGB imagery, which was used as the model input.
To construct the training dataset, we selected a number of high-resolution satellite and UAV images and annotated U. pertusa through meticulous visual interpretation. During model training, the annotated images were cropped into sub-images of 256 × 256 pixels. To reduce the influence of imbalanced positive and negative samples on classifier performance, sub-images in which background (non-U. pertusa) pixels exceeded 90% were excluded. The dataset was then augmented through brightness adjustment, mirroring, random rotation, and random translation, yielding enhanced U. pertusa datasets for both satellite and UAV data sources. Finally, the datasets were randomly partitioned into three subsets, with 70% designated for training, 20% for validation, and 10% for testing.
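As an illustration, the tiling and background-filtering step described above can be sketched in a few lines of Python/NumPy. The 256-pixel tile size and the 90% background threshold come from the text; the function names, the [0, 1] reflectance scaling, and the omission of random translation are our simplifying assumptions.

```python
import numpy as np

TILE = 256              # sub-image size used in the paper
MAX_BG_FRACTION = 0.9   # discard tiles with more than 90% background pixels

def tile_image_and_mask(image, mask):
    """Crop an annotated scene into 256 x 256 tiles, dropping tiles
    dominated by background (non-U. pertusa) pixels.

    image: (H, W, C) float array in [0, 1]; mask: (H, W) binary array
    (1 = U. pertusa). Returns lists of image tiles and mask tiles.
    """
    tiles, labels = [], []
    h, w = mask.shape
    for top in range(0, h - TILE + 1, TILE):
        for left in range(0, w - TILE + 1, TILE):
            m = mask[top:top + TILE, left:left + TILE]
            bg_fraction = 1.0 - m.mean()   # fraction of background pixels
            if bg_fraction > MAX_BG_FRACTION:
                continue                   # skip nearly empty tiles
            tiles.append(image[top:top + TILE, left:left + TILE])
            labels.append(m)
    return tiles, labels

def augment(tile, mask, rng):
    """Augmentations named in the text: mirroring, rotation, brightness.
    rng is a np.random.Generator, e.g. np.random.default_rng(0)."""
    if rng.random() < 0.5:                 # horizontal mirroring
        tile, mask = tile[:, ::-1], mask[:, ::-1]
    k = rng.integers(0, 4)                 # random rotation in 90-degree steps
    tile, mask = np.rot90(tile, k), np.rot90(mask, k)
    tile = np.clip(tile * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness
    return tile, mask
```

The resulting tile list can then be shuffled and split 70/20/10 into the training, validation, and test subsets described above.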

3.2. Model Structure

In this study, a deep learning framework was designed to extract U. pertusa from high-resolution satellite and UAV images, combining the U-Net [32] with a VGG-16 [33] encoder, as shown in Figure 3. Specifically, the encoder of the original U-Net was replaced with VGG-16 to enhance its feature extraction capability, and transfer learning was employed by pretraining the model on a high-resolution U. prolifera dataset to improve the convergence speed and accuracy of the U-Net model for U. pertusa.
U-Net is a semantic segmentation model originally developed for biomedical image analysis. It consists of an encoder and a decoder, where the encoder path and the decoder path are symmetrical, resembling a U shape. This architecture reduces information loss during feature extraction, performs multiscale feature fusion, and accurately localizes target features. It has therefore been tested in feature detection tasks on remote-sensing images and has demonstrated superior performance compared with traditional methods [28,34].
Compared with high-resolution remote sensing data of the floating green macroalga U. prolifera, high-resolution U. pertusa remote-sensing data are relatively scarce, which can result in model overfitting and reduce the model's suitability across different U. pertusa data. Both U. pertusa and U. prolifera are macroalgae, and the tasks of extracting them with the U-Net model are strongly correlated. Thus, transfer learning can be employed: the weights of the U-Net model pre-trained on the U. prolifera datasets are used as the initial weights of the U-Net model for U. pertusa extraction. This approach expedites convergence and enhances the accuracy of U. pertusa extraction [28,35].
For satellite images, the near-infrared, blue, green, and red bands are used as inputs to the U-Net model, whereas for high-resolution UAV images, the blue, green, and red bands are used. The output of the model is a binary image in which non-zero pixel values indicate the presence of U. pertusa.
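A minimal PyTorch sketch of such a VGG-16-encoder U-Net is given below, assuming a recent torchvision (vgg16(weights=None)). The decoder channel widths, the transposed-convolution upsampling, and the class names are our illustrative choices rather than the authors' exact configuration, and the transfer-learning initialization from U. prolifera weights is omitted.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class DecoderBlock(nn.Module):
    """Upsample, concatenate the encoder skip connection, then convolve."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x, skip):
        x = self.up(x)
        return self.conv(torch.cat([x, skip], dim=1))

class VGG16UNet(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        f = vgg16(weights=None).features  # 13 conv layers with 5 max-pools
        if in_channels != 3:              # e.g., 4-band satellite input
            f[0] = nn.Conv2d(in_channels, 64, 3, padding=1)
        # Split the VGG-16 feature stack at its max-pooling layers so each
        # stage's output can serve as a U-Net skip connection.
        self.enc1, self.enc2 = f[:4], f[4:9]      # 64 ch @ 1/1, 128 ch @ 1/2
        self.enc3, self.enc4 = f[9:16], f[16:23]  # 256 ch @ 1/4, 512 ch @ 1/8
        self.enc5 = f[23:30]                      # 512 ch @ 1/16
        self.dec4 = DecoderBlock(512, 512, 512)
        self.dec3 = DecoderBlock(512, 256, 256)
        self.dec2 = DecoderBlock(256, 128, 128)
        self.dec1 = DecoderBlock(128, 64, 64)
        self.head = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x); e2 = self.enc2(e1)
        e3 = self.enc3(e2); e4 = self.enc4(e3); e5 = self.enc5(e4)
        d = self.dec4(e5, e4); d = self.dec3(d, e3)
        d = self.dec2(d, e2); d = self.dec1(d, e1)
        return torch.sigmoid(self.head(d))  # per-pixel U. pertusa probability

model = VGG16UNet(in_channels=3)            # RGB UAV input; use 4 for satellite
probs = model(torch.randn(1, 3, 256, 256))  # output shape (1, 1, 256, 256)
```

As in the figure, ReLU is used throughout the network and a sigmoid activation produces the final segmentation output.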

3.3. Model Training and Accuracy Evaluation

In this study, the cross-entropy function was used as the loss function for model training, and the Adaptive Moment Estimation (Adam) optimizer [36] was applied for optimization. The initial learning rate was set to 0.0001, the learning rate schedule followed the cosine annealing strategy, and the dropout rate was set to 0.5. The model-training process consisted of two stages: a frozen stage and an unfrozen stage. During the frozen stage, the backbone network was not trained; this stage required less GPU memory, mitigating the issue of insufficient computational resources, while also accelerating training and preserving the weights of the pretrained backbone in the initial stages. The model underwent a total of 100 epochs: in epochs 1-50 the backbone was frozen and only a subset of the model parameters was trained, while epochs 51-100 involved training all network parameters. The batch size was 4 during the frozen stage and was reduced to 2 during the unfrozen stage, since training all parameters requires more GPU memory.
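The two-stage schedule can be sketched as follows, assuming the VGG16UNet from Section 3.2; the data loaders (loader_bs4, loader_bs2) are placeholders for the tiled datasets with the batch sizes given above, and binary cross-entropy stands in for the cross-entropy loss on the sigmoid output.

```python
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

def run_stage(model, loader, epochs, lr=1e-4, device="cuda"):
    """Train for `epochs` epochs with Adam and a cosine-annealed learning rate."""
    criterion = nn.BCELoss()  # cross-entropy loss on the sigmoid output
    optimizer = Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs)
    model.train()
    for _ in range(epochs):
        for images, masks in loader:  # masks: float tensors (B, 1, H, W) in {0, 1}
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
        scheduler.step()

def train_two_stage(model, loader_bs4, loader_bs2, device="cuda"):
    """Frozen stage (epochs 1-50, batch size 4), then unfrozen (51-100, batch size 2)."""
    model.to(device)
    # Stage 1: freeze the VGG-16 backbone; only the decoder is trained.
    for name in ("enc1", "enc2", "enc3", "enc4", "enc5"):
        for p in getattr(model, name).parameters():
            p.requires_grad = False
    run_stage(model, loader_bs4, epochs=50, device=device)
    # Stage 2: unfreeze all parameters; the smaller batch fits in GPU memory.
    for p in model.parameters():
        p.requires_grad = True
    run_stage(model, loader_bs2, epochs=50, device=device)
```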
In this paper, the hardware configuration consisted of an Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz, an NVIDIA Quadro P2000 GPU and 32 GB of RAM. The operating system employed was Windows 10. PyTorch was used as the deep learning development framework for both training and testing purposes, and the programming language used was Python 3.6. During the prediction phase, the Overlap-tile strategy [32] was adopted to address the challenge of the boundary effect [34], which could potentially lead to a decrease in the accuracy of edge predictions. This strategy aimed to enhance prediction accuracy by overlapping tiles in images and combining their results to mitigate errors induced by the tile boundaries.
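A sketch of overlap-tile prediction for a scene larger than the 256 × 256 training tiles might look like the following. The 32-pixel overlap margin and the reflect padding are our assumptions; the essential idea from [32] is that tiles overlap and only each tile's central block is kept, discarding the less reliable border pixels.

```python
import numpy as np
import torch

def predict_large_image(model, image, tile=256, margin=32, device="cuda"):
    """Overlap-tile prediction: slide overlapping tiles over `image`
    ((H, W, C) float array in [0, 1]) and keep only the central
    (tile - 2*margin) block of each tile's prediction."""
    h, w, _ = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    step = tile - 2 * margin  # stride between the kept central blocks
    padded = np.pad(image, ((margin, tile), (margin, tile), (0, 0)), mode="reflect")
    model.eval()
    with torch.no_grad():
        for top in range(0, h, step):
            for left in range(0, w, step):
                patch = padded[top:top + tile, left:left + tile]
                arr = np.ascontiguousarray(patch.transpose(2, 0, 1), dtype=np.float32)
                prob = model(torch.from_numpy(arr).unsqueeze(0).to(device))
                prob = prob[0, 0].cpu().numpy()
                # keep only the central block, dropping the margin pixels
                keep = prob[margin:margin + step, margin:margin + step]
                out[top:top + step, left:left + step] = keep[:h - top, :w - left]
    return (out > 0.5).astype(np.uint8)  # binary U. pertusa map
```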
To assess the reliability of the proposed method for U. pertusa extraction, four evaluation metrics were employed: accuracy, precision, recall, and F1 score. These metrics were used to evaluate the accuracy of U. pertusa extraction and measure its performance on the validation dataset. The four metrics are calculated as follows:
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)
Precision = TP / (TP + FP)    (2)
Recall = TP / (TP + FN)    (3)
F1 score = 2 × Precision × Recall / (Precision + Recall)    (4)
where TP, TN, FP, and FN represent the number of true positive samples, true negative samples, false positive samples, and false negative samples, respectively. It is important to note that in the evaluation metrics of this study, the sample count refers to the number of pixels, not the number of images.
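These definitions translate directly into pixel-wise NumPy operations; the helper below is illustrative (the function name and binary-array convention are ours), and it assumes both classes occur in the maps so no denominator is zero.

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Pixel-wise accuracy, precision, recall, and F1 score for binary
    U. pertusa maps (1 = U. pertusa, 0 = background); counts are pixels."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives
    tn = np.sum(~pred & ~truth)  # true negatives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```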
To verify the efficacy of the proposed method for U. pertusa extraction, a comparative analysis was performed against dynamic thresholding techniques. Since UAV images comprise only red, green, and blue spectral bands, without a near-infrared band, the normalized green–blue difference index (NGBDI) [37] and a threshold in the red (R) band were selected for U. pertusa extraction from UAV data. For satellite data, the normalized difference vegetation index (NDVI) [22] and the virtual-baseline floating macroalgae height (VB-FAH) [6] were employed. NGBDI, NDVI, and VB-FAH are calculated as follows:
NGBDI = (G - B) / (G + B)    (5)
NDVI = (NIR - R) / (NIR + R)    (6)
VB-FAH = (NIR - G) + (G - R) × (λ_NIR - λ_G) / (2λ_NIR - λ_R - λ_G)    (7)
where NIR, R, G, and B represent the reflectance or digital number (DN) values of the near-infrared, red, green, and blue spectral bands, respectively, and λ_NIR, λ_R, and λ_G denote the center wavelengths of the near-infrared, red, and green bands.
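For reference, the three indices can be computed per pixel as follows; the arrays are assumed to hold reflectance (or DN) values for each band, and the wavelength arguments of VB-FAH are the band-center wavelengths of the sensor in use.

```python
import numpy as np

def ngbdi(g, b):
    """Normalized green-blue difference index for RGB UAV imagery (Eq. 5)."""
    return (g - b) / (g + b)

def ndvi(nir, r):
    """Normalized difference vegetation index for satellite imagery (Eq. 6)."""
    return (nir - r) / (nir + r)

def vb_fah(nir, g, r, lam_nir, lam_g, lam_r):
    """Virtual-baseline floating macroalgae height [6] (Eq. 7); lam_* are
    the band-center wavelengths (e.g., in nm) of the sensor in use."""
    return (nir - g) + (g - r) * (lam_nir - lam_g) / (2 * lam_nir - lam_r - lam_g)

def algae_mask(index, threshold):
    """Threshold an index image into a binary macroalgae map; the threshold
    is scene-dependent (selected dynamically in the comparison methods)."""
    return (index > threshold).astype(np.uint8)
```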
Finally, the U. pertusa extraction results from the UAV and satellite imagery were compared using the root mean square error (RMSE, Equation (8)) and the mean relative difference (MRD, Equation (9)):
RMSE = sqrt( (1/N) × Σ_{i=1..N} (A_i - F_i)² )    (8)
MRD = (100%/N) × Σ_{i=1..N} |A_i - F_i| / A_i    (9)
where N is the number of sample points, A_i is the U. pertusa value of UAV sample point i, and F_i is the U. pertusa value of satellite sample point i.
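The two comparison statistics are equally direct to compute; the sketch below assumes matched, nonzero UAV values at each sample point (the function name is ours).

```python
import numpy as np

def rmse_mrd(a, f):
    """RMSE and MRD between UAV-derived values `a` and satellite-derived
    values `f` at matched sample points (Equations (8) and (9))."""
    a, f = np.asarray(a, dtype=float), np.asarray(f, dtype=float)
    rmse = np.sqrt(np.mean((a - f) ** 2))
    mrd = 100.0 * np.mean(np.abs(a - f) / a)  # in percent; assumes a != 0
    return rmse, mrd
```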

4. Results and Discussion

4.1. Ulva pertusa Extraction Performance from the UAV Images

To assess the effectiveness of the proposed U. pertusa extraction method on high-resolution UAV imagery, a comparative analysis was conducted against threshold-based extraction methods using NGBDI and the R band on the test dataset. Table 2 summarizes the accuracy evaluation metrics for U. pertusa extraction with the U-Net model, NGBDI, and the R band threshold-based method. The U-Net-based method proposed in this study yields the highest accuracy (96.46%), precision (94.84%), and F1 score (0.92), with a recall of 92.42%. These results underscore the superior performance of the U-Net-based approach in U. pertusa extraction.
Five representative images were selected to demonstrate the effectiveness of U. pertusa extraction using the U-Net model, NGBDI, and the R band threshold-based method, as shown in Figure 4. The extraction results in Figure 4a,b show that the R band, NGBDI, and U-Net approaches all yield favorable outcomes. However, the extraction results using the R band and NGBDI deteriorate in the presence of high-intensity glare, as shown in Figure 4c. This degradation can be attributed primarily to the use of a single fixed threshold applied uniformly across the entire image. Furthermore, in low-light conditions with reduced exposure time, both the R band and NGBDI methods fail to accurately identify U. pertusa, producing significant salt-and-pepper noise in the extraction results. This analysis underscores the effectiveness of the U-Net-based method, which capitalizes on multi-scale texture features [38] and is remarkably robust to variations in lighting, including challenging low-light scenarios. Consequently, the proposed method can accurately extract U. pertusa from high-resolution UAV imagery even in complex environmental conditions.
This study employed the U-Net model to extract U. pertusa from the UAV images of the study area. Manual visual corrections refined the extraction results and determined the U. pertusa distribution, as illustrated in Figure 5. The areas of U. pertusa in UAV images #1-#3 were then quantified as 0.52 km², 0.22 km², and 0.39 km², respectively.

4.2. Ulva pertusa Extraction Performance from the Satellite Images

The U-Net model, as an effective image segmentation method, has been widely employed in satellite remote sensing. Figure 6 illustrates the classification results of the U-Net model for GF-1 and Landsat-8. Table 3 provides an overview of the accuracy evaluation metrics employed for U. pertusa extraction from satellite images, including the U-Net model, NDVI, and VB-FAH.
Figure 6 and Table 3 demonstrate that U-Net and VB-FAH achieve similar accuracy, and both outperform NDVI. Beyond its slight advantage in accuracy, the U-Net model offers a higher degree of automation, obviating the need for manual threshold selection. These results highlight the continued advantage of the U-Net-based approach for U. pertusa extraction from satellite images.
Table 4 presents the areas of U. pertusa extracted using the U-Net model from the satellite image regions corresponding to UAV images #1-#3: 0.70 km², 0.12 km², and 0.33 km², respectively. These areas differ significantly from those extracted from the UAV images, with relative errors of 36.08%, 44.69%, and 13.25%. The analysis suggests that the following factors contribute to the discrepancy between the satellite-based and UAV-based estimates of U. pertusa areas:
(1) The lower-resolution satellite images contain mixed pixels, which overestimate the U. pertusa areas in the regions corresponding to UAV images #1 and #3.
(2) U. pertusa, as a benthic macroalga, is sensitive to water depth. In the region corresponding to UAV image #2, the deeper water hinders the detection of U. pertusa with satellite remote sensing, resulting in a smaller area.
(3) Both index-based and U-Net model extractions provide only binary information on the presence or absence of U. pertusa (0 for non-U. pertusa pixels and 1 for U. pertusa pixels), without quantifying the U. pertusa content within each pixel.
To enhance the accuracy of U. pertusa extraction from satellite images, we employed the portion of macroalgae (POM) model proposed by Li et al. [23] and Meng et al. [30] to estimate the U. pertusa content in each Landsat-8 pixel. The results are illustrated in Figure 7.
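The POM model itself is specified in [23,30]; purely to illustrate the idea of sub-pixel coverage estimation, the sketch below shows a generic two-endmember linear unmixing of green-band reflectance. The endmember formulation, values, and function name are illustrative assumptions, not the published model's coefficients.

```python
import numpy as np

def pom_green(r_green, r_water, r_algae):
    """Two-endmember linear unmixing of green-band reflectance: each pixel
    is modeled as a mixture of a pure-water and a pure-macroalgae endmember,
    and POM is the macroalgae fraction. r_water and r_algae would be taken
    from pure pixels in the scene (illustrative, not the published model)."""
    pom = (r_green - r_water) / (r_algae - r_water)
    return np.clip(pom, 0.0, 1.0)  # fraction of macroalgae per pixel
```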
However, due to incomplete temporal synchronization and inherent geometric registration errors between the UAV and satellite imagery, there is no perfect point-to-point correspondence between the data. To address this, a 3 × 3 mean filter was applied to reduce noise and smooth the data, and scatter plots and statistical analysis were then used to examine the POM model, as shown in Figure 8.
As shown in Figure 8, the simulated POM from Landsat-8 and the POM calculated from the UAV data show a strong linear correlation, with an RMSE of 0.15 and an MRD of 25.01%. The simulated POM estimates the coverage area of U. pertusa in the Landsat-8 region corresponding to UAV image #1 as 0.49 km², a 6% error relative to the actual area. Although the macroalgae coverage map in Figure 8b slightly underestimates U. pertusa coverage for some pixels near the coastline, it provides a good representation of the overall macroalgae distribution and can be used to estimate the coverage area of U. pertusa in this region.

4.3. Discussion

4.3.1. Strengths and Weaknesses of U. pertusa Extraction Based on UAVs and the U-Net Model

On the basis of deep learning techniques, accurate information about U. pertusa was successfully extracted from high-resolution UAV RGB images of nearshore waters. This method of extracting the U. pertusa green tide based on UAVs and deep learning has two important advantages.
(1) Using high-resolution RGB digital cameras, UAVs can achieve centimeter-level resolution, which significantly reduces the influence of factors such as illumination, water color, and waves on the extraction accuracy of the U. pertusa green tide, compared with the GF-1 and Landsat-8 images with 16 m and 30 m resolution, respectively. This enables more refined monitoring of the U. pertusa distribution.
(2) The deep learning U-Net model is data-driven and has strong learning ability. By fully utilizing the spectral and texture information in remote sensing images, the U-Net model performs excellently in complex environments. On the UAV images, the accuracy, precision, and F1 score are improved by 10%, 19%, and 11%, respectively, relative to the R band threshold method. On the satellite images, the accuracy is comparable to VB-FAH, and relative to NDVI, the accuracy, precision, recall, and F1 score are improved by 2%, 4%, 5%, and 4%, respectively. In addition, the U-Net model has an absolute advantage in automatic extraction. With the support of a large number of high-resolution UAV images, it is a reliable solution for extracting U. pertusa green tide information.
Despite the advantages of using UAVs and deep learning models for monitoring the U. pertusa green tide, there are also challenges and limitations. One is that the U-Net model lacks explainability; that is, it is difficult to understand how the model makes its decisions and what features it learns from the images. Another is that the model requires a large amount of high-quality, accurately annotated data for training, which is key to improving its accuracy. Moreover, compared with index threshold methods, the deep learning approach is more complex to operate and needs certain computer hardware resources for model training, which can increase the cost and difficulty of implementation.

4.3.2. Improving the Accuracy of Monitoring U. pertusa in Satellite Remote Sensing Based on POM Analysis

Different from the quantitative sub-pixel estimation of floating macroalgae [9,22,39,40] or zooplankton [41] from satellite images, the estimation of the sub-pixel coverage of underwater macroalgae (U. pertusa) is affected by variations in both water constituents and water depth; thus, the POM estimated from satellite imagery carries uncertainty, which is why the POM calculated from the Landsat-8 image is not strictly consistent with the real coverage monitored with the UAV. However, UAV images, with resolution high enough to identify individual U. pertusa thalli, can provide accurate coverage of underwater U. pertusa. Moreover, UAV remote sensing can provide information when satellite images are unavailable due to limited revisit frequency or cloud cover. The POM model further improved the monitoring accuracy of U. pertusa in satellite images, providing data support for biomass and other research. However, applying the POM model to the GF-1 satellite data results in large errors, and its applicability to other satellites needs further research.

5. Conclusions

In this study, we developed an advanced approach for detecting Ulva pertusa (U. pertusa) green tides by integrating the U-Net model, a VGG16 backbone network, and transfer learning. The proposed method aims to extract submerged U. pertusa from high-resolution UAV images. The accuracy analysis on a validation dataset indicates that the U-Net-based method achieves an accuracy of 96.46%, precision of 94.84%, recall of 92.42%, and F1 score of 0.92 for high-resolution UAV images. This method also performs well in extracting U. pertusa from satellite images, achieving an accuracy of 85.11%, precision of 74.05%, recall of 96.44%, and F1 score of 0.83. This method enables accurate and automated extraction of U. pertusa under challenging conditions, such as varying lighting conditions and low-light environments, demonstrating robustness and feasibility for operational U. pertusa green tide monitoring.
A portion of macroalgae (POM) model based on green-band reflectance is proposed to estimate the subpixel coverage of green algae in Landsat-8 satellite images. In the validation images, the root mean square error (RMSE) of the POM model for U. pertusa is 0.15, with a mean relative difference (MRD) of 25.01%. When applied to the entire study area, the POM model reduces the MRD in U. pertusa area extraction from Landsat-8 imagery from 36.08% to 6%. In the remote sensing detection of submerged aquatic vegetation such as U. pertusa, high-resolution UAV images can detect smaller patches and resolve the mixed pixels of low-resolution satellite images, thus improving the estimation of the portion of macroalgae in low-resolution satellite image pixels.

Author Contributions

Conceptualization, Q.X. and J.L.; methodology, Q.X., J.L., H.L., Y.H. and M.M.; software, J.L. and H.L.; formal analysis, Q.X., J.L. and H.L.; resources, Q.X. and C.L.; writing—original draft preparation, Q.X., J.L., H.L. and Y.H.; writing—review and editing, Q.X., J.L., H.L. and C.L.; project administration, Q.X.; funding acquisition, Q.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received financial support from the National Natural Science Foundation of China (42076188), the Key R&D Program of Shandong Province, China (2022CXPT019), the Strategic Priority Research Program of the Chinese Academy of Sciences (Class A) (XDA19060203, XDA19060501), and the Instrument Developing Project of the Chinese Academy of Sciences (YJKYYQ20170048).

Data Availability Statement

In accordance with the requirements of a confidentiality agreement, the data used in this paper are not publicly available.

Acknowledgments

The authors are thankful to the anonymous reviewers for their useful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Smetacek, V.; Zingone, A. Green and golden seaweed tides on the rise. Nature 2013, 504, 84–88. [Google Scholar] [CrossRef] [PubMed]
  2. Gower, J.; King, S. Seaweed, seaweed everywhere. Science 2019, 365, 27. [Google Scholar] [CrossRef] [PubMed]
  3. Duarte, C.M.; Bruhn, A.; Krause-Jensen, D. A seaweed aquaculture imperative to meet global sustainability targets. Nat. Sustain. 2022, 5, 185–193. [Google Scholar] [CrossRef]
  4. Xing, Q.G.; An, D.Y.; Zheng, X.Y.; Wei, Z.N.; Wang, X.H.; Li, L.; Tian, L.Q.; Chen, J. Monitoring seaweed aquaculture in the Yellow Sea with multiple sensors for managing the disaster of macroalgal blooms. Remote Sens. Environ. 2019, 231, 111279. [Google Scholar] [CrossRef]
  5. Hu, C.M.; Qi, L.; Hu, L.B.; Cui, T.W.; Xing, Q.G.; He, M.X.; Wang, N.; Xiao, Y.F.; Sun, D.Y.; Lu, Y.C.; et al. Mapping Ulva prolifera green tides from space: A revisit on algorithm design and data products. Int. J. Appl. Earth Obs. Geoinf. 2023, 116, 103173. [Google Scholar] [CrossRef]
  6. Xing, Q.G.; Hu, C.M. Mapping macroalgal blooms in the Yellow Sea and East China Sea using HJ-1 and Landsat data: Application of a virtual baseline reflectance height technique. Remote Sens. Environ. 2016, 178, 113–126. [Google Scholar] [CrossRef]
  7. Gower, J.; Hu, C.M.; Borstad, G.; King, S. Ocean color satellites show extensive lines of floating sargassum in the Gulf of Mexico. IEEE Trans. Geosci. Remote Sens. 2006, 44, 3619–3625. [Google Scholar] [CrossRef]
  8. Gower, J.; Young, E.; King, S. Satellite images suggest a new Sargassum source region in 2011. Remote Sens. Lett. 2013, 4, 764–773. [Google Scholar] [CrossRef]
  9. Wang, M.Q.; Hu, C.M.; Barnes, B.B.; Mitchum, G.; Lapointe, B.; Montoya, J.P. The great Atlantic Sargassum belt. Science 2019, 365, 83–87. [Google Scholar] [CrossRef]
  10. Hu, C.M. A novel ocean color index to detect floating algae in the global oceans. Remote Sens. Environ. 2009, 113, 2118–2129. [Google Scholar] [CrossRef]
  11. Son, Y.B.; Choi, B.J.; Kim, Y.H.; Park, Y.G. Tracing floating green algae blooms in the Yellow Sea and the East China Sea using GOCI satellite data and Lagrangian transport simulations. Remote Sens. Environ. 2015, 156, 21–33. [Google Scholar] [CrossRef]
  12. Wang, Z.L.; Yuan, C.; Zhang, X.L.; Liu, Y.J.; Fu, M.; Xiao, J. Interannual variations of Sargassum blooms in the Yellow Sea and East China Sea during 2017–2021. Harmful Algae 2023, 126, 102451. [Google Scholar] [CrossRef] [PubMed]
  13. Qi, L.; Hu, C.M.; Wang, M.Q.; Shang, S.L.; Wilson, C. Floating algae blooms in the East China Sea. Geophys. Res. Lett. 2017, 44, 11501–11509. [Google Scholar] [CrossRef]
  14. Lapointe, B.E.; Barile, P.J.; Matzie, W.R. Anthropogenic nutrient enrichment of seagrass and coral reef communities in the Lower Florida Keys: Discrimination of local versus regional nitrogen sources. J. Exp. Mar. Biol. Ecol. 2004, 308, 23–58. [Google Scholar] [CrossRef]
  15. Bohorquez, J.; Papaspyrou, S.; Yufera, M.; van Bergeijk, S.A.; Garcia-Robledo, E.; Jimenez-Arias, J.L.; Bright, M.; Corzo, A. Effects of green macroalgal blooms on the meiofauna community structure in the Bay of Cadiz. Mar. Pollut. Bull. 2013, 70, 10–17. [Google Scholar] [CrossRef]
  16. Han, H.B.; Song, W.; Wang, Z.L.; Ding, D.W.; Yuan, C.; Zhang, X.L.; Lie, Y. Distribution of green algae micro-propagules and their function in the formation of the green tides in the coast of Qinhuangdao, the Bohai Sea, China. Acta Oceanol. Sin. 2019, 38, 72–77. [Google Scholar] [CrossRef]
  17. Schreyers, L.; van Emmerik, T.; Biermann, L.; Le Lay, Y.F. Spotting green tides over brittany from space: Three decades of monitoring with Landsat imagery. Remote Sens. 2021, 13, 1408. [Google Scholar] [CrossRef]
  18. Fraiola, K.M.S.; Miura, T.; Martinez, J.; Lopes, K.H.; Amidon, F.; Torres-Pérez, J.; Spalding, H.L.; Williams, T.; So, K.; Sachs, E.; et al. Using commercial high-resolution satellite imagery to monitor a nuisance macroalga in the largest marine protected area in the USA. Coral Reefs 2023, 42, 253–259. [Google Scholar] [CrossRef]
  19. Jiang, X.W.; Liu, J.Q.; Zou, B.; Wang, Q.M.; Zeng, T.; Guo, M.H.; Zhu, H.T.; Zou, Y.R.; Tang, J.W. The satellite remote sensing system used in emergency response monitoring for Enteromorpha prolifera disaster and its application. Acta Oceanol. Sin. 2009, 31, 52–64. [Google Scholar]
  20. Shen, H.; Perrie, W.; Liu, Q.R.; He, Y.J. Detection of macroalgae blooms by complex SAR imagery. Mar. Pollut. Bull. 2014, 78, 190–195. [Google Scholar] [CrossRef]
  21. Song, Q.J.; Ma, C.F.; Liu, J.Q.; Wei, H.Y. Quantifying ocean surface green tides using high-spatial resolution thermal images. Opt. Express 2022, 30, 36592–36602. [Google Scholar] [CrossRef] [PubMed]
  22. Cui, T.W.; Zhang, J.; Sun, L.E.; Jia, Y.J.; Zhao, W.J.; Wang, Z.L.; Meng, J.M. Satellite monitoring of massive green macroalgae bloom (GMB): Imaging ability comparison of multi-source data and drifting velocity estimation. Int. J. Remote Sens. 2012, 33, 5513–5527. [Google Scholar] [CrossRef]
  23. Li, L.; Zheng, X.Y.; Wei, Z.N.; Zou, J.Q.; Xing, Q.G. A spectral-mixing model for estimating sub-pixel coverage of sea-surface floating macroalgae. Atmos.-Ocean 2018, 56, 296–302. [Google Scholar] [CrossRef]
  24. Wang, M.Q.; Hu, C.M. Mapping and quantifying Sargassum distribution and coverage in the Central West Atlantic using MODIS observations. Remote Sens. Environ. 2016, 183, 350–367. [Google Scholar] [CrossRef]
  25. Olmedo-Masat, O.M.; Raffo, M.P.; Rodríguez-Pérez, D.; Arijón, M.; Sánchez-Carnero, N. How far can we classify macroalgae remotely? An example using a new spectral library of species from the South West Atlantic (Argentine Patagonia). Remote Sens. 2020, 12, 3870. [Google Scholar] [CrossRef]
  26. Kutser, T.; Hedley, J.; Giardino, C.; Roelfsema, C.; Brando, V.E. Remote sensing of shallow waters–A 50 year retrospective and future directions. Remote Sens. Environ. 2020, 240, 111619. [Google Scholar] [CrossRef]
  27. Vahtmäe, E.; Kotta, J.; Lõugas, L.; Kutser, T. Mapping spatial distribution, percent cover and biomass of benthic vegetation in optically complex coastal waters using hyperspectral CASI and multispectral Sentinel-2 sensors. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102444. [Google Scholar] [CrossRef]
  28. Wang, M.Q.; Hu, C.M. Satellite remote sensing of pelagic Sargassum macroalgae: The power of high resolution and deep learning. Remote Sens. Environ. 2021, 264, 112631. [Google Scholar] [CrossRef]
  29. Liu, Y.; Liu, T.; Yu, D.; Zhang, J.; Wang, X.M.; Gong, Q.L. Biological characteristics and molecular systematics studies on common green algae of Ulvaceae. Period. Ocean. Univ. China 2010, 40, 71–80. [Google Scholar]
  30. Meng, M.M.; Zheng, X.Y.; Xing, Q.G.; Liu, H.L. Remote sensing estimation of green macroalgae Ulva pertusa based on unmanned aerial vehicle and satellite image. J. Trop. Oceanogr. 2022, 41, 46–53. [Google Scholar]
  31. Lee, J.; Sung, S. Evaluating spatial resolution for quality assurance of UAV images. Spat. Inf. Res. 2016, 24, 141–154. [Google Scholar] [CrossRef]
  32. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  33. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  34. Iglovikov, V.; Mushinskiy, S.; Osin, V. Satellite imagery feature detection using deep convolutional neural network: A kaggle competition. arXiv 2017, arXiv:1706.06169. [Google Scholar]
  35. Iglovikov, V.; Shvets, A. TernausNet: U-Net with VGG11 encoder pre-trained on imagenet for image segmentation. arXiv 2018, arXiv:1801.05746. [Google Scholar]
  36. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. arXiv 2015, arXiv:1412.6980. [Google Scholar]
  37. Zhang, X.L.; Zhang, F.; Qi, Y.X.; Deng, L.F.; Wang, X.L.; Yang, S.T. New research methods for vegetation information extraction based on visible light remote sensing images from an unmanned aerial vehicle (UAV). Int. J. Appl. Earth Obs. Geoinf. 2019, 78, 215–226. [Google Scholar] [CrossRef]
  38. Cheng, G.; Han, J. A survey on object detection in optical remote sensing images. ISPRS J. Photogramm. Remote Sens. 2016, 117, 11–28. [Google Scholar] [CrossRef]
  39. Cui, T.W.; Liang, X.J.; Gong, J.L.; Tong, C.; Xiao, Y.F.; Liu, R.J.; Zhang, X.; Zhang, J. Assessing and refining the satellite-derived massive green macro-algal coverage in the Yellow Sea with high resolution images. ISPRS J. Photogramm. Remote Sens. 2018, 144, 315–324. [Google Scholar] [CrossRef]
  40. Lu, T.; Lu, Y.; Hu, L.B.; Jiao, J.N.; Zhang, M.W.; Liu, Y.X. Uncertainty in the optical remote estimation of the biomass of Ulva prolifera macroalgae using MODIS imagery in the Yellow Sea. Opt. Express 2019, 27, 18620–18627. [Google Scholar] [CrossRef]
  41. Tian, L.Q.; Tian, J.Y.; Wang, J.R.; Wang, X.; Li, W. A novel remote sensing index for brine shrimp (Artemia) slick detection in salt lakes. Remote Sens. Environ. 2023, 286, 113428. [Google Scholar] [CrossRef]
Figure 1. (a,b) The study area overview and location. (c) UAV and satellite image: the arrow indicates the UAV image labeled #1 (refer to Table 1), with a Landsat-8 OLI image as the base map. (d) UAV and satellite image: the arrows indicate the UAV images labeled #2 and #3 (refer to Table 1), with a GF-1 image as the base map. (e–g) In situ photos showing the macroalga U. pertusa. #1–#3 indicate three UAV image strips.
Figure 2. Workflow for the automatic extraction and analysis of U. pertusa using images from both the satellite (Landsat-8 and GF-1) and the UAV.
Figure 3. The topology of the U-Net for U. pertusa extraction. It incorporates a contracting path on the left side and an expansive path on the right side. The Rectified Linear Unit (ReLU) is employed as the primary activation function throughout the model. To determine the segmentation results, the sigmoid activation function is specifically chosen for the final output layer.
Figure 4. Comparison of U. pertusa extraction results from the R band, NGBDI, and the U-Net model applied to UAV images. (a–e) depict five test images, with (d) representing a low-light image. A 5% linear stretch enhances the visibility of the U. pertusa distribution in image (d), producing image (e).
Figure 5. U. pertusa extraction from UAV imagery in the study area. (a) The arrow indicates the extraction results of U. pertusa from UAV image labeled as #1, with the Landsat-8 image as the base map. (b) The arrows indicate the extraction results of U. pertusa from the UAV images labeled as #2 and #3, with the GF-1 image as the base map.
Figure 6. The results of U. pertusa extraction from GF-1 and Landsat-8 images. (a) displays the results of U. pertusa extraction from Landsat-8 images using NDVI, VB-FAH, and U-Net. (b) displays the results of U. pertusa extraction from GF-1 images using NDVI, VB-FAH, and U-Net.
Figure 7. (a) The UAV image (labeled #1). (b) The Landsat-8 image. (c) The POM derived from the UAV macroalgae map. (d) The simulated POM distribution map of Landsat-8.
Figure 8. (a) Binned scatter plot of the UAV and Landsat-8 POM distributions; the number of bins along the x-axis and y-axis was 50, and N represents the number of sample points. (b) The same binned scatter plot after applying a 3 × 3 mean filter.
Table 1. Remote sensing data information.

| Sensor/Source | Data Level | No. | Capture Date | Time (UTC+8) | Spatial Resolution |
|---|---|---|---|---|---|
| Landsat-8 OLI | L2 | - | 24 October 2020 | 10:49 | 30 m |
| GF-1 WFV | L1A | - | 11 November 2020 | 11:08 | 16 m |
| DJI Mavic 2 FC2220 | - | #1 | 24 October 2020 | 11:08 | 0.17 m |
| DJI Mavic 2 FC2220 | - | #2 | 11 November 2020 | 10:52 | 0.10 m |
| DJI Mavic 2 FC2220 | - | #3 | 11 November 2020 | 11:23 | 0.12 m |
Table 2. The accuracy of Ulva pertusa (U. pertusa) extraction on unmanned aerial vehicle (UAV) images using different methods.

| Method | Accuracy (%) | Precision (%) | Recall (%) | F1-Score |
|---|---|---|---|---|
| R | 87.38 | 79.76 | 93.27 | 0.83 |
| NGBDI | 57.97 | 64.11 | 55.55 | 0.54 |
| U-Net | 96.46 | 94.84 | 92.42 | 0.92 |
Table 3. The accuracy of U. pertusa extraction on satellite images using different methods.

| Method | Accuracy (%) | Precision (%) | Recall (%) | F1-Score |
|---|---|---|---|---|
| NDVI | 82.54 | 71.42 | 90.80 | 0.80 |
| VB-FAH | 85.05 | 74.59 | 92.96 | 0.83 |
| U-Net | 85.11 | 74.05 | 96.44 | 0.83 |
Table 4. Areas of U. pertusa extracted from satellite images in the regions corresponding to UAV images (labeled #1–#3).

| Method | Region #1 (Landsat-8) | Region #2 (GF-1) | Region #3 (GF-1) |
|---|---|---|---|
| NDVI | 0.62 km² | 0.16 km² | 0.40 km² |
| VB-FAH | 0.59 km² | 0.14 km² | 0.37 km² |
| U-Net | 0.70 km² | 0.12 km² | 0.33 km² |
