Article

Double-Exposure Algorithm: A Powerful Approach to Address the Accuracy Issues of Fractional Vegetation Extraction under Shadow Conditions

College of Geoscience and Surveying Engineering, China University of Mining and Technology, Beijing 100083, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(17), 7719; https://doi.org/10.3390/app14177719
Submission received: 5 July 2024 / Revised: 26 August 2024 / Accepted: 29 August 2024 / Published: 1 September 2024


Featured Application

Accuracy verification of large-scale remote sensing fractional vegetation cover products; precision agriculture; ecological research

Abstract

When recording the vegetation distribution with a camera, shadows can form due to factors like camera angle and direct sunlight. These shadows result in the loss of pixel information and texture details, significantly reducing the accuracy of fractional vegetation coverage (FVC) extraction. To address this issue, this study proposes an efficient double-exposure algorithm. The method reconstructs the pixel information in shadow areas by fusing normal-exposure and overexposed images. This approach overcomes the limitations of the camera’s dynamic range in capturing pixel information in shadowed regions. The study evaluates images with five levels of overexposure combined with five vegetation extraction indices. The aim is to determine the best-performing double-exposure combination under shadow conditions and the most suitable vegetation index. Experimental results reveal that the R² value between the FVC calculated with the best vegetation index from the fused double-exposure images and the ground truth FVC increases from 0.750 to 0.969. The root mean square error (RMSE) decreases from 0.146 to 0.046, and the intersection over union (IOU) increases from 0.856 to 0.943. These results demonstrate the excellent vegetation extraction capability of the double-exposure algorithm under shadow conditions, offering a straightforward and effective solution to the low accuracy of FVC in shadowed areas.

1. Introduction

Vegetation is a crucial part of natural ecosystems and an indispensable natural resource [1]. The root system of vegetation is closely related to the soil, and vegetation branches, stems, leaves, etc., are closely related to soil erosion [2]. Photosynthesis, the conversion of inorganic matter into organic matter, is a crucial process in maintaining the Earth’s carbon–oxygen balance [3]. Transpiration amplifies the circulation of water, which is a vital element of life in ecosystems [4]. Therefore, changes in vegetation type and quality can notably affect the cycles and structure of the whole ecosystem. The fractional vegetation cover (FVC), a critical parameter characterizing vegetation features, is important for studies in ecology, hydrology, meteorology, and other fields. It is often used as fundamental data for research on regional or global issues [5,6]. Consequently, investigating the FVC is essential.
The FVC is usually defined as the vertical projection of a vegetation canopy area to the ground and can be expressed as a fraction or percentage of the reference area [7]. Within the context of remote sensing, the FVC can be defined as the percentage of the green vegetation area relative to the total observed area [8]. The FVC is an important phenotypic factor in agriculture, forestry, and ecology and an important indicator for studying changes in each sphere of the ecosphere and their interactions. Moreover, the FVC is a key biophysical parameter for capturing horizontal exchange at the surface-atmosphere boundary using soil, vegetation, and atmospheric transfer models [9,10,11].
Considering that the FVC is a vital indicator, it is important to measure it at different scales for different applications [12]. For example, remote sensing techniques have provided an important data source for large-scale FVC studies for decades owing to their observation vantage and the ease of efficient, large-scale monitoring [13,14]. The first commonly used method is the vegetation index-based method. It aims to extract the FVC with the help of various vegetation indices, such as the normalized difference vegetation index (NDVI), perpendicular vegetation index (PVI), soil adjusted vegetation index (SAVI), modified soil adjusted vegetation index (MSAVI), and transformed soil adjusted vegetation index (TSAVI) [15,16]. Through the different vegetation and non-vegetation characteristics determined by these vegetation indices, thresholds obtained from various algorithms can be used to distinguish the vegetation cover from the background. Two aspects must be investigated in this method: one is the determination of a suitable vegetation index for different application scenarios, and the other is the establishment of a suitable algorithm to obtain a fixed or self-adaptive threshold for vegetation extraction. Using the vegetation index-based method, Bac et al. produced a map of the extent of mangroves in the Philippines using Google Earth MVI [17]. Another commonly used technique is the regression method. This method involves first establishing a mathematical regression model based on several data sources and the FVC over a relatively small range of sample points and then generalizing the results to the entire study area [18]. Currently, with the rapid development of computer technology, machine learning [19,20,21,22] and deep learning [23,24,25] methods are emerging.
Large-scale remote sensing FVC measurements require more accurate ground data at smaller scales as validation data, which can be obtained from traditional small-scale FVC measurements [26]. The validation dataset strives to meet the following objectives: (1) the instrument should be economical, practical, and easy to operate; (2) the instrument should capture valid, accurate, and objective ground observations; (3) the measurement duration should be short; and (4) the influence of human factors should be minimized [27,28]. The most traditional on-site FVC measurement method is the visual interpretation method, which can be divided into the direct visual estimation method [29] and the photo visual estimation method [30]. However, the visual estimation method is limited by subjectivity, an extreme dependence on observer experience, and a tendency to produce large errors. Therefore, various sampling methods, such as the point measurement method [31], sample strip method [1], shade method [32], and square viewpoint frame method [33], have been proposed for use in this area. The point measurement method entails the use of long, sharp needles stuck vertically in the sample plot, and the FVC is determined using the ratio of the number of needles passing through vegetation to the total number of needles. The shade method (ruler measurement) utilizes the relationship between a given scale and the dark shadows produced by vegetation to measure the FVC. In contrast, the square viewpoint frame method uses the gaps in the small holes of the frame to obtain a sample within the study area. The quadrat method reduces the workload while increasing the observation accuracy owing to the application of statistical principles. Relevant measurement instruments, such as spatial quantitative meters and moving light meters, have also been developed to obtain FVC data. With the rapid development of large-scale integrated circuits, photography is no longer the expensive, complex, and time-consuming option it was in the film era; digital photography and digital image processing techniques have grown quickly [34]. Digital cameras provide instant shooting results, greatly improve intuitiveness, and reduce the learning costs of workflows; methods for extracting FVC products from digital images have been proposed and refined owing to their efficiency and convenience [35]. Melville et al. used a portable photographic instrument to measure the FVC in the field [36]. Liu et al. also proposed an FVC extraction algorithm based on the LAB color space [20].
However, photography methods in actual shooting face certain challenges. Shadows often create extreme illumination contrasts, causing substantial luminance differences within a single image scene. A study revealed that the reflectance of shaded leaves is lower than that of sunlit leaves at all wavelengths [37]. The presence of shadows could disrupt the accurate recording of pixel information. When the field scene is not covered by a hood or similar structure, shadows are inevitable [38]. In many cases, shadows constitute a high-incidence area where commission and omission errors occur during FVC extraction. As a consequence, the FVC extraction capability in the shadow region must be addressed to improve the segmentation performance. To address the issue of low FVC accuracy based on photography methods under shadow conditions, several studies have proposed solutions from different perspectives. There are two primary approaches: first, specific techniques can be employed during data acquisition to compensate for the limitations due to shadows impeding the accurate recording of vegetation information [39,40]; second, algorithms have been designed for the vegetation extraction process to minimize the impact of shadows. In numerous studies, the characteristics of high-dynamic range (HDR) imaging have been utilized to effectively expand the dynamic range of photos. This technique allows for better capture of vegetation information and is often combined with thresholding methods or deep learning techniques to calculate FVC [41,42,43,44,45]. Yang et al. proposed acquiring polarization images in the shooting process to complement and enhance pixel information that cannot be effectively captured under shadows [46]. From a postprocessing perspective, Song et al. introduced the SHAR-LABFVC shadow-resistant algorithm, which alters the color space distribution of captured images to mitigate the adverse effect of shadows on the calculation of FVC [47].
Although various methods have been proposed to address the issue of low FVC accuracy under shadow conditions, there are still areas that need improvement. First, the computational efficiency of these methods still needs to be enhanced. Methods using deep learning and machine learning techniques are extremely time-consuming. Moreover, the accuracy of these methods is closely tied to the quality of data samples, which means that they are significantly influenced by human factors. Second, there is a lack of solutions for FVC calculation under complex shadow conditions. Previous algorithms such as SHAR-LABFVC, methods based on HDR images, and algorithms incorporating polarized images essentially aim to supplement pixel information in shadowed areas to improve FVC accuracy. These studies all indicate that reconstructing shadow pixel information is an effective and feasible solution. However, they have not addressed complex shadow situations more specifically. For instance, the SHAR-LABFVC algorithm enhances shadow pixels based on the overall light distribution of the entire image.
To address the issue of shadows affecting the accurate recording of pixel information and subsequently reducing the FVC accuracy, a simple and effective solution is proposed in this study. The approach is based on a pixel-level classification strategy using a double-exposure algorithm with the ability to process images with different shadow levels. The double-exposure fusion data are derived from a normal-exposure image and a specific overexposed image. By increasing the exposure compensation during shooting, shadow regions are enhanced to show more vegetation details. However, sunlit vegetation may be degraded by overexposure. Therefore, a fusion operation is performed between the shadow pixels of the normal-exposure image and the corresponding pixels of the overexposed image. This process yields a fused double-exposure image containing a strengthened shadow region and a normal sunlit region. In the research process, overexposed images at multiple exposure levels were captured. Various vegetation indices were assessed to identify the most suitable candidates for the double-exposure method and the optimal overexposed images under shadow conditions. The remainder of this article is divided into four sections (materials and methods, experimental results and analysis, discussion, and conclusion) that elaborate on the double-exposure method.

2. Materials and Methods

2.1. Data Acquisition

The camera used in this study is a Pentax K-S2 digital camera (Ricoh, Tokyo, Japan). It is equipped with a CMOS sensor measuring 23.5 × 15.6 mm² and provides a resolution of 20.12 million effective pixels. The exposure compensation function of this camera allows adjustment within the −5 ev to +5 ev range. During data collection, we positioned the camera vertically approximately 1 m above the ground to capture vegetation images. Throughout the shooting process, the ISO and focal length were fixed at 200 and 18 mm, respectively. Camera exposure compensation relies on automatic calculation and adjustment of exposure time and aperture to achieve the desired image exposure levels. To capture images with varying exposure levels, we adjusted the exposure compensation parameter values during the shooting process. The exposure compensation parameter ranged from 0 to +5 ev, with images captured at each gear, resulting in multiple sets of data containing normal exposure and five levels of overexposure. A schematic of the data capture process is shown in Figure 1. Data were captured in March 2023, yielding a total of 14 sets of experimental data. The image size is 5472 × 3648 pixels, and the captured vegetation type is green herbaceous plants.
Figure 2 shows one of the experimental datasets, consisting of exposure data at various exposure compensation levels: 0 ev, +1 ev, +2 ev, +3 ev, +4 ev, and +5 ev. Notably, overexposure not only increases the image brightness but also results in varying pixel restoration capabilities for shadow areas at different overexposure levels. Compared to the normal-exposure image, the textural information of vegetation and non-vegetation areas within the red box is more distinct and accurate. However, at the same time, positive exposure compensation exerts a detrimental effect on pixels in nonshadow areas. The sharp increase in brightness causes varying degrees of damage to these pixels, resulting in blurred texture information, indistinct vegetation boundaries, and color distortion. In addition, to facilitate better comparison and analysis of the experimental results, the dataset was expanded by cropping: the images of the 14 sets of data were cropped to 512 × 512 pixels using a stride of 512 pixels, as illustrated by the sketch below. The cropped results were then selected and filtered, leading to a final collection of 414 images of shaded green vegetation.
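As an illustration of this cropping step, the following minimal sketch tiles an image into non-overlapping 512 × 512 patches. It assumes images are already loaded as NumPy arrays; the subsequent manual selection and filtering of shaded-vegetation tiles is not shown.

```python
import numpy as np

def crop_tiles(image: np.ndarray, tile: int = 512, stride: int = 512) -> list[np.ndarray]:
    """Split an H x W x 3 image into non-overlapping tile x tile patches."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patches.append(image[y:y + tile, x:x + tile].copy())
    return patches

# Example: a 5472 x 3648 image yields 10 x 7 = 70 full tiles before filtering.
dummy = np.zeros((3648, 5472, 3), dtype=np.uint8)
print(len(crop_tiles(dummy)))  # 70
```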

2.2. Methods

The experimental flowchart of this study is shown in Figure 3, which primarily encompasses three sections: data preprocessing, vegetation extraction, and method evaluation. Regarding the vegetation images captured under normal and overexposure conditions, the data preprocessing module utilizes a custom fusion algorithm to generate fused images. This approach significantly extends the dynamic range of the fused images and enriches the pixel information in shadow areas. As a result, vegetation and other features in the captured area can be more accurately represented in digital images. Subsequently, based on the values of five vegetation indices and the Otsu thresholding method, image-based vegetation classification is achieved. The FVC can be obtained from the classification results. The strategy of designing five vegetation extraction methods aims to find the best-fitting vegetation extraction approach for the double-exposure method, thereby maximizing the performance of the double-exposure fusion data product. In method evaluation, the actual impact of the double-exposure method on the FVC accuracy under shadow conditions is assessed. Factors considered include the intensity of exposure compensation, the vegetation extraction method, and the proportion of shadows. The optimal double-exposure combination suitable for different degrees of shadow issues was evaluated.

2.2.1. Data Preprocessing

To obtain the intensity value of each pixel, the image must be converted from the RGB color space to the HIS color space. The detailed conversion method can be expressed as follows:
i = (R + G + B) / 3
θ = cos⁻¹{ ½[(R − G) + (R − B)] / [(R − G)² + (R − B)(G − B)]^(1/2) }
h = θ, if B ≤ G; h = 360° − θ, if B > G
s = 1 − min(R, G, B) / i
where i, h, and s represent the intensity, hue, and saturation, respectively, and R, G, and B denote the red, green, and blue component values of each pixel, respectively.
Based on the intensity component values of each pixel, a threshold of 0.2 is used to distinguish between shadow and nonshadow regions, and the distribution range of shadows can be calculated. For shadow area pixels, a fusion algorithm is employed to combine the corresponding pixel values obtained under normal and overexposure conditions. The fusion algorithm determines the weight of the overexposed pixel values in the fusion process based on the intensity values. The strategy of adaptively adjusting fusion weights can better handle images with varying degrees of shadow. This results in reconstructed fusion pixel values with recovered information, thereby generating a shadow-free fused image.
DN_d(x, y) = DN_n(x, y) + DN_o(x, y) × (0.2 − i) / 0.2
The DN values of the shadow pixels at the spatial position (x, y) in the double-exposure, normal-exposure, and overexposure images are denoted as DN_d(x, y), DN_n(x, y), and DN_o(x, y), respectively. In addition, i denotes the intensity value of the shadow pixels. The fusion effect of the algorithm is shown in Figure 4. The pixel categories in the red box of the fused image also become more distinct because the introduction of pixel information under overexposed conditions allows for the recovery of pixel information in shadow areas. As a result, the boundary information between vegetation and non-vegetation areas is also clearer. However, excessive fusion of overexposed information can affect the accuracy of pixels in weak shadow areas. These pixels become whitish, making it difficult to accurately describe the color information of ground objects. In contrast, incorporating too little overexposure information could lead to failure to fully reconstruct and recover pixel information in intense shadow areas. Therefore, the most suitable fusion approach should be determined based on the actual shadow conditions.
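A minimal sketch of this preprocessing step is given below, assuming 8-bit RGB inputs and an intensity component scaled to [0, 1]; the clipping of fused values to the valid 8-bit range is an implementation assumption not stated in the text.

```python
import numpy as np

def intensity(rgb: np.ndarray) -> np.ndarray:
    """HSI intensity i = (R + G + B) / 3, scaled to [0, 1] for an 8-bit image."""
    return rgb.astype(np.float64).mean(axis=2) / 255.0

def fuse_double_exposure(normal: np.ndarray, over: np.ndarray,
                         thr: float = 0.2) -> np.ndarray:
    """Fuse a normal-exposure and an overexposed image over the shadow region.

    Shadow pixels (i < thr) are reconstructed as
        DN_d = DN_n + DN_o * (thr - i) / thr,
    so darker pixels receive a larger share of the overexposed information;
    non-shadow pixels are left unchanged.
    """
    i = intensity(normal)
    shadow = i < thr
    weight = ((thr - i) / thr)[..., None]          # per-pixel fusion weight
    fused = normal.astype(np.float64) + over.astype(np.float64) * weight
    out = normal.astype(np.float64)
    out[shadow] = fused[shadow]
    return np.clip(out, 0, 255).astype(np.uint8)   # assumption: clamp to the 8-bit range
```

In practice, the overexposed input would be one of the over1 to over5 images described above, selected according to the shadow level of the scene.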

2.2.2. Vegetation Extraction

This study chose to employ a threshold classification method based on vegetation indices, achieving fully automated pixel-level vegetation extraction from vegetation images. This method is an unsupervised approach that can be used directly without relying on manual sample labeling. It features high computational efficiency, fast processing speed, and strong robustness, allowing it to adapt well to different types of vegetation images. Moreover, it does not require additional training data, making it particularly advantageous for rapid FVC calculation in practical scenarios. In addition to this method, machine learning methods such as random forests, support vector machines, and deep learning have also been applied to calculate FVC. However, all of these methods require labeled data to construct datasets for model training, and they need a substantial number of samples to support the development of the final model for vegetation extraction in images. This process consumes a considerable amount of time and computational resources. Consequently, this strategy was ultimately selected to complete the vegetation extraction task.
Four vegetation indices based on the RGB color space and the hue component (HUE) of the HIS color space were adopted as the basis for vegetation extraction. The Otsu thresholding method was used to determine the optimal threshold segmentation point, thereby separating the vegetation and non-vegetation parts of the image data. The four vegetation indices used are the excess green index (EXG), color index of vegetation extraction (CIVE), excess red index (EXR), and excess green minus excess red index (EXGR). These indices can be calculated by linearly combining the red component value (R), green component value (G), and blue component value (B) of each pixel in the image as follows:
EXG = 2G − R − B
EXR = 1.4R − G
EXGR = EXG − EXR
CIVE = 0.441R − 0.811G + 0.385B + 18.78745
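The sketch below computes the four RGB indices and the HUE component for an 8-bit image; the normalization of R, G, and B to [0, 1] before applying the formulas is an assumption, since the text does not state the value range used.

```python
import numpy as np

def vegetation_indices(rgb: np.ndarray) -> dict[str, np.ndarray]:
    """Per-pixel EXG, EXR, EXGR, CIVE, and HUE for an 8-bit RGB image."""
    r, g, b = [rgb[..., k].astype(np.float64) / 255.0 for k in range(3)]
    exg = 2.0 * g - r - b
    exr = 1.4 * r - g
    exgr = exg - exr
    cive = 0.441 * r - 0.811 * g + 0.385 * b + 18.78745
    # Hue of the HIS color space (degrees), following the conversion formulas above.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b <= g, theta, 360.0 - theta)
    return {"EXG": exg, "EXR": exr, "EXGR": exgr, "CIVE": cive, "HUE": hue}
```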
Otsu thresholding is a widely used adaptive threshold segmentation algorithm designed for image binary classification problems. The algorithm aims to find a threshold value, denoted as t, that maximizes the between-class variance σ². This approach maximizes the difference between the foreground and background, achieving optimal classification results. Ultimately, it generates a binary image that reflects the specific distribution of the two classes. The between-class variance can be calculated as follows:
σ²(t) = P_a(t) P_b(t) [μ_a(t) − μ_b(t)]²
where P_a and P_b denote the proportions of the class a and b distributions, respectively, at a threshold of t, and μ_a and μ_b are the pixel means corresponding to classes a and b, respectively.
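A compact implementation of this search is sketched below; it scans histogram bins of a vegetation-index image and returns the threshold that maximizes the between-class variance. Equivalent routines exist in OpenCV and scikit-image; this explicit version is only meant to mirror the formula above.

```python
import numpy as np

def otsu_threshold(index_image: np.ndarray, bins: int = 256) -> float:
    """Return the threshold t that maximizes sigma^2(t) = P_a P_b (mu_a - mu_b)^2."""
    x = index_image.ravel().astype(np.float64)
    hist, edges = np.histogram(x, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()              # probability of each grey-level bin
    w_a = np.cumsum(p)                 # P_a(t): cumulative class-a proportion
    w_b = 1.0 - w_a                    # P_b(t)
    cum_mean = np.cumsum(p * centers)
    mu_total = cum_mean[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        mu_a = cum_mean / w_a
        mu_b = (mu_total - cum_mean) / w_b
        sigma2 = w_a * w_b * (mu_a - mu_b) ** 2
    sigma2 = np.nan_to_num(sigma2)
    return float(centers[np.argmax(sigma2)])

# Usage (hypothetical index image): vegetation mask by thresholding a vegetation index,
# with the comparison direction ("<" or ">") depending on the index used.
# veg_mask = index_image > otsu_threshold(index_image)
```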

2.2.3. Accuracy Evaluation

In this study, experimentally obtained vegetation classification binary images and FVC data were used to comprehensively evaluate and analyze the impact of the double-exposure algorithm on the accuracy of vegetation extraction from shadow images. This evaluation was conducted from both semantic segmentation and statistical analysis perspectives. The ground truth labels of the vegetation images in the experimental data were obtained through manual visual interpretation and annotation. This process allowed for the calculation of the true FVC for each image, providing a reliable baseline for our analysis. The confusion matrix consists of four parts: TP, FN, FP, and TN [48]. TP represents the portion where both the actual and predicted values are vegetation. FN represents the portion where the actual value is vegetation but is predicted as non-vegetation. FP represents the portion where the actual value is non-vegetation but is predicted as vegetation. TN represents the portion where both the actual and predicted values are non-vegetation. Based on the confusion matrix, the above indices can be calculated as follows:
recall = TP / (TP + FN)
accuracy = (TP + TN) / (TP + FP + FN + TN)
precision = TP / (TP + FP)
a = [(TP + FN)(TP + FP) + (FP + TN)(FN + TN)] / (TP + FN + FP + TN)²
Kappa = (accuracy − a) / (1 − a)
IOU = TP / (TP + FN + FP)
mIOU = ½ [TP / (TP + FN + FP) + TN / (TN + FP + FN)]
Recall is the proportion of vegetation instances correctly predicted as vegetation among all actual vegetation instances. Accuracy denotes the proportion of correct predictions overall. Precision is defined as the proportion of correctly predicted vegetation instances among all instances predicted as vegetation. In this study, the intersection over union (IOU) of the vegetation class denotes the ratio of the intersection to the union of the predicted and ground truth values. Both the kappa coefficient and the mean intersection over union (mIOU) are used to evaluate the overall classification performance of vegetation and non-vegetation classes. The closer the values of these six metrics are to 1, the higher the image classification performance and accuracy. To evaluate the accuracy of FVC prediction, a statistical analysis approach was adopted in this study. A univariate linear regression model was constructed to analyze the relationship between the experimental FVC values and the ground truth values obtained through manual visual interpretation. The regression equation, the R², and the root mean square error (RMSE) of the model were obtained. These metrics were utilized to assess the accuracy of the FVC extracted from shadow images using the double-exposure algorithm.
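The evaluation described above can be sketched as follows. The segmentation metrics follow the definitions in this subsection, while the regression statistics (R², bias, RMSE) use an ordinary least-squares fit as an assumed implementation detail.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict[str, float]:
    """Pixel-level metrics from the confusion matrix (1 = vegetation, 0 = non-vegetation)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = float(np.sum(pred & truth))
    fn = float(np.sum(~pred & truth))
    fp = float(np.sum(pred & ~truth))
    tn = float(np.sum(~pred & ~truth))
    n = tp + fn + fp + tn
    accuracy = (tp + tn) / n
    a = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2   # expected agreement for kappa
    return {
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
        "accuracy": accuracy,
        "kappa": (accuracy - a) / (1 - a),
        "IOU": tp / (tp + fn + fp),
        "mIOU": 0.5 * (tp / (tp + fn + fp) + tn / (tn + fp + fn)),
    }

def fvc_regression_stats(fvc_pred: np.ndarray, fvc_true: np.ndarray) -> dict[str, float]:
    """R^2, bias, and RMSE of predicted FVC against visually interpreted ground truth."""
    slope, intercept = np.polyfit(fvc_true, fvc_pred, 1)
    fitted = slope * fvc_true + intercept
    ss_res = float(np.sum((fvc_pred - fitted) ** 2))
    ss_tot = float(np.sum((fvc_pred - fvc_pred.mean()) ** 2))
    residuals = fvc_pred - fvc_true
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "bias": float(residuals.mean()),
        "RMSE": float(np.sqrt(np.mean(residuals ** 2))),
    }
```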

3. Results

3.1. Performance of the Different Extraction Methods under the Double-Exposure Algorithm

In this study, linear regression analysis was conducted on the FVC results obtained from 414 sets of normal-exposure images with shadows and five types of double-exposure fusion images using five vegetation extraction indices. The results are shown in Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9. The diagonal line in the figures is the reference line indicating equal predicted and true FVC values. Each index responds differently to the different double-exposure images. Among them, the CIVE and EXG indices performed the best on the nor+over2 image. The HUE achieved the highest accuracy in extracting the FVC from the nor+over3 image. The EXR and EXGR indices achieved the best vegetation extraction results on the nor+over4 and nor+over5 images, respectively. The fusion of normal-exposure images with images compensated at the +1 ev gear, resulting in the nor+over1 image, was insufficient for effectively solving the problem of low FVC accuracy under shadow conditions. This occurs because the lower exposure compensation level fails to completely avoid the presence of shadow pixels, making it challenging to capture sufficient pixel information under shadows. As a result, the obtained double-exposure image pixels are still damaged by shadows. Therefore, to fully leverage the benefits of the double-exposure algorithm, it is crucial to choose appropriate exposure compensation and utilize vegetation indices based on actual shadow conditions. This approach can significantly improve the FVC accuracy from shadow images and mitigate the adverse effect of shadows on vegetation extraction. In Figure 5a, Figure 6a, and Figure 7a, the scatter points derived from the CIVE, EXG, and HUE indices under normal exposure conditions are mostly clustered around or below the diagonal line. This indicates that for some experimental datasets under normal exposure, the predicted FVC is lower than the true FVC. In other words, some shaded vegetation in the images was not correctly identified. As shown in Figure 8a and Figure 9a, the scatter points for the EXR and EXGR indices under normal exposure conditions are more scattered and disorganized. These two methods produce cases where the extracted FVC is higher than the true FVC, as well as instances where it is lower. These findings suggest that these indices not only fail to detect vegetation in shadow areas but also misclassify non-vegetation areas as vegetation areas. Among the five indices, using double-exposure images generally improved the FVC accuracy relative to using normal-exposure images. However, while the double-exposure method significantly reduced vegetation misdetection and misclassification, only the HUE index achieved the best performance in terms of all three metrics: R², bias, and RMSE. The predicted FVC exhibited a strong correlation with the true FVC, demonstrating reliability and stability. Particularly in the nor+over3 image, the R², bias, and RMSE values improved from 0.750, −0.018, and 0.146 in the normal-exposure image to 0.969, 0.006, and 0.046, respectively, indicating excellent vegetation extraction capability.
To provide a comprehensive and objective evaluation of the experimental results for the different double-exposure images, the results were also analyzed from a semantic segmentation perspective. This analysis employed evaluation metrics such as the mIOU index and kappa coefficient. Table 1, Table 2, Table 3, Table 4 and Table 5 provide the specific performance of the five indices given the normal- and double-exposure data. A comparison between the double- and normal-exposure data indicated that the use of double-exposure data significantly improved the FVC accuracy for all five indices. For example, the notable increase in recall values for the EXG and CIVE indices indicates a significant reduction in the number of missed detections of vegetation. The significant increase in precision for the EXR index suggests a lower probability of misclassifying non-vegetation as vegetation. Additionally, the substantial improvement in the recall and precision values for the HUE index also demonstrates the effectiveness of the double-exposure data.
When further comparing the performance of different indices given double-exposure data, the most suitable double-exposure combinations for the CIVE, EXG, HUE, and EXR indices strongly agree with the conclusions obtained from the linear regression analysis. However, for the EXGR index, the most suitable double-exposure image changes from nor+over5 to nor+over4. Combining Figure 7 and Table 3, it can be concluded that when using double-exposure data for vegetation extraction, the HUE index provides a significant improvement in accuracy relative to normal-exposure data. This indicates that the HUE index is well-suited for analyzing double-exposure data. Additionally, the FVC accuracy under the HUE index is substantially higher than that under the other four indices. Among all five indices evaluated, the HUE index is the optimal choice for calculating FVC when using double-exposure data.
Figure 10 shows a set of experimental data with a high proportion of shadows, in which shadows blur pixel information and have a greater impact on the accuracy of FVC. After merging overexposed images, the distribution of vegetation and non-vegetation becomes more clearly visible. Moreover, double-exposure images with different degrees of overexposure show varying results, as demonstrated in the first column. The change in pixel areas within the red circle is particularly noticeable. Consequently, the vegetation extraction results obtained from the shadow-affected normal-exposure image and the shadow-removed double-exposure fusion image using the five vegetation indices differ substantially. When extracting vegetation from the normal-exposure data, the results across the various vegetation indices indicate that a high proportion of shadow nearly renders vegetation extraction ineffective. Among these, the CIVE and EXG perform better than the other indices. However, they extract only a small portion of vegetation and misclassify some non-vegetation as vegetation, resulting in an increased occurrence of missed vegetation detection overall. With the introduction of double-exposure fusion data, the vegetation extraction results become more reliable. However, there are still areas where vegetation cannot be accurately extracted, and some non-vegetation areas are misidentified as vegetation areas. In the case of the HUE index, the loss of pixel information caused by shadows significantly impacts the effectiveness of vegetation extraction. However, in the nor+over2 images and higher-gear double-exposure images, the HUE index demonstrates a stronger ability to extract vegetation than the other indices, with lower probabilities of vegetation misclassification and omission. This indicates that the removal of shadows can greatly improve the HUE index’s classification performance. Therefore, as the exposure compensation level increases along the positive direction, pixel information in the shadow regions of the double-exposure fusion image is increasingly restored, allowing accurate vegetation extraction. In contrast to the previous three indices, the EXR and EXGR show different results when applied to normal-exposure images with a high proportion of shadows: a significant number of non-vegetation pixels are erroneously classified as vegetation pixels, resulting in a low FVC accuracy. Although the use of double-exposure data can reduce the occurrence of misclassification, the overall effectiveness of vegetation extraction remains unsatisfactory, especially for results based on the EXR. Overall, in this set of experimental data, the HUE index demonstrates good adaptability to double-exposure images, exhibiting superior vegetation extraction capabilities.
To comprehensively analyze and evaluate the performance of the double-exposure algorithm, this study also compared it with the classic SHAR-LABFVC algorithm. Both algorithms are based on the principle of image enhancement through intensity values, and both demonstrate that intensity-based image enhancement is direct and effective. We conducted vegetation extraction on 14 sets of experimental data, with results shown in Table 6. When compared to the FVC ground truth, the double-exposure algorithm demonstrated a significant advantage in accuracy over the SHAR-LABFVC algorithm. It offers a more effective solution to the problem of vegetation extraction in shadowed areas. This advantage stems from the fundamental difference in approach between the two algorithms. The SHAR-LABFVC algorithm directly employs statistical methods to adjust the intensity values of all pixels; therefore, its enhancement approach is not as refined. The double-exposure algorithm specifically divides the image into shadow and non-shadow regions based on the intensity value, and image enhancement is completed for shadow areas by combining overexposed images on a pixel-by-pixel basis. This targeted approach enables more accurate reconstruction of shadowed image information and, consequently, yields more precise vegetation extraction results.

3.2. Performance of the Different Double-Exposure Combinations under Varying Shadow Proportions

The vegetation extraction capabilities of the five double-exposure images vary depending on the shadow proportion in the vegetation images. The HUE index performs the best in extracting vegetation from shadow images. Therefore, the focus is placed on determining the most effective double-exposure image under the different shadow levels based on the FVC results using the HUE index. This approach aims to improve vegetation extraction effectiveness under shadow conditions. Based on the proportion of shadow pixels in the normal-exposure image, the experimental data are categorized into three classes: weak shadows (0–40% shadow pixels), moderate shadows (40–70% shadow pixels), and strong shadows (70–100% shadow pixels). According to Table 7, Table 8 and Table 9, in the case of weak shadow data, the nor+over2 double-exposure image demonstrates the best extraction performance. Compared to normal-exposure data, there are slight improvements in all the evaluation metrics, but the changes are not significant. For the data with moderate shadow levels, the nor+over3 double-exposure image proves optimal. It demonstrates a 10.7% increase in the mIOU metric, an 8.3% increase in accuracy, and more than a 14% increase in the other metrics. Similarly, for data with strong shadows, the nor+over3 double-exposure image is the best choice, with all the metrics surpassing 0.9. Moreover, compared to those for the normal-exposure data, the mIOU increases from 0.755 to 0.914, and the kappa coefficient increases from 0.645 to 0.903. These improvements greatly enhance the reliability of the vegetation extraction results and ensure high vegetation extraction accuracy under shadow conditions.
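The grouping by shadow level can be expressed with a small helper like the one below; it assumes the shadow fraction is computed from the same 0.2 intensity threshold used in the preprocessing step, which the text implies but does not state explicitly for this analysis.

```python
import numpy as np

def shadow_level(normal_rgb: np.ndarray, thr: float = 0.2) -> str:
    """Classify an image as weak, moderate, or strong shadow by its shadow-pixel fraction."""
    i = normal_rgb.astype(np.float64).mean(axis=2) / 255.0   # HSI intensity in [0, 1]
    shadow_fraction = float((i < thr).mean())
    if shadow_fraction < 0.4:
        return "weak"        # 0-40% shadow pixels
    if shadow_fraction < 0.7:
        return "moderate"    # 40-70% shadow pixels
    return "strong"          # 70-100% shadow pixels
```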

4. Discussion

4.1. Advantages of Applying the Double-Exposure Algorithm

The double-exposure algorithm proposed in this paper addresses the issue of low FVC accuracy in the case of shadows. By merging normal-exposure and overexposed images, this approach efficiently captures the pixel information in shadow areas to restore their true characteristics, facilitating accurate vegetation identification. Extracting shadow-affected vegetation essentially involves capturing real-life scenarios under an extreme lighting contrast. For this purpose, scholars worldwide have proposed several approaches. The main solutions are as follows: (1) The probability statistics method can be used. One approach to extracting vegetation from digital images entails using the segmentation point between vegetation and non-vegetation in the a* component of the LAB color space [49]. The channels in this color space exhibit a low correlation and have been utilized in studies focused on mitigating the impact of vegetation shadows [50,51]. Song et al. [47] suggested preprocessing the brightness histogram of the digital image in the HIS color space, followed by utilizing the probability distribution of the a* component for FVC calculation. This method’s performance evaluation relies solely on the overall accuracy of FVC calculation from the image. It does not require or consider assessing the classification correctness of each pixel. Therefore, the FVC accuracy of this method should be further investigated and verified. (2) HDR imaging techniques can be used. Suh et al. employed the HDR imaging technique to identify shadow areas in images and mitigated their impact on vegetation extraction [38]. Wang et al. proposed using HDR images to capture the vegetation distribution and obtain images with a wider dynamic range, thereby reducing the shadow occurrence frequency [41]. This technology can improve shadow issues in images, but HDR imaging cannot completely resolve shadow problems [38]. (3) Polarization information can be combined. Yang et al. and Cui et al. proposed the idea of capturing polarization information simultaneously while recording vegetation images to supplement the information of shadow areas in normal-exposure images, thus aiding in high-precision FVC calculation [46,52]. However, the acquisition of polarization information relies on the lighting conditions during photography. When the light intensity is too low, the amount of polarized light decreases, making it difficult to capture and record images, which hinders the recovery of shadow pixel information. In this study, the following factors are considered when selecting a vegetation extraction strategy under shadow conditions. First, several vegetation indices based on the RGB color space, such as CIVE, EXG, and EXGR, perform poorly under shadow conditions [53]. However, vegetation index-based methods can achieve a high classification efficiency and are easy to implement [54,55,56]. Second, the process of machine learning classification is complex. In existing research, deep learning methods with excellent image classification performance are used for classification [23,57]. However, these methods are cumbersome and require time-consuming sample annotation and complex training processes, making them labor-intensive. After weighing the advantages and disadvantages, we ultimately adopted the vegetation index method to achieve efficient and accurate vegetation extraction, even under shadow conditions.
After addressing the issue of shadows, the vegetation index method provides excellent vegetation extraction performance, with vegetation identification levels comparable to those of deep learning methods.
This study is based on multidimensional experimental tests to determine a suitable vegetation extraction index for the double-exposure algorithm. By considering shadow intensity differences, the best double-exposure combination under different shadow conditions was identified. The design of this algorithm considers both the strengths and weaknesses of existing related research. The double-exposure data obtained by this algorithm can be adapted to the vegetation index method. Ultimately, the design objective of a simple, direct, targeted, and efficient anti-shadow algorithm based on vegetation extraction was achieved in this study. The double-exposure algorithm is a vegetation extraction method that can mitigate shadow interference. This approach provides a new and effective strategy for precise and rapid FVC calculation, high-precision monitoring of ecological environments, and ground validation of remote sensing products.

4.2. Limitations and Future Perspectives

Due to various limitations, this study focused only on sampling and analyzing herbaceous plants under shadow conditions. Research on the extraction of other types of green vegetation, such as coniferous plants, under shadow conditions is lacking. Different types of plants exhibit significant differences in texture and color features. Therefore, further evaluation and analysis of the inclusion of shadow data for various vegetation types should be conducted. Additionally, we will consider the effects of more complex lighting scenarios and seasonal changes on shadow vegetation extraction. Furthermore, while the double-exposure algorithm was primarily evaluated and analyzed on ground truth data, shadow-related issues during vegetation extraction also occur in unmanned aerial vehicle (UAV) data. Therefore, there is a need to assess the double-exposure algorithm on UAV remote sensing imagery. This approach will contribute to improving the accuracy of large-scale FVC and expanding the application scenarios of the double-exposure algorithm.

5. Conclusions

This study proposed a method to address the issue of shadows in images, which can conceal true object information and prevent accurate reflection of the actual vegetation distribution. By fusing normal-exposure and overexposed images, we obtained double-exposure fusion images that mitigate the negative impact of shadows. Comparing the experimental results of five vegetation indices using five double-exposure fusion combinations clearly revealed that the HUE index yields the highest accuracy in calculating the FVC from the double-exposure fusion data. The R² between the extracted FVC and the ground truth reached 0.969, with bias and RMSE values of 0.006 and 0.046, respectively. Moreover, the mIOU, IOU, and the other evaluation metrics exceeded 0.92, indicating the excellent adaptability of the HUE index to double-exposure fusion data. Based on the classification of the experimental data according to the degree of shadow coverage, the following conclusions can be drawn: (1) When shadow coverage in the image is less than 40%, the use of the HUE index for FVC calculation is less affected by shadows, and satisfactory results can be obtained from normal-exposure images. (2) When shadow coverage exceeds 40%, shadows exert a significant impact on vegetation extraction, leading to a notable reduction in the FVC accuracy. (3) In images with shadow coverage above 40%, utilizing nor+over3 double-exposure fusion data with the HUE index can increase the FVC accuracy to above 0.9. In conclusion, this study demonstrates the effectiveness of the double-exposure algorithm in improving FVC accuracy in shadowed areas. It provides a new, simple, and efficient solution for precise vegetation extraction under shadow conditions. The application of this method can offer more reliable data support for ground-truth validation of satellite FVC products and ecological environment monitoring. In future research, it would be valuable to test the double-exposure algorithm in more complex vegetation types and lighting environments. Additionally, methods for automatically determining optimal exposure parameters are worthy of further exploration, as they could further improve the accuracy of FVC calculation. The codes and data are available at https://github.com/Jiaaaaa88/double-exposure-algorithm (accessed on 17 August 2024).

Author Contributions

Conceptualization, J.L. and W.C.; methodology, J.L., T.Y., W.C. and L.Y.; formal analysis, J.L. and W.C.; investigation, J.L. and T.Y.; data curation, J.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L. and W.C.; visualization, J.L.; supervision, W.C.; funding acquisition, W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China (2018YFB0504800 and 2018YFB0504805), the Fundamental Research Funds for the Central Universities (Grant number 2022YJSDC14), the Undergraduate Training Program for Innovation and Entrepreneurship of CUMTB (202302030), and the Yue Qi Young Scholar Project, CUMTB.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors express their thanks to Qinmin Fu.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhou, Q.; Robson, M.; Pilesjo, P. On the ground estimation of vegetation cover in Australian rangelands. Int. J. Remote Sens. 1998, 19, 1815–1820. [Google Scholar] [CrossRef]
  2. Puente, C.; Olague, G.; Smith, S.V.; Bullock, S.H.; Hinojosa-Corona, A.; Gonzalez-Botello, M.A. A Genetic Programming Approach to Estimate Vegetation Cover in the Context of Soil Erosion Assessment. Photogramm. Eng. Remote Sens. 2011, 77, 363–376. [Google Scholar] [CrossRef]
  3. Wang, S.; Zhang, Y.; Ju, W.; Chen, J.M.; Ciais, P.; Cescatti, A.; Sardans, J.; Janssens, I.A.; Wu, M.; Berry, J.A.; et al. Recent global decline of CO2 fertilization effects on vegetation photosynthesis. Science 2020, 370, 1295. [Google Scholar] [CrossRef]
  4. Guerschman, J.P.; Hill, M.J.; Renzullo, L.J.; Barrett, D.J.; Marks, A.S.; Botha, E.J. Estimating fractional cover of photosynthetic vegetation, non-photosynthetic vegetation and bare soil in the Australian tropical savanna region upscaling the EO-1 Hyperion and MODIS sensors. Remote Sens. Environ. 2009, 113, 928–945. [Google Scholar] [CrossRef]
  5. Yin, J.; Zhan, X.; Zheng, Y.; Hain, C.R.; Ek, M.; Wen, J.; Fang, L.; Liu, J. Improving Noah land surface model performance using near real time surface albedo and green vegetation fraction. Agric. For. Meteorol. 2016, 218–219, 171–183. [Google Scholar] [CrossRef]
  6. Cui, Y.L.; Sun, H.; Wang, G.X.; Li, C.J.; Xu, X.Y. A Probability-Based Spectral Unmixing Analysis for Mapping Percentage Vegetation Cover of Arid and Semi-Arid Areas. Remote Sens. 2019, 11, 3038. [Google Scholar] [CrossRef]
  7. Jia, K.; Yang, L.; Liang, S.; Xiao, Z.; Zhao, X.; Yao, Y.; Zhang, X.; Jiang, B.; Liu, D. Long-Term Global Land Surface Satellite (GLASS) Fractional Vegetation Cover Product Derived From MODIS and AVHRR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 508–518. [Google Scholar] [CrossRef]
  8. Chhabra, S.K. Regional variations in vital capacity in adult males in India: Comparison of regression equations from four regions and impact on interpretation of spirometric data. Indian J. Chest Dis. Allied Sci. 2009, 51, 7–13. [Google Scholar]
  9. Zeng, X.; Dickinson, R.E.; Walker, A.; Shaikh, M.; Defries, R.S.; Qi, J. Derivation and evaluation of global 1-km fractional vegetation cover data for land modeling. J. Appl. Meteorol. 2000, 39, 826–839. [Google Scholar] [CrossRef]
  10. Chen, T.H.; Henderson-Sellers, A.; Milly, P.C.D.; Pitman, A.J.; Beljaars, A.C.M.; Polcher, J.; Abramopoulos, F.; Boone, A.; Chang, S.; Chen, F.; et al. Cabauw experimental results from the Project for Intercomparison of Land-Surface Parameterization Schemes. J. Clim. 1997, 10, 1194–1215. [Google Scholar] [CrossRef]
  11. Hukkinen, M.; Kaprio, J.; Broms, U.; Viljanen, A.; Kotz, D.; Rantanen, T.; Korhonen, T. Heritability of Lung Function: A Twin Study Among Never-Smoking Elderly Women. Twin Res. Hum. Genet. 2011, 14, 401–407. [Google Scholar] [CrossRef]
  12. Johnson, B.; Tateishi, R.; Kobayashi, T. Remote Sensing of Fractional Green Vegetation Cover Using Spatially-Interpolated Endmembers. Remote Sens. 2012, 4, 2619–2634. [Google Scholar] [CrossRef]
  13. Fortis, S.; Corazalla, E.O.; Wang, Q.; Kim, H.J. The Difference Between Slow and Forced Vital Capacity Increases With Increasing Body Mass Index: A Paradoxical Difference in Low and Normal Body Mass Indices. Respir. Care 2015, 60, 113–118. [Google Scholar] [CrossRef]
  14. Yang, H.-T.; Xu, H.-Q. Assessing fractional vegetation cover changes and ecological quality of the Wuyi Mountain National Nature Reserve based on remote sensing spatial information. Ying Yong Sheng Tai Xue Bao = J. Appl. Ecol. 2020, 31, 533–542. [Google Scholar]
  15. Duncan, J.; Stow, D.; Franklin, J.; Hope, A. Assessing the relationship between spectral vegetation indices and shrub cover in the Jornada Basin, New Mexico. Int. J. Remote Sens. 1993, 14, 3395–3416. [Google Scholar] [CrossRef]
  16. Yue, J.; Guo, W.; Yang, G.; Zhou, C.; Feng, H.; Qiao, H. Method for accurate multi-growth-stage estimation of fractional vegetation cover using unmanned aerial vehicle remote sensing. Plant Methods 2021, 17, 51. [Google Scholar] [CrossRef] [PubMed]
  17. Bac, C.W.; Hemming, J.; van Henten, E.J. Robust pixel-based classification of obstacles for robotic harvesting of sweet-pepper. Comput. Electron. Agric. 2013, 96, 148–162. [Google Scholar] [CrossRef]
  18. Visser, F.; Buis, K.; Verschoren, V.; Schoelynck, J. Mapping of submerged aquatic vegetation in rivers from very high-resolution image data, using object-based image analysis combined with expert knowledge. Hydrobiologia 2018, 812, 157–175. [Google Scholar] [CrossRef]
  19. Niu, Y.; Han, W.; Zhang, H.; Zhang, L.; Chen, H. Estimating fractional vegetation cover of maize under water stress from UAV multispectral imagery using machine learning algorithms. Comput. Electron. Agric. 2021, 189, 106414. [Google Scholar] [CrossRef]
  20. Liu, D.; Yang, L.; Jia, K.; Liang, S.; Xiao, Z.; Wei, X.; Yao, Y.; Xia, M.; Li, Y. Global Fractional Vegetation Cover Estimation Algorithm for VIIRS Reflectance Data Based on Machine Learning Methods. Remote Sens. 2018, 10, 1648. [Google Scholar] [CrossRef]
  21. Lin, X.; Chen, J.; Lou, P.; Yi, S.; Qin, Y.; You, H.; Han, X. Improving the estimation of alpine grassland fractional vegetation cover using optimized algorithms and multi-dimensional features. Plant Methods 2021, 17, 96. [Google Scholar] [CrossRef] [PubMed]
  22. Wang, H.; Han, D.; Mu, Y.; Jiang, L.; Yao, X.; Bai, Y.; Lu, Q.; Wang, F. Landscape-level vegetation classification and fractional woody and herbaceous vegetation cover estimation over the dryland ecosystems by unmanned aerial vehicle platform. Agric. For. Meteorol. 2019, 278, 107665. [Google Scholar] [CrossRef]
  23. Yu, R.; Li, S.; Zhang, B.; Zhang, H. A deep transfer learning method for estimating fractional vegetation cover of sentinel-2 multispectral images. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  24. Mahendra, H.; Mallikarjunaswamy, S.; Subramoniam, S.R. An assessment of vegetation cover of Mysuru City, Karnataka State, India, using deep convolutional neural networks. Environ. Monit. Assess. 2023, 195, 526. [Google Scholar] [CrossRef] [PubMed]
  25. Nijhawan, R.; Sharma, H.; Sahni, H.; Batra, A. A deep learning hybrid CNN framework approach for vegetation cover mapping using deep features. In Proceedings of the 2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Jaipur, India, 4–7 December 2017; IEEE: Piscataway Township, NJ, USA, 2017; pp. 192–196. [Google Scholar]
  26. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87. [Google Scholar] [CrossRef]
  27. Zhou, Q.; Robson, M. Automated rangeland vegetation cover and density estimation using ground digital images and a spectral-contextual classifier. Int. J. Remote Sens. 2001, 22, 3457–3470. [Google Scholar] [CrossRef]
  28. Wilson, R.O.; Tueller, P.T. Aerial and ground spectral characteristics of rangeland plant communities in Nevada. Remote Sens. Environ. 1987, 23, 177–191. [Google Scholar] [CrossRef]
  29. Nwagha, U.; Iyare, E.; Anyaehie, U.; Onyedum, C.; Okereke, C.; Ajuzieogu, O.; Amucheazi, A.; Oluboboku, T.; Agu, P.; Igweh, J.; et al. Forced Expiratory Volume in 6 s (FEV6) and FEV1/FEV6 Values as a Viable Alternative for Forced Vital Capacity (FVC) and FEV1/FVC Values During Pregnancy in South East Nigeria: A Preliminary Study. Ann. Med. Health Sci. Res. 2014, 4, 516. [Google Scholar] [CrossRef] [PubMed]
  30. Zhou, Q. Ground truthing, how reliable is it. In Proceedings of the Geoinformatics’ 96 Conference, West Palm Beach, FL, USA, 19–21 November 1996; Citeseer: Sydney, Australia, 1996; pp. 26–28. [Google Scholar]
  31. Vanha-Majamaa, I.; Salemaa, M.; Tuominen, S.; Mikkola, K. Digitized photographs in vegetation analysis—A comparison of cover estimates. Appl. Veg. Sci. 2000, 3, 89–94. [Google Scholar] [CrossRef]
32. Myneni, R.B.; Keeling, C.; Tucker, C.J.; Asrar, G.; Nemani, R.R. Increased plant growth in the northern high latitudes from 1981 to 1991. Nature 1997, 386, 698–702.
33. Lal, R. Soil Erosion Research Methods; CRC Press: Boca Raton, FL, USA, 1994.
34. Riegler-Nurscher, P.; Prankl, J.; Bauer, T.; Strauss, P.; Prankl, H. A machine learning approach for pixel wise classification of residue and vegetation cover under field conditions. Biosyst. Eng. 2018, 169, 188–198.
35. Li, L.; Liu, Y.; Yuan, Z.; Gao, Y. Wind field effect on the power generation and aerodynamic performance of offshore floating wind turbines. Energy 2018, 157, 379–390.
36. Melville, B.; Fisher, A.; Lucieer, A. Ultra-high spatial resolution fractional vegetation cover from unmanned aerial multispectral imagery. Int. J. Appl. Earth Obs. Geoinf. 2019, 78, 14–24.
37. Zhang, L.; Sun, X.; Wu, T.; Zhang, H. An analysis of shadow effects on spectral vegetation indexes using a ground-based imaging spectrometer. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2188–2192.
38. Suh, H.K.; Hofstee, J.W.; Van Henten, E.J. Improved vegetation segmentation with ground shadow removal using an HDR camera. Precis. Agric. 2018, 19, 218–237.
39. Zhou, G.Q.; Liu, S.H. Estimating ground fractional vegetation cover using the double-exposure method. Int. J. Remote Sens. 2015, 36, 6085–6100.
40. Mu, X.; Hu, R.; Zeng, Y.; McVicar, T.R.; Ren, H.; Song, W.; Wang, Y.; Casa, R.; Qi, J.; Xie, D.; et al. Estimating structural parameters of agricultural crops from ground-based multi-angular digital images with a fractional model of sun and shade components. Agric. For. Meteorol. 2017, 246, 162–177.
41. Wang, Z.; Chen, W.; Xing, J.; Zhang, X.; Tian, H.; Tang, H.; Bi, P.; Li, G.; Zhang, F. Extracting vegetation information from high dynamic range images with shadows: A comparison between deep learning and threshold methods. Comput. Electron. Agric. 2023, 208, 107805.
42. Chen, W.; Wang, Z.; Zhang, X.; Li, G.; Zhang, F.; Yang, L.; Tian, H.; Zhou, G. Improving Fractional Vegetation Cover Estimation With Shadow Effects Using High Dynamic Range Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1701–1711.
43. Zhang, P.; Sun, X.; Zhang, D.; Yang, Y.; Wang, Z. Lightweight Deep Learning Models for High-Precision Rice Seedling Segmentation from UAV-Based Multispectral Images. Plant Phenomics 2023, 5, 0123.
44. Yun, C.; Kim, Y.H.; Lee, S.J.; Im, S.J.; Park, K.R. WRA-Net: Wide Receptive Field Attention Network for Motion Deblurring in Crop and Weed Image. Plant Phenomics 2023, 5, 0031.
45. Joshi, A.; Guevara, D.; Earles, M. Standardizing and Centralizing Datasets for Efficient Training of Agricultural Deep Learning Models. Plant Phenomics 2023, 5, 0084.
46. Yang, L.; Chen, W.; Bi, P.; Tang, H.; Zhang, F.; Wang, Z. Improving vegetation segmentation with shadow effects based on double input networks using polarization images. Comput. Electron. Agric. 2022, 199, 107123.
47. Song, W.; Mu, X.; Yan, G.; Huang, S. Extracting the green fractional vegetation cover from digital images using a shadow-resistant algorithm (SHAR-LABFVC). Remote Sens. 2015, 7, 10425–10443.
48. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437.
49. Liu, Y.; Mu, X.; Wang, H.; Yan, G. A novel method for extracting green fractional vegetation cover from digital images. J. Veg. Sci. 2012, 23, 406–418.
50. Li, L.; Mu, X.; Macfarlane, C.; Song, W.; Chen, J.; Yan, K.; Yan, G. A half-Gaussian fitting method for estimating fractional vegetation cover of corn crops using unmanned aerial vehicle images. Agric. For. Meteorol. 2018, 262, 379–390.
51. Lu, Y.; Song, Z.; Li, Y.; An, Z.; Zhao, L.; Zan, G.; Lu, M. A Novel Desert Vegetation Extraction and Shadow Separation Method Based on Visible Light Images from Unmanned Aerial Vehicles. Sustainability 2023, 15, 2954.
52. Cui, S.; Chen, W.; Gu, W.; Yang, L.; Shi, X. SiamC Transformer: Siamese coupling swin transformer Multi-Scale semantic segmentation network for vegetation extraction under shadow conditions. Comput. Electron. Agric. 2023, 213, 108245.
53. Hamuda, E.; Glavin, M.; Jones, E. A survey of image processing techniques for plant extraction and segmentation in the field. Comput. Electron. Agric. 2016, 125, 184–199.
54. Xu, Z.; Li, Y.; Li, B.; Hao, Z.; Lin, L.; Hu, X.; Zhou, X.; Yu, H.; Xiang, S.; Pascal, M.-L.-F.; et al. A comparative study on the applicability and effectiveness of NSVI and NDVI for estimating fractional vegetation cover based on multi-source remote sensing image. Geocarto Int. 2023, 38, 2184501.
55. Xiao, J.; Moody, A. A comparison of methods for estimating fractional green vegetation cover within a desert-to-upland transition zone in central New Mexico, USA. Remote Sens. Environ. 2005, 98, 237–250.
56. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293.
57. Symeonakis, E.; Korkofigkas, A.; Higginbottom, T.; Boyd, J.; Arnau-Rosalén, E.; Stamou, G.; Karantzalos, K. Towards a Deep Learning Fractional Woody Vegetation Cover Monitoring Framework. In Proceedings of the IGARSS 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; IEEE: Piscataway Township, NJ, USA, 2022; pp. 5905–5908.
Figure 1. Diagram of the data capture process. The camera parameters are adjusted during data collection so that the camera captures normal-exposure and overexposed images of the same scene in sequence.
Figure 2. Vegetation images at different exposure compensation gears used in the image fusion step of the double-exposure algorithm. (nor: normal exposure; over1: overexposed at gear 1; over2: overexposed at gear 2; over3: overexposed at gear 3; over4: overexposed at gear 4; over5: overexposed at gear 5).
Figure 3. Flowchart of vegetation extraction under shadow conditions using the double-exposure algorithm. The double-exposure algorithm consists of three parts: data preprocessing, vegetation extraction, and method evaluation.
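The image-fusion step shown in this flowchart can be prototyped with off-the-shelf exposure-fusion tools. The sketch below is a minimal illustration that assumes OpenCV's Mertens exposure fusion (cv2.createMergeMertens) as the fusion operator and uses hypothetical file names; it is not the exact fusion procedure used in the study.

```python
import cv2
import numpy as np

def fuse_double_exposure(normal_path: str, over_path: str) -> np.ndarray:
    """Fuse a normal-exposure image with one overexposed image of the same scene.

    Illustrative sketch only: Mertens fusion weights pixels by contrast,
    saturation, and well-exposedness, so shadow detail recorded in the
    overexposed frame is carried into the fused result.
    """
    normal = cv2.imread(normal_path)   # BGR, uint8
    over = cv2.imread(over_path)       # BGR, uint8, same size as `normal`
    if normal is None or over is None:
        raise FileNotFoundError("Could not read one of the input images.")

    merger = cv2.createMergeMertens()
    fused = merger.process([normal, over])   # float32, roughly in [0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)

# Hypothetical file names for one normal/overexposed pair.
# fused = fuse_double_exposure("plot01_nor.jpg", "plot01_over2.jpg")
# cv2.imwrite("plot01_fused.jpg", fused)
```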
Figure 4. Image fusion results for one set of shadowed vegetation experimental data during image preprocessing. (a) Normal-exposure image. (b–f) Fusion of the normal-exposure image with the overexposed image at gears 1–5, respectively.
Figure 5. Scatter plots of the FVC estimated with the CIVE index from normal-exposure images and from the five double-exposure combinations of the double-exposure algorithm, compared with the true FVC. (a) Normal-exposure images; (b) nor+over1 fusion images; (c) nor+over2 fusion images; (d) nor+over3 fusion images; (e) nor+over4 fusion images; (f) nor+over5 fusion images.
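The agreement summarized in these scatter plots can be quantified with the coefficient of determination (R²) and the root mean square error (RMSE) between estimated and ground-truth FVC. The sketch below uses one common definition of R² (1 − SS_res/SS_tot) and placeholder values; it is not the study's evaluation code, and the exact R² formulation used there may differ.

```python
import numpy as np

def r2_and_rmse(estimated, truth):
    """Return (R^2, RMSE) between estimated FVC values and ground-truth FVC."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(truth, dtype=float)

    residual = ref - est
    rmse = float(np.sqrt(np.mean(residual ** 2)))

    ss_res = float(np.sum(residual ** 2))
    ss_tot = float(np.sum((ref - ref.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return r2, rmse

# Placeholder values, not the study's measurements.
r2, rmse = r2_and_rmse([0.36, 0.43, 0.69], [0.38, 0.43, 0.68])
```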
Figure 6. Scatter plots of the FVC estimated with the EXG index from normal-exposure images and from the five double-exposure combinations of the double-exposure algorithm, compared with the true FVC. (a) Normal-exposure images; (b) nor+over1 fusion images; (c) nor+over2 fusion images; (d) nor+over3 fusion images; (e) nor+over4 fusion images; (f) nor+over5 fusion images.
Figure 7. Scatter plots of the FVC estimated with the HUE index from normal-exposure images and from the five double-exposure combinations of the double-exposure algorithm, compared with the true FVC. (a) Normal-exposure images; (b) nor+over1 fusion images; (c) nor+over2 fusion images; (d) nor+over3 fusion images; (e) nor+over4 fusion images; (f) nor+over5 fusion images.
Figure 8. Scatter plots of the FVC estimated with the EXR index from normal-exposure images and from the five double-exposure combinations of the double-exposure algorithm, compared with the true FVC. (a) Normal-exposure images; (b) nor+over1 fusion images; (c) nor+over2 fusion images; (d) nor+over3 fusion images; (e) nor+over4 fusion images; (f) nor+over5 fusion images.
Figure 9. Scatter plots of the FVC estimated with the EXGR index from normal-exposure images and from the five double-exposure combinations of the double-exposure algorithm, compared with the true FVC. (a) Normal-exposure images; (b) nor+over1 fusion images; (c) nor+over2 fusion images; (d) nor+over3 fusion images; (e) nor+over4 fusion images; (f) nor+over5 fusion images.
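The five indices compared in Figures 5–9 are standard RGB vegetation indices. The sketch below computes them per pixel using commonly published formulations (EXG = 2g − r − b on chromatic coordinates, EXR = 1.4r − g, EXGR = EXG − EXR, a commonly cited CIVE formulation, and hue from the HSV color model); the exact constants and normalization used in the study are an assumption here, not a reproduction of its code.

```python
import numpy as np

def rgb_vegetation_indices(rgb: np.ndarray) -> dict:
    """Per-pixel vegetation indices from an RGB image (H x W x 3, values 0-255).

    Formulations follow widely published definitions; the study's own
    constants and normalization may differ.
    """
    img = rgb.astype(np.float64)
    R, G, B = img[..., 0], img[..., 1], img[..., 2]

    # Chromatic coordinates; the small epsilon avoids division by zero on black pixels.
    total = R + G + B + 1e-6
    r, g, b = R / total, G / total, B / total

    exg = 2.0 * g - r - b                                  # Excess Green
    exr = 1.4 * r - g                                      # Excess Red
    exgr = exg - exr                                       # Excess Green minus Excess Red
    cive = 0.441 * R - 0.811 * G + 0.385 * B + 18.78745    # Color Index of Vegetation Extraction

    # Hue from the HSV color model, in degrees [0, 360); undefined for gray pixels.
    maxc = img.max(axis=-1)
    minc = img.min(axis=-1)
    delta = maxc - minc + 1e-6
    hue = np.where(maxc == R, (G - B) / delta % 6,
          np.where(maxc == G, (B - R) / delta + 2,
                              (R - G) / delta + 4)) * 60.0

    return {"EXG": exg, "EXR": exr, "EXGR": exgr, "CIVE": cive, "HUE": hue}
```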
Figure 10. Comparison of normal-exposure images and the five types of double-exposure fusion images under shadow conditions, together with the vegetation extraction results obtained with the five vegetation indices for each image (in all images except those in the first column, white areas indicate vegetation and black areas indicate non-vegetation). The vegetation within the red circle is more strongly affected by shadows.
Table 1. Accuracy statistics of vegetation classification for five double-exposure fusion combinations of the double-exposure algorithm and normal-exposure images using the CIVE. The higher the accuracy indicator value, the higher the vegetation extraction accuracy.
Metric     nor    nor+over1  nor+over2  nor+over3  nor+over4  nor+over5
mIOU       0.822  0.885      0.896      0.884      0.868      0.837
IOU        0.842  0.905      0.916      0.904      0.889      0.858
Accuracy   0.899  0.942      0.949      0.941      0.930      0.908
Precision  0.988  0.988      0.987      0.984      0.982      0.980
Recall     0.853  0.917      0.928      0.919      0.905      0.874
Kappa      0.790  0.868      0.882      0.866      0.847      0.809
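The accuracy indicators reported in Tables 1–5 and 7–9 (mIOU, IOU, Accuracy, Precision, Recall, Kappa) can all be derived from a binary vegetation/non-vegetation confusion matrix. The sketch below shows one standard way to compute them from a predicted mask and a reference mask; the mIOU definition used here (mean of vegetation and background IOU) is an assumption, and this is not the study's evaluation code.

```python
import numpy as np

def binary_segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Compute IOU, mIOU, Accuracy, Precision, Recall, and Kappa.

    `pred` and `truth` are boolean arrays in which True marks vegetation pixels.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)

    tp = np.sum(pred & truth)     # vegetation correctly detected
    fp = np.sum(pred & ~truth)    # background labeled as vegetation
    fn = np.sum(~pred & truth)    # vegetation missed
    tn = np.sum(~pred & ~truth)   # background correctly detected
    n = tp + fp + fn + tn

    iou_veg = tp / (tp + fp + fn)
    iou_bg = tn / (tn + fp + fn)
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)

    # Cohen's kappa from observed vs. chance agreement.
    p_obs = accuracy
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (p_obs - p_chance) / (1.0 - p_chance)

    return {"IOU": iou_veg, "mIOU": (iou_veg + iou_bg) / 2.0,
            "Accuracy": accuracy, "Precision": precision,
            "Recall": recall, "Kappa": kappa}
```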
Table 2. Accuracy statistics of vegetation classification for five double-exposure fusion combinations of the double-exposure algorithm and normal-exposure images using the EXG.
Metric     nor    nor+over1  nor+over2  nor+over3  nor+over4  nor+over5
mIOU       0.806  0.878      0.887      0.870      0.846      0.815
IOU        0.825  0.899      0.909      0.891      0.868      0.835
Accuracy   0.887  0.938      0.944      0.932      0.914      0.893
Precision  0.985  0.988      0.985      0.982      0.979      0.977
Recall     0.837  0.910      0.922      0.908      0.886      0.854
Kappa      0.770  0.859      0.870      0.849      0.820      0.782
Table 3. Accuracy statistics of vegetation classification for five double-exposure fusion combinations of the double-exposure algorithm and normal-exposure images using the HUE index.
Metric     nor    nor+over1  nor+over2  nor+over3  nor+over4  nor+over5
mIOU       0.871  0.907      0.924      0.930      0.927      0.927
IOU        0.856  0.910      0.933      0.943      0.940      0.940
Accuracy   0.935  0.961      0.970      0.973      0.972      0.972
Precision  0.878  0.939      0.967      0.978      0.976      0.977
Recall     0.874  0.929      0.952      0.963      0.960      0.959
Kappa      0.831  0.887      0.914      0.924      0.920      0.918
Table 4. Accuracy statistics of vegetation classification for five double-exposure fusion combinations of the double-exposure algorithm and normal-exposure images using the EXR.
Metric     nor    nor+over1  nor+over2  nor+over3  nor+over4  nor+over5
mIOU       0.649  0.716      0.752      0.786      0.802      0.793
IOU        0.708  0.756      0.784      0.815      0.825      0.815
Accuracy   0.788  0.833      0.859      0.885      0.898      0.893
Precision  0.797  0.848      0.874      0.899      0.904      0.899
Recall     0.870  0.880      0.890      0.903      0.910      0.905
Kappa      0.533  0.636      0.690      0.738      0.760      0.752
Table 5. Accuracy statistics of vegetation classification for five double-exposure fusion combinations of the double-exposure algorithm and normal-exposure images using the EXGR.
Metric     nor    nor+over1  nor+over2  nor+over3  nor+over4  nor+over5
mIOU       0.815  0.872      0.886      0.892      0.895      0.888
IOU        0.837  0.894      0.906      0.910      0.912      0.906
Accuracy   0.894  0.937      0.945      0.947      0.949      0.947
Precision  0.945  0.964      0.968      0.969      0.972      0.974
Recall     0.887  0.927      0.936      0.938      0.938      0.930
Kappa      0.780  0.852      0.869      0.875      0.879      0.870
Table 6. Comparison of FVC extraction results between the double-exposure algorithm and the SHAR-LABFVC algorithm for 14 sets of experimental data. In the table, nor+over1, nor+over2, nor+over3, nor+over4, and nor+over5 represent the results under the five double-exposure combination conditions. The double-exposure algorithm is based on the HUE index.
Data ID  FVC Ground Truth  nor+over1  nor+over2  nor+over3  nor+over4  nor+over5  SHAR-LABFVC
1        0.380             0.362      0.364      0.364      0.363      0.363      0.309
2        0.427             0.427      0.427      0.428      0.429      0.429      0.402
3        0.439             0.436      0.437      0.437      0.437      0.437      0.417
4        0.426             0.432      0.431      0.430      0.430      0.430      0.408
5        0.429             0.432      0.432      0.432      0.431      0.431      0.425
6        0.210             0.213      0.213      0.213      0.213      0.213      0.209
7        0.224             0.193      0.201      0.201      0.201      0.200      0.163
8        0.685             0.688      0.691      0.692      0.691      0.691      0.660
9        0.653             0.653      0.651      0.651      0.651      0.651      0.640
10       0.609             0.616      0.621      0.624      0.624      0.623      0.566
11       0.295             0.296      0.296      0.296      0.296      0.296      0.261
12       0.505             0.497      0.499      0.498      0.498      0.499      0.473
13       0.563             0.557      0.558      0.557      0.557      0.557      0.542
14       0.623             0.609      0.609      0.607      0.607      0.606      0.613
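Each FVC value in this table is the fraction of image pixels classified as green vegetation. A minimal sketch of that final ratio, assuming a binary vegetation mask as input and using a placeholder mask, is given below.

```python
import numpy as np

def fractional_vegetation_cover(veg_mask: np.ndarray) -> float:
    """FVC as the share of pixels flagged as vegetation in a binary mask."""
    veg_mask = veg_mask.astype(bool)
    return float(veg_mask.sum()) / veg_mask.size

# Placeholder 2 x 3 mask: 4 vegetation pixels out of 6 -> FVC ~ 0.667.
example_mask = np.array([[1, 1, 0],
                         [1, 0, 1]])
fvc = fractional_vegetation_cover(example_mask)
```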
Table 7. Comparative analysis of vegetation extraction performance under weak shadow conditions (0–40% shadow pixels) using the HUE index-based double-exposure algorithm for five double-exposure fusion combinations. The higher the accuracy indicator value, the higher the vegetation extraction accuracy.
0–40% shadow pixels
Metric     nor    nor+over1  nor+over2  nor+over3  nor+over4  nor+over5
mIOU       0.932  0.940      0.941      0.940      0.938      0.938
IOU        0.936  0.954      0.954      0.953      0.953      0.953
Accuracy   0.973  0.979      0.979      0.979      0.978      0.978
Precision  0.960  0.981      0.982      0.982      0.981      0.982
Recall     0.950  0.971      0.971      0.971      0.970      0.970
Kappa      0.920  0.936      0.937      0.936      0.934      0.934
Table 8. Comparative analysis of vegetation extraction performance under moderate shadow conditions (40–70% shadow pixels) using the HUE index-based double-exposure algorithm for five double-exposure fusion combinations.
40–70% shadow pixels
Metric     nor    nor+over1  nor+over2  nor+over3  nor+over4  nor+over5
mIOU       0.811  0.892      0.909      0.918      0.913      0.914
IOU        0.793  0.901      0.923      0.937      0.931      0.931
Accuracy   0.883  0.953      0.961      0.966      0.966      0.966
Precision  0.810  0.927      0.958      0.973      0.966      0.967
Recall     0.811  0.922      0.942      0.963      0.953      0.952
Kappa      0.756  0.865      0.893      0.908      0.899      0.900
Table 9. Comparative analysis of vegetation extraction performance under strong shadow conditions (70–100% shadow pixels) using the HUE index-based double-exposure algorithm for five double-exposure fusion combinations.
70–100% shadow pixels
Metric     nor    nor+over1  nor+over2  nor+over3  nor+over4  nor+over5
mIOU       0.755  0.823      0.891      0.914      0.911      0.906
IOU        0.684  0.785      0.881      0.918      0.915      0.910
Accuracy   0.878  0.917      0.952      0.965      0.963      0.960
Precision  0.707  0.819      0.932      0.973      0.974      0.974
Recall     0.713  0.806      0.906      0.940      0.937      0.932
Kappa      0.645  0.758      0.866      0.903      0.899      0.893
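Tables 7–9 stratify the test scenes by the proportion of shadow pixels. The study's shadow-detection procedure is not reproduced here; the sketch below is a stand-in assumption that estimates the shadow fraction with a simple HSV brightness threshold and maps it to the weak/moderate/strong groups used in these tables.

```python
import cv2
import numpy as np

def shadow_pixel_fraction(bgr: np.ndarray, value_threshold: int = 80) -> float:
    """Rough shadow fraction: pixels whose HSV value channel falls below a threshold.

    The brightness threshold is an illustrative assumption, not the
    shadow-detection method used in the study.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    shadow = hsv[..., 2] < value_threshold
    return float(shadow.mean())

def shadow_group(fraction: float) -> str:
    """Map a shadow fraction to the weak/moderate/strong bins used in Tables 7-9."""
    if fraction < 0.40:
        return "weak (0-40%)"
    if fraction < 0.70:
        return "moderate (40-70%)"
    return "strong (70-100%)"

# Example with a hypothetical image file.
# img = cv2.imread("plot01_nor.jpg")
# print(shadow_group(shadow_pixel_fraction(img)))
```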
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
