Article

Balanced Cloud Shadow Compensation Method in High-Resolution Image Combined with Multi-Level Information

1 The First Surveying and Mapping Institute of Hunan Province, Changsha 410114, China
2 Key Laboratory of Mine Environmental Monitoring and Improving around Poyang Lake of Ministry of Natural Resources, East China University of Technology, Nanchang 330013, China
3 School of Geosciences, Yangtze University, Wuhan 430100, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(16), 9296; https://doi.org/10.3390/app13169296
Submission received: 8 June 2023 / Revised: 27 July 2023 / Accepted: 1 August 2023 / Published: 16 August 2023
(This article belongs to the Section Earth Sciences)

Abstract

As clouds of different thicknesses block sunlight, large areas of cloud shadows with varying brightness can appear on the ground. Cloud shadows in high-resolution remote sensing images lead to an uneven loss of image feature information. However, cloud shadows still retain feature information, and compensating for and restoring unevenly occluded information is of great significance for improving image quality. Although traditional shadow compensation methods can enhance the brightness of shaded areas, the results are inconsistent within a single shadow region, with over-compensation or insufficient compensation problems. Thus, this paper proposes a shadow-balanced compensation method combined with multi-level information. The multi-level information comprises the information of a shadow pixel, a local super-pixel centered on that pixel, the global cloud shadow region, and the global non-shadow region, so as to accommodate the internal differences of a cloud shadow. First, the initial shadow is detected in the original image by combining designed composite shadow features and morphological shadow index features with threshold methods. Then, post-processing considering shadow area and morphological operations is applied to remove small, non-cloud-shadow objects. Meanwhile, the initial image is also divided into homogeneous super-pixel regions using the super-pixel segmentation principle. A super-pixel region lies between the pixel level and the shadow-region level; different from a pixel or a fixed window, it provides a measurement level that respects object homogeneity. Thus, a balanced compensation model is designed by combining the feature value of a shadow pixel with the mean and variance of its super-pixel, the shadow region, and the non-shadow region, on the basis of the linear correlation correction principle. The super-pixel around the shadow pixel provides a locally reliable homogeneous region, which reflects the internal differences inside the shadow region. Therefore, introducing super-pixels into the proposed model can compensate for the shaded information in a balanced way. Compared to results that use only pixel and shadow-region information, the compensated results that introduce super-pixel information can treat a homogeneous region globally and adapt to the illumination differences within a cloud shadow. The experimental results show that the quality of the proposed compensation result is better than that of the reference methods. The proposed method enhances brightness and recovers detailed information in shadow regions in a more balanced way, resolving over-compensation and insufficient compensation inside a single shadow region, so that the result is similar to that of a non-shadow region. The proposed method can recover cloud shadow information self-adaptively to improve image quality and support other applications.

1. Introduction

High-resolution remote sensing images have been widely used in Earth observation tasks in recent years [1]. Clouds and cloud shadows often exist during the process of image acquisition [2,3,4]. In a cloud area, an underlying object’s reflection is obstructed from reaching the sensors, leading to missing object information, and a cloud shadow area, where the brightness is lower than that of other areas, is formed on the ground [6,7]. Especially when landslides and other hazardous events happen, cloud-contaminated remote sensing images are still very valuable for rescue purposes [5], so discovering how to restore the shaded information is very significant. Because there is a certain distance between a cloud and the ground, ground objects in the cloud shadow area can still receive some scattered light energy from the sky [8,9]. As a result, with the help of this scattered light, the ground object’s information can be partially presented, unlike in the hard shadow cast directly by a ground object. However, the influence of a cloud shadow varies with the thickness and height of the cloud. In general, the thicker a cloud is, the more pronounced its shadow is, and the more centrally an object is located within the shadow, the less scattered light it receives. Thus, within one cloud shadow area, the occlusion under the thick part of the cloud is more serious than under the thin part [10,11]. Moreover, since a cloud is thinner around its edge, the occlusion in the central area is more serious than in the edge area. As a result, within the same cloud shadow area, the information loss is uneven, and in high-resolution remote sensing images, the ground object information in the cloud shadow is likewise asymmetrical. Therefore, studying a balanced shadow compensation method for asymmetrical cloud shadow areas is of great significance for improving image utilization.
The proposed cloud shadow removal methods can be classified into two types: cloud-based and shadow-based methods. In middle-resolution images, a cloud and its shadow are located adjacent to each other [12,13]. Therefore, cloud shadow removal methods have been proposed together with cloud removal methods. They can be classified into four types: spatial-information-based, spectral-information-based, temporal-information-based, and hybrid methods [14]. Temporal-information-based methods are designed for thick cloud removal over large cloud areas; the main idea is to replace the cloud and cloud shadow regions with a cloud-free image acquired at another time [15]. However, the temporal differences between multitemporal images lead to changes in spectral reflection [16]. Moreover, these methods may be restricted by the requirement for cloud-free images. Spectral-information-based methods aim to recover the spectral information loss caused by sensor failure or the presence of thin clouds in multispectral or hyperspectral data, such as those from the moderate-resolution imaging spectroradiometer (MODIS). The missing information can be reconstructed through correlated spectral bands [17]. For example, since MODIS band 6 is correlated with snow- or cloud-covered areas, Wang et al. [18] recovered missing information with the calibrated and geo-located Terra MODIS bands 6 and 7. Spatial-information-based methods are used for single high-resolution remote sensing images; the spatial relationship between local and non-local regions can be incorporated into cloud and cloud shadow removal. Chai et al. formulated cloud and cloud shadow detection as a semantic segmentation problem and proposed a deep convolutional neural network (CNN)-based method to detect them in Landsat imagery [19]. The CNN-based method extracts multi-level spatial and spectral features from the global image and all the bands based on the cloud, cloud shadow, and clear labels. These deep multi-level features are deconvolved to realize detailed segmentation that recognizes clouds and cloud shadows. In summary, cloud-based methods are mainly designed for middle-resolution images, and the object information is replaced by information from other temporal images, other bands, or adjacent regions [20,21]; the original information in the shadow itself is abandoned.
Shadow-based cloud shadow removal methods are suitable for high-resolution remote sensing images, in which the object information within a cloud shadow is clearer than in lower-resolution images. Therefore, shadow compensation methods designed for ground objects can be applied to cloud shadow removal. Shadow-based methods can be divided into image enhancement methods and model-based methods [22,23]. These methods do not depend on other images; therefore, they are more suitable for the shadow information recovery of high-resolution images. Image enhancement methods mainly use linear stretching, histogram matching [24], logarithmic transformation [25], Wallis filtering [26], and other image enhancement principles to improve the brightness of a shadow area. The effect of such a method is related to the capability of the underlying enhancement algorithm, and its parameters are often determined based on experience. Consequently, achieving adaptive processing according to changes in the degree of shadow occlusion is challenging. Model-based methods mainly adopt linear correlation correction (LCC) [27,28], gamma correction [29], the color constancy principle [30], the illumination compensation model [31], and other methods to establish a shadow illumination compensation model. They add information from non-shadow areas into the compensation model to enhance the compensation of shadow areas. The linear correlation compensation principle [32] is similar to local statistical enhancement: both mainly utilize the statistical mean and variance of a non-shadow area as the target values to establish the compensation model. The principle of this method is simple and effective. On this basis, many methods modify the model, for example, by increasing the compensation intensity coefficient or automatically obtaining the compensation parameters [33]. Wang et al. [34] adopted an SVM classification method for shadow detection on roads; through region matching, it finds the corresponding local road units in the shadow and non-shadow areas to select compensation parameters accurately and complete adaptive compensation. Deep learning methods have recently been adopted for shadow removal in high-resolution images [35]. Zhang et al. [36] developed a recurrent shadow attention model (RSAM) to retrieve fine-scale land cover classes within cast shadows and self-shadows along the urban–rural gradient. However, the requirement of labeled samples for model construction restricts its application.
With the development of satellite imaging technology, the resolution of remote sensing images is gradually improving. As a result, the influence of clouds and cloud shadows in high-resolution images has increased, and the problem of uneven shading within cloud shadows becomes greater as the resolution increases. Neither cloud-based nor shadow-based methods can yet solve the issue of uneven information loss in the cloud shadows of high-resolution images. Since these problems are prone to occur in high-resolution images, shadow-based methods are more suitable than cloud-based methods. However, the majority of automatic shadow compensation methods are based on global image analysis or shadow-area-unit analysis, which are often designed for the shadows of ground objects. When they are directly applied to cloud shadow compensation, a large cloud shadow area is usually compensated for as a whole. In that case, the compensation model within the same area unit is unchanged, and the brightness is improved to the same extent across an inconsistently occluded area. Thus, insufficient or excessive compensation easily results. Such compensation cannot reflect the differences within a region or solve the problem of uneven occlusion within a cloud shadow.
Accordingly, in order to realize cloud shadow-balanced compensation, this paper proposes a balanced shadow compensation model combining multi-level information, including pixel, super-pixel, and shadow-related region information. A consistent super-pixel in the shadow region is obtained as a supplementary compensation unit by means of super-pixel segmentation. Then, it is applied in linear correlation correction compensation and integrated with region-level and pixel-level compensation information to realize the balanced compensation of a shadow area. The main contributions of this paper are as follows:
(1)
A solution to the problem of uneven shading caused by cloud shadows in high-resolution images. Because a cloud has areas of different thickness, the extent to which it shades ground objects is also inconsistent. This problem has not been discussed so far; however, if it is solved properly, high-resolution remote sensing images can be used more effectively.
(2)
Super-pixel information is introduced to compensate for shadow information. The super-pixel method segments the cloud shadow area into small regions with homogeneous information. Compared to traditional local region information extracted by windows around shadow pixels, a super-pixel can consider the complexity of an object in the shadow from the local shadow area to the global shadow region.
(3)
Multi-level information, including the shadow pixel, super-pixel, shadow region, and non-shadow region, is combined to solve the problem of unevenness in a cloud shadow. This can adaptively compensate for the shadow because coarse-to-fine-level information is considered.

2. Materials and Methods

Figure 1 shows the cloud shadow balanced compensation principle. First, a shadow area and its surrounding non-shadow area are obtained via the shadow detection method. Then, the super-pixel units in the shadow region of the image are extracted using the super-pixel segmentation principle. Next, the non-shadow area, shadow area, super-pixel units, and pixel information are integrated to establish a balanced compensation model based on an improved linear correlation correction model. Thus, the model can compensate for the brightness of a shadow area as the pixel, the super-pixel, and the shadow region change. Finally, the compensated brightness image is recombined with the saturation and hue components and converted into an RGB color space to obtain the compensated image. This method effectively integrates the local information of the shaded area into the compensation model. While compensating, it comprehensively considers the multi-level information from the pixel and super-pixel regions. Consequently, it compensates for cloud shadow areas more reasonably and achieves an effect similar to that of non-shadowed areas.

2.1. Cloud Shadow Detection and Post-Processing

Before cloud shadow compensation, cloud shadow areas must be detected. Traditionally, shadows are detected using typical shadow spectral features in pixel-level segmentation. The original image is given in an RGB color space. Since the original red (R), green (G), and blue (B) bands are correlated, their spectral information cannot be used to detect shadows effectively. By transforming the RGB color space into an HSI or c1c2c3 color space, shadows show more obvious features: high hue (H) values and low intensity (I) in the HSI color space, and high c3 values. These are therefore the traditional features used in shadow detection methods. However, they do not comply well with cloud shadow detection. Moreover, cloud shadows are much larger and have more severe problems of uneven shading than the shadows of ground objects. Therefore, this paper combines multiple composite shadow features with the morphological shadow index to detect cloud shadows and applies post-processing to remove non-cloud-shadow regions and fill the holes in cloud shadow regions.
Since simple shadow features cannot detect all the shadows well, we use a condition combination formula, which introduces newly designed shadow signatures and the automatic Otsu threshold strategy [37], to detect shadows. Q and A are composite features defined in Equations (1) and (2). Using Equation (3), the initial shadows can be detected [38].
$$ Q = B' - I \quad (1) $$

$$ A = \begin{cases} 2B' - I - G', & G' \le T\_G \\ 2B' - I - 2G', & G' > T\_G \end{cases} \quad (2) $$

$$ C_{SD} = \{ (i,j) \mid [B'(i,j) > T\_B \;\&\&\; I(i,j) < T\_I] \;\|\; [Q(i,j) > T\_Q \;\&\&\; G'(i,j) < T\_G] \;\|\; [A(i,j) > T\_A] \} \quad (3) $$
where I(i, j) represents the value of the intensity component, I, of the HSI space at pixel (i, j); B′(i, j) and G′(i, j) represent the normalized blue and normalized green components, respectively, at pixel (i, j), with B′ = B/(R + G + B) and G′ = G/(R + G + B).
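To make the detection conditions concrete, the following is a minimal sketch of Equations (1)–(3) in Python. It assumes a float RGB array scaled to [0, 1], takes the HSI intensity as the band mean, and derives every threshold with Otsu's method; the exact feature scaling and threshold strategy used in the paper may differ.

```python
import numpy as np
from skimage.filters import threshold_otsu

def detect_initial_shadows(rgb):
    """Sketch of Equations (1)-(3): composite-feature cloud shadow detection.

    `rgb` is a float array in [0, 1] with shape (H, W, 3).
    """
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = R + G + B + 1e-8
    Bn, Gn = B / total, G / total            # normalized blue B' and green G'
    I = rgb.mean(axis=-1)                    # intensity component of the HSI space

    T_G = threshold_otsu(Gn)                 # automatic Otsu thresholds [37]
    Q = Bn - I                               # Equation (1), assumed form
    A = np.where(Gn <= T_G,
                 2 * Bn - I - Gn,            # Equation (2), G' <= T_G branch
                 2 * Bn - I - 2 * Gn)        # Equation (2), G' >  T_G branch

    T_B, T_I, T_Q, T_A = (threshold_otsu(x) for x in (Bn, I, Q, A))
    # Equation (3): union of the three threshold conditions
    return ((Bn > T_B) & (I < T_I)) | ((Q > T_Q) & (Gn < T_G)) | (A > T_A)
```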
The morphological shadow index (MSI) [39,40], defined in Equation (4), was originally developed for the automatic extraction of ground object shadows from an image. In that setting, the distance to shadows is used as a spatial constraint on buildings, based on which some commission errors, such as open areas and bright soil, can be removed. Cloud shadows have a different distance character from building shadows; this paper found that cloud shadows have lower MSI values than building shadows do. Thus, we introduce the MSI with a different strategy to detect cloud shadows. The MSI is calculated from the differential morphological profiles (DMP) of black-top-hat-transformed data, defined as $BTH(d, s) = \varphi_b^{re}(d, s) - b$. It effectively supplements some undetected shadow areas where the spectral features are not distinct enough for the detection conditions to recognize shadows.
$$ MSI = \frac{\sum DMP_{BTH}(d, s)}{D \cdot S} \quad (4) $$
where D is the number of directions applied to the linear structural element (SE), s is the size of the SE, and $S = (s_{max} - s_{min})/\Delta s + 1$ is the number of scales. $\varphi_b^{re}(d, s)$ represents the closing-by-reconstruction of the brightness image, b. Generally, ground object shadows have larger MSI values, while cloud shadows have smaller values. Thus, using Equation (5), the final shadows are obtained by uniting the MSI result with $C_{SD}$ according to the different shadow types.
$$ C_{CloudSD} = \{ (i, j) \mid MSI(i, j) < T\_MSI \} \cup C_{SD} \quad (5) $$
Lastly, the initial detection of shadow areas is incomplete, and some high-brightness areas are easily missed. Therefore, post-processing operations such as erosion, dilation, opening, closing, small-area removal, and cavity filling are carried out on the shaded areas based on morphological methods. This retains the large cloud shadow areas and yields the final shadow regions.
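A minimal sketch of this post-processing chain, using scikit-image and SciPy morphology, is shown below; the structuring element radius and minimum area are illustrative values, not parameters from the paper.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import binary_opening, binary_closing, disk, remove_small_objects

def postprocess_shadow_mask(mask, min_area=500, se_radius=3):
    """Sketch of the morphological post-processing step on a boolean shadow mask."""
    se = disk(se_radius)
    mask = binary_opening(mask, se)               # suppress small, isolated false detections
    mask = binary_closing(mask, se)               # bridge small gaps inside shadow regions
    mask = remove_small_objects(mask, min_area)   # keep only large cloud shadow areas
    mask = ndimage.binary_fill_holes(mask)        # fill bright cavities inside shadows
    return mask
```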
As shown in Figure 2b–f, the shadow feature maps of H, I, B′, Q, and the MSI are compared to illustrate their capability of displaying the shadow feature. The simple H, I, and B′ features are less able than the Q feature to distinguish between cloud shadows and non-cloud-shadows. The MSI value in cloud shadows is lower than that in the shadow regions of ordinary ground objects. The initial and final shadow detection results are shown in Figure 2g,h.

2.2. Related Area Acquisition

Shadow-balanced compensation introduces multi-level information from the shadow and non-shadow areas, the super-pixel homogeneity units, and the shadow pixels. The following describes how the related regions are obtained.

2.2.1. Acquisition of Shadow Areas and Non-Shadow Areas

Shadow detection methods and morphological post-processing operations supply a whole cloud shadow region. In Figure 3, the red line is the boundary of the shadow regions extracted using our methods. Then, a non-shadow area is acquired by dilating the shadow area with K pixels. For example, the circular green region of a certain width outside the shadow region is extracted as the non-shadow region of the shadow region. In accordance with this method, different shadow areas and their corresponding non-shadow areas can be obtained. Then, the feature means and variance of the shadows and non-shadows are obtained using feature statistics. The shadow region and the non-shadow region provide a comparison between shadows and non-shadows as a global region. Then, this information is further used to build a shadow compensation model for this region.
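The ring-shaped non-shadow region and the region-level statistics can be obtained as in the following sketch; the dilation width K is illustrative.

```python
import numpy as np
from scipy import ndimage

def shadow_and_ring_stats(intensity, shadow_mask, K=20):
    """Sketch: build the K-pixel non-shadow ring around a shadow region and
    collect the mean/std statistics used later in the compensation model."""
    dilated = ndimage.binary_dilation(shadow_mask, iterations=K)
    ring = dilated & ~shadow_mask                  # ring-shaped non-shadow region
    m_SD, s_SD = intensity[shadow_mask].mean(), intensity[shadow_mask].std()
    m_NSD, s_NSD = intensity[ring].mean(), intensity[ring].std()
    return (m_SD, s_SD), (m_NSD, s_NSD)
```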

2.2.2. Super-Pixel Unit Acquisition

Super-pixel algorithms collect and group sets of pixels into meaningful portions or regions, named super-pixels, which can replace the rigid structure of the pixel grid [41,42]. Super-pixels are generated by clustering pixels according to their image features, color similarity, and spatial proximity. A super-pixel unit is a segmentation unit formed by merging a series of spatially adjacent pixels with similar features; it provides locally homogeneous regions around shadow pixels and local statistics between the shadow pixel and the global shadow area. The uneven loss within a cloud shadow area is reflected in the varying degree of occlusion at different positions in the same shadow area, and these internal differences cannot be presented through region-level statistics alone. Therefore, this paper proposes a shadow compensation model based on super-pixel unit information.
Simple linear iterative clustering (SLIC) is a commonly used method for super-pixel segmentation [43]. SLIC is considered a customized version of the k-means method to generate super-pixels for an image. In this paper, the SLIC method is used for the super-pixel segmentation of a shadow image. The super-pixel homogenous unit obtained from it can reflect local shadow information. The SLIC image segmentation method is as follows:
(1)
Initialize the seed points: Assume a raw high-resolution remote sensing image is composed of N pixels. Firstly, the image is divided into K rectangular grids of size N/K. The center point of each grid is the initial seed point of a super-pixel, and the distance between adjacent seed points is about $S = \sqrt{N/K}$. The center point positions of the K super-pixels are initialized based on the grid step, S, and a label is assigned to each seed point. The color feature, $F_k = [l_k, a_k, b_k]^T$, of super-pixel k is defined as the average of the pixel colors in the super-pixel cluster, where l, a, b are the three color components in the CIELab color space. The coordinate position of super-pixel k is defined as $X_k = [x_k, y_k]^T$. The initial cluster center is $C_k = [l_k, a_k, b_k, x_k, y_k]^T$, where $k \in [0, K-1]$.
(2)
Similarity measurement: The similarity between each pixel and the seed points is calculated. The similarity between any pixel, i, and a super-pixel center, $C_k$, is a weighted combination of color and spatial distance. The distance $D(i, C_k)$ between them is given in Equation (6); the smaller D is, the more similar pixel i and super-pixel k are.

$$ D(i, C_k) = \sqrt{d_C^2 + m^2 \left( \frac{d_S}{S} \right)^2} \quad (6) $$

where $d_C(i, C_k) = \| F_i - F_{C_k} \|_2$ is the color difference and $d_S(i, C_k) = \| X_i - X_{C_k} \|_2$ is the spatial distance between the pixel and the center. m is the balance parameter used to weight the proportions of color and distance in the similarity measurement, and S is the grid step.
(3)
Iterative update: Within the 2S × 2S range centered on each seed point, each pixel is compared to the seed point, and the label of the most similar seed point is assigned to the pixel. The mean position of all pixels in each new super-pixel set is then calculated as the new center point. After that, the residuals between the old and new center positions are calculated iteratively to determine whether the iteration has converged.
(4)
Connectivity optimization: In order to optimize the segmentation result and enhance the connectivity of the super-pixels, isolated regions with small areas are merged into adjacent large-area segments.
Algorithm 1 summarizes the SLIC super-pixel segmentation algorithm. Figure 4 shows the SLIC super-pixel segmentation result of image #1 in Figure 2a. It can be seen that the shadow region is divided into several super-pixels, in which homogeneous objects are connected together. These super-pixels provide local homogeneous regions between the pixel and the global shadow region, which can reflect the differences between different objects. They are then used as shadow analysis units to help improve processing efficiency and accuracy.
Algorithm 1 SLIC super-pixel segmentation
  Initialize cluster centers C_k = [l_k, a_k, b_k, x_k, y_k]^T by sampling pixels at regular grid steps, S
  Move cluster centers to the lowest gradient position in a 3 × 3 neighborhood
  Set label l(i) = −1 for each pixel, i
  Set distance d(i) = ∞ for each pixel, i
  repeat
    for each cluster center C_k do
      for each pixel, i, in a 2S × 2S region around C_k do
        Calculate the distance, D, between C_k and i
        if D < d(i) then
          set d(i) = D
          set l(i) = k
        end if
      end for
    end for
    /* Update */
    Calculate new cluster centers
    Calculate residual error, E
  until E ≤ threshold
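In practice, this segmentation does not need to be re-implemented from scratch; the sketch below uses the SLIC implementation in scikit-image, restricted to the detected shadow mask. The n_segments and compactness values (compactness plays the role of the balance parameter m) are illustrative.

```python
from skimage.segmentation import slic

def shadow_superpixels(rgb, shadow_mask, n_segments=500, compactness=10.0):
    """Sketch: SLIC segmentation restricted to the detected cloud shadow mask."""
    labels = slic(rgb, n_segments=n_segments, compactness=compactness,
                  mask=shadow_mask, start_label=1)
    return labels  # 0 outside the mask; super-pixel ids inside the shadow
```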

2.3. Multi-Level-Information Shadow-Balanced Compensation

Conventional linear correlation correction (LCC) is an image enhancement method. On the basis of the shadow detection result, each shadow area is regarded as a global analysis object, and the surrounding non-shadow area is obtained through morphological dilation. The characteristic values of the non-shadow area are taken as the compensation target. Overall shadow area compensation is then realized by calculating the brightness mean and standard deviation of the shadow area and its surrounding non-shadow area and substituting them into Equation (7):
$$ I'(i, j) = \frac{\sigma_{NSD}}{\sigma_{SD}} \left[ I(i, j) - m_{SD} \right] + m_{NSD} \quad (7) $$
where $I(i, j)$ is the brightness value of shadow pixel (i, j), $I'(i, j)$ is the brightness value after compensation, and $m_{SD}$, $\sigma_{SD}$ and $m_{NSD}$, $\sigma_{NSD}$ are the feature mean values and standard deviations of the shadow area and the non-shadow area, respectively.
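A minimal sketch of Equation (7), applied to the intensity band inside a detected shadow mask (intensity assumed scaled to [0, 1]):

```python
import numpy as np

def lcc_compensate(I, shadow_mask, m_SD, s_SD, m_NSD, s_NSD):
    """Sketch of Equation (7): conventional linear correlation correction."""
    out = I.astype(float).copy()
    out[shadow_mask] = (s_NSD / s_SD) * (I[shadow_mask] - m_SD) + m_NSD
    return np.clip(out, 0.0, 1.0)  # assumes intensity scaled to [0, 1]
```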
Compared to gamma correction and histogram matching, linear correlation correction is more effective at taking the characteristic mean and variance of the non-shadow area as the target values to recover the information of the shadow area [44], and it has a wider application range. After detecting the shadows in image #1 in Figure 5a and image #2 in Figure 5d, LCC is performed on the pixels in the shadow areas. The results are shown in Figure 5c,f. There is an obvious unbalanced compensation problem in the shadow areas, including insufficient compensation in the blue boxes and excessive compensation in the yellow circles. Because the cloud thickness is inconsistent, the degree of cloud shadow occlusion differs: positions under a thinner cloud are prone to excessive compensation, with brightness higher than in other areas, while positions under a thick cloud are prone to insufficient compensation. Methods that only use the information of the shadow pixels, the shadow area, and the non-shadow area therefore cause uneven compensation in the cloud shadow area, and the brightness enhancement cannot achieve the same effect as in a non-shadow area. For this reason, this paper proposes a cloud shadow-balanced compensation method combined with multi-level information.
Based on the principle of linear correlation correction, in this study we designed a balanced compensation model that combines multi-level information, as shown in Equation (8). It integrates four kinds of information: the raw pixel, the super-pixel unit, the shadow area, and the non-shadow area. It takes the non-shadow mean brightness and standard deviation as the target values, and the original brightness is compensated in a balanced way to obtain the brightness value $I_{rec}(i, j)$.
$$ I_{rec}(i, j) = m_{NSD} + \left[ I(i, j) - \left( \mu \, m_{SD}^R + \upsilon \, m_{SD}^S \right) \right] \cdot \frac{\sigma_{NSD}}{\mu \, \sigma_{SD}^R + \upsilon \, \sigma_{SD}^S} \quad (8) $$
where $m_{SD}^R$, $\sigma_{SD}^R$ and $m_{SD}^S$, $\sigma_{SD}^S$ are the mean values and standard deviations of the shadow region, R, and the super-pixel, S, respectively; $m_{NSD}$ and $\sigma_{NSD}$ are the mean value and standard deviation of the non-shadow area, which are used as the target values; $\mu$ and $\upsilon$ are the weights of the region, R, and the super-pixel, S, with $\mu + \upsilon = 1$ (generally, $\mu = \upsilon = 0.5$); and $I(i, j)$ and $I_{rec}(i, j)$ are the original and compensated brightness values of pixel (i, j), respectively.
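The following sketch applies Equation (8) per super-pixel inside the shadow mask, reusing the region-level and non-shadow statistics from the earlier sketches; the helper names (sp_labels from shadow_superpixels, statistics from shadow_and_ring_stats) are assumptions carried over from those sketches.

```python
import numpy as np

def balanced_compensate(I, shadow_mask, sp_labels,
                        m_NSD, s_NSD, m_R, s_R, mu=0.5, nu=0.5):
    """Sketch of Equation (8): multi-level balanced compensation.

    sp_labels: SLIC labels inside the shadow mask (0 outside).
    mu/nu: weights of the region-level (R) and super-pixel-level (S) statistics.
    """
    out = I.astype(float).copy()
    for k in np.unique(sp_labels[shadow_mask]):
        sel = (sp_labels == k) & shadow_mask
        m_S, s_S = I[sel].mean(), I[sel].std() + 1e-8   # super-pixel statistics
        gain = s_NSD / (mu * s_R + nu * s_S)
        out[sel] = m_NSD + (I[sel] - (mu * m_R + nu * m_S)) * gain
    return np.clip(out, 0.0, 1.0)  # assumes intensity scaled to [0, 1]
```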
This balanced compensation model makes full use of the super-pixel homogeneous unit information, which provides occlusion information for each super-pixel unit in the shadow area. The internal unbalanced compensation problem can be solved more effectively by establishing a balancing bridge between the shadow area and the shadow pixel, and the compensation result is closer to that of the non-shadow area. On the basis of the linear compensation formula, the mean and variance of the shadow area are substituted into the formula. The compensation results are then compared with those of compensation models that only use the shadow area information, only use the super-pixel unit information, or synthesize the information of the shadow area and the super-pixel unit.

3. Experimental Results

3.1. Data Description

Several high-resolution remote sensing images with cloud shadows from GF-2 and aerial platforms were tested. As shown in Figure 6a, Figure 7a and Figure 8a, three representative images were selected to demonstrate the cloud shadow detection and compensation method. Figure 6 shows an aerial remote sensing image with a resolution of 20 cm located in a mountainous area of China; since the resolution is high, the cloud shadow area is enormous, and the information loss is profound. Figure 7 shows a GF-2 satellite image of a plain rural area of China with a resolution of 0.8 m, in which different plants are shaded under a cloud shadow. Figure 8 shows a satellite remote sensing image of an urban area.

3.2. Experimental Evaluation Index

The differences between the mean brightness, B, and mean gradient, T, of the pixels in the shadow area and the mean brightness, B_NSD, and mean gradient, T_NSD, in the non-shadow area are the main indexes used to measure the shadow compensation effect [26]. According to the brightness characteristics after shadow compensation, the normalized difference ratios are calculated with Equations (9) and (10). The compensation quality index, $Q_{B+T}$, is then calculated with Equation (11) from the brightness difference, $\Delta B$, and the average gradient difference, $\Delta T$, relative to the non-shadow area. In this way, the quality of pixel compensation in the shadow area is calculated, and the compensation result can be quantitatively evaluated.
$$ \Delta B^2 = \left( \frac{B - B_{NSD}}{B + B_{NSD}} \right)^2 \quad (9) $$

$$ \Delta T^2 = \left( \frac{T - T_{NSD}}{T + T_{NSD}} \right)^2 \quad (10) $$

$$ Q_{B+T} = \Delta B^2 + \Delta T^2 \quad (11) $$
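A sketch of Equations (9)–(11) follows; computing the mean gradient T from a simple finite-difference gradient magnitude is one plausible reading of the index, not necessarily the exact implementation in [26].

```python
import numpy as np

def compensation_quality(I, shadow_mask, nonshadow_mask):
    """Sketch of Equations (9)-(11): normalized brightness/gradient
    differences between the compensated shadow and the non-shadow area."""
    gy, gx = np.gradient(I.astype(float))
    grad = np.hypot(gx, gy)                     # gradient magnitude as texture proxy
    B, B_nsd = I[shadow_mask].mean(), I[nonshadow_mask].mean()
    T, T_nsd = grad[shadow_mask].mean(), grad[nonshadow_mask].mean()
    dB2 = ((B - B_nsd) / (B + B_nsd)) ** 2      # Equation (9)
    dT2 = ((T - T_nsd) / (T + T_nsd)) ** 2      # Equation (10)
    return dB2 + dT2                            # Q_{B+T}, Equation (11)
```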

3.3. Comparative Experimental Results

3.3.1. Qualitative Evaluation

As shown in Figure 6, Figure 7 and Figure 8, three comparative experiments are illustrated. First, the cloud shadows are detected using the proposed detection method; the results are shown in Figure 6b, Figure 7b and Figure 8b. Then, other compensation methods, namely linear correlation correction (LCC) [28], histogram matching (HM) [24], gamma transformation (GT) [29], corresponding shadow restoration (CSR) [44], and the Wallis and LCC combined method (WLC) [26], are selected as reference methods to compensate for the shadow areas. The results are shown in Figure 6d–h, Figure 7d–h and Figure 8d–h and are compared with those of the proposed method, shown in Figure 6i, Figure 7i and Figure 8i. In addition, Table 1 shows the quantitative compensation quality.
In Figure 6, the cloud shadow obscures large areas of the residential region in the valley and parts of the mountain. Because of the thick cloud, the information in the cloud shadow is almost impossible to use. Several compensation methods can effectively recover the shadow information. However, the contrast enhancement of LCC and GT is relatively low, and their ability to present details is slightly weak, especially around the forest area. The results of the HM method show excessive contrast enhancement, resulting in contrast distortion. The CSR, which uses non-shadow information for the global cloud shadow, is not adaptive to the different internal shadow regions; its compensation result shows a low contrast, so the detailed information in the shadow region is not recovered to the level of the non-shadow area. The WLC method combines the Wallis and LCC methods with internal window compensation; its contrast is improved so much that real information is lost, for example, the trees are over-recovered. In contrast, the proposed method makes full use of the different information levels and compensates for such detailed information in a more balanced way.
As shown in Figure 7, the cloud shadow covers most of the farmland area, and the farmland information is relatively simple. Thus, the shadow detection results have inaccurate recognition problems in some shadow areas, and a non-shadow area between two adjacent cloud shadows is mistakenly identified as a shadow area. In this case, the non-shadow area also has its brightness directly increased through shadow compensation, leading to an obvious over-compensation problem. The LCC, GT, and CSR show a lower contrast in the compensation results; on the contrary, the WLC shows over-compensated contrast. The HM result shows a strong difference between the dark and light areas. As a result, if a non-shadow area is accidentally included in the shadow mask, it affects the compensation, making bright areas brighter and dark areas darker. Nevertheless, the proposed method enhances the brightness in a more balanced way. The over-compensation in the non-shadow area is effectively alleviated because the information of the super-pixel around each pixel is fully utilized to offset the degree of brightness enhancement caused by the compensation.
As shown in Figure 8, the building information in the shadow area is complex and changeable, and the details are more abundant. Moreover, the occlusion degree within the cloud shadow is inconsistent: the occlusion of the central region is larger than that of the edge, so the inner region appears relatively dark in the other compensation results. The LCC and GT show similar compensation results: contrast and brightness are improved but remain below the non-shadow object values. HM enhances the contrast so much that the regions around the shadow edge are over-compensated while the central region is insufficiently compensated. CSR shows a lower contrast improvement; although it can enhance brightness, the detailed information cannot be restored well. The shadow area contains much detailed information, and if local information is not considered in the compensation model, the thickness of the cloud will affect the compensation results. WLC also uses window information unsuitably, and its contrast is over-compensated; if the local region is unsuitable, the compensation results become abnormal. Nonetheless, the method proposed in this paper balances out the inconsistency in compensation. The overall compensation effect is very uniform, and the presentation of detailed information is closer to that of the non-shadow area.
Four windows in Figure 6d, Figure 7d and Figure 8d, named W1–W4, were cut from the six compensation results and are shown in Figure 9. Columns a–f are the results of the LCC, HM, GT, CSR, WLC, and our compensation approaches, respectively. W1 is a region covered with forest, where the brightness and contrast are lower than in other regions. After compensation, the general brightness was improved by all the methods, but the contrast improvement differed greatly. HM extended the difference between bright and dark areas, making parts of the woods darker. The LCC, GT, and CSR cannot show the details in the forest. WLC improved the contrast greatly, but the details are over-enhanced. Our result recovers the brightness and contrast in a balanced way. W2–W4 are shadow regions under clouds of different thicknesses: some areas are brighter, some are darker, and some non-cloud-shadow regions may have been detected mistakenly. The different compensation methods consequently enhance the shaded information with different levels of effectiveness. HM and GT may over-compensate the shaded information in some brighter areas, enhancing it so much that real information is lost, as shown in Figure 9W2b and Figure 9W3c. In contrast, LCC offers a stable compensation capability, and the brightness is also enhanced, as shown in Figure 9W2a. WLC mixes pixel information from the LCC and Wallis compensation results; however, it does not consider suitable local region information, resulting in over-compensated contrast. Our method considers multi-level information from the pixel, super-pixel, and shadow regions; the brightness change in local regions can be sensed and adjusted for, showing better-balanced compensation results.

3.3.2. Quantitative Evaluation

Table 1 compares the shadow compensation results with the target values in the non-shadow area. In terms of the statistical mean brightness and mean gradient, the smaller QB+T is, the closer the result is to the non-shadow area. It can be seen from the table that the compensation quality index of the method proposed in this paper is the minimum among all methods. However, because this index is an overall measure of the global region, it cannot reflect detailed information. In order to describe the effect of balanced compensation in more detail, the brightness histograms of the shadow areas of the compensated images in Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 are calculated. In theory, the histogram of a high-quality image presents a smooth, normal-like distribution curve. In comparison, the histograms of the reference methods all show a certain mutation phenomenon, while the histograms of the results compensated by the proposed method are smooth. This further shows that the proposed method improves the balance of the compensation effect and that its results are closer to the histogram of a non-shadow area under normal lighting conditions.

3.4. Discussion

3.4.1. The Effectiveness of Combined Region, Super-Pixel, and Pixel Information

The compensation combines multi-level information from the pixel level, the super-pixel level, and the shadow region level. This combination lets the compensated result follow the pixel information change at different levels and adapt to it. Therefore, the strategy provides a more balanced way to compensate for shadow information self-adaptively. The results for image #1 in Figure 5a compensated with the global shadow region information, the super-pixel information, and the multi-level information are shown in Figure 10a–c, respectively, and the detailed shadow compensation results are shown in Figure 10d,e. The multi-level information combined with the super-pixel local information provides information from the local to the global region. It can be seen that the imbalance of the internal brightness distribution is obvious when compensating only with the global shadow area. The uneven compensation can be balanced by using the statistics of super-pixel units alone; however, a contrast over-compensation problem may then occur, leading to partial compensation distortion, as shown in Figure 10e.
The statistical brightness histograms of the shadow area are shown in Figure 11. Figure 11a is the histogram of Figure 10a, compensated only with the region information by setting $\mu = 1, \upsilon = 0$; adjacent brightness values show great differences, and the variance is larger. Figure 11b is the histogram of Figure 10b, compensated only with the super-pixel information by setting $\mu = 0, \upsilon = 1$; the histogram is much smoother, but the total brightness is not improved enough. Figure 11c is the histogram of Figure 10c, compensated with the region and super-pixel information together by setting $\mu = 0.5, \upsilon = 0.5$; this histogram is closest to that of a normal image without cloud shadows, being smooth and conforming to a normal distribution. Thus, the balanced compensation model, integrated with the statistical information of the shadow area and the super-pixel unit, balances the uneven compensation effectively and does not introduce compensation distortion.

3.4.2. Reducing the Influence of Cloud Thickness

The proposed method can reduce the influence of cloud thickness and of the cloud shadow detection results. Thin and thick clouds shade ground objects to different degrees on sunny days, causing uneven image information loss; as a result, cloud shadow detection is much more challenging. If non-cloud-shadow areas are mistakenly recognized as cloud shadows, over-compensation would still not appear, because our proposed method considers multi-level information, including local super-pixel information, even when the non-cloud-shadow region is compensated for. This reduces the dependence of the compensation on cloud shadow detection accuracy and makes the method more adaptive to different conditions. As shown in Figure 12, based on the original cloud shadow detection result in Figure 12(a1), mask 2 in Figure 12(a2) and mask 3 in Figure 12(a3) are acquired by dilating the original mask by 5 and 10 pixels, respectively; as a result, some non-cloud-shadow regions are included in mask 2 and mask 3. By comparing the compensation results under these three masks, the ability to deal with regions of different shading extents can be assessed. From the second row in Figure 12b to the seventh row in Figure 12g, these shadow masks are compensated for via LCC, HM, GT, CSR, WLC, and our method. The results of LCC, HM, GT, and CSR cannot solve the imbalance problem: if non-shadows are included in the shadow mask, they are over-compensated for, and for HM and GT the insufficiency and over-compensation are especially obvious. WLC considers small-window information from the Wallis and LCC aspects, reducing the imbalance problem; however, the contrast is over-enhanced, and there is more noise in the compensation results. Our method obtains the most balanced compensation results: even if some non-shadows are included in the shadow mask, they are not over-compensated for, and the center of a cloud shadow with the greatest information loss can still be sufficiently compensated for. This is owed to the super-pixel local information around each pixel and the multi-level information combination strategy. Therefore, the proposed method can adaptively compensate for information affected by uneven shading as long as the cloud shadow can be detected; the erroneous detection of non-shadows has little effect on the final compensation results.

4. Conclusions

Cloud shadows are a very common phenomenon in high-resolution remote sensing images. The information loss of shaded ground objects varies with the thickness of the cloud. Performing cloud shadow compensation to recover the shaded information is very useful for further image utilization and analysis. However, traditional shadow compensation methods do not take this variation into consideration, so their results suffer from over-compensation and insufficient compensation. Therefore, this paper designs a balanced compensation method integrated with multi-level information to solve the problem of unbalanced information loss in cloud shadow regions. The compensated images can be further used in other applications, such as image interpretation and object recognition. By combining the information of the non-shadow and shadow areas, the super-pixel, and the pixel, the traditional linear correlation correction compensation model is improved, so the compensation can balance the uneven compensation of the shadow area more effectively. Reasonably homogeneous pixel regions are obtained via super-pixel segmentation and incorporated into the compensation model to accurately reflect the local information of the shadow pixels. This is an effective way to mitigate the problem of imbalanced compensation. The experimental results show that the proposed method can effectively and reasonably resolve unbalanced compensation problems, such as insufficient and excessive compensation, in a shadow area. It makes the presentation of detail more balanced and closer to the effect of the non-shadow area through global and local feedback, realizing balanced cloud shadow compensation. This algorithm can be used as an image preprocessing step to improve the image and the quality of its related products.
Our main effort is to solve the problem of information loss due to the uneven shading caused by cloud shadows. The proposed method makes effective use of super-pixel local information and regional global information to solve the problem of unbalanced compensation. However, we only discussed cloud shadow occlusion in this paper. Cloud shadows always coexist with clouds, and differently from cloud shadows, thick clouds obscure ground information severely; the traditional replacement strategy may change the real information. In order to ensure precision, future work should aim to detect and compensate for clouds and cloud shadows together as accurately as possible.

Author Contributions

Conceptualization, Y.L. and X.G.; methodology, Y.K., X.G. and Y.Z.; writing—original draft preparation, Y.L. and X.G.; writing—review and editing, Y.L. and X.G.; supervision, Y.L., B.L. and Y.K.; funding acquisition, Y.L., X.G., B.W. and Y.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by Open Fund of Key Laboratory of Mine Environmental Monitoring and Improving around Poyang Lake, Ministry of Natural Resources (no. MEMI-2021-2022-08); Research Foundation of the Department of Natural Resources of Hunan Province (no. 2022-03, 20230153CH, 20230130LY); Open Fund of Hunan Provincial Key Laboratory of Geo-Information Engineering in Surveying, Mapping and Remote Sensing, Hunan University of Science and Technology (E22205); Open Fund of National Engineering Laboratory for Digital Construction and Evaluation Technology of Urban Rail Transit (no. 2023ZH01 and no. 2021ZH02); The National Natural Science Foundation of China (no. 4217021074).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

MODIS Moderate resolution imaging spectroradiometer
CNN Convolutional neural network
LCC Linear correlation correction
RSAM Recurrent shadow attention model
MSI Morphological shadow index
DMP Differential morphological profiles
SE Structure element
SLIC Simple linear iterative clustering
HM Histogram matching
GT Gamma transformation
CSR Corresponding shadow restoration
WLC Wallis and linear correlation correction

References

  1. Gao, X.; Wang, M.; Yang, Y.; Li, G. Building Extraction From RGB VHR Images Using Shifted Shadow Algorithm. IEEE Access 2018, 6, 22034–22045. [Google Scholar] [CrossRef]
  2. Baetens, L.; Desjardins, C.; Hagolle, O. Validation of Copernicus Sentinel-2 Cloud Masks Obtained from MAJA, Sen2Cor, and FMask Processors Using Reference Cloud Masks Generated with a Supervised Active Learning Procedure. Remote Sens. 2019, 11, 433. [Google Scholar] [CrossRef]
  3. Yang, B. Supervised Nonlinear Hyperspectral Unmixing With Automatic Shadow Compensation Using Multiswarm Particle Swarm Optimization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5529618. [Google Scholar] [CrossRef]
  4. Duan, P.; Hu, S.; Kang, X.; Li, S. Shadow Removal of Hyperspectral Remote Sensing Images With Multiexposure Fusion. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5537211. [Google Scholar] [CrossRef]
  5. Valjarević, A.; Djekić, T.; Stevanović, V.; Ivanović, R.; Jandziković, B. GIS numerical and remote sensing analyses of forest changes in the Toplica region for the period of 1953–2013. Appl. Geogr. 2018, 92, 131–139. [Google Scholar] [CrossRef]
  6. Xianjun, G.; Youchuan, W.; Yuanwei, Y.; Peipei, H. Automatic Cloud Shadow Removal in Single Aerial Image. J. Tianjin Univ. (Sci. Technol.) 2014, 47, 771–777. [Google Scholar]
  7. Zhang, G.; Gao, X.; Yang, J.; Yang, Y.; Tan, M.; Xu, J.; Wang, Y. A multi-task driven and reconfigurable network for cloud detection in cloud-snow coexistence regions from very-high-resolution remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2022, 114, 103070. [Google Scholar] [CrossRef]
  8. Li, Z.; Shen, H.; Weng, Q.; Zhang, Y.; Dou, P.; Zhang, L. Cloud and Cloud Shadow Detection for Optical Satellite Imagery: Features, Algorithms, Validation, and Prospects. ISPRS J. Photogramm. Remote Sens. 2022, 188, 89–108. [Google Scholar] [CrossRef]
  9. Bocharov, D.A.; Nikolaev, D.P.; Pavlova, M.A.; Timofeev, V.A. Cloud Shadows Detection and Compensation Algorithm on Multispectral Satellite Images for Agricultural Regions. J. Commun. Technol. Electron. 2022, 67, 728–739. [Google Scholar] [CrossRef]
  10. Zhang, G.; Gao, X.; Yang, Y.; Wang, M.; Ran, S. Controllably Deep Supervision and Multi-Scale Feature Fusion Network for Cloud and Snow Detection Based on Medium- and High-Resolution Imagery Dataset. Remote Sens. 2021, 13, 4805. [Google Scholar] [CrossRef]
  11. Li, Z.; Shen, H.; Li, H.; Xia, G.; Gamba, P.; Zhang, L. Multi-Feature Combined Cloud and Cloud Shadow Detection in Gaofen-1 Wide Field of View Imagery. Remote Sens. Environ. 2017, 191, 342–358. [Google Scholar] [CrossRef]
  12. Chen, Y.; He, W.; Yokoya, N.; Huang, T.-Z. Blind cloud and cloud shadow removal of multitemporal images based on total variation regularized low-rank sparsity decomposition. ISPRS J. Photogramm. Remote Sens. 2019, 157, 93–107. [Google Scholar] [CrossRef]
  13. Liu, X.; Yang, F.; Wei, H.; Gao, M. Shadow Compensation from UAV Images Based on Texture-Preserving Local Color Transfer. Remote Sens. 2022, 14, 4969. [Google Scholar] [CrossRef]
  14. Shen, H.; Li, X.; Cheng, Q.; Zeng, C.; Yang, G.; Li, H.; Zhang, L. Missing Information Reconstruction of Remote Sensing Data: A Technical Review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 61–85. [Google Scholar] [CrossRef]
  15. Li, X.; Shen, H.; Zhang, L.; Zhang, H.; Yuan, Q.; Yang, G. Recovering Quantitative Remote Sensing Products Contaminated by Thick Clouds and Shadows Using Multitemporal Dictionary Learning. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7086–7098. [Google Scholar] [CrossRef]
  16. Zhang, Q.; Yuan, Q.; Li, J.; Li, Z.; Shen, H.; Zhang, L. Thick cloud and cloud shadow removal in multitemporal imagery using progressively spatio-temporal patch group deep learning. ISPRS J. Photogramm. Remote Sens. 2020, 162, 148–160. [Google Scholar] [CrossRef]
  17. Wang, T.; Shi, J.; Letu, H.; Ma, Y.; Li, X.; Zheng, Y. Detection and Removal of Clouds and Associated Shadows in Satellite Imagery Based on Simulated Radiance Fields. J. Geophys. Res. Atmos. 2019, 124, 7207–7225. [Google Scholar] [CrossRef]
  18. Wang, L.; Qu, J.; Xiong, X.; Hao, X.; Xie, Y.; Che, N. A New Method for Retrieving Band 6 of Aqua MODIS. IEEE Geosci. Remote. Sens. Lett. 2006, 3, 267–270. [Google Scholar] [CrossRef]
  19. Chai, D.; Newsam, S.; Zhang, H.K.; Qiu, Y.; Huang, J. Cloud and cloud shadow detection in Landsat imagery based on deep convolutional neural networks. Remote Sens. Environ. 2019, 225, 307–316. [Google Scholar] [CrossRef]
  20. Cheng, Q.; Shen, H.; Zhang, L.; Yuan, Q.; Zeng, C. Cloud removal for remotely sensed images by similar pixel replacement guided with a spatio-temporal MRF model. ISPRS J. Photogramm. Remote Sens. 2014, 92, 54–68. [Google Scholar] [CrossRef]
  21. Wang, N.; Li, W.; Tao, R.; Du, Q. Graph-based block-level urban change detection using Sentinel-2 time series. Remote Sens. Environ. 2022, 274, 112993. [Google Scholar] [CrossRef]
  22. Mostafa, Y. A Review on Various Shadow Detection and Compensation Techniques in Remote Sensing Images. Can. J. Remote Sens. 2017, 43, 545–562. [Google Scholar] [CrossRef]
  23. Zigh, E.; Belbachir, M.F.; Kadiri, M.; Djebbouri, M.; Kouninef, B. New shadow detection and removal approach to improve neural stereo correspondence of dense urban VHR remote sensing images. Eur. J. Remote Sens. 2015, 48, 447–463. [Google Scholar] [CrossRef]
  24. Tsai, V. A comparative study on shadow compensation of color aerial images in invariant color models. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1661–1671. [Google Scholar] [CrossRef]
  25. Wan, C.-Y.; King, B.A.; Li, Z. An Assessment of Shadow Enhanced Urban Remote Sensing Imagery of a Complex City-Hong Kong. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 177–182. [Google Scholar] [CrossRef]
  26. Yang, Y.; Ran, S.; Gao, X.; Wang, M.; Li, X. An Automatic Shadow Compensation Method via a New Model Combined Wallis Filter with LCC Model in High Resolution Remote Sensing Images. Appl. Sci. 2020, 10, 5799. [Google Scholar] [CrossRef]
  27. Yamazaki, F.; Liu, W.; Takasaki, M. Characteristics of Shadow and Removal of Its Effects for Remote Sensing Imagery. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2009, Cape Town, South Africa, 12–17 July 2009. [Google Scholar]
  28. Yang, J.; He, Y.; Caspersen, J. Fully constrained linear spectral unmixing based global shadow compensation for high resolution satellite imagery of urban areas. Int. J. Appl. Earth Obs. Geoinform. 2015, 38, 88–98. [Google Scholar] [CrossRef]
  29. Aboutalebi, M.; Torres-Rua, A.F.; Mckee, M.; Kustas, W.; Coopmans, C. Behavior of Vegetation/Soil Indices in Shaded and Sunlit Pixels and Evaluation of Different Shadow Compensation Methods Using Uav High-Resolution Imagery over Vineyards. In Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping III; SPIE: Bellingham, WA, USA, 2018. [Google Scholar]
  30. Han, H.; Han, C.; Huang, L.; Lan, T.; Xue, X. Irradiance Restoration Based Shadow Compensation Approach for High Resolution Multispectral Satellite Remote Sensing Images. Sensors 2020, 20, 6053. [Google Scholar] [CrossRef]
  31. Li, H.; Zhang, L.; Shen, H. An Adaptive Nonlocal Regularized Shadow Removal Method for Aerial Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2013, 52, 106–120. [Google Scholar] [CrossRef]
  32. Sarabandi, P.; Yamazaki, F.; Matsuoka, M.; Kiremidjian, A. Shadow Detection and Radiometric Restoration in Satellite High Resolution Images. In Proceedings of the 2004 IEEE International Geoscience and Remote Sensing Symposium, IGARSS ’04, Anchorage, AK, USA, 20–24 September 2004. [Google Scholar]
  33. Zhang, H.; Sun, K.; Li, W. Object-Oriented Shadow Detection and Removal From Urban High-Resolution Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6972–6982. [Google Scholar] [CrossRef]
  34. Wang, C.; Xu, H.; Zhou, Z.; Deng, L.; Yang, M. Shadow Detection and Removal for Illumination Consistency on the Road. IEEE Trans. Intell. Veh. 2020, 5, 534–544. [Google Scholar] [CrossRef]
  35. Li, Y.; Wei, F.; Zhang, Y.; Chen, W.; Ma, J. HS2P: Hierarchical spectral and structure-preserving fusion network for multimodal remote sensing image cloud and shadow removal. Inf. Fusion 2023, 94, 215–228. [Google Scholar] [CrossRef]
  36. Zhang, Y.; Chen, G.; Vukomanovic, J.; Singh, K.K.; Liu, Y.; Holden, S.; Meentemeyer, R.K. Recurrent Shadow Attention Model (RSAM) for shadow removal in high-resolution urban land-cover mapping. Remote Sens. Environ. 2020, 247, 111945. [Google Scholar] [CrossRef]
  37. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  38. Xianjun, G.; Youchuan, W.; Yuanwei, Y.; Peipei, H. Automatic Shadow Detection and Automatic Compensation in High Resolution Remote Sensing Images. Acta Autom. Sin. 2014, 40, 1709–1720. [Google Scholar]
  39. Huang, X.; Zhang, L. Morphological Building/Shadow Index for Building Extraction From High-Resolution Imagery Over Urban Areas. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 5, 161–172. [Google Scholar] [CrossRef]
  40. Jiménez, L.I.; Plaza, J.; Plaza, A. Efficient implementation of morphological index for building/shadow extraction from remotely sensed images. J. Supercomput. 2016, 73, 482–494. [Google Scholar] [CrossRef]
  41. Ibrahim, A.; El-kenawy, E.-S.M. Image Segmentation Methods Based on Superpixel Techniques: A Survey. J. Comput. Sci. Inf. 2020, 15, 1–11. [Google Scholar]
  42. Wang, M.; Liu, X.; Gao, Y.; Ma, X.; Soomro, N.Q. Superpixel Segmentation: A Benchmark. Signal Process. Image Commun. 2017, 56, 28–39. [Google Scholar]
43. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef]
  44. Mostafa, Y.; Abdelwahab, M.A. Corresponding regions for shadow restoration in satellite high-resolution images. Int. J. Remote Sens. 2018, 39, 7014–7028. [Google Scholar] [CrossRef]
Figure 1. Flowchart of balanced shadow compensation.
Figure 2. Cloud shadow feature images and detection results. (a) Original image #1. (b) H feature image. (c) I feature image. (d) B′ feature image. (e) Q feature image. (f) MSI feature image. (g) Initial shadow result. (h) Shadow post-processing result. Red pixels in (g,h) are the detected shadow pixels.
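For orientation, the detection step behind Figure 2g,h can be sketched as follows: threshold a shadow feature image with Otsu's method [37], then remove small non-cloud-shadow objects in post-processing. This is a minimal illustration, not the paper's exact pipeline; the feature q below, the file name, and the area threshold are hypothetical stand-ins for the complex shadow feature actually used.

```python
# Minimal sketch of Figure 2's detection step (illustrative features only):
# Otsu thresholding [37] followed by area-based post-processing.
import numpy as np
from skimage import io, color
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def initial_shadow_mask(rgb, min_area=500):
    hsv = color.rgb2hsv(rgb)                 # hue/saturation/value in [0, 1]
    h, v = hsv[..., 0], hsv[..., 2]
    q = (h + 1.0) / (v + 1.0)                # hypothetical shadow feature:
                                             # shadows are dark (low V)
    mask = q > threshold_otsu(q)             # global Otsu split
    return remove_small_objects(mask, min_size=min_area)  # drop small blobs

mask1 = initial_shadow_mask(io.imread("image1.tif") / 255.0)  # file name assumed
```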
Figure 3. Related regions of shadow compensation.
Figure 4. SLIC segmentation results of image #1.
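The homogeneous regions in Figure 4 can be approximated with the SLIC implementation from scikit-image [43]. A minimal sketch; n_segments and compactness are illustrative values, not the settings used in the paper.

```python
# Super-pixel segmentation in the spirit of Figure 4 (SLIC, ref. [43]).
from skimage import io
from skimage.segmentation import slic, mark_boundaries

image = io.imread("image1.tif")                       # file name assumed
labels = slic(image, n_segments=800, compactness=10)  # label map of super-pixels
overlay = mark_boundaries(image, labels)              # draw super-pixel borders
```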
Figure 5. Results of shadow detection and compensation via LCC. (a) Original image #1. (b) Shadow detection result. (c) LCC compensation result. (d) Original image #2. (e) Shadow detection result. (f) LCC compensation result. Red pixels in (b,e) represent detected shadows. Yellow circles in (c,f) mark regions with over-compensation. Blue frames in (c,f) mark regions with insufficient compensation.
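The artifacts marked in Figure 5 follow from how plain LCC works: every shadow pixel is stretched with one global pair of shadow/non-shadow statistics, so patches darker or brighter than the shadow average are pushed too far or not far enough. A minimal per-band sketch of that principle, with variable names of our own choosing:

```python
# Plain LCC on a single band: match shadow mean/std to non-shadow mean/std.
import numpy as np

def lcc(band, shadow_mask):
    s, ns = band[shadow_mask], band[~shadow_mask]
    out = band.astype(float).copy()
    # one global linear stretch for the whole shadow region
    out[shadow_mask] = (s - s.mean()) * (ns.std() / (s.std() + 1e-6)) + ns.mean()
    return np.clip(out, 0.0, 255.0)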
Figure 6. Comparison of shadow compensation results in aerial image #3. (a) Original aerial image #3. (b) Initial shadow detection result. (c) Final shadow detection result. (d) LCC result. (e) HM result. (f) GT result. (g) CSR result. (h) WLC result. (i) The proposed method’s result. Red pixels in (b,c) are detected shadows.
Figure 7. Comparison of shadow compensation results in satellite image #4. (a) Original satellite image #4. (b) Initial shadow detection result. (c) Final shadow detection result. (d) LCC result. (e) HM result. (f) GT result. (g) CSR result. (h) WLC result. (i) The proposed method’s result. Red pixels in (b,c) are detected shadows.
Figure 8. Comparison of shadow compensation results in aerial image #5. (a) Original aerial image #5. (b) Initial shadow detection result. (c) Final shadow detection result. (d) LCC result. (e) HM result. (f) GT result. (g) CSR result. (h) WLC result. (i) The proposed method’s result. Red pixels in (b,c) are detected shadows.
Figure 9. Detailed shadow compensation results of windows 1–4 in images #3–5. (a) LCC. (b) HM. (c) GT. (d) CSR. (e) WLC. (f) Ours.
Figure 10. Comparison of shadow compensation results of image #1 in Figure 5a using shadow information of different regions. (a) Result using shadow region information. (b) Result using super-pixel information. (c) Result combining multi-level information. (d–f) Detailed compensated results of (a–c), respectively. The yellow frames in (a–c) mark an area with different compensation results. The yellow circles and red frames in (d–f) mark zoomed regions with obvious differences.
Figure 11. Histogram comparison of shadow compensation results using shadow information of different regions. (a) Histogram of image #1 using shadow region information; u = 1 and v = 0. (b) Histogram of image #1 using super-pixel information; u = 0 and v = 1. (c) Histogram of image #1 combining multi-level information; u = 0.5 and v = 0.5.
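A hedged sketch of the balanced model compared in Figures 10 and 11, as we read it: the reference mean and deviation blend the global shadow-region statistics (weight u) with the local super-pixel statistics (weight v), so u = 1, v = 0 reduces to region-only compensation, u = 0, v = 1 to super-pixel-only compensation, and u = v = 0.5 to the multi-level setting of panel (c). The blending form below is our interpretation, not the authors' verbatim formula.

```python
# Balanced compensation sketch: blend shadow-region and super-pixel
# statistics with weights u and v before the linear stretch (our reading).
import numpy as np

def balanced_compensation(band, shadow_mask, superpixels, u=0.5, v=0.5):
    out = band.astype(float).copy()
    ns = band[~shadow_mask]
    mu_ns, sd_ns = ns.mean(), ns.std()                   # non-shadow target
    sh = band[shadow_mask]
    mu_sh, sd_sh = sh.mean(), sh.std()                   # global shadow statistics
    for sp in np.unique(superpixels[shadow_mask]):
        sel = shadow_mask & (superpixels == sp)
        mu_sp, sd_sp = band[sel].mean(), band[sel].std() # local super-pixel statistics
        mu = u * mu_sh + v * mu_sp                       # multi-level mean
        sd = u * sd_sh + v * sd_sp                       # multi-level deviation
        out[sel] = (band[sel] - mu) * (sd_ns / (sd + 1e-6)) + mu_ns
    return np.clip(out, 0.0, 255.0)
```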
Figure 12. Comparison of shadow compensation results based on different cloud shadow masks. (a1) Original shadow mask 1. (a2) Mask 2, obtained by dilating mask 1 by 5 pixels. (a3) Mask 3, obtained by dilating mask 1 by 10 pixels. (b1–b3) LCC results of masks 1–3. (c1–c3) HM results of masks 1–3. (d1–d3) GT results of masks 1–3. (e1–e3) CSR results of masks 1–3. (f1–f3) WLC results of masks 1–3. (g1–g3) Our results of masks 1–3.
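The dilated masks of Figure 12(a2,a3) can be reproduced in two lines, assuming that "dilating 5/10 pixels" means morphological dilation with a disk of that radius (our reading of the caption); mask1 is the detected shadow mask from the earlier sketch.

```python
# Dilated shadow masks for the robustness test in Figure 12 (assumed radii).
from skimage.morphology import binary_dilation, disk

mask2 = binary_dilation(mask1, disk(5))    # mask 2, Figure 12(a2)
mask3 = binary_dilation(mask1, disk(10))   # mask 3, Figure 12(a3)
```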
Table 1. Comparison of shadow compensation results for images #3–5.
Image   Method   Q_{B+T}   ΔB       ΔT       Compensated Shadow Region      Target Non-Shadow Region
                                             B           T                  B           T
#3      LCC      0.1537    0.0458   0.1361   94.1068     7.1721             103.1310    9.7768
        HM       0.0856    0.0327   0.0580   96.6070     8.2348
        GT       0.3340    0.1360   0.3168   78.4356     4.8815
        CSR      0.4217    0.0655   0.3664   90.4523     3.9766
        WLC      0.0446    0.0398   0.1869   95.2281     10.6901
        Ours     0.0101    0.0517   0.1688   93.0003     9.5813
#4      LCC      0.0453    0.0166   0.1794   68.1414     12.3940            70.4357     13.5691
        HM       0.0124    0.0077   0.1488   69.3547     13.2379
        GT       0.0598    0.1144   0.0647   88.6381     12.0379
        CSR      0.0331    0.0419   0.1738   64.7673     12.6989
        WLC      0.0126    0.0173   0.2213   68.0377     13.9168
        Ours     0.0065    0.0250   0.2325   66.9946     13.3928
#5      LCC      0.0500    0.1978   0.1994   67.5162     18.3480            100.8094    20.2783
        HM       0.0322    0.1899   0.1310   68.6253     21.6295
        GT       0.0842    0.2082   0.2377   66.0666     17.1301
        CSR      0.1690    0.2214   0.3223   64.2688     14.4151
        WLC      0.0529    0.1938   0.2379   68.0768     22.5449
        Ours     0.0149    0.1769   0.2226   70.5020     20.8922
The target B/T values of the non-shadow region are shared by all methods within an image.
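The ΔB column is consistent with a symmetric relative difference between the compensated shadow brightness B_c and the non-shadow target B_t; since the paper's exact metric definitions are not restated here, treat the following worked check (the proposed method on image #3) as a plausibility check rather than the authors' formula.

```latex
% Plausibility check of the tabulated ΔB (our reading of the metric):
\Delta B \approx \frac{\lvert B_c - B_t \rvert}{B_c + B_t}
         = \frac{\lvert 93.0003 - 103.1310 \rvert}{93.0003 + 103.1310}
         \approx 0.0517
```

Under this reading, smaller values mean the compensated shadow is statistically closer to the non-shadow region, which matches the proposed method achieving the smallest Q_{B+T} for all three images.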
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
