Article

Automatic Shadow Detection for Multispectral Satellite Remote Sensing Images in Invariant Color Spaces

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(18), 6467; https://doi.org/10.3390/app10186467
Submission received: 20 August 2020 / Revised: 8 September 2020 / Accepted: 13 September 2020 / Published: 17 September 2020
(This article belongs to the Collection Optical Design and Engineering)

Abstract
Shadow often causes difficulties for subsequent applications of multispectral satellite remote sensing images, such as object recognition and change detection. As the spatial and spectral resolutions of satellite remote sensing images continue to improve, shadow has an increasingly serious impact on image interpretation. Although various shadow detection methods have been developed, problems of shadow omission and nonshadow misclassification remain for high-resolution multispectral satellite remote sensing images, mainly the omission of small shadows and the misclassification of typical nonshadow objects (such as bluish, greenish and large dark nonshadow objects). To further address these problems, a new shadow index is developed by analyzing the property differences between shadow and the corresponding nonshadow in terms of the multispectral band components (i.e., near-infrared, red, green and blue) and the hue and intensity components of several invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ). A shadow mask is then obtained by applying an automatically determined optimal threshold to the shadow index image, and the final shadow image is optimized with morphological opening and closing. The proposed algorithm is verified with numerous WorldView-3 and WorldView-2 images acquired at different times and sites, and its performance is evaluated by qualitative visual comparison and quantitative assessment of shadow detection results in comparative experiments with two WorldView-3 test images of Tripoli, Libya. Both the better visual impression and the higher overall accuracy (over 92% for the test image Tripoli-1 and approximately 91% for the test image Tripoli-2) demonstrate the excellent performance and robustness of the proposed approach for shadow detection in high-resolution multispectral satellite remote sensing images. The proposed approach is therefore expected to further alleviate the typical problems of small shadow omission and nonshadow misclassification for such images.

Graphical Abstract

1. Introduction

More complex details of land covers (e.g., buildings, towers, vegetation, farms and roads) are easily obtained from high spatial resolution (HSR) multispectral satellite remote sensing images captured by recently launched HSR satellites (like IKONOS, GeoEye-1, QuickBird, WorldView-2, WorldView-3 and Jilin-1) [1,2,3,4,5,6,7]. However, shadow, inevitably cast by land objects and clouds, seriously affects HSR image applications such as change detection, object recognition and image classification. At the same time, additional cues can be obtained from HSR images containing palpable shadow, such as the general shape and structure of the cast objects, the illumination direction and the position of the sun, as well as parameters of the satellite sensor. These cues are helpful in numerous applications, like building detection, height estimation, 3D reconstruction, change surveillance, scene interpretation and position estimation of the sun and satellites [4,5,8,9,10,11,12]. On the other hand, shadow in HSR images may cause serious shape distortion of cast objects, false color tone and loss of feature information, which can negatively affect subsequent image applications [13]. Given both the useful and the troublesome influence of shadow in HSR multispectral satellite remote sensing images, and in order to improve their utilization, shadow detection is an important scientific issue; it is usually the first step, followed by shadow compensation and image utilization [9,12,14].
Much research on shadow detection has been carried out for both color aerial images and multispectral satellite remote sensing images in recent decades. Huang et al. [15] proposed a shadow detection method by developing an imaging model indicating the increased hue values in shadow regions compared with the corresponding nonshadow ones. A threshold was employed to obtain shadow candidates according to the increased hue values in shadow regions, and two further thresholds on the blue (B) and green (G) components were used to refine the candidates by eliminating greenish and bluish nonshadow objects. Huang et al. thus developed a useful imaging model, and the derived shadow detection algorithm was the first dedicated to resolving the bluish and greenish nonshadow misclassification problem in color aerial images, even though the thresholds were selected manually. Moreover, Sarabandi et al. [16] proposed a C3 shadow detection method by studying the shadow identification results of both IKONOS and QuickBird multispectral images using the C1, C2 and C3 components of the C1C2C3 color space, respectively. The C3-based algorithm could identify the broad outline of large shadow regions; however, most greenish nonshadow objects were misclassified. Similarly, Arevalo et al. [17] presented a semi-automatic shadow detection algorithm built on the C3 component of the C1C2C3 color space and a region-growing procedure for HSR pan-sharpened satellite remote sensing images. Comparative experiments revealed that the presented approach achieved higher accuracies and better robustness than the RGB-based algorithm by Huang et al. [15] and the C3-based algorithm by Sarabandi et al. [16]. Considering all available bands of the multispectral image, Besheer et al. [18] proposed a modified C3 (MC3) index by developing an improved C1C2C3 invariant color space that employs the near-infrared (NIR) band in addition to the visible bands (i.e., red (R), green and blue) of the original C1C2C3 invariant color space. The shadow was then segmented with a bimodal histogram threshold. The MC3 method delivered improved performance by taking the NIR component into consideration, in contrast to the C3 method of Sarabandi et al. [16] and Arevalo et al. [17].
Additionally, based on Huang's imaging model [15] and the Phong illumination model [19], Tsai [20] presented an automatic property-based shadow detection approach utilizing the ratio of hue over intensity, called the spectral ratio index (SRI) shadow detection method. The Otsu thresholding method [21] was then used to determine an optimal threshold automatically. The SRI algorithm was tested in comparative studies in various invariant color spaces (HIS, HSV, HCV, YIQ and YCbCr) for color aerial images. The comparative results showed that the SRI approach achieved higher shadow detection accuracies in the HIS, YIQ and YCbCr color spaces, though some greenish grass in nonshadow regions was still more or less misclassified. Subsequently, Khekade et al. [22] further enhanced the shadow detection results of the SRI algorithm by Tsai [20], particularly in the YIQ invariant color space, by using a series of post-processing methods (e.g., histogram equalization and box filtering). Comparative experiments on color aerial images against the original SRI results of Tsai showed that the enhanced method visually alleviated the shadow omission problem. On the foundation of Tsai's efficient shadow detection algorithm, Chung et al. [23] proposed a modified ratio map by applying an exponential function to the SRI, and presented a successive thresholding scheme (STS) rather than using only a global threshold [20]. Experiments on color aerial images revealed that the algorithm by Chung et al. [23] showed improved performance in detecting shadow in images containing low-brightness objects. Inspired by the STS procedure of Chung et al. [23], Silva et al. [24] extended the SRI method by Tsai [20], specifically in the CIELCh color space, by applying a natural logarithm to the original ratio map to compress its values, resulting in the logarithmic spectral ratio index (LSRI) algorithm. The ratio map was then segmented by multilevel thresholding. This modified ratio method performed better on color aerial images by accurately detecting shadow and avoiding the misclassification of dark areas compared with the original ratio method by Tsai [20] and the STS method by Chung et al. [23]. In addition, Ma et al. [25] presented a similar shadow detection method based on the normalized saturation-value difference index (NSVDI) in the HSV color space. A rough shadow index image is first formed with the NSVDI and is then segmented with a threshold to obtain the final shadow image. The NSVDI method performed well in detecting large shadow in IKONOS multispectral images despite omitting some small shadow. Mostafa et al. [26] also presented a shadow detector index (SDI) for shadow detection in HSR multispectral satellite remote sensing images. The SDI algorithm was developed by first analyzing the difference between shadow and typical nonshadow, particularly vegetation, in terms of the green and blue components, and subsequently applying the neighborhood valley-emphasis method (NVEM) to binarize the SDI index image and obtain the shadow image [27]. The SDI approach performed well in separating shadow from vegetation and acquired high shadow detection accuracies, apart from the shortcomings of omitting some small shadow and misclassifying some dull red roofs.
Although an increasing number of shadow detection algorithms have been put forward in recent years for HSR multispectral satellite remote sensing images and color aerial images, shadow detection problems still require further attention, mainly the omission of small shadows and the misclassification of typical nonshadow objects (such as bluish and greenish dark nonshadow, as well as large dark nonshadow). Therefore, shadow detection remains challenging for HSR multispectral satellite remote sensing images.
In this paper, we first construct a logarithmic shadow index (LSI) and then develop an LSI shadow detection approach for HSR multispectral satellite remote sensing images, particularly to further address the problems of small shadow omission and typical nonshadow misclassification (such as bluish and greenish dark nonshadow, as well as large dark nonshadow). The presented LSI shadow detection algorithm employs the special properties of shadow, namely the dramatic decrease of the NIR component, the higher hue value and the lower intensity value, obtained by studying shadow properties in terms of both the multispectral band components (mainly the visible and NIR bands) and the invariant color components in various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) compared with the corresponding nonshadow. Based on the proposed LSI, we acquire the shadow image by first segmenting the shadow index image automatically with an optimal threshold determined by the NVEM thresholding method [27] and then optimizing the initial shadow image with a morphological operation. To verify the shadow detection performance of the proposed LSI algorithm, comparative experiments are carried out with numerous WorldView-3 and WorldView-2 images acquired at different times and sites, and the shadow detection performance is assessed both qualitatively and quantitatively against several standard shadow detection algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) with two WorldView-3 test images of Tripoli, Libya.
The rest of this paper proceeds as follows. The LSI shadow detection approach is developed step by step in Section 2. Comparative experiments and performance assessments are presented both qualitatively and quantitatively in Section 3. Influential elements and sensitivity factors are discussed in Section 4. Finally, conclusions are drawn in Section 5.

2. Method

In accordance with the Phong illumination model [19] and contributions in other studies [14,15,20,28], similar ground objects in shadow regions typically exhibit the following properties compared with nonshadow regions:
  • A dramatic decrease in the NIR component compared with the R, G and B components.
  • Higher hue (H) values.
  • Lower intensity (I) values.
These shadow properties are easily observed in multispectral images and in several invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ). Taking them into consideration, the NIR, H and I components are employed in the presented shadow detection approach, which is accomplished step by step from Step 1 to Step 4, as depicted in Figure 1 and described in detail below.

2.1. Step 1: Color Space Conversion

Chromaticity and luminance are powerful descriptors for color images [28]. An appropriate description of both chromaticity and luminance simplifies image characteristic extraction and image interpretation [29]. Colors are often expressed as a combination of R, G and B stimuli in the RGB color space in accordance with the provisions of the Commission Internationale de l'Eclairage (CIE) [20,29]. Several color spaces, in which chromaticity and luminance components are usually well decoupled, are briefly introduced below in terms of the RGB color space.
In particular, the HSV color space consists of value (V), saturation (S) and hue (H) components. Smith described the arithmetic relation between components of the HSV color space and those of the RGB color space as Equations (1)–(3) [20,29]:
V = \frac{1}{3}(R + G + B)    (1)
S = 1 - \frac{3}{R + G + B}\min(R, G, B)    (2)
H = \begin{cases} \theta, & \text{if } B \le G \\ 360^{\circ} - \theta, & \text{if } B > G \end{cases}    (3)
where θ is obtained with the following equation.
\theta = \cos^{-1}\left\{ \frac{\frac{1}{2}\left[(R - G) + (R - B)\right]}{\sqrt{(R - G)^{2} + (R - B)(G - B)}} \right\}    (4)
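As a concrete illustration of Equations (1)–(4), the following NumPy sketch converts the R, G and B bands into V, S and H. It assumes the bands are float arrays already scaled to [0, 1]; the function name and the small eps guard are illustrative choices rather than part of the original method.

```python
import numpy as np

def rgb_to_hsv_smith(r, g, b, eps=1e-12):
    """Value, saturation and hue per Equations (1)-(4); r, g, b are float arrays in [0, 1]."""
    v = (r + g + b) / 3.0                                                # Equation (1)
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)  # Equation (2)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))         # Equation (4)
    h = np.where(b <= g, theta, 360.0 - theta)                           # Equation (3)
    return h, s, v
```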
Similarly, the HIS color space describes the color image in terms of intensity (I), saturation (S) and hue (H) components, in which saturation and hue components together constitute the chromaticity term and intensity is also known as luminance [29]. The HIS color space is usually computed from the RGB color space with Equations (5)–(7) [20]:
\begin{bmatrix} I \\ V_{1} \\ V_{2} \end{bmatrix} = \begin{bmatrix} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ -\frac{\sqrt{6}}{6} & -\frac{\sqrt{6}}{6} & \frac{\sqrt{6}}{3} \\ \frac{\sqrt{6}}{6} & -\frac{\sqrt{6}}{3} & 0 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}    (5)
S = \sqrt{V_{1}^{2} + V_{2}^{2}}    (6)
H = \tan^{-1}\left( \frac{V_{2}}{V_{1}} \right), \quad \text{if } V_{1} \ne 0    (7)
where H is undefined under the condition of V 1 = 0 .
In addition, the YCbCr color space is often employed in JPEG, MPEG and H.263 [20,30]. Equation (8) describes the linear relation between the components of the YCbCr color space and those of the RGB color space.
\begin{bmatrix} Y \\ C_{b} \\ C_{r} \end{bmatrix} = \begin{bmatrix} 0.257 & 0.504 & 0.098 \\ -0.148 & -0.291 & 0.439 \\ 0.439 & -0.368 & -0.071 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix}    (8)
Besides, the YIQ color space is the standard widely utilized by the National Television System Committee (NTSC) [31]. In this description, the Y component is proportional to the gamma-corrected luminance, while the I and Q components together represent chromaticity, namely saturation and hue [20,29]. The YIQ color space is obtained from the RGB color space with Equation (9).
\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.275 & -0.321 \\ 0.212 & -0.523 & 0.311 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}    (9)
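Since Equations (8) and (9) are plain linear transforms, they map directly onto matrix multiplications. The sketch below assumes 8-bit-style RGB values in [0, 255] for YCbCr (as implied by the 16/128 offsets) and uses the coefficient matrices exactly as given above; the helper names are illustrative.

```python
import numpy as np

# Coefficient matrices of Equation (8) (BT.601-style YCbCr) and Equation (9) (NTSC YIQ).
_YCBCR = np.array([[ 0.257,  0.504,  0.098],
                   [-0.148, -0.291,  0.439],
                   [ 0.439, -0.368, -0.071]])
_YIQ   = np.array([[ 0.299,  0.587,  0.114],
                   [ 0.596, -0.275, -0.321],
                   [ 0.212, -0.523,  0.311]])

def rgb_to_ycbcr(rgb):
    """rgb: (..., 3) array with values in [0, 255]; returns channels Y, Cb, Cr."""
    return rgb @ _YCBCR.T + np.array([16.0, 128.0, 128.0])

def rgb_to_yiq(rgb):
    """rgb: (..., 3) array; Y carries luminance, I and Q carry chromaticity."""
    return rgb @ _YIQ.T
```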
Additionally, the CIELCh color space is a polar representation of the CIELAB color space defined by the CIE to imitate how human eyes perceive color. The L and h components are often taken as the luminance and hue components, respectively. For more details about the CIELCh color space, please refer to the work by Gonzalez [29] and Silva [24]. The arithmetic relation between the CIELCh color space and the RGB color space is described with Equations (10)–(16):
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.412 & 0.358 & 0.180 \\ 0.213 & 0.715 & 0.072 \\ 0.019 & 0.119 & 0.950 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}    (10)
L = \begin{cases} 116\left(\frac{Y}{Y_{n}}\right)^{1/3} - 16, & \text{if } \frac{Y}{Y_{n}} > 0.008856 \\ 903.3\,\frac{Y}{Y_{n}}, & \text{if } \frac{Y}{Y_{n}} \le 0.008856 \end{cases}    (11)
f(x) = \begin{cases} x^{1/3}, & \text{if } x > 0.008856 \\ 7.787x + \frac{16}{116}, & \text{if } x \le 0.008856 \end{cases}    (12)
a = 500\left[ f\left(\frac{X}{X_{n}}\right) - f\left(\frac{Y}{Y_{n}}\right) \right]    (13)
b = 200\left[ f\left(\frac{Y}{Y_{n}}\right) - f\left(\frac{Z}{Z_{n}}\right) \right]    (14)
C = \sqrt{a^{2} + b^{2}}    (15)
h = \begin{cases} \operatorname{atan2}(b, a) + 360^{\circ}, & \text{if } \operatorname{atan2}(b, a) < 0 \\ \operatorname{atan2}(b, a), & \text{if } \operatorname{atan2}(b, a) \ge 0 \end{cases}    (16)
where X_n = 95.047, Y_n = 100.000 and Z_n = 108.883 respectively refer to the reference white values of X, Y and Z, and atan2, which is available in many standard libraries, correctly handles the condition a = 0 [32].
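A possible NumPy realization of Equations (10)–(16) is sketched below. It assumes the R, G and B bands have already been linearized and scaled so that the reference white maps to (X_n, Y_n, Z_n) = (95.047, 100.000, 108.883); this input scaling is an assumption for the sketch rather than something fixed by the text.

```python
import numpy as np

XN, YN, ZN = 95.047, 100.000, 108.883   # D65 reference white values

def rgb_to_cielch(rgb):
    """rgb: (..., 3) linear-RGB array scaled so that white -> (XN, YN, ZN); returns L, C, h."""
    m = np.array([[0.412, 0.358, 0.180],
                  [0.213, 0.715, 0.072],
                  [0.019, 0.119, 0.950]])
    xyz = rgb @ m.T                                   # Equation (10)
    xr, yr, zr = xyz[..., 0] / XN, xyz[..., 1] / YN, xyz[..., 2] / ZN

    def f(x):                                         # Equation (12)
        return np.where(x > 0.008856, np.cbrt(x), 7.787 * x + 16.0 / 116.0)

    L = np.where(yr > 0.008856, 116.0 * np.cbrt(yr) - 16.0, 903.3 * yr)   # Equation (11)
    a = 500.0 * (f(xr) - f(yr))                       # Equation (13)
    b = 200.0 * (f(yr) - f(zr))                       # Equation (14)
    C = np.hypot(a, b)                                # Equation (15)
    h = np.degrees(np.arctan2(b, a))                  # Equation (16)
    h = np.where(h < 0, h + 360.0, h)
    return L, C, h
```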

2.2. Step 2: NIR, H and I Extraction

In addition to the commonly utilized R, G and B components of the target image, NIR information has attracted more attention than ever before along with the spectral resolution improvement of HSR remote sensing images acquired by recently launched optical satellites [6,28,33]. Theoretically, according to the Phong illumination model [19] and Huang's imaging model [15], the diffuse part of the incident light maintains the difference between shadow and nonshadow. Based on the diffuse reflection expression in Equation (17) and electromagnetic wave theory, in which the surface albedo increases with wavelength, the NIR component has a larger surface albedo than the R, G and B components. The decrease between shadow and nonshadow in terms of the NIR, R, G and B components can therefore be described with Inequation (18) [34,35]:
C_d = m_d \int_{\lambda} f_c(\lambda)\, e(\lambda)\, c_d(\lambda)\, d\lambda    (17)
where C_d is the sensor response to the diffuse part of the incident light, m_d is a parameter depending only on the geometry, f_c(λ) denotes the spectral sensitivity as a function of the wavelength λ, e(λ) is the quantity of incident light, and c_d(λ) is the surface albedo.
NIR_d > \Gamma_d    (18)
where NIR_d is the decrease value between shadow and nonshadow in terms of the NIR component, and \Gamma_d \in \{R_d, G_d, B_d\} denotes the decrease values between shadow and nonshadow in terms of the R, G and B components, respectively.
In order to effectively decouple chromaticity and luminance, the input images are first converted from the RGB color space into several typical invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ), in which chromaticity and luminance are usually well decoupled. Note that the Q component of the YIQ color space and the Cr component of the YCbCr color space are often regarded as equivalents of the H component of the HSV, HIS and CIELCh color spaces; they are together denoted as the hue-equivalent (H) component. Similarly, the V component of the HSV color space, the Y components of the YCbCr and YIQ color spaces, and the L component of the CIELCh color space are usually regarded as equivalent representations of the I component of the HIS color space, and are expressed as intensity-equivalent (I) components [14,20]. The H and I components are extracted from these invariant color spaces accordingly. Additionally, Huang et al. [15] derived the hue and intensity components of shadow with respect to nonshadow, as presented in Equations (19) and (20), from which it follows that shadow usually has higher hue values and lower intensity values than the nearby nonshadow, as shown in Inequations (21) and (22):
H_{shw} = \tan^{-1}\left\{ \frac{\sqrt{3}\left[ (G_{nshw} - G_d) - (B_{nshw} - B_d) \right]}{\left[ (R_{nshw} - R_d) - (G_{nshw} - G_d) \right] + \left[ (R_{nshw} - R_d) - (B_{nshw} - B_d) \right]} \right\}    (19)
I_{shw} = \frac{1}{3}\left[ (R_{nshw} - R_d) + (G_{nshw} - G_d) + (B_{nshw} - B_d) \right]    (20)
H_{shw} > H_{nshw}    (21)
I_{shw} < I_{nshw}    (22)
where H_shw and I_shw are the hue and intensity components of shadow, H_nshw and I_nshw are those of the nearby nonshadow, R_nshw, G_nshw and B_nshw are the R, G and B components of the nearby nonshadow, and R_d, G_d and B_d are the corresponding decrease values.
Consequently, for surface features in shadow regions, a dramatic decrease often appears in the NIR component compared with the R, G and B components relative to the same type of surface features in the nearby nonshadow regions, as illustrated in Figure 2a with samples of typical objects in HSR images (taking WorldView-3 as an example). Accordingly, the NIR component of the input images is additionally extracted to coordinate with the shadow index construction described below. Likewise, the H and I components of shadow possess the properties above, as illustrated in Figure 2b,c with samples of typical objects in HSR images (again taking WorldView-3 as an example). Hence, both the H and I components of these invariant color spaces are employed in the proposed shadow detection approach presented in the following.
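One simple way to organize the hue-equivalent and intensity-equivalent extraction described above is a lookup table from color space to channel names. The dictionary below is only an illustrative encoding of the stated equivalences (Q and Cr as hue; V, Y and L as intensity) and assumes the converted channels are already stored in a dictionary of 2-D arrays.

```python
# Hue-equivalent (H) and intensity-equivalent (I) channels per invariant color space.
EQUIVALENTS = {
    "HSV":    {"H": "H",  "I": "V"},
    "HIS":    {"H": "H",  "I": "I"},
    "CIELCh": {"H": "h",  "I": "L"},
    "YCbCr":  {"H": "Cr", "I": "Y"},
    "YIQ":    {"H": "Q",  "I": "Y"},
}

def extract_h_i(channels, space):
    """channels: dict mapping channel name -> 2-D array for one converted image."""
    names = EQUIVALENTS[space]
    return channels[names["H"]], channels[names["I"]]
```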

2.3. Step 3: LSI Construction

Coupled with the NIR component and the H and I components obtained in Steps 1 and 2, we construct a logarithmic shadow index (LSI) in this step to further enhance the difference between shadow and the corresponding nonshadow based on the shadow properties mentioned previously.
In particular, an initial shadow index (ISI) is first constructed with NIR, H and I components as follows:
ISI = \frac{NIR \times I - H}{I + H}    (23)
where NIR indicates the near-infrared component, H denotes the hue-equivalent component, and I refers to the intensity-equivalent component.
The developed ISI fully employs the shadow properties of higher hue, lower intensity and a dramatic decrease in the NIR component compared with the corresponding nearby nonshadow containing the same type of features.
Additionally, there is an obvious distinction between the linear function f(x) = x and the natural logarithm function f(x) = ln(x + 1) in compressing the data scale, as shown in Figure 3. Thus, this difference between the linear function and the natural logarithm function is further considered in the LSI construction.
Subsequently, in order to further improve the distinction between shadow and the corresponding nonshadow, a natural logarithmic operation is applied to the ISI at the pixel level, further compressing it to a narrower scale [24], as follows:
LSI = \ln\left( \frac{NIR \times I - H}{I + H} + 1 \right)    (24)
where “+1” is aimed at avoiding the calculation of ln(0).
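For illustration, a minimal pixel-wise sketch of Equations (23) and (24) is given below. It follows the reconstruction of the ISI above, assumes the NIR, H and I components have been normalized to [0, 1], and adds a small eps purely to keep the division and the logarithm numerically safe; these are implementation assumptions, not details from the original description.

```python
import numpy as np

def logarithmic_shadow_index(nir, hue, intensity, eps=1e-12):
    """LSI per Equations (23)-(24); inputs are float arrays assumed normalized to [0, 1]."""
    isi = (nir * intensity - hue) / (intensity + hue + eps)   # Equation (23)
    return np.log(isi + 1.0 + eps)                            # Equation (24): "+1" keeps the log argument positive
```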
Additionally, real-time and near-real-time image processing (taking shadow detection as an example) is of significant importance for HSR satellites [14,28], and great attention is therefore paid to timesaving shadow detection algorithms for shadow processing on HSR satellites. The proposed LSI algorithm is expected to be timesaving, because its shadow index is simply constructed from the hue-equivalent and intensity-equivalent components together with the NIR component.

2.4. Step 4: Binarization

A shadow mask is often obtained by binarizing the previously acquired shadow index image with a threshold selected manually or determined automatically with a thresholding algorithm [20,23,24]. Several thresholding methods are widely used in the image binarization stage, such as the Otsu method [21], the valley-emphasis method (VEM) [36] and the neighborhood valley-emphasis method (NVEM) [27]. The Otsu method is a typical automatic method widely used for images whose histogram follows a bimodal distribution [21]; however, difficulties occur when the image histogram is unimodal or approximately unimodal. In order to determine optimal threshold values for both unimodal and bimodal distributions, Ng [36] revised the Otsu method by applying a weight, resulting in the VEM thresholding method. Building on the studies by Otsu and Ng [21,36], Fan et al. [27] proposed the NVEM thresholding method, in which the between-class variance is further modified with the sum of the neighborhood gray probabilities over an interval of 2m + 1. Following the description by Fan et al. [27], the NVEM thresholding method is briefly introduced as follows.
The gray probability of a certain gray value g is calculated with Equation (25), and the sum of the neighborhood gray probability with an interval of 2m + 1 is calculated with Equation (26):
h(g) = \frac{f(g)}{n}, \quad g = 0, 1, \ldots, L - 1    (25)
\bar{h}(g) = \sum_{i = -m}^{m} h(g + i)    (26)
where f(g) is the number of pixels with gray value g, L is the number of image gray levels, and n is the total number of pixels.
The image is initially divided into two classes (object and background) with a certain threshold t. The probabilities of the two classes are calculated with Equation (27).
p_0(t) = \sum_{g = 0}^{t} h(g), \qquad p_1(t) = \sum_{g = t + 1}^{L - 1} h(g)    (27)
Then, the mathematical expectations of the two classes are computed with Equation (28).
\mu_0(t) = \sum_{g = 0}^{t} g\, h(g) / p_0(t), \qquad \mu_1(t) = \sum_{g = t + 1}^{L - 1} g\, h(g) / p_1(t)    (28)
With consideration of the sum of the neighborhood gray probability in an interval of 2m + 1, the between-class variance is modified by Fan et al. as shown in Equation (29).
\xi(t) = \left( 1 - \bar{h}(t) \right)\left[ p_0(t)\,\mu_0^{2}(t) + p_1(t)\,\mu_1^{2}(t) \right]    (29)
Finally, the optimal threshold T is determined by maximizing the modified between-class variance over t in the range 0 to L − 1, as shown in Equation (30).
T = \arg\max_{0 \le t \le L - 1} \xi(t)    (30)
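The NVEM threshold search of Equations (25)–(30) can be sketched as follows. The implementation assumes the floating-point LSI image is first rescaled to 256 gray levels, truncates the neighborhood sum at the histogram boundaries, and maps the selected gray-level threshold back to the index scale; these are implementation choices rather than details fixed by the original description.

```python
import numpy as np

def nvem_threshold(index_image, m=2, levels=256):
    """Optimal threshold per Equations (25)-(30) for a float-valued index image."""
    # Rescale the index image to integer gray levels 0..levels-1.
    lo, hi = float(index_image.min()), float(index_image.max())
    gray = np.round((index_image - lo) / (hi - lo + 1e-12) * (levels - 1)).astype(int)

    hist = np.bincount(gray.ravel(), minlength=levels).astype(float)
    h = hist / hist.sum()                                         # Equation (25)
    # Sum of neighborhood probabilities over an interval of 2m+1 (Equation (26)).
    h_bar = np.array([h[max(0, g - m):g + m + 1].sum() for g in range(levels)])

    cum_p = np.cumsum(h)
    cum_gp = np.cumsum(np.arange(levels) * h)
    best_t, best_xi = 0, -np.inf
    for t in range(levels - 1):
        p0, p1 = cum_p[t], 1.0 - cum_p[t]                         # Equation (27)
        if p0 <= 0.0 or p1 <= 0.0:
            continue
        mu0 = cum_gp[t] / p0                                      # Equation (28)
        mu1 = (cum_gp[-1] - cum_gp[t]) / p1
        xi = (1.0 - h_bar[t]) * (p0 * mu0 ** 2 + p1 * mu1 ** 2)   # Equation (29)
        if xi > best_xi:
            best_xi, best_t = xi, t                               # Equation (30)
    # Map the gray-level threshold back to the original index scale.
    return lo + best_t / (levels - 1) * (hi - lo)
```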
As described above, we employ the NVEM thresholding method for its efficiency and automation. Consequently, a shadow candidate is generated by binarizing the LSI image with the NVEM thresholding method; specifically, the LSI index image is segmented according to Equation (31):
A = \begin{cases} 0, & \text{if } LSI < T \\ 1, & \text{otherwise} \end{cases}    (31)
where T is the optimal threshold determined with the NVEM thresholding method for the binarization of the LSI index image, and A is the binarized result with the acquired optimal threshold T.
Additionally, we optimize the shadow candidate by applying a series of morphological operations over the binary shadow candidate. In particular, the morphological opening and closing operations are mainly employed with a certain structuring element [29], as presented in Equations (32) and (33). The morphological operation contributes to the final optimized shadow image.
A_{open} = A \circ B = (A \ominus B) \oplus B    (32)
A_{close} = A_{open} \bullet B = (A_{open} \oplus B) \ominus B    (33)
where B is the morphological structuring element, A_open is the shadow result of the opening operation with structuring element B, and A_close is the corresponding shadow result of applying the closing operation with structuring element B to the opened initial shadow image.
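A minimal post-processing sketch using SciPy is shown below. The square structuring element of size 3 simply stands in for whichever element type and scale α are finally selected (see Section 4.3), and the helper name is illustrative.

```python
import numpy as np
from scipy import ndimage

def refine_shadow_mask(mask, size=3):
    """Opening followed by closing (Equations (32)-(33)) with a square structuring element B."""
    structure = np.ones((size, size), dtype=bool)                 # B: the structuring element
    opened = ndimage.binary_opening(mask, structure=structure)    # Equation (32)
    closed = ndimage.binary_closing(opened, structure=structure)  # Equation (33)
    return closed
```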

3. Experiments and Performance Assessment

3.1. Test Images

The proposed LSI shadow detection approach is implemented on a DELL personal computer running the 64-bit Windows 7 operating system with a 3.2 GHz CPU and 4 GB RAM. In order to verify the shadow detection performance of the proposed LSI algorithm, comparative experiments are carried out with many test images from WorldView-3 of Tripoli, Libya and Rio de Janeiro, Brazil, and from WorldView-2 of Washington DC, USA, captured at different times (called WV3-Tripoli, WV3-Rio and WV2-WDC, respectively), which are discussed in the next section (Section 4: Discussion). In this section, both qualitative and quantitative assessments are provided to evaluate the shadow detection performance of the proposed LSI method and several standard shadow detection algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) with two WorldView-3 test images of Tripoli, Libya [37], as shown in Figure 4a,b. Additionally, reference shadow images are provided based on the corresponding panchromatic versions of the test images in Figure 4a,b with a spatial resolution of 0.31 m, as shown in Figure 5a,b. The test image Tripoli-1 in Figure 4a is a 400 × 300 pixel image covering typical ground objects such as shadow, urban buildings of various scales, asphalt roads, bare land and grass. The test image Tripoli-2 in Figure 4b is a 260 × 195 pixel image mainly consisting of shadow, buildings, asphalt roads, grass, playgrounds and parks.
Specific details are further discussed through qualitative visual comparison as a subjective evaluation, and the performance of each shadow detection algorithm is also quantified with shadow detection accuracy measurements as an objective evaluation. Both qualitative and quantitative evaluations are carried out on the shadow detection results of the proposed LSI approach and the five comparative methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) in the following comparative experiments.

3.2. Qualitative Visual Sense Comparison

Figure 6 and Figure 7 present the binary shadow detection results of test images Tripoli-1 and Tripoli-2, respectively, obtained by the proposed LSI shadow detection approach and the five comparative methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]). In particular, Figure 6a–e and Figure 7a–e list the shadow detection results of the proposed LSI approach in the various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ), while Figure 6f–j and Figure 7f–j illustrate the shadow detection results of the five comparative methods. Shadow detection results are usually evaluated intuitively through visual comparison [14,20]. In order to evaluate the ability of the different color spaces to decouple chromaticity and luminance, the shadow detection results produced by the proposed LSI approach in the various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) are first compared visually, as presented in Figure 6a–e and Figure 7a–e.
In Figure 6a–e, shadow is correctly classified to a great extent. Specifically, most ground objects in nonshadow regions are well distinguished from shadow, such as bluish housetops (region A in Figure 6a–e), dark asphalt roads and bare areas (regions B1 and B2 in Figure 6a–e), grass and isolated vegetation (regions C1 and C2 in Figure 6a–e). Moreover, continuous shadow (region E in Figure 6a–e) and shadow containing highlight ground objects (regions F1 and F2 in Figure 6a–e) are also identified properly. Good coherence occurs among shadow detection results by the proposed LSI shadow detection approach in these five invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ).
Similarly, the shadow detection results of the LSI algorithm for the test image Tripoli-2 in Figure 7a–e also show good agreement with the corresponding reference image in Figure 5b. In Figure 7a–e, shadow is clearly distinguished from typical ground objects, like greenish parts of the playground (region A in Figure 7a–e), asphalt roads and dark elements on the tops of urban buildings (regions B1 and B2 in Figure 7a–e), as well as continuously distributed grass (region C in Figure 7a–e). Moreover, highlighted shadow (region F in Figure 7a–e) is also well outlined. As with the results for the test image Tripoli-1, the shadow detection results in Figure 7a–e show no obvious visual difference among the invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ).
Based on the good coherence among the shadow detection results in these invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ), the shadow detection results of the proposed LSI approach for test images Tripoli-1 and Tripoli-2 in the HSV color space, shown in Figure 6b and Figure 7b, are selected for comparison with the shadow detection results of the five comparative methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) shown in Figure 6f–j and Figure 7f–j.
As described above, shadow is well distinguished from most typical ground objects in the shadow detection result for the test image Tripoli-1 by the proposed LSI approach, as shown in Figure 6b. In Figure 6f, the shadow detection result by MC3 also shows a good shadow detection effect on grass and large continuous shadow; however, many parts of bluish housetops (region A in Figure 6f) and parts of dark asphalt roads (region B1 in Figure 6f) are wrongly classified as shadow. Moreover, in Figure 6g, the shadow detection result by NSVDI shows even more serious misclassification of bluish housetops and dark asphalt roads, although large shadow regions are detected. Similarly, most large shadow regions, like building shadow, are well detected by SDI and SRI, as shown in Figure 6i,j; however, bluish housetops and dark asphalt roads are still mostly wrongly identified as shadow, and parts of grass and isolated vegetation are also identified as shadow by SDI and SRI (regions C1 and C2 in Figure 6i). Different from the results in Figure 6f,g,i,j, the nonshadow misclassification problem is largely avoided in the shadow detection result by LSRI, as shown in Figure 6h. However, shadow is not always detected completely (region E in Figure 6h), and highlighted parts in shadow regions are partially omitted (regions F1 and F2 in Figure 6h), which reveals that LSRI cannot deliver an excellent shadow detection performance. Compared with the shadow detection results of these five comparative algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) for the test image Tripoli-1, the result of the proposed LSI algorithm alleviates the problems of shadow omission and typical nonshadow misclassification to a greater extent. Accordingly, a better visual impression is achieved by LSI.
As shown in Figure 7b, shadow is effectively distinguished from the bluish parts of the artificial playground (region A in Figure 7b), dark asphalt roads and tops of buildings (regions B1 and B2 in Figure 7b) and continuously distributed greenish grass (region C in Figure 7b). Moreover, highlighted parts in shadow areas are also correctly identified (region F in Figure 7b). Shadow is also well separated from grass by MC3, as shown in Figure 7f; however, too many nonshadow regions are still misclassified, such as most bluish parts of the playground (region A in Figure 7f) and dark asphalt roads and tops of buildings (regions B1 and B2 in Figure 7f). Similarly, although most shadow regions are well identified by NSVDI, SDI and SRI, the nonshadow misclassification problem remains obvious in Figure 7g,i,j, like the bluish parts of the playground (region A in Figure 7g,i,j), dark asphalt roads (region B1 in Figure 7g,i,j) and greenish grass (region C in Figure 7g,i,j). By contrast, in Figure 7h, most shadow and nonshadow regions are well separated, like the bluish parts of the playground (region A in Figure 7h), dark asphalt roads and tops of buildings (regions B1 and B2 in Figure 7h) and continuously distributed greenish grass (region C in Figure 7h), showing that a relatively good detection effect is achieved by LSRI. A satisfactory overall shadow detection effect is obtained in Figure 7h, even though parts of the highlighted shadow are still omitted. As can be observed in Figure 7b,f–j, the results of LSI and LSRI show a better visual impression.
In general, compared with the shadow detection results for test images Tripoli-1 and Tripoli-2 by the other five shadow detection methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]), the proposed LSI approach effectively distinguishes shadow from several typical nonshadow objects (like bluish and greenish nonshadow and large dark nonshadow) and detects most highlighted parts of shadow well. A conclusion can be drawn that the proposed LSI algorithm further alleviates the problems of shadow omission and typical nonshadow misclassification and delivers a better visual impression.

3.3. Quantitative Evaluation

Different from the qualitative visual comparison above, a quantitative assessment is also performed by calculating the confusion matrix for the shadow detection results of both test images Tripoli-1 and Tripoli-2. In particular, several shadow detection accuracy measurements utilized in the objective assessment are calculated from the confusion matrix [26,38,39,40]. These measurements are computed at the pixel level with Equations (34)–(38) [9,14,20], including the producer's accuracy (ρ_s and ρ_n), the user's accuracy (μ_s and μ_n), the committed error (e_c), the omitted error (e_o) and the overall accuracy (τ):
\rho_s = \frac{TP}{TP + FN} \times 100\%, \qquad \rho_n = \frac{TN}{TN + FP} \times 100\%    (34)
\mu_s = \frac{TP}{TP + FP} \times 100\%, \qquad \mu_n = \frac{TN}{TN + FN} \times 100\%    (35)
e_c = \frac{FP}{TN + FP} \times 100\%    (36)
e_o = \frac{FN}{TP + FN} \times 100\%    (37)
\tau = \frac{TP + TN}{TP + TN + FP + FN} \times 100\%    (38)
where TP (true positive) indicates the number of true shadow pixels correctly identified, TN (true negative) refers to the number of true nonshadow pixels correctly classified, FP (false positive) is the number of true nonshadow pixels wrongly identified as shadow, FN (false negative) is the number of true shadow pixels wrongly classified as nonshadow, TP + FN and TN + FP respectively denote the numbers of true shadow and true nonshadow pixels in the original image, TP + FP and TN + FN respectively indicate the numbers of shadow and nonshadow pixels in the classified result image, and TP + TN + FP + FN is the total number of pixels in the whole image.
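For reference, the accuracy measurements of Equations (34)–(38) can be computed directly from two masks. The sketch below assumes the detected and reference shadow masks are boolean arrays of the same shape and that neither class is empty; the function and key names are illustrative.

```python
import numpy as np

def shadow_accuracy(detected, reference):
    """Confusion-matrix measurements of Equations (34)-(38) for same-shape shadow masks."""
    detected = np.asarray(detected, dtype=bool)
    reference = np.asarray(reference, dtype=bool)
    tp = np.sum(detected & reference)      # true shadow correctly identified
    tn = np.sum(~detected & ~reference)    # true nonshadow correctly classified
    fp = np.sum(detected & ~reference)     # nonshadow wrongly identified as shadow
    fn = np.sum(~detected & reference)     # shadow wrongly classified as nonshadow
    return {
        "producer_shadow":    100.0 * tp / (tp + fn),                 # rho_s
        "producer_nonshadow": 100.0 * tn / (tn + fp),                 # rho_n
        "user_shadow":        100.0 * tp / (tp + fp),                 # mu_s
        "user_nonshadow":     100.0 * tn / (tn + fn),                 # mu_n
        "committed_error":    100.0 * fp / (tn + fp),                 # e_c
        "omitted_error":      100.0 * fn / (tp + fn),                 # e_o
        "overall_accuracy":   100.0 * (tp + tn) / (tp + tn + fp + fn) # tau
    }
```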
Ideal shadow detection methods usually have high values of the producer's accuracy, the user's accuracy and the overall accuracy, as well as low values of the committed error and the omitted error. In particular, the overall accuracy is the most important of these measurements, as it describes the overall shadow detection ability of an algorithm. Accordingly, these shadow detection accuracy measurements are employed for evaluating the performance of the proposed LSI approach in the comparative experiments. These measurements are presented in Table 1 and Table 2 for the shadow detection results of the LSI algorithm in the five invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) and of the five comparative algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) for test images Tripoli-1 and Tripoli-2, respectively.
As shown in Table 1, high values are achieved by the LSI algorithm in the various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) for the test image Tripoli-1 in terms of the nonshadow producer's accuracy (about 95%), the nonshadow user's accuracy (about 94%) and the overall accuracy (over 92%). Additionally, relatively high and stable values are obtained for the shadow producer's accuracy and the shadow user's accuracy, and relatively low values are obtained for the committed error and the omitted error. Generally speaking, ideal shadow detection accuracy measurements are achieved by the LSI algorithm in these invariant color spaces for the test image Tripoli-1, which not only reveals the good capability of these invariant color spaces in decoupling chromaticity and luminance, but also demonstrates the excellent performance and robustness of the LSI algorithm.
Similarly, as presented in Table 2, relatively high and consistent accuracy measurements are also acquired for the test image Tripoli-2 by the proposed LSI algorithm in these invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ). In particular, very high values are obtained for the nonshadow producer's accuracy (about 98%), the shadow user's accuracy (about 94%) and the overall accuracy (approximately 91%), and relatively low values (less than 2%) are acquired for the committed error. In general, the proposed LSI approach acquires relatively ideal and stable shadow detection accuracy measurements for the test image Tripoli-2 in these invariant color spaces.
The time consumption of shadow detection by the LSI algorithm in the various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) and by the five comparative algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) for test images Tripoli-1 and Tripoli-2 is summarized in Table 3. As can be observed in Table 3, the time consumption of the LSI algorithm is relatively small in these invariant color spaces for both test images because of their simple computation, with the exception of the CIELCh color space, whose conversion from the RGB color space is more complex. In particular, the least time is consumed by the proposed LSI algorithm in the HSV color space for both test images Tripoli-1 and Tripoli-2; hence, the proposed LSI shadow detection algorithm is most timesaving in the HSV color space.
Considering the excellent and stable performance in these invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) and the most timesaving performance of the proposed LSI algorithm in the HSV color space, for the sake of simplicity, shadow detection performance comparison is particularly conducted between shadow detection results by the LSI algorithm in the HSV color space and five comparative shadow detection algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) for both test images Tripoli-1 and Tripoli-2.
For the test image Tripoli-1, a higher overall accuracy (over 92%) is acquired by the proposed LSI algorithm in the HSV color space compared with the overall accuracies of the five comparative methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]). Although relatively high overall accuracies are also obtained by MC3 and LSRI, an obvious gap (about 3%) remains compared with that of the proposed LSI approach, which indicates that the proposed LSI method performs better for shadow detection of the test image Tripoli-1. In addition, relatively low committed and omitted errors as well as a high shadow user's accuracy are acquired by MC3, which reveals that the MC3 method performs relatively well for this test image. However, even though relatively high values of the shadow producer's accuracy and the nonshadow user's accuracy, as well as relatively low omitted errors, are obtained by NSVDI, SDI and SRI, their relatively low overall accuracy and high committed error still hinder effective shadow detection for the test image Tripoli-1, which indicates the poor performance of NSVDI, SDI and SRI for this image; further study is therefore still needed for NSVDI, SDI and SRI in detecting shadow in HSR satellite images. By contrast, a relatively high overall accuracy and a low committed error are acquired by LSRI for the test image Tripoli-1, revealing that the LSRI method performs well in correctly distinguishing shadow from easily confused nonshadow. In general, the proposed LSI algorithm delivers higher values of the nonshadow producer's accuracy (over 95%), the nonshadow user's accuracy (about 94%) and the overall accuracy (over 92%), stable values of the shadow producer's accuracy (about 83%) and the shadow user's accuracy (over 87%), and a lower committed error (less than 5%), which reveals the excellent shadow detection performance and robustness of the proposed LSI algorithm for the test image Tripoli-1.
Similarly, for the test image Tripoli-2, the proposed LSI approach also achieves a higher overall accuracy than the other five comparative methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]). Relatively low overall accuracies are obtained by MC3, NSVDI and SDI, although the corresponding omitted errors are relatively low, which indicates the poor performance of MC3, NSVDI and SDI for shadow detection of the test image Tripoli-2. Relatively low values of the overall accuracy and the user's accuracy are also acquired by SRI, revealing that considerable room for improvement remains for SRI for shadow detection of this image. In contrast, better performance is shown by LSRI, with relatively high values of the overall accuracy (close to 89%) and the user's accuracy as well as a low omitted error (about 5%), even though its accuracy measurements are slightly inferior to those of the proposed LSI approach. Consequently, the proposed LSI algorithm presents a better performance for the test image Tripoli-2.
By comparing the shadow detection results of test images Tripoli-1 and Tripoli-2 produced by the proposed LSI approach and the other five methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) both qualitatively and quantitatively, a conclusion can be drawn that the proposed LSI shadow detection approach further settles the typical shadow detection problems of small shadow omission and typical nonshadow misclassification (like bluish and greenish nonshadow misclassification, and large dark nonshadow misclassification), and delivers a relatively excellent, robust and timesaving performance for shadow detection of HSR satellite images.

4. Discussion

The proposed LSI shadow detection algorithm performs well in several invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) in the previous comparative experiments with test images Tripoli-1 and Tripoli-2. Notably, the performance of the LSI algorithm is mainly affected by the operations in the latter two steps of the workflow (i.e., Step 3 and Step 4). In this section, corresponding discussions are provided to analyze the influence of the logarithmic operation and the sensitivity to the threshold parameter m as well as to the structuring element of the morphological operation. Accordingly, additional experiments are conducted to analyze these influential factors on the shadow detection results with test images Tripoli-1 and Tripoli-2.

4.1. Influence Analysis of the Logarithmic Operation

As described in Step 3, the initial shadow index is refined with a logarithmic operation, resulting in the logarithmic shadow index, to further improve the capability of separating shadow from nonshadow. In particular, the logarithmic operation compresses the initial shadow index and expands the discrimination between the pixel values of shadow and nonshadow [24]. In this part, the impact of the logarithmic operation is analyzed by comparing the performance of shadow detection results obtained with the initial shadow index and with the logarithmic shadow index in several invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) in additional experiments with test images Tripoli-1 and Tripoli-2. Figure 8 illustrates the overall accuracies of the shadow detection results obtained with the initial shadow index and with the logarithmic shadow index for test images Tripoli-1 and Tripoli-2, respectively.
As illustrated in Figure 8a,b, higher overall accuracies are acquired with the LSI shadow index than with the ISI shadow index for the shadow detection results in these invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) for both test images Tripoli-1 and Tripoli-2. Relatively high overall accuracies are obtained with both the ISI and LSI indices for the test images in most of the invariant color spaces mentioned above, which indicates that the distinction between shadow and nonshadow is significantly expanded by the employed shadow properties (i.e., higher hue, lower intensity and the dramatic decrease of the NIR component) relative to the corresponding nonshadow. Furthermore, the obvious gap between the overall accuracies obtained with the LSI and ISI shadow indices reveals that the applied logarithmic operation further reinforces the difference between shadow and nonshadow in the LSI construction in Step 3, which contributes to the good performance of the LSI shadow detection algorithm. Therefore, we finally accomplish the shadow detection of the test images in the various invariant color spaces based on the LSI shadow index.

4.2. Sensitivity Analysis of the Neighborhood Parameter

In this study, the shadow detection result is initially acquired through binarizing the shadow index image with a certain optimal threshold by the NVEM thresholding algorithm, as presented in Step 4 of the workflow. However, according to the thresholding solution of Equations (26)–(30), the optimal threshold is sensitive to the neighborhood parameter m. As noted in related studies, uncertainties appear in the binarization of natural images while determining the optimal threshold with different neighborhood parameter m values [27]. Hence, in order to further explore the impact of the neighborhood parameter m on the shadow detection performance of HSR multispectral satellite remote sensing images, we respectively run additional experiments in various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) with the neighborhood parameter m set from 1 to 40 with an interval of 1 for test images Tripoli-1 and Tripoli-2. Figure 9 depicts the sensitivity of the LSI algorithm performance to the neighborhood parameter m of the NVEM thresholding method for test images Tripoli-1 and Tripoli-2, respectively.
As illustrated in Figure 9a,b, the overall accuracies keep relatively high values and a stable trend in the various invariant color spaces for the neighborhood parameter m from 1 to 28 for Tripoli-1 and from 1 to 20 for Tripoli-2, which together indicate that excellent performance and robustness are achieved with a moderately sized neighborhood parameter m for the test images in these invariant color spaces. The difference between Figure 9a,b also shows that the appropriate neighborhood parameter m depends on the target image. Accordingly, we process Tripoli-1 with an optimal neighborhood parameter m = 25 and Tripoli-2 with m = 2, respectively.

4.3. Sensitivity Analysis of the Morphological Operation

Shadow detection results are usually post-processed with a certain denoising algorithm, such as a morphological operation [29] or box filtering [22]. In our study, the final shadow detection results are obtained by optimizing the shadow candidates with a morphological operation. However, the structuring element is a significant influential factor for the effective utilization of the morphological operation; therefore, both the structuring element type and the structuring element scale α should be taken into consideration. In this part, we present a sensitivity analysis of the impact of the morphological structuring element on the LSI shadow detection algorithm by carrying out additional experiments in several invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) with various structuring element types (i.e., cube, diamond, disk, sphere and square) and structuring element scales α set from 1 to 20 with an interval of 1 for test images Tripoli-1 and Tripoli-2. Figure 10 presents the sensitivity of the LSI algorithm to the morphological structuring element for test images Tripoli-1 and Tripoli-2, respectively.
As depicted in Figure 10a,c,e,g,i for Tripoli-1, the higher overall accuracies of the shadow detection results obtained with the cube and square structuring element types indicate the better performance of the LSI algorithm when optimized with these element types, and the decreasing trend of the overall accuracy with increasing structuring element scale α reveals that more effective information is treated as noise with a bigger structuring element. Additionally, the similarity of the overall accuracy across Figure 10a,c,e,g,i for Tripoli-1 confirms the excellent performance and good stability of the LSI algorithm in these invariant color spaces. The same phenomenon appears for Tripoli-2, as presented in Figure 10b,d,f,h,j. In accordance with the decreasing trend of the overall accuracy with increasing structuring element scale for the various structuring element types in these invariant color spaces, as presented in Figure 10a–j, we optimize the binary shadow detection results by applying the morphological operation with a cube structuring element of scale α = 1, which yields the final shadow detection image.

4.4. LSI Method Generalization Analysis

As described in Section 3.1, many test images (i.e., WV3-Tripoli, WV3-Rio and WV2-WDC) are employed to explore the validity of the proposed LSI method. The generalization of the LSI method is analyzed with the overall accuracy of the shadow detection results for these test images, since the overall accuracy is the most powerful evidence of shadow detection performance. Figure 11a–c respectively depict the overall accuracy of the shadow detection results of 16 test images of WV3-Tripoli, 16 test images of WV3-Rio and 16 test images of WV2-WDC obtained by the proposed LSI method in the various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ).
As can be observed in Figure 11a–c, relatively high overall accuracy values are acquired for most test images of WV3-Tripoli, WV3-Rio and WV2-WDC, which shows the good shadow detection ability of the proposed LSI method for most of these test images. Additionally, stable and high overall accuracy values are obtained for the test images of WV3-Tripoli, WV3-Rio and WV2-WDC in the HIS, HSV, CIELCh and YIQ spaces, although the LSI method fails to detect shadow in six test images of WV3-Rio in the YCbCr space. By comparing the overall accuracy for the shadow detection results of the test images of WV3-Tripoli, WV3-Rio and WV2-WDC, a conclusion can be drawn that the proposed LSI method is able to complete shadow detection tasks and delivers an excellent shadow detection performance for HSR multispectral satellite remote sensing images. Given this, two test images of WV3-Tripoli are employed in this paper to specifically evaluate the shadow detection performance of the proposed LSI method against the other comparative shadow detection algorithms, as discussed in Section 3.

5. Conclusions

In this paper, we develop and validate a logarithmic shadow index (LSI)-based shadow detection approach that mainly employs the properties of typical invariant color components in various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ), namely the higher hue and lower intensity components, as well as the dramatic decrease of the near-infrared component relative to the visible band components (i.e., red, green and blue). A better visual impression and higher overall accuracies (over 92% for the test image Tripoli-1 and approximately 91% for the test image Tripoli-2) are acquired by the proposed LSI shadow detection approach against the comparative algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) in the comparative experiments, which reveals the excellent performance and robustness of the proposed approach for high-resolution satellite images. Therefore, the proposed LSI shadow detection approach is a promising one for further settling the typical shadow detection problems of small shadow omission and typical nonshadow misclassification in high-resolution satellite images. In the future, we will further research shadow detection techniques considering the interference of water, snow and desert on the basis of our current study.

Author Contributions

Conceptualization, H.H., X.X., T.L. and L.H.; Data curation, H.H., C.H. (Changhong Hu) and L.H.; Formal analysis, H.H., C.H. (Chengshan Han) and X.X.; Investigation, H.H.; Methodology, H.H. and X.X.; Project administration, C.H. (Chengshan Han) and X.X.; Resources, C.H. (Chengshan Han) and X.X.; Software, H.H., T.L., C.H. (Changhong Hu) and L.H.; Supervision, C.H. (Chengshan Han) and X.X.; Validation, H.H. and C.H. (Changhong Hu); Visualization, H.H. and L.H.; Writing—original draft, H.H.; Writing—review & editing, H.H., C.H. (Chengshan Han), T.L. and X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific and Technological Developing Scheme of Jilin Province (20190302082GX).

Acknowledgments

The authors would like to thank the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences for its kind support. The authors also express their gratitude to DigitalGlobe Inc. for providing the WorldView-2 and WorldView-3 image samples.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Prati, A.; Mikic, I.; Trivedi, M.M.; Cucchiara, R. Detecting moving shadows: Algorithms and evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 918–923. [Google Scholar] [CrossRef] [Green Version]
  2. Massalabi, A.; He, D.C.; Benie, G.B.; Beaudry, E. Detecting information under and from shadow in panchromatic Ikonos images of the city of Sherbrooke. In Proceedings of the 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004. [Google Scholar]
  3. Finlayson, G.D.; Hordley, S.D.; Lu, C.; Drew, M.S. On the removal of shadows from images. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 59–68. [Google Scholar] [CrossRef] [PubMed]
  4. Tian, J.; Qi, X.; Qu, L.; Tang, Y. New spectrum ratio properties and features for shadow detection. Pattern Recognit. 2016, 51, 85–96. [Google Scholar] [CrossRef]
  5. Kang, X.; Li, S.; Huang, Y.; Lin, H.; Benediktsson, J.A. Extended random walker for shadow detection in very high resolution remote sensing images. IEEE Trans. Geosci. Remote Sens. 2017, 56, 867–876. [Google Scholar] [CrossRef]
  6. Schläpfer, D.; Hueni, A.; Richter, R. Cast shadow detection to quantify the aerosol optical thickness for atmospheric correction of high spatial resolution optical imagery. Remote Sens. 2018, 10, 200. [Google Scholar] [CrossRef] [Green Version]
  7. Zhao, J.; Zhong, Y.; Zhang, L. Detail-Preserving Smoothing Classifier Based on Conditional Random Fields for High Spatial Resolution Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2440–2452. [Google Scholar] [CrossRef]
  8. Dare, P.M. Shadow analysis in high-resolution satellite imagery of urban areas. Photogramm. Eng. Remote Sens. 2005, 71, 169–177. [Google Scholar] [CrossRef] [Green Version]
  9. Arevalo, V.; Gonzalez, J.; Ambeosio, G. Shadow detection in colour high-resolution satellite images. Int. J. Remote Sens. 2008, 29, 1945–1963. [Google Scholar] [CrossRef]
  10. Cai, D.; Li, M.; Bao, Z.; Chen, Z.; Wei, W.; Zhang, H. Study on shadow detection method on high resolution remote sensing image based on HIS space transformation and NDVI index. In Proceedings of the 2010 18th International Conference on Geoinformatics, Beijing, China, 18–20 June 2010. [Google Scholar]
  11. Al-Najdawi, N.; Bez, H.E.; Singhai, J.; Edirisinghe, E.A. A survey of cast shadow detection algorithms. Pattern Recognit. Lett. 2012, 33, 752–764. [Google Scholar] [CrossRef]
  12. Duan, G.Y.; Gong, H.; Zhao, W.J.; Tang, X.M.; Chen, B.B. An index-based shadow extraction approach on high-resolution images. In Proceedings of the International Symposium on Satellite Mapping Technology and Application, Nanjing, China, 6–8 November 2013. [Google Scholar]
  13. Zhu, X.; Chen, R.; Xia, H.; Zhang, P. Shadow removal based on YCbCr color space. Neurocomputing 2015, 151, 252–258. [Google Scholar] [CrossRef]
  14. Liu, J.; Fang, T.; Li, D. Shadow detection in remotely sensed images based on self-adaptive feature selection. IEEE Trans. Geosci. Remote Sens. 2011, 49, 5092–5103. [Google Scholar]
  15. Huang, J.J.; Xie, W.X.; Tang, L. Detection of and compensation for shadows in colored urban aerial images. In Proceedings of the 5th World Congress on Intelligent Control and Automation, Hangzhou, China, 15–19 June 2004. [Google Scholar]
  16. Sarabandi, P.; Yamazaki, F.; Matsuoka, M.; Kiremidjian, A. Shadow detection and radiometric restoration in satellite high resolution images. In Proceedings of the 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004. [Google Scholar]
  17. Arevalo, V.; González, J.; Valdes, J.; Ambrosio, G. Detecting shadows in Quickbird satellite images. In Proceedings of the ISPRS Commission VII Mid-term Symposium “Remote Sensing: From Pixels to Processes”, Enschede, The Netherlands, 8–11 May 2006. [Google Scholar]
  18. Besheer, M.; Abdelhafiz, A. Modified invariant color model for shadow detection. Int. J. Remote Sens. 2015, 36, 6214–6223. [Google Scholar] [CrossRef]
  19. Phong, B.T. Illumination for computer generated pictures. Commun. ACM 1975, 18, 311–317. [Google Scholar] [CrossRef] [Green Version]
  20. Tsai, V.J.D. A comparative study on shadow compensation of color aerial images in invariant color models. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1661–1671. [Google Scholar] [CrossRef]
  21. Otsu, N. A threshold selection method from gray level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  22. Khekade, A.; Bhoyar, K. Shadow detection based on RGB and YIQ color models in color aerial images. In Proceedings of the 1st International Conference on Futuristic Trend in Computational Analysis and Knowledge Management (ABLAZE 2015), Greater Noida, India, 25–27 February 2015. [Google Scholar]
  23. Chung, K.-L.; Lin, Y.R.; Huang, Y.H. Efficient shadow detection of color aerial images based on successive thresholding scheme. IEEE Trans. Geosci. Remote Sens. 2009, 47, 671–681. [Google Scholar] [CrossRef]
  24. Silva, G.F.; Carneiro, G.B.; Doth, R.; Amaral, L.A.; Azevedo, D.F.G.d. Near real-time shadow detection and removal in aerial motion imagery application. J. Photogramm. Remote Sens. 2017, 2017, 104–121. [Google Scholar] [CrossRef]
  25. Ma, H.J.; Qin, Q.M.; Shen, X.Y. Shadow segmentation and compensation in high resolution satellite images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2008), Boston, MA, USA, 7–11 July 2008. [Google Scholar]
  26. Mostafa, Y.; Abdelhafiz, A. Accurate shadow detection from high-resolution satellite images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 494–498. [Google Scholar] [CrossRef]
  27. Fan, J.L.; Lei, B. A modified valley-emphasis method for automatic thresholding. Pattern Recognit. Lett. 2012, 33, 703–708. [Google Scholar] [CrossRef]
  28. Han, H.Y.; Han, C.S.; Xue, X.C.; Hu, C.H.; Huang, L.; Li, X.Z.; Lan, T.J.; Wen, M. A mixed property-based automatic shadow detection approach for VHR multispectral remote sensing images. Appl. Sci. 2018, 8, 1883. [Google Scholar] [CrossRef] [Green Version]
  29. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Publishing House of Electronics Industry: Beijing, China, 2010; pp. 58–65, 649–661. [Google Scholar]
  30. Kumar, P.; Sengupta, K.; Lee, A. A comparative study of different color spaces for foreground and shadow detection for traffic monitoring system. In Proceedings of the IEEE 5th International Conference on Intelligent Transportation Systems, Singapore, 3–6 September 2002. [Google Scholar]
  31. Gonzalez, R.C.; Woods, R.E.; Eddins, S.L. Digital Image Processing Using MATLAB, 2nd ed.; Publishing House of Electronics Industry: Beijing, China, 2014; pp. 125–126. [Google Scholar]
  32. Ford, A.; Roberts, A. Colour Space Conversions; Westminster University: London, UK, 1998; pp. 1–31. [Google Scholar]
  33. Huang, H.; Sun, G.Y.; Rong, J.; Zhang, A.Z. Multi-feature combined for building shadow detection in GF-2 Images. In Proceedings of the 2018 5th International Workshop on Earth Observation and Remote Sensing Applications, Xi’an, China, 18–20 June 2018. [Google Scholar]
  34. Gevers, T.; Smeulders, A.W.M. Color-based object recognition. Pattern Recognit. 1999, 32, 453–464. [Google Scholar]
  35. Shafer, S.A. Using color to separate reflection component. Color Res. Appl. 1985, 10, 210–218. [Google Scholar]
  36. Ng, H.F. Automatic thresholding for defect detection. Pattern Recognit. Lett. 2006, 27, 1644–1649. [Google Scholar]
  37. DG2017_WorldView-3_DS. Available online: https://dg-cms-uploads-production.s3.amazon-aws.com/uploads/document/file/95/DG2017_WorldView-3_DS.pdf (accessed on 25 July 2018).
  38. Story, M.; Congalton, R.G. Accuracy assessment: A user’s perspective. Photogramm. Eng. Remote Sens. 1986, 52, 397–399. [Google Scholar]
  39. Sun, J. Principles and Applications of Remote Sensing, 3rd ed.; Wuhan University Press: Wuhan, China, 2016; pp. 18–21, 220–222. [Google Scholar]
  40. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, 2nd ed.; Lewis Publishers: New York, NY, USA, 1999; pp. 56–61. [Google Scholar]
Figure 1. The flow chart of the logarithmic shadow index (LSI) shadow detection algorithm.
Figure 2. Dramatic decrease in terms of near-infrared (NIR) component, higher hue (H) component value and lower intensity (I) component value in shadow regions with samples for typical objects (taking the WorldView-3 as an example). (a) NIR. (b) H. (c) I.
Figure 3. Comparison between the linear function f(x) = x and the natural logarithm function f(x) = ln(x + 1) in compressing data scale.
Figure 4. Test images from WorldView-3 images of Tripoli, Libya. (a) Tripoli-1. (b) Tripoli-2.
Figure 5. Reference images for test images of Tripoli, Libya. (a) Tripoli-1. (b) Tripoli-2.
Figure 6. Shadow detection results of various shadow detection algorithms for Tripoli-1. (a) HIS. (b) HSV. (c) CIELCh. (d) YCbCr. (e) YIQ. (f) MC3 [18]. (g) Normalized saturation-value index (NSVDI) [25]. (h) Logarithmic spectral ratio index (LSRI) [24]. (i) Shadow detector index (SDI) [26]. (j) Spectral ratio index (SRI) [20].
Figure 7. Shadow detection results of various shadow detection algorithms for Tripoli-2. (a) HIS. (b) HSV. (c) CIELCh. (d) YCbCr. (e) YIQ. (f) MC3. (g) NSVDI. (h) LSRI. (i) SDI. (j) SRI.
Figure 8. Overall accuracies of shadow detection results with the initial shadow index and the logarithmic shadow index for test images. (a) Tripoli-1. (b) Tripoli-2.
Figure 9. The sensitivity of the LSI algorithm to the neighborhood parameter m for test images. (a) Tripoli-1. (b) Tripoli-2.
Figure 10. The sensitivity of the LSI algorithm to the morphological structuring element in various invariant color spaces for test images. (a) Tripoli-1 in HIS. (b) Tripoli-2 in HIS. (c) Tripoli-1 in HSV. (d) Tripoli-2 in HSV. (e) Tripoli-1 in CIELCh. (f) Tripoli-2 in CIELCh. (g) Tripoli-1 in YCbCr. (h) Tripoli-2 in YCbCr. (i) Tripoli-1 in YIQ. (j) Tripoli-2 in YIQ.
Figure 11. LSI method generalization analysis. (a) WV3-Tripoli. (b) WV3-Rio. (c) WV2-WDC.
Table 1. Shadow detection accuracy measurements of various shadow detection algorithms for the test image Tripoli-1.

Method | Color Space | ρs (%) | ρn (%) | μs (%) | μn (%) | τ (%) | ec (%) | eo (%)
LSI | HIS | 83.94 | 95.53 | 87.21 | 94.24 | 92.44 | 4.47 | 16.06
LSI | HSV | 83.67 | 95.75 | 87.73 | 94.17 | 92.53 | 4.25 | 16.33
LSI | CIELCh | 84.24 | 95.61 | 87.45 | 94.35 | 92.58 | 4.39 | 15.76
LSI | YCbCr | 80.98 | 96 | 88.03 | 93.29 | 92 | 4 | 19.02
LSI | YIQ | 83.66 | 95.65 | 87.48 | 94.16 | 92.46 | 4.35 | 16.34
MC3 [18] | C1C2C3 | 85.52 | 87.47 | 71.26 | 94.33 | 86.96 | 12.53 | 14.48
NSVDI [25] | HSV | 96.41 | 63.67 | 49.08 | 97.99 | 72.39 | 36.33 | 3.59
LSRI [24] | CIELCh | 63.69 | 97.65 | 90.77 | 88.1 | 88.6 | 2.35 | 36.31
SDI [26] | NIR-RGB | 99.2 | 33.73 | 35.22 | 99.14 | 51.17 | 66.27 | 0.8
SRI [20] | HIS | 87.91 | 65.15 | 47.82 | 93.69 | 71.22 | 34.85 | 12.09
Table 2. Shadow detection accuracy measurements of various shadow detection algorithms for the test image Tripoli-2.

Method | Color Space | ρs (%) | ρn (%) | μs (%) | μn (%) | τ (%) | ec (%) | eo (%)
LSI | HIS | 73.39 | 98.21 | 94.53 | 89.75 | 90.85 | 1.79 | 26.61
LSI | HSV | 73.48 | 98.23 | 94.58 | 89.78 | 90.89 | 1.77 | 26.52
LSI | CIELCh | 72.89 | 98.43 | 95.15 | 89.6 | 90.86 | 1.57 | 27.11
LSI | YCbCr | 72.54 | 97.97 | 93.78 | 89.43 | 90.43 | 2.03 | 27.46
LSI | YIQ | 72.64 | 98.32 | 94.8 | 89.5 | 90.71 | 1.68 | 27.36
MC3 | C1C2C3 | 94.19 | 20.96 | 33.44 | 89.53 | 42.68 | 79.04 | 5.81
NSVDI | HSV | 92.04 | 65.49 | 52.92 | 95.12 | 73.36 | 34.51 | 7.96
LSRI | CIELCh | 75.06 | 94.84 | 85.99 | 90.02 | 88.98 | 5.16 | 24.94
SDI | NIR-RGB | 97.61 | 40.35 | 40.82 | 97.56 | 57.33 | 59.65 | 2.39
SRI | HIS | 79.61 | 65.14 | 49.05 | 88.34 | 69.43 | 34.86 | 20.39
Table 3. Time consumption (ms) of various shadow detection algorithms for test images Tripoli-1 and Tripoli-2.

Method | Color Space | Tripoli-1 (ms) | Tripoli-2 (ms)
LSI | HIS | 23.22 | 9.90
LSI | HSV | 16.72 | 6.32
LSI | CIELCh | 90.73 | 54.86
LSI | YCbCr | 23.13 | 13.41
LSI | YIQ | 29.30 | 12.93
MC3 | C1C2C3 | 18.24 | 10.94
NSVDI | HSV | 21.48 | 14.09
LSRI | CIELCh | 620.41 | 388.29
SDI | NIR-RGB | 18.40 | 9.34
SRI | HIS | 94.54 | 23.90
