Article

A Novel Shadow Removal Method Based upon Color Transfer and Color Tuning in UAV Imaging

by Gilberto Alvarado-Robles, Francisco J. Solís-Muñoz, Marco A. Garduño-Ramón, Roque A. Osornio-Ríos and Luis A. Morales-Hernández *
HSPdigital CA-Mecatronica Engineering Faculty, Autonomous University of Queretaro, San Juan del Rio 76806, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(23), 11494; https://doi.org/10.3390/app112311494
Submission received: 11 October 2021 / Revised: 18 November 2021 / Accepted: 23 November 2021 / Published: 4 December 2021
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Through the increasing use of unmanned aerial vehicles (UAVs) as remote sensing tools, shadows become evident in aerial imaging; this fact, alongside the higher spatial resolution provided by the mounted cameras, presents a challenging issue when performing image processing tasks related to urban area monitoring. State-of-the-art works can correct shadow regions, but the heterogeneity between corrected shadow and non-shadow areas remains evident, especially in concrete and asphalt regions. The present work introduces a local color transfer methodology for shadow removal based on the CIE L*a*b (Lightness, a, and b) color space that considers chromatic differences in urban regions, followed by a color tuning step in the HSV color space. The quantitative comparison was executed using the shadow standard deviation index (SSDI), where the proposed work provided low values that improve by up to 19 units over the other tested methods. The qualitative comparison was carried out visually and showed that the proposed method enhances color correspondence without losing texture information. Quantitative and qualitative results validate the color correction and texture preservation accuracy of the proposed method against other published methodologies.

1. Introduction

A shadow is a phenomenon that occurs when a light source is totally or partially obstructed by an object [1]; according to their location, shadows can be classified into cast shadows (the part cast on the ground or other objects by tall objects) and self-shadows (the part of the object that is not illuminated) [2]. With the increasing development of remote sensing technology, the shadow effect becomes more noticeable in aerial imaging, which, together with the higher spatial resolution, brings new challenges to the image preprocessing step. Specifically, unmanned aerial vehicles (UAVs) have been increasingly used over the last few years; recently, this technology has been applied in the areas of object detection [3,4,5], agriculture [6,7,8], and urban zone analysis [9,10,11]. Within the applications focused on urban areas, the presence of cast shadows and self-shadows in aerial images might also cause shape distortion in objects and loss of color information [12], since urban surface features are rather complex, with a great variety of shadows resulting from occlusion caused by buildings, bridges, and trees [13]. It is also known that in color aerial images, color characteristics are valid descriptors that simplify identification in visual interpretation applications [14]; this fact reinforces the need for new shadow removal methodologies that can retrieve color and texture information from aerial images.
Shadow removal methodologies pursue shadow-free images, since these can improve the performance of tasks such as object recognition, object tracking, and information enhancement [15]; consequently, several shadow removal methods have been developed. Such methods are conducted using close-shot images [16,17,18], video surveillance [19,20,21], and aerial images [22,23,24]. Numerous methods addressing shadow removal in outdoor scenes have been developed. One of the leading strategies applied when performing shadow removal in outdoor scenes is illumination correction [25,26,27]. Highlighted examples in published works are methods that execute shadow removal in UAV aerial images using retinex theory [28] as the basis for computing the illumination correction; the work published by Guo et al. [29] computes an improved luminance image and executes an illumination correction based upon the color transfer formula. Another recently published methodology is based on the image matting process. The research published by Amin et al. [30] proposes a relighting method that does not rely on user interaction and maximizes the natural appearance of the output relighted images. Although the results are acceptable, the number of erosion-dilation iterations is empirically justified for the tested dataset, which mainly consists of close camera shots. Illumination correction methods usually show high consistency in color restoration and shadow boundaries. However, these methods usually maintain the chromatic features of shadowed regions, leading to color differences, especially in urban aerial scenes. Color transfer [31] is presented as an alternative when performing shadow removal. Local color transfer methods use spatial similarity to set up a relationship between pixel features of the shadowed and non-shadowed parts of a single image in order to restore the shadowed regions; such a relationship predominantly consists of a statistical correlation [32,33] that can be computed using different color spaces or a single color feature.
Recently, in the deep learning field, convolutional neural network (CNN) architectures have been used to enhance the results of these methods [34]. Furthermore, a deep learning approach using generative adversarial networks (GANs) is employed by Inoue et al. [35] to perform the shadow detection and removal process; they proposed SynShadow, a large-scale dataset of shadow/shadow-free/matte image triplets, together with the pipeline to synthesize diverse and realistic triplets. An adversarial neural network (ANN) based solution has been recently proposed by Tang et al. [36]. The authors targeted a shadow detection and removal procedure that takes care of the image color consistency at the mask silhouette region. This problem is tackled using the multiscale and global feature (MSGF) and the direction feature (DF) algorithms, resulting in an improvement in the balance error rate (BER) index for shadow detection and the root mean square error (RMSE) index for shadow removal when compared to ground-truth images for the image shadow triplets dataset (ISTD) and Stony Brook University (SBU) public datasets. ANN solutions require a comprehensive dataset of training images, sometimes produced by a manual image transform process that could require experts’ time. For a shadow removal ANN, shadow mask creation could be performed manually or in an automated fashion with a specific algorithm. In the case of GAN-based solutions, such as the one published by Ding et al. [37], the training process is described as a semi-supervised task that does not rely as heavily on supervised data to set its model parameters; this solution uses a multi-step, coarse-to-fine-grained shadow removal deep learning image generation process. However, solutions based on the deep learning processing scheme require a large amount of computing power for the training process, as the number of generations, the training dataset image count, and the input data size are substantially large. In addition, pre-established image input and output sizes exist for the ANN models, requiring resizing at both ends to effectively compute the shadow detection and removal processes.
Since complex scenarios can involve a heterogeneous mix of materials and textures, algorithms need to adapt to any case. The study presented by Fan et al. [38] addresses this issue in two key steps: the first step is devoted to a filtering process where the textures are taken away, while the second step effectively incorporates depth cue data into the preprocessed input. This method shows an advantage in its second step, as no extra information or special-purpose hardware, such as a LiDAR capture device, is required, reducing the implementation cost and enabling the ability to handle more scenarios. Aerial imaging-related works are among the emerging issues in the shadow removal task, in which the methods often base their work on illumination feature correction [14] and local color transfer [39]. Notwithstanding, applications in UAV imaging still lack accurate results in color-corrected and texture-preserving shadows; this is mainly noticeable in urban scenes, which often contain several heterogeneous regions. Additionally, the capturing of aerial images takes place in outdoor scenes, where the environment is mainly illuminated by two sources: a direct source (the sun) and a diffuse source (the sky) with different spectral power distributions; the skylight has components in the wavelengths from 450 to 495 nm of the visible spectrum [40]. Shadows are perceived when the direct illumination of the sun is blocked and a region is only illuminated by the light of the sky; all regions covered by shadows appear more bluish. This phenomenon is especially noticeable in regions that present low saturated colors when illuminated by sunlight (concrete and asphalt); images captured with standard cameras also tend to render these bluish regions in a way that is visually and numerically discernible [41].
In this work, we present a novel method for shadow removal based on the characteristics of the CIE L*a*b (Lightness, a, and b) color space and the HSV (Hue, Saturation, and Value) color space. The proposed work aims to correct the chromatic discrepancies found in concrete and asphalt areas in UAV-captured images under light and shadow conditions. The method consists of a proposed local color transfer algorithm that executes the color correction by separating the color statistics of the image according to chromatic features, applies a dilation process to the shadow mask, and performs a final color tuning that enhances the local color transfer results. The present proposal offers a shadow removal tool that improves the results of local color transfer algorithms when UAV-captured images are used. Moreover, this work considers both cast shadows and self-shadows. The proposed final color tuning step reduces the discontinuity found in similar regions that lie under shadows. In addition, the method presents a relatively low computational load and, at the same time, is suitable for implementation in a parallel computational model owing to the independence of its functions. This new methodology was tested over aerial urban RGB (Red, Green, and Blue) images captured using a standard drone device. The study cases considered were captured in urban scenes containing mainly asphalt and concrete regions; such scenes are covered by shadows at different ratios in order to test the proposed color correction method. Likewise, the proposed work was compared against state-of-the-art algorithms through a visual qualitative analysis and quantitatively using the shadow standard deviation index (SSDI). The experiments demonstrated an improvement in color correction results in asphalt and concrete regions while accurately preserving texture information. In the following sections, we present the proposed method development, describing the algorithm and method details; the results section, where the results of the experiments are presented; and the discussion section. Finally, the conclusions are stated.

2. Method Development

2.1. Proposed Methodology

The proposed method considers the previously discussed skylight effect and aims to improve the color correction in shaded regions by performing a statistical pixel-by-pixel local color transfer algorithm. The proposed method consists of three main blocks: input data, which consists of the image capturing process and shadow mask selection as input data; the color transfer algorithm, in which the proposed color correction method is executed to obtain an image with color correction at shadowed areas; and lastly, the color tuning process that executes a final color correction to improve the output image. The proposed methodology is schematized in Figure 1.

2.1.1. Input Data

The proposed work uses an RGB UAV-captured image, defined as i, and a shadow image mask M as input data. Urban region shots were taken with a setup consisting of a UAV device (DJI Phantom 4) with a 12.4 MP camera mounted. The capturing altitude was set for each image so that the shot contained enough information while preserving a good ground resolution. As observed in the scheme displayed in Figure 2, the scenes captured different shadow regions, which might contain self-shadows and cast shadows depending on the light source occlusion.
The shadow mask (M) used was computed using the method developed in our previous work [42].

2.1.2. Color Transfer Algorithm

The first phase of the proposed algorithm consists of thresholding pixels according to the blue channel B and a low-saturation threshold L_s. This is accomplished by transforming i into the HSV color space for the computation. The proposed thresholding operation is expressed in Equation (1).
$$L_b(x,y) = \begin{cases} 1, & \text{if } B(x,y) = \max\big(R(x,y),\, G(x,y),\, B(x,y)\big) \ \text{and}\ S(x,y) < L_s \\ 0, & \text{otherwise} \end{cases}$$
where B represents the blue channel of the RGB image i, S is the saturation of i, (x, y) is a pixel position in i, and L_s is a constant that defines low-saturated regions. Its range is defined between 0 and 255, and in this work a value of 25 is selected.
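As an illustration of this step, a minimal OpenCV/NumPy sketch is given below. It assumes 8-bit channels and reads the condition in Equation (1) as requiring the blue channel to be the maximum of the three RGB channels; the function and variable names are illustrative and not taken from the paper.

```python
import cv2
import numpy as np

def low_saturation_blue_mask(img_bgr, ls=25):
    """Sketch of Equation (1): flag pixels whose blue channel dominates and
    whose HSV saturation is below L_s (0-255 scale)."""
    b, g, r = cv2.split(img_bgr)
    s = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)[:, :, 1]
    blue_dominant = (b >= g) & (b >= r)   # blue is the maximum RGB channel
    low_saturated = s < ls                # low-saturation condition, L_s = 25
    return (blue_dominant & low_saturated).astype(np.uint8)
```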
The following process related to L_b is the creation of the corresponding K_s and K_u image masks: an operation that builds a pair of masks including only the regions mentioned above, expressed in Equation (2).
$$K_s = L_b \cdot M, \qquad K_u = L_b \cdot M^{c}$$
where K_s and K_u are the masks of the shadowed and unshadowed regions, respectively, that compose L_b; M corresponds to the shadow mask of i, and M^c represents the complement of the shadow mask.
Aiming to reduce the statistical inconsistencies between shadowed and unshadowed regions in the processed images, we propose applying a morphological dilation to M, defined as ψ = δ_λ5(M), which increases the statistical coincidence between shadowed and unshadowed regions.
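A possible sketch of the mask construction and the dilation step is shown below; the element-wise products follow Equation (2), while the 5 × 5 square structuring element used for the dilation is only an assumption for δ_λ5, and the function name is illustrative.

```python
import cv2
import numpy as np

def shadow_group_masks(lb, shadow_mask, kernel_size=5):
    """Sketch of Equation (2) and the dilation of M.
    lb: binary (0/1) mask from Equation (1); shadow_mask: binary shadow mask M.
    The 5x5 structuring element is an assumption for delta_lambda5."""
    ks = lb & shadow_mask                   # K_s: shadowed low-saturation pixels
    ku = lb & (1 - shadow_mask)             # K_u: unshadowed low-saturation pixels
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    psi = cv2.dilate(shadow_mask, kernel)   # psi: dilated shadow mask
    return ks, ku, psi
```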
The operations mentioned above allow us to perform a statistical analysis divided into two groups: local color transfer through ψ and i, and color transfer using the L_b statistics through K_s and K_u. The color transfer algorithm is based upon the work proposed by Reinhard et al. [31], and its main operation is executed in parallel for the mentioned pairs. Equation (3) describes the operation of the proposed color transfer algorithm.
$$T_o(x,y) = \begin{cases} \dfrac{p_o(x,y) - \bar{x}_{o\psi}}{\sigma_{o\psi}}\,\sigma_{o\psi^{c}} + \bar{x}_{o\psi^{c}}, & \text{if } L_b = 0 \\[8pt] \dfrac{p_o(x,y) - \bar{x}_{oK_s}}{\sigma_{oK_s}}\,\sigma_{oK_u} + \bar{x}_{oK_u}, & \text{otherwise} \end{cases}$$
where T_o(x, y) is a pixel of the output image in channel o at position (x, y), p_o(x, y) is a pixel of i in channel o, x̄_{oψ} and x̄_{oK_s} are the means of the shadowed regions in channel o, x̄_{oψ^c} and x̄_{oK_u} are the means of the unshadowed regions in channel o, σ_{oψ} and σ_{oK_s} are the standard deviations of the shadowed regions, and σ_{oψ^c} and σ_{oK_u} are the standard deviations of the unshadowed regions. Finally, o refers to the L, a, and b channels of the CIE L*a*b representation of the image. After the local color transfer computation, the output image T contains color-corrected shadowed regions. To visualize the results of the process, Figure 3a illustrates the input image i processed by the proposed methodology, and Figure 3b depicts the resulting image T.
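A minimal sketch of this color transfer step is given below, assuming an 8-bit input, binary 0/1 masks, and OpenCV's CIE L*a*b conversion. It applies the Reinhard-style statistics matching of Equation (3) to the shadowed pixels only; the function name and the exact pixel-selection details are assumptions of this sketch.

```python
import cv2
import numpy as np

def local_color_transfer(img_bgr, lb, psi, ks, ku):
    """Sketch of Equation (3): per-channel statistics matching in CIE L*a*b*.
    Pixels with L_b = 0 use the (psi, psi complement) statistics; the remaining
    low-saturation pixels use the (K_s, K_u) statistics."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2Lab).astype(np.float64)
    out = lab.copy()
    shadow = psi.astype(bool)
    groups = [(~lb.astype(bool), shadow, ~shadow),                  # L_b = 0
              (lb.astype(bool), ks.astype(bool), ku.astype(bool))]  # otherwise
    for target, src, ref in groups:
        for o in range(3):                                          # L, a, b channels
            ch = lab[:, :, o]
            mu_s, sd_s = ch[src].mean(), ch[src].std()              # shadowed statistics
            mu_u, sd_u = ch[ref].mean(), ch[ref].std()              # unshadowed statistics
            corrected = (ch - mu_s) / (sd_s + 1e-6) * sd_u + mu_u
            sel = target & shadow                                   # correct shadowed pixels only
            out[:, :, o][sel] = corrected[sel]
    return cv2.cvtColor(np.clip(out, 0, 255).astype(np.uint8), cv2.COLOR_Lab2BGR)
```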

2.1.3. Color Tuning Process

It can be discerned in Figure 3b that the color correction result shows good texture preservation and color recovery. Nevertheless, certain inconsistencies are still found when color-corrected regions are observed and compared to unshadowed regions. For instance, asphalt regions present discrepancies in the color perceived; this is especially noticeable for a yellowy color in the regions marked in red rectangles. One further noticeable inconsistency is the color in green areas (marked in blue rectangles); such areas contain oversaturated colors. Therefore, a final tuning process is proposed to be applied in the corrected regions only. In this case, the HSV color space is used to perform color tuning, taking the statistical reference values obtained from the color-corrected image in terms of hue and saturation. This adjustment is executed according to the criterion expressed in Equation (4).
$$T'(x,y) = \begin{cases} H_t(x,y) = H(x,y), \quad S_t(x,y) = \dfrac{S(x,y) + \mu_s}{2}, & \text{if } R = \max \text{ or } G = \max \\[6pt] H_t(x,y) = H(x,y)\,\mu_h\,\mu_h\,H(x,y), \quad S_t(x,y) = S(x,y)\,\mu_s\,\mu_s\,S(x,y), & \text{otherwise} \end{cases}$$
where T′ is the tuned image at position (x, y) in the HSV color space; H and S are the hue and saturation of T; H_t and S_t are the tuned hue and saturation in T′; and μ_h and μ_s are the means of hue and saturation in T, respectively. The resulting image T′ is displayed in Figure 4b. When it is compared to Figure 4a, which corresponds to the color transfer algorithm output T, it can be noticed that after color tuning the inconsistencies found in asphalt regions are reduced. This is most noticeable in the regions marked with red rectangles, in which the color consistency between shadowed and unshadowed regions is improved. In the green areas (marked with blue rectangles), the color saturation is corrected and the visual consistency with unshadowed vegetation increases without losing relevant texture information.
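To make the tuning step concrete, a minimal sketch is shown below, applied only to the corrected (formerly shadowed) pixels. The branch for R- or G-dominant pixels follows Equation (4) directly; the remaining (bluish) branch is simplified here to a pull of hue and saturation toward the image means, which is an assumption of this sketch rather than the exact published expression, and the function name is illustrative.

```python
import cv2
import numpy as np

def color_tuning(t_bgr, corrected_mask):
    """Sketch of the final tuning applied to corrected regions only (OpenCV
    HSV scale: H in 0-179, S in 0-255)."""
    hsv = cv2.cvtColor(t_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    h, s = hsv[:, :, 0], hsv[:, :, 1]
    b, g, r = cv2.split(t_bgr)
    mu_h, mu_s = h.mean(), s.mean()
    rg_dominant = (r >= b) | (g >= b)          # R = max or G = max
    sel = corrected_mask.astype(bool)
    case1 = sel & rg_dominant
    case2 = sel & ~rg_dominant
    s[case1] = (s[case1] + mu_s) / 2.0         # first branch of Equation (4)
    h[case2] = (h[case2] + mu_h) / 2.0         # assumed simplification
    s[case2] = (s[case2] + mu_s) / 2.0         # assumed simplification
    return cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)
```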
It can be observed that the results displayed in Figure 3 and Figure 4 still have noticeable shadow boundaries; this is mainly owing to the complexity that shadows in urban aerial imaging present, which complicates the shadow mask creation process. However, the color inconsistency is reduced and improved after final color tuning.

2.2. Quantitative Analysis

Even though some shadow removal datasets are available [29,43,44], they have been constructed for close-shot images. Currently, there is no dataset for shadow removal in urban aerial images, mainly because of the higher cost of image capturing; this complicates the analysis of shadow removal results, as no reference point can be taken. For the quantitative evaluation, this work uses the shadow standard deviation index (SSDI) proposed by [25]. The SSDI, denoted σ_{s-ns}, is computed for each channel (R, G, and B) of the output image T, as shown in Equation (5).
$$\sigma_{s\text{-}ns} = \frac{1}{B}\sum_{b=1}^{B}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(F^{s}_{b,i} - \overline{F^{ns}_{b}}\right)^{2}}$$
where b is the current channel of the image, B is the total number of channels of the corrected image, i indexes the pixels in the shadow regions, and N is the total number of pixels in the shadow regions. F^s_{b,i} is a pixel of the corrected shadow region in channel b, and F̄^{ns}_b is the mean of the corresponding unshadowed sample set in the same channel. The SSDI measures the variation of the corrected shadow regions with respect to the unshadowed regions: a low SSDI value indicates that the corrected shadow regions are consistent with the unshadowed regions, whereas a high SSDI value indicates that they are not.
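A compact sketch of the SSDI computation is given below, assuming binary masks that select the corrected shadow pixels and an unshadowed reference sample; the square root in the per-channel term reflects the "standard deviation" reading of the index, and the function name and mask semantics are assumptions of this sketch.

```python
import numpy as np

def ssdi(corrected_img, shadow_mask, reference_mask):
    """Sketch of Equation (5): per-channel RMS deviation of the corrected
    shadow pixels from the mean of an unshadowed reference sample, averaged
    over the B image channels."""
    img = corrected_img.astype(np.float64)
    s_idx = shadow_mask.astype(bool)
    ns_idx = reference_mask.astype(bool)
    total = 0.0
    n_channels = img.shape[2]
    for b in range(n_channels):
        f_s = img[:, :, b][s_idx]                # corrected shadow pixels F^s
        f_ns_mean = img[:, :, b][ns_idx].mean()  # unshadowed reference mean
        total += np.sqrt(np.mean((f_s - f_ns_mean) ** 2))
    return total / n_channels
```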

2.3. Study Cases

The proposed method was tested using UAV-captured images. In this case, the images were captured with the previously described UAV device; the resolution of each picture is approximately 3800 × 2800 pixels. As previously stated, there is no public shadow removal dataset for remote sensing images; thus, the test images were shot at an arbitrary capturing height, as specified in the experimental setup. The primary objective for image acquisition in this work was to include concrete and asphalt regions, which are commonly found in urban areas. The main criteria for study case selection were that the images depict urban scenes containing multiple colors, textures, and a considerable amount of asphalt and concrete regions, and that they contain cast shadow and self-shadow coverage in different proportions. Such parameters are helpful to test the performance of the present method in terms of color correction consistency and texture preservation. The study cases used in this work are displayed in Figure 5.
As shown in Figure 5, all study cases contain scenes that accomplish the mentioned criteria, in which each one contains shadows that cover different regions in the urban image. Specifically, Figure 5a–c contain shadows in a range of 40–60%, which are, in this work, defined as high shadow (HS) images. This group of images is tested in order to analyze the color correction results under conditions where shadowed regions cover up a larger area of the scene. In contrast, Figure 5d–f are about 20–25% covered by shadows; in this work, they are defined as slight shadow (SS) images. Moreover, in Figure 5d,e, a manual correction was performed to refine the shadow mask; this is mainly considered for testing the proposed work over images where the bluish effect in shadows is lighter than in others. In the case of those scenes, the shadowed regions are small regarding the HS group, which can present differences regarding the first group of study cases. It is also noticeable that the darker shadows in concrete and asphalt regions tend to increase the displacement to blue wavelengths in such regions.
To compare the present method, we selected two deep learning works: the first, presented by Cun et al. [16], develops a shadow matting generative adversarial network (SMGAN) to synthesize realistic shadow mattings from a given shadow mask and shadow-free image; the second is the method proposed by Inoue et al. [35], which uses a GAN together with the proposed SynShadow, a large-scale synthetic shadow/shadow-free/matte image triplet dataset, and a pipeline to synthesize images. Both methods were implemented as end-to-end shadow detection and removal. In the case of the second group of works, the methodologies were implemented using the same shadow masks computed and used for our proposal. The methods tested in this group were the shadow removal algorithm proposed by Luo et al. [25], which is based on an illumination correction algorithm, and the work published by Murali and Govindan [33], a local color transfer method that uses the CIE L*a*b color space.

3. Results

The shadow removal results obtained for each tested method for HS images are displayed in Figure 6, where, for the sake of brevity, Cun et al.’s method is referenced as SMGAN, Inoue et al.’s method is referenced as SynShadow, Luo et al.’s method is referenced as Illumination Correction, and Murali and Govindan’s method is referenced as Color transfer.
It is evident that when shadow removal is performed on the tested HS images, the results are hindered by the large amount of land covered by shadows. Likewise, it is noticeable that self-shadows, typically found in trees and shrubs, present a difficulty for color recovery. In Figure 6a–c, the results obtained with the Illumination correction method are displayed. It can be observed that the illumination correction algorithm loses color recovery accuracy when shadows cover asphalt and concrete; this can be seen as a bluish color in such regions. This fact makes shadow boundaries evident despite the boundary correction proposed in the tested work. Additionally, it is noticeable in the marked regions that green areas lose color accuracy and texture information. In Figure 6d–f, the results provided by the SMGAN method are depicted. It can be appreciated that this method keeps the texture features in the corrected regions but lacks an accurate color correction and reduces the image resolution to lower the computational load. Figure 6g–i depicts the results for the Color transfer method. It is shown that, despite using a color transfer method based on the CIE L*a*b color space, the concrete and asphalt regions tend to keep the blue-like color or acquire a tone similar to other dominant regions in the unshadowed areas. The condition mentioned above is especially noticeable in Figure 6i in the region marked with a yellow rectangle, where the corrected shadows on asphalt become green-like. Additionally, in the grass regions highlighted with arrows, it can be noticed that such regions look blurred. In the case of Figure 6j–l, it can be observed that the SynShadow method shows visually accurate results that enhance the contour smoothing compared to traditional methods, but the corrected regions are still evident due to the chromatic differences found mainly in the asphalt areas, as signaled in the yellow rectangle; lastly, corrected green areas are over-illuminated. Finally, Figure 6m–o depict the results of the proposed method. Although boundaries are still noticeable, the color in asphalt areas maintains an accurate visual consistency with the unshadowed areas, as marked in the yellow rectangles. In addition, it can be noticed that the highlighted grass areas keep their texture information. As seen in Figure 6, HS images complicate the shadow removal task due to the limited information contained in unshadowed regions. In spite of this, the proposed method shows visual consistency in color and texture for grass and asphalt regions. The following set of results, for SS images, is shown in Figure 7.
As shown in Figure 7a–c, the illumination correction algorithm tends to produce blue-like colors in asphalt and concrete. In Figure 7d–f, the SMGAN method modifies the unshadowed areas, which represents a complete alteration of the image information. The results displayed in Figure 7g–i show that color correction in concrete and asphalt regions also tends to keep the blue-like color or acquire a tone similar to the other statistically dominant regions. Figure 7j–l shows that the SynShadow shadow removal results present an accurate color correspondence, wherein corrected regions can be observed with slight over-illumination; this is especially noticeable in the regions signaled with a yellow rectangle. In Figure 7m–o, the results of the proposed method are depicted. It is noticeable that the color in the asphalt and concrete areas is recovered with visual accuracy, although some corrected regions appear darker than the contiguous unshadowed regions. Summarizing the results displayed in Figure 7, it is evident in the regions marked with a yellow rectangle that color consistency is improved by the proposed method, in which shadow boundaries are not as evident as in the other traditional tested methods. Furthermore, the corrected self-shadows on the highlighted vegetation present an improvement in texture preservation. Likewise, it can be appreciated that self-shadows are also corrected. As seen in Figure 6 and Figure 7, the proposed method delivers accurate visual results, in which color correction and texture preservation are the principal issues attended to in this work. In order to complement the quantitative analysis, a visual analysis of specific zones was carried out; the comparison is presented in Figure 8.
The analyzed zones are delimited with blue and red squares, as depicted in Figure 8, and the qualitative criteria to evaluate the shadow removal results are color correction and texture preservation. As observed in Figure 8b,g, the Illumination correction method provides results in which color and texture information are not visually consistent with unshadowed regions of similar land cover, and the texture information in green areas is blurred. In Figure 8c,h, a resolution loss due to the SMGAN method is noticeable; the end-to-end process also shows a poor shadow detection step in the analyzed study case. In the case of Figure 8d,i, it is evident that the texture information is accurately preserved. However, the color information is visually inconsistent with unshadowed regions of the same land cover; specifically, Figure 8d shows over-saturated colors with acceptable texture preservation, and the asphalt in Figure 8i displays bluish colors in the corrected regions. The SynShadow method results shown in Figure 8e,j depict an over-illuminated correction in green areas (see Figure 8e), and the corrected asphalt regions show color inconsistency at the shadow boundaries, as shown in Figure 8j. Additionally, the image resolution is evidently reduced in the provided results. Figure 8f,k shows the results obtained with our proposed method; the color correspondence provided in the corrected asphalt regions (see Figure 8k) is improved compared to the other tested methods. Although shadow boundaries are still visible, the difference in color between unshadowed and corrected regions is reduced with respect to the rest of the methods. Moreover, as noticed in Figure 8f, green areas present accurate texture and color when compared with the rest of the works. As seen in Figure 8, the proposed work shows more visually accurate results in the qualitative analysis than the other tested methods.
As mentioned above, the quantitative analysis was executed by using SSDI, and Table 1 displays the results obtained in each study case for all methods compared.
Table 1 presents the SSDI results computed for the methods tested. In the specific case of study case 1, the proposed work presents an improvement of up to 19 units compared to the Illumination correction method. Nonetheless, in test 2, the SynShadow method presents a lower SSDI value. In the average results, similar values are observed for the Illumination correction, Color transfer, and SMGAN methods, while the averages of the SynShadow method and our proposed method are numerically close, with our proposal lower by about 0.25. According to the qualitative and quantitative analysis, the proposed work provides accurate color correction and texture preservation results, improving on the other tested methods. The results validate the proposed method as an alternative solution to automatically perform shadow removal tasks in urban aerial images without resizing the input image.

4. Discussion

As shown in the previous section, shadow removal executed over images containing urban aerial scenes is still a challenging task. According to the experiments executed, the Illumination correction method presented noticeable bluish colors in corrected zones that include asphalt and concrete; this is especially visible in Figure 6a–c. This result is mainly caused by the lack of chromatic correction in shadowed regions that include elements that present low saturated colors when illuminated by sunlight and appear bluish when illuminated by skylight. In the case of the Color transfer method, the chromatic correction is executed. However, as mainly observed in Figure 6i and Figure 7i, the corrected shadows display a slight green color; this is caused during the color transfer process. The unshadowed low saturated regions are classified together with the rest of the unshadowed colors contained in the image, which causes an ambiguous classification, leading to color transfer results in which the corrected regions show a slight color tone similar to the dominant region (green in the experiments performed).
It can be seen in Figure 6d–f and Figure 7d–f that the SMGAN method presents varied results: in the case of Figure 6f, the corrected regions show accurate color and texture results, but such results do not extend to the rest of the images. Additionally, in Figure 7d,e, it is observed that the method modifies the colors of the entire image. Lastly, the SynShadow method results are depicted in Figure 6j–l and Figure 7j–l; it is evident that the SynShadow method performs acceptable boundary smoothing. Nevertheless, the color correction results are still visually perceptible; also, the correction of shadowed green areas tends to produce over-illuminated pixels, which can mainly be observed in Figure 6j,l and in detail in Figure 8. Deep learning methods present an innovative and functional methodology that is able to execute such tasks as an end-to-end process; it can be discerned that deep learning-based methods provide results that enhance the boundary smoothing when visually compared to traditional methods. Nevertheless, the results can vary depending on the training process and the method development, and the computational load implies an image resizing that can lead to data loss. In addition, although the shadow boundary smoothing is executed, the color correction results provided still keep visual evidence of the corrected regions.
The results of this study’s experiments are shown in Figure 6m–o and Figure 7m–o. They demonstrate that executing the color classification grouping for the regions that contain concrete and asphalt enhances the color correction results and avoids the statistical misclassification of such regions. It was also demonstrated that the dilation applied to the shadow mask improves the statistical relation between shadowed and unshadowed regions; this can be appreciated in improved texture preservation, especially in green areas, as detailed in Figure 8. The relatively low computational load allows this method to be executed over high-resolution images. Nonetheless, the boundary smoothing is still deficient in most of the study cases tested; this opens the opportunity to improve the present results by working on boundary processing. The present work offers an alternative tool that can be coupled with any shadow detection algorithm to complete an automatic or semi-automatic end-to-end process.

5. Conclusions

In the proposed work, a methodology for cast shadow and self-shadow removal was presented. The proposed approach offers a tool based on color transfer for color correction of shadowed regions in urban aerial scenes captured with a UAV. The presented work was tested on different urban scenes containing roads, concrete sidewalks, and green areas, where the scenes presented different percentages of shadow coverage and different darkness levels, and texture features were also considered. The qualitative analysis demonstrated the advantages that this work shows over the other tested methods: although shadow boundaries remain visible, the color consistency and texture preservation provided visually accurate results; this is mainly noticeable in vegetation, road, and sidewalk textures, which were successfully conserved. Likewise, according to the SSDI results, the proposed method provided the best average result among the tested methods and the lowest value in most of the study cases in this work, which supports its accuracy. The qualitative and quantitative results validate this work as a valuable and affordable tool for shadow removal tasks in aerial urban areas. Additionally, this methodology is helpful as a preprocessing step for remote sensing, pattern recognition, and image segmentation tasks. Further work on this topic will focus on enhancing the quality of shadow removal results in terms of shadow boundaries, since this still involves a challenging task. Likewise, in future works, photogrammetric processing of the corrected images will be executed for specific applications.

Author Contributions

Conceptualization, G.A.-R., R.A.O.-R. and L.A.M.-H.; methodology, G.A.-R.; software, G.A.-R., M.A.G.-R. and F.J.S.-M.; validation, G.A.-R.; formal analysis, R.A.O.-R.; investigation, G.A.-R.; resources, G.A.-R. and L.A.M.-H.; data curation, G.A.-R.; writing—original draft preparation, G.A.-R.; writing—review and editing, R.A.O.-R., L.A.M.-H. and F.J.S.-M.; visualization, G.A.-R. and F.J.S.-M.; supervision, L.A.M.-H. and R.A.O.-R.; project administration, L.A.M.-H.; funding acquisition, G.A.-R., R.A.O.-R., L.A.M.-H. and M.A.G.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Mexican National Council for Science and Technology (CONACYT) through Alvarado-Robles’ Ph.D., grant number 666566/487077.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Qiao, X.; Yuan, D.; Li, H. Urban Shadow Detection and Classification Using Hyperspectral Image. J. Indian Soc. Remote Sens. 2017, 45, 945–952. [Google Scholar] [CrossRef]
  2. Wu, Z.; He, L.; Hu, Z.; Zhang, Y.; Wu, G. Hierarchical Segmentation Evaluation of Region-Based Image Hierarchy. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2718–2727. [Google Scholar] [CrossRef]
  3. Wu, B.; Liang, A.; Zhang, H.; Zhu, T.; Zou, Z.; Yang, D.; Tang, W.; Li, J.; Su, J. Application of conventional UAV-based high-throughput object detection to the early diagnosis of pine wilt disease by deep learning. For. Ecol. Manag. 2021, 486, 118986. [Google Scholar] [CrossRef]
  4. Zhang, H.; Sun, M.; Li, Q.; Liu, L.; Liu, M.; Ji, Y. An empirical study of multi-scale object detection in high resolution UAV images. Neurocomputing 2021, 421, 173–182. [Google Scholar] [CrossRef]
  5. Tian, G.; Liu, J.; Yang, W. A dual neural network for object detection in UAV images. Neurocomputing 2021, 443, 292–301. [Google Scholar] [CrossRef]
  6. Al-Naji, A.; Fakhri, A.B.; Gharghan, S.K.; Chahl, J. Soil color analysis based on a RGB camera and an artificial neural network towards smart irrigation: A pilot study. Heliyon 2021, 7, e06078. [Google Scholar] [CrossRef]
  7. Radoglou-Grammatikis, P.; Sarigiannidis, P.; Lagkas, T.; Moscholios, I. A compilation of UAV applications for precision agriculture. Comput. Netw. 2020, 172, 107148. [Google Scholar] [CrossRef]
  8. Hamuda, E.; Mc Ginley, B.; Glavin, M.; Jones, E. Automatic crop detection under field conditions using the HSV colour space and morphological operations. Comput. Electron. Agric. 2017, 133, 97–107. [Google Scholar] [CrossRef]
  9. Isibue, E.W.; Pingel, T.J. Unmanned aerial vehicle based measurement of urban forests. Urban For. Urban Green. 2020, 48, 126574. [Google Scholar] [CrossRef]
  10. Lyu, Y.; Vosselman, G.; Xia, G.S.; Yilmaz, A.; Yang, M.Y. UAVid: A semantic segmentation dataset for UAV imagery. ISPRS J. Photogramm. Remote Sens. 2020, 165, 108–119. [Google Scholar] [CrossRef]
  11. Shao, H.; Song, P.; Mu, B.; Tian, G.; Chen, Q.; He, R.; Kim, G. Assessing city-scale green roof development potential using Unmanned Aerial Vehicle (UAV) imagery. Urban For. Urban Green. 2021, 57, 126954. [Google Scholar] [CrossRef]
  12. Ghandour, A.J.; Jezzini, A.A. Building shadow detection based on multi-thresholding segmentation. Signal Image Video Process. 2018, 13, 349–357. [Google Scholar] [CrossRef]
  13. Mo, N.; Zhu, R.; Yan, L.; Zhao, Z. Deshadowing of Urban Airborne Imagery Based on Object-Oriented Automatic Shadow Detection and Regional Matching Compensation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 585–605. [Google Scholar] [CrossRef]
  14. Silva, G.F.; Carneiro, G.B.; Doth, R.; Amaral, L.A.; Azevedo, D.F. Near real-time shadow detection and removal in aerial motion imagery application. ISPRS J. Photogramm. Remote Sens. 2018, 140, 104–121. [Google Scholar] [CrossRef]
  15. Zhang, L.; Zhang, Q.; Xiao, C. Shadow Remover: Image Shadow Removal Based on Illumination Recovering Optimization. IEEE Trans. Image Process. 2015, 24, 4623–4636. [Google Scholar] [CrossRef] [PubMed]
  16. Cun, X.; Pun, C.M.; Shi, C. Towards Ghost-Free Shadow Removal via Dual Hierarchical Aggregation Network and Shadow Matting GAN. Proc. AAAI Conf. Artif. Intell. 2020, 34, 10680–10687. [Google Scholar] [CrossRef]
  17. Gong, H.; Cosker, D. User-assisted image shadow removal. Image Vis. Comput. 2017, 62, 19–27. [Google Scholar] [CrossRef] [Green Version]
  18. Chen, Q.; Zhang, G.; Yang, X.; Li, S.; Li, Y.; Wang, H.H. Single image shadow detection and removal based on feature fusion and multiple dictionary learning. Multimed. Tools Appl. 2018, 77, 18601–18624. [Google Scholar] [CrossRef]
  19. Varghese, A.; Sreelekha, G. Sample-based integrated background subtraction and shadow detection. IPSJ Trans. Comput. Vis. Appl. 2017, 9, 1–12. [Google Scholar] [CrossRef]
  20. Zheng, L.; Ruan, X.; Chen, Y.; Huang, M. Shadow removal for pedestrian detection and tracking in indoor environments. Multimed. Tools Appl. 2017, 76, 18321–18337. [Google Scholar] [CrossRef]
  21. Khare, M.; Srivastava, R.K.; Jeon, M. Shadow detection and removal for moving objects using Daubechies complex wavelet transform. Multimed. Tools Appl. 2018, 77, 2391–2421. [Google Scholar] [CrossRef]
  22. Zigh, E.; Kouninef, B.; Kadiri, M. Removing Shadows Using RGB Color Space in Pairs of Optical Satellite Images. J. Indian Soc. Remote Sens. 2017, 45, 431–441. [Google Scholar] [CrossRef]
  23. Elbakary, M.I.; Iftekharuddin, K.M. Shadow detection of man-made buildings in high-resolution panchromatic satellite images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5374–5386. [Google Scholar] [CrossRef]
  24. Anoopa, S.; Dhanya, V.; Kizhakkethottam, J.J. Shadow Detection and Removal Using Tri-Class Based Thresholding and Shadow Matting Technique. Procedia Technol. 2016, 24, 1358–1365. [Google Scholar] [CrossRef] [Green Version]
  25. Luo, S.; Shen, H.; Li, H.; Chen, Y. Shadow removal based on separated illumination correction for urban aerial remote sensing images. Signal Process. 2019, 165, 197–208. [Google Scholar] [CrossRef]
  26. He, K.; Zhen, R.; Yan, J.; Ge, Y. Single-Image Shadow Removal Using 3D Intensity Surface Modeling. IEEE Trans. Image Process. 2017, 26, 6046–6060. [Google Scholar] [CrossRef] [PubMed]
  27. Shedlovska, Y.I.; Hnatushenko, V.V. Shadow detection and removal using a shadow formation model. In Proceedings of the 2016 IEEE 1st International Conference on Data Stream Mining and Processing, DSMP 2016, Lviv, Ukraine, 23–27 August 2016; pp. 187–190. [Google Scholar] [CrossRef]
  28. Land, E.H.; McCann, J.J. Lightness and Retinex Theory. J. Opt. Soc. Am. 1971, 61, 1–11. [Google Scholar] [CrossRef]
  29. Guo, R.; Dai, Q.; Hoiem, D. Single-image shadow detection and removal using paired regions. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2033–2040. [Google Scholar] [CrossRef]
  30. Amin, B.; Mohsin Riaz, M.; Ghafoor, A. Automatic shadow detection and removal using image matting. Signal Process. 2020, 170, 107415. [Google Scholar] [CrossRef]
  31. Reinhard, E.; Ashikhmin, M.; Gooch, B.; Shirley, P. Color transfer between images. IEEE Comput. Graph. Appl. 2001, 21, 34–41. [Google Scholar] [CrossRef]
  32. Lorenzi, L.; Melgani, F.; Mercier, G.; Bazi, Y. Assessing the reconstructability of shadow areas in VHR images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2863–2873. [Google Scholar] [CrossRef]
  33. Murali, S.; Govindan, V.K. Shadow detection and removal from a single image: Using LAB color space. Cybern. Inf. Technol. 2013, 13, 95–103. [Google Scholar] [CrossRef] [Green Version]
  34. Khan, S.H.; Bennamoun, M.; Sohel, F.; Togneri, R. Automatic Shadow Detection and Removal from a Single Image. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 431–446. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Inoue, N.; Yamasaki, T. Learning from Synthetic Shadows for Shadow Detection and Removal. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 4187–4197. [Google Scholar] [CrossRef]
  36. Tang, J.; Luo, Q.; Guo, F.; Wu, Z.; Xiao, X.; Gao, Y. SDRNet: An end-to-end shadow detection and removal network. Signal Process. Image Commun. 2020, 84, 115832. [Google Scholar] [CrossRef]
  37. Ding, B.; Long, C.; Zhang, L.; Xiao, C. ARGAN: Attentive recurrent generative adversarial network for shadow detection and removal. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 10212–10221. [Google Scholar] [CrossRef] [Green Version]
  38. Fan, X.; Wu, W.; Zhang, L.; Yan, Q.; Fu, G.; Chen, Z.; Long, C.; Xiao, C. Shading-aware shadow detection and removal from a single image. Vis. Comput. 2020, 36, 2175–2188. [Google Scholar] [CrossRef]
  39. Hu, X.; Fu, C.W.; Zhu, L.; Qin, J.; Heng, P.A. Direction-aware spatial context features for shadow detection and removal. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2795–2808. [Google Scholar] [CrossRef] [Green Version]
  40. Gu, L.; Robles-Kelly, A. Shadow modelling based upon Rayleigh scattering and Mie theory. Pattern Recognit. Lett. 2014, 43, 89–97. [Google Scholar] [CrossRef] [Green Version]
  41. Huerta, I.; Holte, M.B.; Moeslund, T.B.; Gonzàlez, J. Chromatic shadow detection and tracking for moving foreground segmentation. Image Vis. Comput. 2015, 41, 42–53. [Google Scholar] [CrossRef] [Green Version]
  42. Alvarado-Robles, G.; Osornio-Ríos, R.A.; Solís-Muñoz, F.J.; Morales-Hernández, L.A. An Approach for Shadow Detection in Aerial Images Based on Multi-Channel Statistics. IEEE Access 2021, 9, 34240–34250. [Google Scholar] [CrossRef]
  43. Vicente, T.F.; Hoai, M.; Samaras, D. Leave-One-Out Kernel Optimization for Shadow Detection and Removal. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 682–695. [Google Scholar] [CrossRef]
  44. Yoon, H.J.; Kim, K.J.; Chun, J.C. Shadow detection and removal from photo-realistic synthetic urban image using deep learning. Comput. Mater. Contin. 2020, 62, 459–472. [Google Scholar] [CrossRef]
Figure 1. Proposed methodology.
Figure 2. Image capturing scheme.
Figure 3. Input image (a) and the proposed color transfer result (b). Red rectangles focus on concrete areas and blue rectangles on vegetation or grass.
Figure 4. Color transfer result (a); color tuning result (b). Red rectangles focus on concrete areas and blue rectangles on vegetation or grass.
Figure 5. Test images, urban scenes covered by about 40–60% of shadows: study cases 1 (a), 2 (b), and 3 (c). The second group of urban scenes covered by about 20–25% of shadows: study cases 4 (d), 5 (e), and 6 (f).
Figure 6. Illumination correction results for study cases 1 (a), 2 (b), and 3 (c); SMGAN method results for study cases 1 (d), 2 (e), and 3 (f); color transfer results for study cases 1 (g), 2 (h), and 3 (i); SynShadow method results for study cases 1 (j), 2 (k), and 3 (l); the proposed method results for study cases 1 (m), 2 (n), and 3 (o).
Figure 7. Illumination correction results for study cases 4 (a), 5 (b), and 6 (c); SMGAN method results for study cases 4 (d), 5 (e), and 6 (f); color transfer results for study cases 4 (g), 5 (h), and 6 (i); SynShadow method results for study cases 4 (j), 5 (k), and 6 (l); the proposed method results for study cases 4 (m), 5 (n), and 6 (o).
Figure 8. Input image for study case 1 (a), close view of the grass area for illumination correction (b), SMGAN method (c), color transfer method (d), SynShadow method (e), and the proposed work (f). A close view of asphalt color-corrected region for illumination correction (g), SMGAN method (h), color transfer method (i), SynShadow method (j), and the proposed work (k).
Table 1. Comparison of SSDI results.
Test   Proposed Work   Illumination Correction [25]   Color Transfer [33]   SMGAN [16]   SynShadow [35]
1      17.239          36.963                         28.198                33.780       19.610
2      20.953          22.396                         20.579                31.980       17.155
3      11.895          21.160                         15.798                17.139       12.26
4      13.145          19.458                         22.242                13.971       13.81
5      14.779          20.483                         20.809                13.212       16.300
6      12.900          23.484                         21.866                18.852       13.288
AVG    15.152          21.991                         21.582                21.489       15.404
