Article

Highlight Removal Emphasizing Detail Restoration

School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan 430205, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(6), 2469; https://doi.org/10.3390/app14062469
Submission received: 1 February 2024 / Revised: 7 March 2024 / Accepted: 11 March 2024 / Published: 14 March 2024

Abstract

Existing highlight removal methods have paid relatively little attention to highlights on metal surfaces. This paper therefore proposes a new, simple, and effective method for removing highlights from metal surfaces that better restores image details; the approach is also highly effective in everyday real-world highlight scenarios. Specifically, we first separate the image’s illumination component based on the Retinex model and generate a highlight mask using a mean-plus-standard-deviation threshold. Then, guided by the mask, we operate on the V channel of the HSV representations of the original image and of the masked region, achieving effective elimination of the highlights. To enhance the details of the restored image, this paper introduces an adaptive Laplacian sharpening operator combined with gradient fusion for detail enhancement at the highlight removal positions, finally obtaining a highlight-free image with well-preserved details. In the experimental phase, we validate the proposed method on a real weld seam highlight dataset and a real-world highlight dataset; compared with existing methods, it achieves high-quality results in both qualitative and quantitative evaluations.

1. Introduction

Most metal images exhibit noticeable highlights [1]. Highlights not only cause significant visual disturbance but also, owing to the discontinuity between highlight and non-highlight pixel areas, greatly reduce the image’s detail information. Welding, a joining method used widely in industry, is a case in point: if the quality of a weld seam is inadequate, the resulting structure may be unstable. Weld seam inspection can effectively reduce potential risks and ensure the safety of the connected parts, and the timely identification and repair of weld seam defects can prevent product failures during use, reducing the costs of later maintenance and replacement. Effective detection methods allow quality issues to be eliminated before product delivery, enhancing production efficiency and reducing repair costs. In automated welding processes, timely and accurate detection quickly eliminates non-conforming products, ensuring the continuity and stability of the production process. Highlight removal is therefore crucial for the accurate detection of weld seams, and researchers have studied highlight removal extensively [2,3].
Existing highlight removal algorithms fall into two main categories: those based on multiple images and those based on a single image. Multi-image methods exploit perspective transformations [4,5] and variations in light source position [6,7]. Single-image methods encompass color segmentation [8], pixel-space approaches [9,10], color-space approaches [11,12], and inpainting-model-based approaches [13], which use deep learning.
However, most of the above methods overlook specular pixels and operate under the assumption of perfectly diffuse reflection. They are consequently limited to experimental scenarios with a single background, overly restrictive scene conditions, and generally low ambient light. In the real world, most object surfaces exhibit both diffuse and specular reflection, so applying these algorithms to real-world images often yields unsatisfactory results [7,8]. Existing methods typically target everyday scenes [14], metal surfaces [15], or special scenarios [16]. A method that can be applied to real-world scenes while also handling certain special scenarios is therefore particularly important.
Additionally, the restoration of image details involves recovering texture, edges, and subtle variations, which define the characteristics of the captured scene. This enhancement not only improves the aesthetics of the image but also enriches its interpretability and practicality. The importance of highlight removal and simultaneous enhancement of image details cannot be overlooked. It can reveal hidden information, enrich visual perception, and provide more accurate and meaningful image data for various fields [17]. Therefore, while eliminating highlights from real-world images, restoring and preserving their detail information is particularly important.
Khan et al.’s survey of previous approaches [18] emphasizes the importance of identifying highlight regions: detecting highlight positions directly influences the final result of highlight removal. Through a systematic analysis of existing highlight removal algorithms, this paper focuses on detecting and removing highlights in individual images while preserving their detail information, aiming to improve both the accuracy of highlight removal and the fidelity of the final results. A highlight removal method suitable for various real-world scenarios is therefore proposed. In summary, the main contributions of this paper are as follows:
This paper introduces a widely applicable highlight removal method that effectively eliminates highlights, enhances image visibility, and improves image details.
Due to the significant loss of image details after highlight removal, this paper proposes a method using adaptive Laplacian sharpening operators and gradient fusion to enhance the details of the image after highlight removal.
Finally, this paper validated the proposed method using real metal welding seam highlight datasets and everyday highlight image datasets. Compared to traditional methods, the approach in this paper more effectively removes highlights and better showcases the restored details of the welding seam. In the final comparative experiments, this method achieved superior results.

2. Related Works

In the context of color-space-based highlight removal, Yang [8] proposed a two-dimensional Ch-CV space that effectively reflects the correspondence between the diffuse and specular reflection components; the method segments highlight regions and separates reflections for each segmented area. While color segmentation is a viable strategy for removing highlights from a single image, the algorithm exhibits limited robustness for images with complex textures and often requires manual detection of specular regions. Yang [11] proposed an H-S-based color space, separating the reflection components by adjusting the saturation of specular pixels to that of purely diffuse pixels with the same chromaticity. Ramos [12], building on the dichromatic reflection model, transformed the color image to the YCbCr color space and applied histogram matching to eliminate highlights.
In the context of highlight removal based on pixel space, Shen [9] and colleagues improved the SF image to obtain the MSF (modified specular-free) image. They then determined the specular and diffuse reflection components from the difference between this image and the original one; through iteration, the primary color and chromaticity differences were established, and highlights were finally removed using the least squares method. The following year, Shen [10] and colleagues, building on this work, performed pixel-level reflection adjustments for highlight restoration, using the smooth color transition of the region surrounding the highlight as a reference. Yang [19] and colleagues estimated the maximum diffuse chromaticity of the image based on the SF image principle and used it as the guidance value of a bilateral filter applied to the maximum chromaticity image, thereby separating the reflection component. Akashi [20] introduced a framework combining non-negative matrix factorization (NNMF) and sparsity constraints to simultaneously estimate the dominant color and separate the diffuse reflection component. Yamamoto [3] and colleagues applied high-intensity filters to the specular and diffuse reflection components to detect pixels with separation errors, replacing the erroneous separation results with those of reference pixels to achieve a secondary optimization of the resulting image. Zhang [21] used orthogonal decomposition to remove highlights and, based on clustering results, separated diffuse from specular reflection. Liu [22] leveraged the global chromaticity characteristics of reflection-separated pixels to obtain oversaturated images without highlights, then restored saturation based on the respective characteristics of the two reflection components, thereby eliminating highlights. Zhao [23] and colleagues, considering the structural similarity between the specular component and the highlight image and the local correlation of chromaticity in the diffuse component, achieved highlight removal through joint chromaticity compensation and local structure. Souza [24] and colleagues, building on previous pixel clustering, analyzed the distribution patterns of chromaticity extremes in the chromaticity space to cluster pixels and finally separated the reflection component based on the intensity ratio of each cluster. Xia [25] and colleagues removed highlights with a globally optimized method based on the dichromatic reflection model: the hue and saturation of the highlighted region are corrected to estimate the diffuse chromaticity, and convex optimization with dual regularization estimates the diffuse and specular reflection coefficients.
In the context of highlight removal based on multiple images, Imai [5] and colleagues, based on the dichromatic reflection model and spectral prediction of image data, proposed three multi-light-source highlight detection methods and simultaneously estimated the emission spectrum of each light source. Wang [6] and colleagues captured four images from four specific points to form a dataset and reconstructed images without specular reflection from it. Iwata [7] and colleagues proposed a method for video imaging: by adjusting the capture rate of a moving camera and the flash rate of a flash unit, they generate images with minimum and maximum illuminance, which are then used to determine images without specular reflection. Haghighat [4] and colleagues proposed a rate-distortion (R-D) driven strategy that decomposes images from multiple views into specular and diffuse reflection components; they use two different transformations to distinguish diffuse from specular data and introduced a progressive approach that eliminates specular reflection from the diffuse data in stages. Takechi [2] revealed the low-rank structure of images taken from different viewpoints under various light source directions and expressed the separation of diffuse and specular reflection as a low-rank approximation of a third-order tensor to achieve highlight removal. Jiao [16] and colleagues first captured document images from multiple viewpoints, merged and analyzed them to detect highlight regions, and finally used a patching algorithm to repair the highlight areas; this approach, however, is only suitable for document images. Huo [26] and colleagues, through exposure correction, generated multiple low-dynamic-range (LDR) images with different exposures, identified highlight regions with a highlight detection algorithm, and synthesized a high-dynamic-range (HDR) image from the differently exposed LDR images. Shah [27] and colleagues exploited the continuity of image frames in multi-viewpoint sequences, replacing pixels at highlight positions with pixels from adjacent frames to achieve highlight removal.

3. Proposed Method

The flowchart of the proposed method is shown in Figure 1 and consists mainly of the following steps. First, the Retinex model is applied to the input image, separating it into an illumination component and a reflection component. Next, the illumination image is thresholded at the sum of the mean and standard deviation of its pixels, isolating the highlight positions of the current image and generating the corresponding mask. Then, based on this mask, the specular reflection in the masked area is separated. The advantage of this approach is that only the highlight region is processed, avoiding the color distortion caused by global processing. Because noticeable processing artifacts appear between the highlight and non-highlight regions after highlight removal, this paper introduces a compensation function to reduce the post-processing discontinuity. Finally, since details at the original highlight positions are largely lost after removal, an adaptive Laplacian operator combined with gradient fusion enhances the image, yielding the final highlight-free output.
This paper utilizes the illumination component separated by the Retinex model and isolates the highlight mask with the mean-and-standard-deviation method. Building on the obtained mask, specular reflection is separated. The advantage of this approach lies in improving on the traditional practice of globally processing highlight images, thus avoiding color bias after separation. Additionally, to enhance detail after highlight removal, this paper introduces gradient fusion and a local Laplacian enhancement algorithm. The processing logic of Figure 1 can be expressed by the following formula:
V = I - S(G(R(I))) + L(I)    (1)
In the above formula, $V$ is the final output image, $I$ is the input highlight image, $S(\cdot)$ is the specular reflection component extraction function, $G(\cdot)$ is the smoothing function used mainly to smooth the transition at the edges of the processed mask region, $R(\cdot)$ is the Retinex-based highlight extraction function proposed in this paper, and $L(\cdot)$ is the gradient (detail) information of the image.

3.1. Highlight Removal

The accurate detection of highlight positions is crucial for highlight removal and directly affects the image restoration results and the efficiency of highlight removal algorithms. This paper is inspired by the Retinex model [28], which suggests that a digital image can be represented as the pixel-wise product of illumination and reflection components. In this context, the illumination component reflects ambient light information, while the reflection component reflects the detail information of the image. The relationship between them can be expressed as:
I_c(x, y) = R_c(x, y) \cdot L_c(x, y)    (2)
where $I_c(x, y)$ is the pixel at position $(x, y)$ in the image, $c \in \{R, G, B\}$ is the color channel, and $R_c(x, y)$ and $L_c(x, y)$ are the reflection and illumination components at that pixel. Based on this theory, this paper separates the highlight image into reflection and illumination components and localizes highlights on the illumination component, as shown below.
R_m = \frac{1}{n}\sum_{i=1}^{n} L_i + \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left(L_i - \bar{L}\right)^2}    (3)
In the above equation, $R_m$ is the extracted highlight threshold, which defines the highlight mask. Here, $n = W \times H$ is the total number of pixels, with $W$ the width and $H$ the height of the image; $L_i$ is the $i$-th pixel value of the illumination component and $\bar{L}$ is its mean. Adding the mean and the standard deviation yields the mask of the highlight region.
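As an illustration of this masking step, the sketch below follows the same mean-plus-standard-deviation rule in Python with OpenCV. The paper does not specify how the illumination component is estimated, so a large-kernel Gaussian blur of the intensity channel, a common single-scale Retinex surrogate, is assumed here; the function name and the sigma value are ours.

```python
import cv2
import numpy as np

def highlight_mask(img_bgr, sigma=15):
    """Threshold a Retinex-style illumination estimate at mean + std
    (formula (3)) to obtain a binary highlight mask."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    # Illumination estimate L: heavy Gaussian smoothing of the intensity.
    L = cv2.GaussianBlur(gray, (0, 0), sigma)
    thresh = L.mean() + L.std()           # R_m: mean plus standard deviation
    return (L > thresh).astype(np.uint8)  # 1 marks highlight pixels
```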
Subsequently, within the mask region extracted by our approach, specular reflection is separated using the dichromatic reflection model. In contrast to traditional methods, this paper processes only the region identified by highlight localization rather than the entire image, which mitigates the color bias issues of previous methods. The specific formulation is as follows:
I(x) = D(x) + S(x) = m_d(x)\,\Lambda(x) + m_s(x)\,\Gamma(x)    (4)
The above formula is the dichromatic reflection model, where $x = (x, y)$ denotes the image’s horizontal and vertical coordinates and $I(x) = \{I_r(x), I_g(x), I_b(x)\}$ is the three-channel pixel intensity of a color image. $D(x)$ and $S(x)$ are the diffuse and specular reflection components, respectively, $\Lambda(x)$ is the diffuse reflection chromaticity, $\Gamma(x)$ is the specular reflection chromaticity, and $m_d(x)$ and $m_s(x)$ are the coefficients of the two components.
Based on the inference in reference [12] and on formula (4), the following formula can be derived, where $\hat{I}(x)$ is the approximate specular-free image separated in the RGB space, $I_{\min}(x)$ is the per-pixel minimum over the RGB channels, and $\overline{I_{\min}}(x)$ is the mean of those minimum values.
\hat{I}(x) = I(x) - I_{\min}(x) + \overline{I_{\min}}(x)    (5)
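A minimal sketch of formula (5), assuming an 8-bit color input (BGR channel order, as loaded by OpenCV); the helper name is ours:

```python
import numpy as np

def specular_free_approx(img_bgr):
    """I_hat = I - I_min + mean(I_min), with I_min the per-pixel minimum
    over the color channels (formula (5))."""
    I = img_bgr.astype(np.float32)
    I_min = I.min(axis=2, keepdims=True)   # per-pixel channel minimum
    I_hat = I - I_min + I_min.mean()       # add back the global mean
    return np.clip(I_hat, 0, 255).astype(np.uint8)
```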
However, metal surfaces are mostly smooth and lack proper diffuse reflection chromaticity [15], so the highlight removal process in this paper proceeds as follows. First, based on the separated highlight mask, the following operation is performed, where $\gamma$ denotes conversion of the image to the HSV space, $I(x)$ is the original RGB image, $I_{\min}(x)$ is the per-pixel minimum over the RGB channels, $R_m$ is the highlight mask separated by formula (3), and $\odot$ indicates that the operation is applied only within the masked area. $I_v(x)$ is the V channel of the original image in the HSV color space, and the minimum pixel value is finally taken:
I_v(x) = \min\left\{\gamma\left(\hat{I}_v(x)\right),\; R_m \odot I_v(x)\right\}    (6)
Finally, highlights are removed as follows, where $\gamma^{-1}$ denotes conversion from the HSV space back to the RGB space; adding the threshold $R_m$ computed above makes the diffuse/specular decision more accurate.
S = \begin{cases} \text{diffuse}, & I(x) - \gamma^{-1}(H, S, I_v) < R_m \\ \text{specular}, & I_{\min}(x) > R_m \end{cases}    (7)
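Because the extracted formulas are partly ambiguous, the sketch below should be read as one plausible interpretation of the V-channel operation in formula (6), not as the authors’ exact procedure: inside the mask, the V channel is replaced by the pixel-wise minimum of the original V channel and that of the specular-free approximation.

```python
import cv2
import numpy as np

def suppress_highlight_v(img_bgr, mask, i_hat_bgr):
    """Within the highlight mask, take the minimum of the original V channel
    and the specular-free image's V channel (one reading of formula (6))."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hsv_hat = cv2.cvtColor(i_hat_bgr, cv2.COLOR_BGR2HSV)
    v, v_hat = hsv[..., 2], hsv_hat[..., 2]
    hsv[..., 2] = np.where(mask.astype(bool), np.minimum(v, v_hat), v)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)  # gamma^{-1}: back to RGB
```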
However, after processing with our method, a sense of disconnection was observed at the edges of the mask. We therefore introduce a compensation factor applied to the mask, reducing the processing traces at the restored locations. The formula is as follows:
G(x, y) = \frac{1}{2\pi} \sum_{i,j} I(x+i,\, y+j)\, \exp\left(-\frac{i^2 + j^2}{2}\right)    (8)
In the above formula, $x$ and $y$ are the coordinates of the corresponding pixel, and $i$ and $j$ range over the convolution kernel. A 5 × 5 kernel achieved the best restoration effect in this paper. The repair effect of highlight removal with this compensation coefficient is shown in Figure 2.
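One way to realize this compensation, assuming it amounts to feathering the binary mask with the 5 × 5 Gaussian of formula (8) and alpha-blending the repaired and original images; the function and parameter names are ours:

```python
import cv2
import numpy as np

def blend_mask_edges(original_bgr, repaired_bgr, mask, ksize=5):
    """Feather the mask with a 5x5 Gaussian and alpha-blend, so the repaired
    region transitions smoothly into its surroundings (no hard seam)."""
    soft = cv2.GaussianBlur(mask.astype(np.float32), (ksize, ksize), 0)
    soft = soft[..., None]                # broadcast over color channels
    out = soft * repaired_bgr.astype(np.float32) \
        + (1.0 - soft) * original_bgr.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```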
Because the compensated pixels lie mainly at the edges of the highlights, some images may not show significant improvement. For comparison, columns (c) and (d) in Figure 2 were selected, and the results are shown in Figure 3. The horizontal axis lists the input images of Figure 2a, labeled A through E, and the vertical axis gives the PSNR (peak signal-to-noise ratio) of each image. The comparison shows that the PSNR values improve significantly once the compensation function is added.

3.2. Detail Restoration

After repairing the highlight regions with the proposed method, those areas often suffered a severe loss of detail, which significantly affected weld seam detection. This paper therefore introduces global and fine detail restoration. First, in the HSV color space, the V channel of the repaired image is decomposed with guided filtering [29] into a structure layer and a texture layer ($V = V_s + V_t$; a code sketch of this decomposition follows formula (11) below). Then, guided by the texture layer $V_t$, weight parameters are applied to the structure layer $V_s$ for global detail enhancement. The paper improves the global Laplacian image enhancement operator, refining image sharpness to enhance overall detail. The formula is as follows:
\frac{\partial^2 V_s(x,y)}{\partial x^2} = \left[\gamma\left(V_s(x+1,y) - 2V_s(x,y) + V_s(x-1,y)\right)\right]^{\Gamma}
\frac{\partial^2 V_s(x,y)}{\partial y^2} = \left[\gamma\left(V_s(x,y+1) - 2V_s(x,y) + V_s(x,y-1)\right)\right]^{\Gamma}    (9)
In the above formula, $V_s(x, y)$ is the structure-layer value at the corresponding pixel. $\gamma$ and $\Gamma$ are two adjustable parameters with default values of 0.1 and 0.75: $\gamma$ controls sharpness, and $\Gamma$ restores the brightness of edges. In the final Laplacian operator, this paper introduces a weight $\Lambda$ to control visual sharpness (this parameter requires $\Lambda > 1$); the higher the value of $\Lambda$, the sharper the result. The final formula is as follows:
\nabla^2 V_s(x,y) = \Lambda\left[\frac{\partial^2 V_s(x,y)}{\partial x^2} + \frac{\partial^2 V_s(x,y)}{\partial y^2}\right]    (10)
To adapt to images of different scenes, this paper proposes an adaptive weight $\Lambda$ to meet the sharpening needs of various scenes. The formula is as follows, where $V_t(x, y)$ is the texture-layer value at the corresponding pixel:
\Lambda = \frac{20}{1 + \exp\left(-0.1 \cdot V_t(x, y)\right)}    (11)
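The sketch below covers both the decomposition of the V channel (via OpenCV’s guided filter, available in the opencv-contrib package) and the adaptive sharpening of formulas (9)-(11). The $\Gamma$ exponent of formula (9) is omitted because its handling of negative differences is not specified, and the usual subtract-the-Laplacian sharpening convention is assumed; the radius and eps values are illustrative, not the paper’s.

```python
import cv2
import numpy as np

def structure_texture_split(v_channel, radius=8, eps=0.02):
    """V = V_s + V_t: structure layer from a guided filter (the channel is
    its own guide), texture layer as the residual. Needs opencv-contrib."""
    v = v_channel.astype(np.float32) / 255.0
    v_s = cv2.ximgproc.guidedFilter(v, v, radius, eps)
    return v_s, v - v_s

def adaptive_laplacian_sharpen(v_s, v_t, gamma=0.1):
    """Second-difference Laplacian of V_s scaled by gamma (formula (9),
    without the Gamma exponent) and the adaptive weight Lambda (formula (11))."""
    d2x = np.zeros_like(v_s)
    d2y = np.zeros_like(v_s)
    # Discrete second differences along the two image axes (interior pixels).
    d2x[1:-1, :] = gamma * (v_s[2:, :] - 2.0 * v_s[1:-1, :] + v_s[:-2, :])
    d2y[:, 1:-1] = gamma * (v_s[:, 2:] - 2.0 * v_s[:, 1:-1] + v_s[:, :-2])
    lam = 20.0 / (1.0 + np.exp(-0.1 * v_t))  # Lambda, formula (11)
    # Subtracting the weighted Laplacian sharpens edges (unsharp-style).
    return v_s - lam * (d2x + d2y)
```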
After the global optimization, details were still missing in the highlight removal area, so this paper uses maximum gradient fusion [30] to restore the fine details of the image. The horizontal gradient $G_x$ is obtained through convolution, $G_x = S_x * V$, where $*$ denotes convolution, $V$ is the target image, and $S_x$ is the Sobel operator. Similarly, the vertical gradient is computed with the transpose of $S_x$: $G_y = S_y * V$, where $S_y = S_x^{T}$. With the horizontal and vertical gradient magnitudes obtained, the image gradient is defined as:
T(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}    (12)
where $G_x(x, y)$ and $G_y(x, y)$ are the values at pixel $(x, y)$ of the horizontal and vertical gradient maps, respectively. With the above equations, the gradient information of the original image and of the image detected and repaired in the previous section can be obtained separately. The visibility of image details is closely related to gradient magnitude, so this paper extracts the maximum gradient value $G_{\max}(x, y)$ from the images. Since structural similarity evaluates the perceptual similarity between two images, the similarity between the maximum gradient map $G_{\max}$ and the gradient map $G_{vs}$ of the globally enhanced image is computed as follows:
S(G_{\max}, G_{vs}) = \frac{2\mu_{G_{\max}}\mu_{G_{vs}} + c_1}{\mu_{G_{\max}}^2 + \mu_{G_{vs}}^2 + c_1} \cdot \frac{2\sigma_{G_{\max}G_{vs}} + c_2}{\sigma_{G_{\max}}^2 + \sigma_{G_{vs}}^2 + c_2}    (13)
In the above equation, $\mu_{G_{\max}}$ and $\mu_{G_{vs}}$ are the local means, $\sigma_{G_{\max}}$ and $\sigma_{G_{vs}}$ the local standard deviations, and $\sigma_{G_{\max}G_{vs}}$ the covariance of $G_{\max}$ and $G_{vs}$; $c_1$ and $c_2$ are stabilizing constants. The gradient quality $L$ is then obtained by averaging the quality map, where $H$ and $W$ are the height and width of the image:
L = \frac{1}{H \times W} \sum_{x,y} S\left(G_{\max}(x, y), G_{vs}(x, y)\right)    (14)
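Formulas (12)-(14) translate directly into code; the sketch below assumes float32 gradient maps, and the window size and stabilizing constants are illustrative:

```python
import cv2
import numpy as np

def gradient_magnitude(img):
    """T = sqrt(Gx^2 + Gy^2) from Sobel convolutions (formula (12))."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_quality(g_max, g_vs, win=7, c1=1e-4, c2=9e-4):
    """Mean SSIM-style similarity between the maximum-gradient map and the
    enhanced image's gradient map (formulas (13) and (14))."""
    mu1, mu2 = cv2.blur(g_max, (win, win)), cv2.blur(g_vs, (win, win))
    var1 = cv2.blur(g_max * g_max, (win, win)) - mu1 ** 2
    var2 = cv2.blur(g_vs * g_vs, (win, win)) - mu2 ** 2
    cov = cv2.blur(g_max * g_vs, (win, win)) - mu1 * mu2
    s = ((2 * mu1 * mu2 + c1) / (mu1 ** 2 + mu2 ** 2 + c1)) \
      * ((2 * cov + c2) / (var1 + var2 + c2))
    return float(s.mean())

# g_max would be, e.g., np.maximum over the gradient magnitudes of the
# original and repaired images, following the maximum-gradient idea.
```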
After obtaining the gradient quality, the texture is pyramid blended with the globally enhanced image to obtain the final experimental results, as shown in Figure 4.
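The paper does not detail its pyramid blend; a standard Laplacian-pyramid (multi-band) blend under a per-pixel weight map, such as the similarity map of formula (13) before averaging, would look like the sketch below. All three inputs are assumed to share one 2-D shape (e.g., V channels).

```python
import cv2
import numpy as np

def pyramid_blend(img_a, img_b, weight, levels=4):
    """Multi-band blend: build Gaussian pyramids of both images and of the
    weight map, blend the Laplacian bands, then collapse coarse to fine."""
    ga = [img_a.astype(np.float32)]
    gb = [img_b.astype(np.float32)]
    gw = [np.clip(weight, 0, 1).astype(np.float32)]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gw.append(cv2.pyrDown(gw[-1]))
    out = gw[levels] * ga[levels] + (1 - gw[levels]) * gb[levels]
    for i in range(levels - 1, -1, -1):
        size = (ga[i].shape[1], ga[i].shape[0])
        la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)  # Laplacian bands
        lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)
        out = cv2.pyrUp(out, dstsize=size) + gw[i] * la + (1 - gw[i]) * lb
    return np.clip(out, 0, 255).astype(np.uint8)
```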

4. Results

This paper conducted the final experiments on a real weld seam image dataset for validation. The dataset comprises 533 weld seam images contaminated by highlights; in addition, 1000 highlight images from the SHIQ dataset [13] were used. In this section, the universality of the proposed method is validated on these datasets. Figure 5 and Figure 6 present experimental comparisons of the methods of Akashi [20], Fu [31], Shen [10], Shen [14], Yamamoto [3], and Lin [32] with the proposed method. The images in Figure 5 come from the highlight-contaminated weld seams collected for this paper, while those in Figure 6 come from the SHIQ dataset. Table 1 lists the compared methods and their abbreviations.

4.1. Qualitative Evaluation

For the qualitative assessment, the collected highlight images were grouped into four major classes: metal, plastic, glass, and decorative items, sourced from the SHIQ dataset and from the real-world weld seam images collected for this paper. After comparing the images in each category, three reference metrics were used to evaluate the quality of the highlight-removed images: the MSE (mean squared error), which measures the average squared per-pixel difference between two images; the PSNR (peak signal-to-noise ratio), which assesses the ratio of signal to noise; and the SSIM (structural similarity index measure), which compares the brightness, contrast, and structure of the images. Together, these metrics help determine the effectiveness of the highlight removal methods.
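For reference, the three metrics can be computed as follows; the SSIM call uses scikit-image’s implementation, and 8-bit inputs are assumed:

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(a, b):
    """Mean squared error between two equally sized images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def ssim(a, b):
    """Structural similarity; channel_axis=2 handles color images."""
    return float(structural_similarity(a, b, channel_axis=2))
```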
Figure 5 illustrates the comparison of highlight removal effects between this paper’s approach and others on metal surfaces. In Figure 5b, the highlights are not effectively removed, and many artifacts remain. In Figure 5d,f, the image quality significantly degrades after highlight removal. In Figure 5e, although the highlights were removed, there are noticeable artifacts at the transitions of the highlight region, with severe loss of detail. Figure 5g shows a deep learning method; while it effectively removed the highlights, it produced noticeable color discrepancies and certain artifacts in the restoration. Table 2 presents the mean performance metrics of the different methods on these three images. The optimal result is marked in bold.
Figure 6 shows the results of highlight removal on a plastic surface. It can be observed that on the plastic surface, most methods resulted in some degree of color deviation after processing. This was mainly because the existing methods for highlight removal failed to constrain the highlight area, thereby mistakenly identifying the white background as highlights. Additionally, some methods exhibited a pseudo-shadow in the original highlight position after removing the highlights, indicating an incomplete elimination of highlights. These two issues are clearly visible in Figure 6e. The approach proposed in this paper performed better in suppressing pseudo-shadows and reducing color deviations in the highlight areas compared to the existing methods. Table 3 presents the average performance metrics for the images in Figure 6. The optimal result is marked in bold.
Figure 7 compares the results of highlight removal on glass surfaces. The existing methods achieved relatively balanced highlight removal on glass, but some color deviation remained, which is an area for optimization in our future work. In Table 4, the optimal result is marked in bold.
Figure 8 shows the results of highlight removal on the surface of decorations. In environments with rich colors, most methods exhibited unnatural color transitions, as seen in Figure 8e. Additionally, in Figure 8g, there is significant overall color deviation in the restored image. Moreover, many methods did not handle the details of highlight removal well, as shown in Figure 8d,g. In Table 5, the optimal result is marked in bold.

4.2. Quantitative Evaluation

In the quantitative evaluation, to further demonstrate the universality and effectiveness of our approach, this paper evaluated both the SHIQ dataset and the collected weld seam dataset, which together comprised 1533 highlight images.
Table 6 and Table 7 list the performance metrics of our method and the existing methods on these two datasets. Our method shows a significant performance improvement over the existing traditional highlight removal methods. Although it falls slightly short of the deep learning method on the SHIQ dataset, its actual visual results are better; moreover, the deep learning method can only handle images of up to 512 × 512 pixels, and image details are severely lost after its processing. The optimal method is highlighted in bold in Table 6 and Table 7, and the values are means over all processed images.

4.3. Runtime

Runtime is an important criterion for determining whether an algorithm can process images in real time. This paper selected highlight images at several resolutions to measure the runtime of each algorithm. All the methods mentioned in this paper were run in MATLAB (2021b) on the same computer, equipped with an Intel Core i7-11800H CPU (Intel, Santa Clara, CA, USA), an NVIDIA RTX 2060 GPU, and 16 GB of memory.
Table 8 reports the runtimes. Images at three resolutions were used for comparison: 256 × 256, 512 × 512, and 1280 × 720. The table shows the runtime at each resolution and the final average runtime, measured in seconds; * indicates that an image cannot be processed at that resolution.
For the deep learning method, which can only handle images up to 512 × 512 pixels, the time listed at each resolution is the model’s training time, while the final average is the per-image processing time after the model is trained.

4.4. Detail Analysis

Information hidden in image details is often crucial, and clear image details are essential for practical applications such as object detection and tracking. An excellent highlight removal algorithm should therefore be able to enhance image details. This paper compared the detail information of images after highlight processing, as shown in Figure 9, contrasting the algorithms of references [10,32], which performed well in the comparisons above, with the algorithm proposed in this paper.
As shown in Figure 9, it was observed that the highlight removal method proposed in this paper excelled in detail restoration compared to the existing methods. This is particularly evident in the weld seam images. Additionally, in various other environments, the highlight removal approach presented in this paper outperformed the existing methods in terms of detail.
The method is therefore applicable to welding and other industrial production scenarios. Welding plays a vital role in industrial production, and weld quality is key to ensuring structural integrity. Weld inspection can effectively reduce potential risks and ensure the safety of the connection point. The timely identification and repair of welding defects can prevent product failures during use, thereby reducing the cost of later maintenance and replacement. Effective inspection methods allow quality problems to be eliminated before the product leaves the factory, improving production efficiency. In the automated welding process, timely and accurate detection helps to quickly remove nonconforming products and ensures the continuity and stability of the production workflow.

5. Conclusions

This paper proposes a widely applicable highlight removal method that emphasizes detail restoration. The method effectively removes highlights and better restores detail information in areas of the image contaminated by highlights. Compared to existing methods, it better preserves both overall and fine-grained detail, and the repaired images are more suitable for subsequent recognition operations. However, the method still shows some saturation shortcomings in certain images, and residual artifacts in certain image types have not been completely eliminated; these issues need to be considered and improved upon. With the increasing maturity of deep learning algorithms, future highlight removal algorithms should also evolve in the direction of deep learning.

Author Contributions

Conceptualization, S.J. and L.C.; Methodology, S.J. and L.C.; Software, H.Y.; Validation, L.C.; Formal analysis, H.Y.; Resources, X.L.; Writing—original draft, S.J.; Writing—review & editing, S.J. and L.C.; Funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Natural Science Foundation of Hubei Province (No. 2022CFB776) and the Local Standard Project of Hubei Province (TZ022002091).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data can be found here: https://github.com/fu123456/SHIQ (accessed on 1 June 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shafer, S.A. Using color to separate reflection components. Color Res. Appl. 1985, 10, 210–218. [Google Scholar] [CrossRef]
  2. Takechi, K.; Okabe, T. Diffuse-specular separation of multi-view images under varying illumination. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 2632–2636. [Google Scholar]
  3. Yamamoto, T.; Nakazawa, A. General Improvement Method of Specular Component Separation Using High-Emphasis Filter and Similarity Function. ITE Trans. Media Technol. Appl. 2019, 7, 92–102. [Google Scholar] [CrossRef]
  4. Haghighat, M.; Mathew, R.; Taubman, D. Rate-Distortion Driven Decomposition of Multiview Imagery to Diffuse and Specular Components. IEEE Trans. Image Process. 2020, 29, 5469–5480. [Google Scholar] [CrossRef]
  5. Imai, Y.; Kato, Y.; Kadoi, H.; Horiuchi, T.; Tominaga, S. Estimation of Multiple Illuminants Based on Specular Highlight Detection. In Computational Color Imaging; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  6. Wang, C.; Kamata, S.-I. Removal of Transparent Plastic Film Specular Reflection Based on Multi-Light Sources. In Proceedings of the 2012 Symposium on Photonics and Optoelectronics, Shanghai, China, 21–23 May 2012; pp. 1–4. [Google Scholar]
  7. Iwata, S.; Ogata, K.; Sakaino, S.; Tsuji, T. Specular reflection removal with highspeed camera for video imaging. In Proceedings of the IECON 2015—41st Annual Conference of the IEEE Industrial Electronics Society, Yokohama, Japan, 9–12 November 2015. [Google Scholar]
  8. Yang, J.; Cai, Z.; Wen, L.; Lei, Z.; Guo, G.; Li, S.Z. A New Projection Space for Separation of Specular-Diffuse Reflection Components in Color Images. In Computer Vision—ACCV 2012. ACCV 2012; Lee, K.M., Matsushita, Y., Rehg, J.M., Hu, Z., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7727. [Google Scholar]
  9. Shen, H.L.; Zhang, H.G.; Shao, S.J.; Xin, J.H. Chromaticity-based separation of reflection components in a single image. Pattern Recognit. 2008, 41, 2461–2469. [Google Scholar] [CrossRef]
  10. Shen, H.L.; Cai, Q.Y. Simple and efficient method for specularity removal in an image. Appl. Opt. 2009, 48, 2711–2719. [Google Scholar] [CrossRef] [PubMed]
  11. Yang, J.; Liu, L.; Li, S.Z. Separating Specular and Diffuse Reflection Components in the HSI Color Space. In Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 1–8 December 2013. [Google Scholar]
  12. Ramos, V.S.; de Silviera, G.Q., Jr.; Silveira, L.F. Single Image Highlight Removal for Real-Time Image Processing Pipelines. IEEE Access 2019, 8, 3240–3254. [Google Scholar] [CrossRef]
  13. Fu, G.; Zhang, Q.; Zhu, L.; Li, P.; Xiao, C. A Multi-Task Network for Joint Specular Highlight Detection and Removal. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 7748–7757. [Google Scholar]
  14. Shen, H.-L.; Zheng, Z.-H. Real-time highlight removal using intensity ratio. Appl. Opt. 2013, 52, 4483. [Google Scholar] [CrossRef] [PubMed]
  15. Yu, D.; Han, J.; Jin, X.; Han, J. Efficient highlight removal of metal surfaces. Signal Process. 2014, 103, 367–379. [Google Scholar] [CrossRef]
  16. Jiao, J.; Fan, W.; Sun, J.; Satoshi, N. Highlight removal for camera captured documents based on image stitching. In Proceedings of the 2016 IEEE 13th International Conference on Signal Processing (ICSP), Chengdu, China, 6–10 November 2016; pp. 849–853. [Google Scholar]
  17. Dubreuil, M.; Delrot, P.; Leonard, I.; Alfalou, A.; Brosseau, C.; Dogariu, A. Exploring underwater target detection by imaging polarimetry and correlation techniques. Appl. Opt. 2013, 52, 997–1005. [Google Scholar] [CrossRef] [PubMed]
  18. Khan, H.A.; Thomas, J.B.; Hardeberg, J.Y. Analytical Survey of Highlight Detection in Color and Spectral Images. In Computational Color Imaging Workshop; Springer: Cham, Switzerland, 2017. [Google Scholar]
  19. Yang, Q.; Wang, S.; Ahuja, N. Real-Time Specular Highlight Removal Using Bilateral Filtering. In Proceedings of the Computer Vision—ECCV 2010, 11th European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010; Part IV; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  20. Akashi, Y.; Okatani, T. Separation of reflection components by sparse nonnegative matrix factorization. Comput. Vis. Image Underst. 2016, 146, 77–85. [Google Scholar] [CrossRef]
  21. Zhang, Z.; Ren, W.; Lu, Y.; Zhou, S.; Tang, Y.; Tian, J. Highlight Removal with Orthogonal Decomposition. In Proceedings of the 2022 4th International Conference on Data-Driven Optimization of Complex Systems (DOCS), Chengdu, China, 28–30 October 2022; pp. 1–6. [Google Scholar]
  22. Liu, Y.; Yuan, Z.; Zheng, N.; Wu, Y. Saturation-preserving specular reflection separation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3725–3733. [Google Scholar]
  23. Zhao, Y.; Peng, Q.; Xue, J.; Kong, S.G. Specular reflection removal using local structural similarity and chromaticity consistency. In Proceedings of the IEEE International Conference on Image Processing, Quebec City, QC, Canada, 27–30 September 2015. [Google Scholar]
  24. Souza, A.C.; Macedo, M.C.; Nascimento, V.P.; Oliveira, B.S. Real-Time High-Quality Specular Highlight Removal using Efficient Pixel Clustering. In Proceedings of the 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Parana, Brazil, 29 October–1 November 2018. [Google Scholar]
  25. Xia, W.; Chen, E.C.S.; Pautler, S.; Peters, T.M. A Global Optimization Method for Specular Highlight Removal from A Single Image. IEEE Access 2019, 7, 125976–125990. [Google Scholar] [CrossRef]
  26. Huo, Y.; Yang, F.; Li, C. HDR image generation from LDR image with highlight removal. In Proceedings of the 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Turin, Italy, 29 June–3 July 2015; pp. 1–5. [Google Scholar]
  27. Shah, S.M.Z.A.; Marshall, S.; Murray, P. Removal of specular reflections from image sequences using feature correspondences. Mach. Vis. Appl. 2017, 28, 409–420. [Google Scholar] [CrossRef]
  28. Fu, X.; Zhuang, P.; Huang, Y.; Liao, Y.; Zhang, X.P.; Ding, X. A retinex-based enhancing approach for single underwater image. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 4572–4576. [Google Scholar]
  29. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  30. Wang, Q.; Fu, X.; Zhang, X.P.; Ding, X. A fusion-based method for single backlit image enhancement. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 4077–4081. [Google Scholar]
  31. Fu, G.; Zhang, Q.; Song, C.; Lin, Q.; Xiao, C. Specular Highlight Removal for Real-World Images. Comput. Graph. Forum 2019, 38, 253–263. [Google Scholar] [CrossRef]
  32. Lin, J.; El Amine Seddik, M.; Tamaazousti, M.; Tamaazousti, Y.; Bartoli, A. Deep multi-class adversarial specularity removal. In Image Analysis; Felsberg, M., Forssén, P.E., Sintorn, I.M., Unger, J., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11482, pp. 3–15. [Google Scholar]
Figure 1. The framework of the proposed method.
Figure 2. The highlight restoration effect of the proposed method in this paper: (a) the input image, (b) the highlight pixels, (c) without the compensation coefficient, (d) using the compensation coefficient.
Figure 3. Performance metrics before and after compensation.
Figure 4. Effect after detail repair: (a) the input image, (b) gradient image, (c) the details are enlarged; the red box is before enhancement, and the blue box is after enhancement.
Figure 5. Comparison of highlight removal effects on the metal surface: (a) input image, (b) Akashi [20], (c) Fu [31], (d) Shen [10], (e) Shen [14], (f) Yamamoto [3], (g) Lin [32], (h) our method.
Figure 6. Comparison of highlight removal effects on the plastic surface: (a) input image, (b) Akashi [20], (c) Fu [31], (d) Shen [10], (e) Shen [14], (f) Yamamoto [3], (g) Lin [32], (h) our method.
Figure 7. Comparison of highlight removal effects on a glass surface: (a) input image, (b) Akashi [20], (c) Fu [31], (d) Shen [10], (e) Shen [14], (f) Yamamoto [3], (g) Lin [32], (h) our method.
Figure 8. Comparison of highlight removal effects on decoration surfaces: (a) input image, (b) Akashi [20], (c) Fu [31], (d) Shen [10], (e) Shen [14], (f) Yamamoto [3], (g) Lin [32], (h) our method.
Figure 9. Image details show: (a) input image, (b) Shen [10], (c) Lin [32], (d) our method.
Table 1. These methods are replaced with abbreviations later in this article.
Method | Abbreviation
Separation of reflection components by sparse non-negative matrix factorization [20] | Akashi [20]
Specular highlight removal for real-world images [31] | Fu [31]
Simple and efficient method for specularity removal in an image [10] | Shen [10]
Real-time highlight removal using intensity ratio [14] | Shen [14]
General improvement method of specular component separation using high-emphasis filter and similarity function [3] | Yamamoto [3]
Deep multi-class adversarial specularity removal [32] | Lin [32]
Table 2. Mean performance metrics of images in Figure 5.
Metal | Akashi [20] | Fu [31] | Shen [10] | Shen [14] | Yamamoto [3] | Lin [32] | Ours
MSE (×10⁻²) | 6.11 | 71.05 | 9.28 | 132.93 | 242.07 | 46.84 | 5.50
PSNR | 20.33 | 9.65 | 18.50 | 6.95 | 14.89 | 22.40 | 25.73
SSIM | 0.70 | 0.41 | 0.87 | 0.38 | 0.54 | 0.90 | 0.91
Table 3. Mean performance metrics of images in Figure 6.
Plastic | Akashi [20] | Fu [31] | Shen [10] | Shen [14] | Yamamoto [3] | Lin [32] | Ours
MSE (×10⁻²) | 15.31 | 107.71 | 7.51 | 173.69 | 380.21 | 10.18 | 7.11
PSNR | 16.61 | 7.99 | 21.63 | 5.33 | 13.20 | 19.84 | 23.21
SSIM | 0.69 | 0.58 | 0.89 | 0.55 | 0.65 | 0.85 | 0.90
Table 4. Mean performance metrics of images in Figure 7.
Glass | Akashi [20] | Fu [31] | Shen [10] | Shen [14] | Yamamoto [3] | Lin [32] | Ours
MSE (×10⁻²) | 9.81 | 55.30 | 0.42 | 155.51 | 223.11 | 0.65 | 0.63
PSNR | 18.50 | 11.52 | 31.92 | 6.80 | 15.23 | 30.19 | 29.82
SSIM | 0.81 | 0.82 | 0.98 | 0.34 | 0.82 | 0.97 | 0.97
Table 5. Mean performance metrics of images in Figure 8.
Decorations | Akashi [20] | Fu [31] | Shen [10] | Shen [14] | Yamamoto [3] | Lin [32] | Ours
MSE (×10⁻²) | 5.12 | 30.34 | 3.05 | 47.93 | 285.01 | 3.42 | 3.34
PSNR | 21.08 | 13.76 | 24.21 | 12.27 | 13.61 | 22.99 | 24.35
SSIM | 0.93 | 0.78 | 0.90 | 0.67 | 0.76 | 0.92 | 0.94
Table 6. Performance metrics on the real welding seam dataset.
Method | Akashi [20] | Fu [31] | Shen [10] | Shen [14] | Yamamoto [3] | Lin [32] | Ours
PSNR (AVG) | 20.57 | 10.86 | 24.55 | 7.06 | 14.73 | 25.96 | 29.44
SSIM (AVG) | 0.65 | 0.40 | 0.94 | 0.45 | 0.61 | 0.92 | 0.95
Table 7. Performance metrics on the SHIQ dataset.
Method | Akashi [20] | Fu [31] | Shen [10] | Shen [14] | Yamamoto [3] | Lin [32] | Ours
PSNR (AVG) | 18.86 | 14.86 | 28.64 | 7.43 | 16.33 | 28.77 | 28.83
SSIM (AVG) | 0.59 | 0.31 | 0.83 | 0.41 | 0.41 | 0.93 | 0.91
Table 8. The runtime of different highlight removal methods.
Resolution | Akashi [20] | Fu [31] | Shen [10] | Shen [14] | Yamamoto [3] | Lin [32] | Ours
256 × 256 | 5.21 s | 0.86 s | 0.23 s | 0.58 s | 3.24 s | 3 days | 0.28 s
512 × 512 | 6.19 s | 0.94 s | 0.41 s | 0.67 s | 4.68 s | 5 days | 0.43 s
1280 × 720 | 10.34 s | 1.03 s | 0.88 s | 0.79 s | 6.37 s | * | 0.97 s
Avg | 7.24 s | 0.94 s | 0.51 s | 0.68 s | 4.76 s | 0.11 s | 0.56 s
