Article

A Specular Highlight Removal Algorithm for Quality Inspection of Fresh Fruits

1 School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
2 Research and Development Institute, Northwestern Polytechnical University at Shenzhen, Shenzhen 518057, China
3 Science and Technology on Electro-Optic Control Laboratory, Luoyang 471000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(13), 3215; https://doi.org/10.3390/rs14133215
Submission received: 31 May 2022 / Revised: 27 June 2022 / Accepted: 1 July 2022 / Published: 4 July 2022
(This article belongs to the Special Issue Computer Vision and Image Processing)

Abstract

Nondestructive inspection technology based on machine vision can effectively improve the efficiency of fresh fruit quality inspection. However, fruits with smooth skin and little texture are easily affected by specular highlights during image acquisition, resulting in light spots on the fruit surface that severely affect subsequent quality inspection. To address this issue, we propose a new specular highlight removal algorithm based on multi-band polarization imaging. First, we realize real-time image acquisition by designing a new multi-band polarization imager that acquires all spectral and polarization information in a single image capture. We then propose a joint multi-band-polarization characteristic vector constraint to detect specular highlights, put forward a Max-Min multi-band-polarization differencing scheme combined with an ergodic least-squares separation to remove them, and finally apply a chromaticity consistency regularization to compensate for the missing details. Experimental results demonstrate that the proposed algorithm can effectively and stably remove specular highlights and provide more accurate information for subsequent fruit quality inspection. A comparison of algorithm speed further shows that the proposed algorithm achieves a good tradeoff between accuracy and complexity.

Graphical Abstract

1. Introduction

Traditional fruit quality inspection methods that rely on the human eye have disadvantages such as fatigue, high labor intensity, long working hours and a lack of objectivity, and they fall far short of meeting the ever-increasing market demand. How to inspect fruit quality efficiently and non-destructively has therefore become an important issue in the fruit industry. Non-destructive testing (NDT), as the name suggests, is a technique to evaluate the quality of fruits according to their physical characteristics without destroying the fruit samples [1]. At present, the most widely used NDT methods for fresh fruits are spectroscopy measurement [2], the electronic nose (e-nose) [3], infrared thermography [4], image processing methods based on machine vision [5], dielectric properties [6] and so on.
Machine vision-based methods evaluate fruit quality from characteristic information such as size, color and texture. They are contactless, uniformly standardized and highly efficient, and are of great importance for saving manpower and material resources. Nowadays, this type of testing method is widely applied in automatic fruit quality testing systems, whose main working principle consists of image acquisition and specular highlight removal. When a fruit sample reaches the image acquisition room, the camera captures an image of the fruit and sends it to the computer for processing, and the quality inspection result is finally given based on the image processing result [7]. The schematic diagram of the automatic quality testing system based on machine vision is shown in Figure 1. To improve the accuracy and efficiency of fruit quality inspection, researchers have been working to improve the performance of existing machine vision-based image processing algorithms. However, an important issue in the image acquisition process has been ignored. Owing to the combined influence of the illumination and the physical properties of the target's surface, the quality of the target image is easily degraded by specular highlights [8]. Fruits with smooth skin and little texture, as shown in Figure 2, are easily affected by specular highlights during image acquisition, resulting in light spots on the fruit surface that severely affect the accuracy of subsequent quality inspection [9,10]. Specifically, specular highlight areas on the surface may be misjudged as damage, fruit russeting and so on, leading to inaccurate quality inspection and evaluation results. To address this issue, we need a specular highlight removal algorithm that can be embedded into existing quality inspection systems to improve their accuracy and efficiency.
To solve the problem of specular highlight interference in machine vision-based quality inspection, we propose a new specular highlight removal algorithm based on multi-band polarization imaging, which combines the advantages of single-image and multiple-image methods. The proposed algorithm can be integrated into existing quality inspection equipment to achieve effective specular highlight removal and thus ensure the accuracy of quality inspection and evaluation. It is composed of two parts, image acquisition and specular highlight removal, as shown in Figure 1. The image captured by the multi-band polarization imager is a mosaic image containing all the spectral and polarization information, so we use a demosaicing algorithm [11] to reconstruct color sub-images at different polarization angles (0°, 45°, 90°). The two primary polarization parameters, the Degree of Linear Polarization (DoLP) and the Angle of Polarization (AoP), can then be calculated from these sub-images using the following equations:
S_0 = I_0 + I_90;  S_1 = I_0 − I_90;  S_2 = 2·I_45 − I_0 − I_90
DoLP = sqrt(S_1^2 + S_2^2) / S_0;  AoP = (1/2)·arctan(S_2 / S_1)
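The Stokes computation above can be sketched in a few lines of NumPy; the guard against division by zero and the use of arctan2 are implementation choices of ours, not part of the paper:

```python
import numpy as np

def stokes_from_polarization(i0, i45, i90):
    """Compute S0, S1, S2, DoLP and AoP from intensity images captured
    behind a polarizer at 0, 45 and 90 degrees."""
    i0, i45, i90 = (np.asarray(a, dtype=float) for a in (i0, i45, i90))
    s0 = i0 + i90                       # total intensity
    s1 = i0 - i90                       # 0/90 degree difference
    s2 = 2.0 * i45 - i0 - i90           # 45 degree component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)      # radians
    return s0, s1, s2, dolp, aop
```

For fully polarized light at 0° (i0 = 1, i45 = 0.5, i90 = 0) this yields DoLP = 1 and AoP = 0, while equal intensities at all three angles give DoLP = 0, matching the unpolarized case.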
In the second part, we propose a specular highlight removal method based on multi-band polarization imaging. First, we propose a joint multi-band-polarization characteristic vector constraint to detect specular highlights. Then, we put forward a Max-Min multi-band-polarization differencing scheme combined with an ergodic least-squares separation to remove the specular highlight. Finally, a chromaticity consistency regularization is utilized to compensate for the missing details. Experimental results show that our proposed algorithm outperforms existing methods in both visual effect and quantitative evaluation.

2. Related Work

When incident light strikes the surface of a medium, two different reflection effects occur: diffuse reflection and specular reflection [12]. Under ideal lighting conditions, the surface of an object is often assumed to be a Lambertian surface with only diffuse reflection: the surface presents the same brightness from all viewpoints, and the brightness of a given point does not vary with the observation angle. Diffuse reflection is related to the surface material and reflects the physical and chemical properties of the target surface (such as roughness) [13]. In fact, however, not only diffuse reflection but also specular reflection occurs on the target surface, so specular highlights of high intensity appear on the target surface during observation. Unsurprisingly, these specular highlights conceal useful image features such as colors and textures. Unlike diffuse reflection, the specular highlight is related to the light source and the viewpoint and varies with the direction and intensity of the light source, so it effectively reflects information about the light source [14].
Numerous specular highlight removal methods have been proposed; they can be classified into single-image methods and multiple-image methods according to the number of input images. In this section, we review previous specular highlight removal methods.
Single-image specular highlight removal algorithms. The earliest method dates back to 1996, when Bajcsy et al. [15] proposed a specular highlight removal algorithm based on color segmentation, but it was later proven unable to handle targets with complex textures. Tan et al. [16] proposed a fully automatic method based on chromaticity analysis in 2005; they utilized the difference between specular and diffuse pixels in their proposed pseudo-specular-free (PSF) image to remove the specular highlight. To reduce the high computational cost, Yang et al. [17] accelerated this algorithm [16] by employing joint bilateral filtering. Similarly, Mallick et al. [18] proposed a specular highlight removal algorithm that evolves a partial differential equation (PDE) which iteratively erodes the specular component at each pixel. Shen et al. [19] proposed a simple method to separate specular highlights in a color image based on the error analysis of chromaticity and an appropriate selection of body color for each pixel. Yoon et al. [20] first proposed a specular-free two-band image, a specular-invariant color image representation; highlight removal was then achieved by comparing the local ratios of each pixel and equalizing these ratios in an iterative framework. Shen and Zheng [21] proposed a real-time highlight removal method based on pseudo-chromatic space analysis, clustering and estimation of intensity ratios in mixed specular-diffuse regions; it operates in a pixel-wise manner, without specular pixel identification or any local interaction. Based on the concept of the dark channel prior, Kim et al. [22] proposed an optimization framework for specular highlight removal. A framework combining non-negative matrix factorization (NNMF) with a sparsity constraint was proposed by Akashi and Okatani [23]. Suo et al. [24] proposed a fast highlight removal algorithm based on an L2 chromaticity definition and an extended dichromatic reflection model (DRM). Ren et al. [25] proposed an algorithm based on color lines: a modified nearest-neighbor technique first clusters the image, the diffuse coefficient is then recovered by searching along the radius in polar coordinates, and finally both the color of the illumination and the diffuse reflection can be recovered. Fu et al. [26] proposed a specular highlight removal method for real-world images; they observed that in such images the specular highlight usually has a small size and sparse distribution, and that the diffuse image can be represented by a linear combination of a small number of basis colors with sparse encoding coefficients, so they designed an optimization framework that simultaneously estimates the diffuse and specular highlight images. Nguyen-Do-Trong et al. [10] excluded the specular highlight from acquired hyperspectral reflectance images by implementing cross-polarization to block the specular reflection, and they evaluated the cross-polarization approach in a line-scanning hyperspectral reflectance imaging system for the first time. Boyer [27] also used a cross-polarization imaging system to reduce specular artifacts. Wen et al. [28] developed a polarization-guided model to generate a polarization chromaticity image, reformulated the problem as a global energy function based on the proposed model, and finally optimized the global energy function with the ADMM strategy to realize specular reflection separation.
Input image acquisition is simple and fast for single-image methods, since no special illumination or equipment is needed, but traditional single-image methods share a common problem: they remove specular highlights based on particular prior information or assumptions. When the predetermined conditions are not satisfied or the image scene changes, this may lead to color discontinuity, edge distortion or a loss of structural and textural information. Most newer specular highlight removal methods are based on deep learning. Although this kind of method has made significant progress [29,30], some problems remain. First, deep-learning-based algorithms suffer from generic problems such as high computational effort, high hardware cost and complex model design. Second, they are usually trained on synthetic data or very little real data, so the desired results may not be obtained on real images due to the domain gap between training and test images.
Multiple-image specular highlight removal algorithms. The prominent disadvantage of multiple-image methods is that the imaging equipment or illumination conditions are complicated. However, multiple-image algorithms produce more accurate specular highlight removal results since they have richer input information. Nayar [31,32] combined color space analysis with projection constraints on the specular reflection component of the target polarized image to separate specular highlights. Based on the assumption that the specular highlight is statistically uncorrelated with the diffuse component, Umeyama [33] first obtained the intensity difference between two images with different polarization angles to discriminate the specular region and then used independent component analysis (ICA) to estimate the specular highlight component in that region. By placing a polarization filter in front of the sensor, Wang [34] produced results with less color distortion. Yang [35] used the polarization imaging technique to detect bruises on nectarines: the polarized image can effectively suppress the specular highlight on the smooth nectarine surface and overcome the interference of dark colors during detection, and a lightweight network (ResNet-G18) that integrates ResNet-18 and the Ghost bottleneck was constructed to achieve classification and damage detection. Lin et al. [36] proposed an algorithm based on color analysis and multi-baseline stereo vision, which can estimate the separation and the true depth of specular reflections at the same time. Lin et al. [37] proposed a method based on the neutral interface reflection model for separating specular reflection components in color images; unlike most previous methods, this approach does not assume any dependencies among pixels, such as regionally uniform surface reflectance.
When an image is recorded through a transparent medium (e.g., glass), it is a superposition of a transmitted layer and a reflected layer; Guo et al. [38] proposed a method to separate the two layers from multiple images by designing a novel Augmented Lagrangian Multiplier based algorithm.

3. Methodology

Our specular highlight removal algorithm consists of two parts, as shown in Figure 3: image acquisition and specular highlight removal. The task of the first part is to quickly acquire all the spectral and polarization information of the target, so we design a multi-band polarization imager with a new configuration to realize highly efficient, multi-dimensional image acquisition. The task of the second part is to remove the specular highlight from images accurately, so we propose a new specular highlight removal method based on multi-band polarization imaging.

3.1. Image Acquisition Based on Multi-Band Polarization Imager

The imaging technique combining spectroscopy and polarization has been applied to fruit quality inspection [39]. With the rapid development of micro-nano manufacturing, array-based multi-band polarization imaging technology has gradually become a research focus in recent years. This technology combines intensity, polarization and spectral imaging, and can obtain spatial, polarization and spectral information of the target simultaneously through a single image capture, thus realizing multi-dimensional detection of the target.
The existing literature focuses on the configuration design and performance analysis of polarization arrays [40,41] or multi-band arrays [42] alone. In recent years, new array-based multi-band polarization imagers have been produced, such as the PHX050S camera from Lucid Vision Labs, which can capture multi-band and polarization information of the scene simultaneously. Inspired by this, we design a multi-band polarization imager with a new configuration, shown in Figure 3a in the image acquisition part. The imager mainly consists of a multi-band polarization imaging array and a CMOS sensor. Different colors indicate the selection of different wavebands, and different grating orientations indicate the selection of different polarization angles. Since our proposed specular removal method requires information in the visible wavebands (R, G, B) and at three polarization angles (0°, 45°, 90°), we combine the “Quasi-Bayer” polarization pattern [43] and the “Bayer” color filter array [44] to constitute a new multi-band polarization imager, as shown in Figure 3a. In this way, spectral information in the R, G, B bands and polarization information at 0°, 45°, 90° can be acquired at the same time through a single image capture.
As shown in Figure 3b, the captured image of our multi-band polarization imager is a multi-band polarization mosaic image, so the chromatic polarization demosaicking network (CPDNet) proposed by Wen [11] is applied here to reconstruct our required spectral and polarization information. Each pixel in the multi-band polarization array records only 1 out of 9 necessary intensity measurements. For each channel, the relationship between mosaic image Y and three full-resolution multi-band polarization images X can be formulated as:
Y = D_θ · X    (1)

where D_θ represents the down-sampling matrix, with

θ ∈ {R_0, R_45, R_90, G_0, G_45, G_90, B_0, B_45, B_90}
The goal of CPDNet is to learn the mapping function F(Y) = X. The input of CPDNet is the multi-band polarization mosaic image, denoted Y ∈ I^(m×n), where m and n are the numbers of rows and columns of the input image. Accordingly, the ground-truth image can be expressed as X ∈ I^(m×n×9).
Using CPDNet, we can reconstruct color polarization images at the three angles from a multi-band polarization mosaic image, as shown in Figure 3d. The specific demosaicking process is not described in detail here; readers can refer to [11] for more information.
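As a rough illustration of the down-sampling relation above, the forward model can be sketched as follows. The `pattern` map, which assigns one of the 9 band/angle channels to each pixel, is a stand-in for the actual filter-array layout of the imager, which is a hardware design choice:

```python
import numpy as np

def mosaic_forward(x, pattern):
    """Forward sampling Y = D_theta(X): each sensor pixel records exactly
    one of the 9 band/angle channels of the full-resolution cube X of
    shape (m, n, 9). `pattern` is an (m, n) integer map giving the
    channel index recorded at each pixel; tiling it periodically encodes
    the filter-array layout."""
    m, n, c = x.shape
    assert c == 9 and pattern.shape == (m, n)
    rows, cols = np.indices((m, n))
    return x[rows, cols, pattern]
```

The demosaicking network then learns the inverse of this many-to-one sampling, recovering all 9 channels at every pixel.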

3.2. Specular Highlight Removal

According to the dichromatic reflection model proposed by Shafer [13], the reflected light can be regarded as a combination of diffuse reflection and specular reflection, which can be described mathematically as:
I(x) = I_d(x) + I_s(x) = ρ_d(x)·D(x) + ρ_s(x)·S(x)    (2)
where I_d(x) is the diffuse reflection component and I_s(x) is the specular highlight component; ρ_d and ρ_s are the weighting factors of diffuse and specular reflection, respectively; D(x) = ∫_Ω H(λ, x)·E(λ)·q(λ) dλ and S(x) = ∫_Ω E(λ)·q(λ) dλ, where λ is the wavelength, Ω is the range of wavebands, q = [q_r, q_g, q_b] is the camera response function, H represents the spectral reflectance, and E denotes the spectral power distribution of the light source.
In a natural-light imaging environment (namely, with an unpolarized light source), the diffuse reflection component undergoes multiple scattering and therefore has only a weak polarization effect, so it can be approximated as unpolarized light. The specular highlight, by contrast, is reflected only once at the target surface and retains a strong polarization effect, so it can be approximated as partially linearly polarized light. As depicted in [45], the intensity of the captured target image varies as a cosine function of the rotation angle φ = θ_pol of the polarizer:
I(φ) = (I_max + I_min)/2 + ((I_max − I_min)/2)·cos(2φ − 2α)    (3)
where I_max and I_min represent the maximum and minimum intensities of the captured image, respectively, over a full rotation of the polarizer, and α is the polarization phase angle. In our study, φ ∈ {0°, 45°, 90°}.
The intensity of diffuse reflection does not change with the rotation of the polarizer because the diffuse reflection component can be approximated to unpolarized, so we have:
I_d = I_min    (4)
For the varying specular highlight component, its intensity can be considered as the sum of a specular constant I s c and a cosine variable I s v c o s ( 2 φ 2 α ) :
I_s = I_sc + I_sv·cos(2φ − 2α)    (5)
So, the intensity of the captured image in different wavebands can be described as:
I_λ(φ) = I_{λ,d} + I_{λ,s}/2 + (I_{λ,s}/2)·cos2φ·cos2α + (I_{λ,s}/2)·sin2φ·sin2α    (6)
I_λ(φ) = I_{λ,d} + f(φ)·I_{λ,s}    (7)
where f(φ) = [1 + cos(2φ − 2α)]/2 is the weighting function of the highlight component (0 ≤ f(φ) ≤ 1), which depends only on the polarizer rotation angle φ and is independent of the waveband λ, with λ ∈ {r, g, b}.
In conclusion, the specular highlight has different polarization states in different wavebands. Therefore, the diffuse reflection component and the highlight component can be distinguished effectively by comprehensively utilizing the polarization information in different wavebands.
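A useful consequence of the cosine model is that the diffuse intensity I_d = I_min can be recovered directly from the three captured angles: I_0 + I_90 = I_max + I_min and sqrt(S_1^2 + S_2^2) = I_max − I_min, so I_min = (S_0 − sqrt(S_1^2 + S_2^2))/2. A small numerical check with made-up intensities:

```python
import numpy as np

def simulate_intensity(i_max, i_min, alpha_deg, phi_deg):
    """Intensity behind a polarizer at angle phi, per the cosine model."""
    a, p = np.deg2rad(alpha_deg), np.deg2rad(phi_deg)
    return (i_max + i_min) / 2 + (i_max - i_min) / 2 * np.cos(2 * p - 2 * a)

def diffuse_from_three_angles(i0, i45, i90):
    """Recover I_min (the unpolarized diffuse intensity) from the three
    angles the imager captures, via the Stokes parameters."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2 * i45 - i0 - i90
    return (s0 - np.sqrt(s1**2 + s2**2)) / 2
```

For example, simulating I_max = 200, I_min = 80 at any phase angle α and feeding the 0°/45°/90° samples back in recovers the diffuse intensity 80.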

3.2.1. Highlight Detection Based on Multiband Polarization

We conduct a statistical analysis to examine the differences in spectral and polarization characteristics between specular highlight and diffuse reflection areas. Here, 100 images affected by specular highlights are collected as the data for statistical analysis. First, we manually segment these images to obtain two independent areas: from each image, two small patches of 5 × 5 pixels are extracted, one from the diffuse reflection area and one from the specular highlight area. We thus obtain two sets of patches, one specular and one diffuse. We then compute the DoLP (Degree of Linear Polarization) and grayscale values of each pixel in the R, G and B bands; the average values are shown in Figure 4. It can be clearly seen that the DoLP values of pixels in the diffuse reflection area are quite low, while the specular highlight pixels have higher DoLP. Moreover, the distributions of DoLP differ across wavebands. Similarly, the grayscale value of the diffuse reflection area is much lower than that of the specular highlight area, and the distributions also differ in each waveband. Therefore, the specular highlight and diffuse reflection components can be distinguished accurately by analyzing the differences in intensity, spectral and polarization characteristics, enabling effective specular highlight detection.
Based on the above analysis, we define the joint multi-band-polarization characteristic vector as follows:
V(x) = [DoLP_λ(x), l(x)]^T    (8)
l(x) = 0.299·r(x) + 0.587·g(x) + 0.114·b(x)    (9)
where DoLP_λ is the degree of linear polarization with λ ∈ {r, g, b}, and r, g, b represent the intensity values of the three visible bands, respectively. The weights of r, g, b are the fixed coefficients (0.299 for the r band, 0.587 for the g band and 0.114 for the b band) used by MATLAB's rgb2gray function to convert a color image into a grayscale image. The values of DoLP_λ range from 0 to 1; since DoLP_λ and l(x) share the same vector V(x), l(x) is normalized so that its possible values also range from 0 to 1.
The specular highlight area and the diffuse reflection area can be detected by effective constraints based on the joint multi-band-polarization characteristic vector. For the target image under the influence of specular highlight, its pixel x meets the following constraints:
x ∈ R(D), if ||V(x) − V_0||_2^2 < ε;  x ∈ R(S), if ||V(x) − V_0||_2^2 > ε    (10)
where R(D) and R(S) represent the diffuse reflection area and the specular highlight area, respectively, V_0 denotes the global threshold vector, and ε is a small positive constant.
To facilitate subsequent highlight removal, a binary mask image is generated for the detection results of pixel x, and its expression is:
mask(x) = 1, if x ∈ R(S);  mask(x) = 0, if x ∈ R(D)    (11)
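The detection step can be sketched as below, assuming the per-band DoLP maps and the normalized luminance have already been computed; the function name and array layout are our own:

```python
import numpy as np

def detect_specular(dolp_r, dolp_g, dolp_b, luminance, v0, eps):
    """Build the joint characteristic vector V(x) = [DoLP_r, DoLP_g,
    DoLP_b, l(x)] at each pixel and threshold its squared distance to
    the global vector v0. Returns a binary mask (1 = specular)."""
    v = np.stack([dolp_r, dolp_g, dolp_b, luminance], axis=-1)
    dist2 = np.sum((v - np.asarray(v0))**2, axis=-1)
    return (dist2 > eps).astype(np.uint8)
```

A pixel matching V_0 closely is classified as diffuse, while a pixel with high DoLP and luminance falls outside the ε-ball and is flagged as specular.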
Based on the joint multi-band-polarization characteristic vector constraint, the values of V_0 and ε for the target image shown in Figure 3 are determined by the following analysis. Figure 5a shows the target image; for the marked area in (a), the statistical distributions of DoLP, grayscale and luminance in the R, G and B bands are shown in Figure 5b–e, where the X-axis is the pixel position and the Y-axis is the value of each parameter. The approximate pixel positions of the specular area (100–170) are easy to identify in Figure 5b, since specular pixels have much higher grayscale values. From Figure 5b we obtain a threshold of 0.1 for DoLP_r(x); from Figure 5c, a threshold of 0.2 for DoLP_g(x); and from Figure 5d, a threshold of 0.25 for DoLP_b(x). For the luminance l(x), Figure 5e indicates a threshold of about 130, which becomes 0.5 after normalization. In conclusion, we set V_0 = [0.1, 0.2, 0.25, 0.5]. The value of ε is determined by comparing experimental results, and the best specular highlight detection is obtained when ε = 0.03. The detection result is shown in Figure 6.

3.2.2. Highlight Removal Based on Multiband Polarization

Compared with the unpolarized diffuse reflection component, the specular highlight component has different polarization states in different wavebands. Therefore, in this section, a Max-Min multi-band-polarization differencing scheme is proposed to generate a single specular-free (SSF) image by exploiting the multi-band polarization differences between the diffuse reflection and specular highlight components. Meanwhile, for the highlight area detected in the previous section, an ergodic least-squares coefficient decomposition strategy is proposed to obtain the reflection coefficients of diffuse reflection and highlight, achieving effective separation of the specular highlight.
a.
Max-Min multi-band-polarization differencing scheme
Combining the multi-band-polarization imaging model (7) with the dichromatic reflection model (2), we have:
I_λ = I_{λ,d} + f·I_{λ,s} = ρ_d·D_λ + ρ_s·S_λ    (12)
It is assumed that the light source has a uniform energy distribution in the visible bands, so for the three visible wavebands R, G, B we have:
S = [S_R, S_G, S_B] = [255, 255, 255]    (13)
For the visible wavebands λ ∈ {R, G, B}, we define the Max-Min multi-band-polarization differencing image ΔI as:
ΔI = max_λ(I_λ) − min_λ(I_λ)    (14)
Combining the three equations above, we have:
ΔI = I_max − I_min = (ρ_d·D_max + ρ_s·S) − (ρ_d·D_min + ρ_s·S) = ρ_d·(D_max − D_min)    (15)
It follows that the Max-Min multi-band-polarization differencing image is related only to the diffuse reflection component, not to the specular highlight. Therefore, the spectral differencing images at the three polarization angles (0°, 45°, 90°) are merged by linear weighting to generate a single specular-free (SSF) image, which can be expressed as:
SSF = (1/M)·Σ_{φ = 0°, 45°, 90°} m(φ)·ΔI(φ)    (16)
where φ represents the rotation angle of the polarizer, m(·) denotes the adaptive weight coefficient, which is associated with the intensities of the polarization images at different angles, and M = Σ_{φ = 0°, 45°, 90°} m(φ) is the normalization factor, namely the sum of the weights.
As shown in Figure 3, the SSF image can effectively avoid the influence of the specular highlight while retaining the original information of the target, which effectively guides the subsequent removal of the specular highlight.
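The differencing and merging steps can be sketched as follows. Uniform weights m(φ) are used here purely as an illustrative default, whereas the paper's adaptive weights depend on the image intensities:

```python
import numpy as np

def ssf_image(i_rgb_by_angle, weights=None):
    """Single specular-free image via Max-Min spectral differencing.
    `i_rgb_by_angle` maps polarizer angle -> (h, w, 3) RGB image.
    Per angle, the channel-wise max minus min cancels the (white)
    specular term; the results are merged by weighted averaging."""
    angles = sorted(i_rgb_by_angle)
    if weights is None:
        weights = {a: 1.0 for a in angles}       # illustrative default
    m_sum = sum(weights[a] for a in angles)      # normalization factor M
    diff = {a: i_rgb_by_angle[a].max(axis=-1) - i_rgb_by_angle[a].min(axis=-1)
            for a in angles}
    return sum(weights[a] * diff[a] for a in angles) / m_sum
```

Because a white specular term shifts all three channels equally, the per-angle max-min difference depends only on the diffuse chromaticity, so the merged SSF image is highlight-free by construction.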
b.
Ergodic least-squares separation algorithm
For each pixel in the detected highlight area in Figure 6, we can find a diffuse pixel with the closest intensity value in the SSF image and take the intensity of that diffuse pixel as the diffuse intensity of the highlight pixel. Based on this idea, we propose an ergodic least-squares separation algorithm that performs a global ergodic search in the SSF image, which effectively guides the separation of the reflection coefficients in the detected specular highlight area. The process is as follows:
Step 1: Highlight separation of pixels in highlight area.
For a pixel p with mask(p) = 1 in the specular highlight area, we search for the diffuse reflection pixel q whose intensity value in the SSF image is nearest to that of p:
q* = argmin_q ||SSF(p) − SSF(q)||_2    (17)
Taking the original intensity value of q as the diffuse intensity of the highlight pixel p and combining it with (2), we have:
I(p) = ρ_d(p)·I(q) + ρ_s(p)·S    (18)
Using the least-squares coefficient decomposition method, the reflection coefficient matrix is:
[ρ_d(p), ρ_s(p)]^T = [I(q), S]^† · I(p)    (19)

where † represents the pseudo-inverse of the matrix. For the diffuse reflection factor ρ_d(p), its corresponding diffuse reflection intensity can be expressed as:
I_d(p) = ρ_d(p)·D(p) = ρ_d(p)·I(q)    (20)
Furthermore, the intensity of corresponding highlight component is:
I_s(p) = I(p) − ρ_d(p)·I(q)    (21)
After separation, pixel p is marked as a diffuse pixel, i.e., mask(p) = 0. The algorithm then proceeds to the next highlight pixel.
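Step 1 can be sketched with NumPy as below; `nearest_diffuse` is a brute-force stand-in for the global ergodic search, and all function names are our own, not the paper's:

```python
import numpy as np

def nearest_diffuse(ssf, p_idx, mask):
    """Flat index of the diffuse pixel (mask == 0) whose SSF value is
    closest to that of the highlight pixel at flat index p_idx."""
    cand = np.where(mask.ravel() == 0)[0]
    return cand[np.argmin(np.abs(ssf.ravel()[cand] - ssf.ravel()[p_idx]))]

def separate_highlight_pixel(i_p, i_q, s=(255.0, 255.0, 255.0)):
    """Least-squares split of a highlight pixel p,
    I(p) ~ rho_d * I(q) + rho_s * S, where q is the SSF-nearest diffuse
    pixel. Returns (rho_d, rho_s, diffuse RGB, specular RGB)."""
    a = np.column_stack([i_q, s])                   # 3x2 matrix [I(q) S]
    rho, *_ = np.linalg.lstsq(a, i_p, rcond=None)   # pseudo-inverse solve
    i_d = rho[0] * i_q                              # diffuse component
    return rho[0], rho[1], i_d, i_p - i_d           # specular = residual
```

`np.linalg.lstsq` computes the same minimum-norm solution as the pseudo-inverse, so the two coefficients are recovered exactly when the pixel truly is a mixture of I(q) and the white specular color.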
Step 2: Highlight separation of pixels in diffuse reflection area.
For the diffuse reflection area where m a s k = 0 , some diffuse reflection pixels may contain a small amount of highlight component. To further realize the separation and removal of specular highlight, we use the same least-squares strategy to obtain the diffuse reflection and specular highlight coefficients.
For the diffuse reflection area where mask = 0, we first search for the diffuse reflection pixel r with the highest intensity value in the SSF image, namely:
r* = argmax_r SSF(r)  s.t.  mask(r) = 0    (22)
Then we seek a diffuse reflection pixel z with the nearest intensity value to pixel r in the SSF image, which satisfies:
z* = argmin_z ||SSF(r) − SSF(z)||_2    (23)
Next, taking the original intensity value of r as the diffuse reflection intensity of pixel z, the reflection coefficient matrix can be calculated with the same least-squares coefficient decomposition method:
[ρ_d(z), ρ_s(z)]^T = [I(r), S]^† · I(z)    (24)
Since the reflection coefficient of the highlight must be non-negative, if the calculated ρ_s(z) < 0, (24) reduces to:
ρ_d(z) = I(r)^† · I(z)    (25)
Similar to (20) and (21), the corresponding diffuse reflection and specular highlight components of z can then be calculated. Finally, pixel z is marked as a processed pixel, and the same procedure is applied to the next diffuse reflection pixel with the nearest intensity value to pixel r according to (23).
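Step 2 differs from Step 1 mainly in the non-negativity check on the highlight coefficient; a minimal sketch (names and structure our own):

```python
import numpy as np

def refine_diffuse_pixel(i_z, i_r, s=(255.0, 255.0, 255.0)):
    """Least-squares split for a pixel z in the diffuse region, guided
    by the brightest diffuse pixel r. If the unconstrained highlight
    coefficient comes out negative, z is treated as purely diffuse and
    only rho_d is re-estimated via the pseudo-inverse of I(r)."""
    i_z, i_r = np.asarray(i_z, float), np.asarray(i_r, float)
    a = np.column_stack([i_r, s])
    (rho_d, rho_s), *_ = np.linalg.lstsq(a, i_z, rcond=None)
    if rho_s < 0:                                   # clamp: no negative highlight
        rho_d = float(np.linalg.pinv(i_r[:, None]) @ i_z)
        rho_s = 0.0
    return rho_d, rho_s
```

The fallback branch is the rank-one pseudo-inverse, i.e. the projection coefficient of I(z) onto I(r), which is the least-squares solution once ρ_s is fixed to zero.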
In summary, our proposed method considers not only the reflection separation of pixels in the highlight area but also the possibility that a small amount of highlight exists in the diffuse reflection area, so the removal result is more thorough and accurate.
c.
The compensation of missing information based on the local chromaticity consistency regularization constraint
The proposed highlight removal algorithm in Section b ignores the local detail information of the original target, which leads to partial color distortion and edge discontinuity after specular highlight removal. We therefore use our previous work on local chromaticity consistency [46] to impose a weighted regularization constraint on the highlight removal result, and combine it with variable splitting [47] to achieve fast optimization. In this way, the information missing after highlight removal can be effectively compensated and the visual effect is further improved. For the highlight suppression result $I_d$, the weighting regularization term can be denoted as $W(x,y)|I_d(x) - I_d(y)|$. As in nonlocal means methods [48], pixels with similar grey-level neighborhoods receive larger weights in the average, so $W(x,y)$ can be set to be inversely proportional to the distance between pixels x and y and expressed as:
$$W(x,y) = e^{-\|I(x) - I(y)\|^2 / (2\sigma^2)}$$
where I is the original target image and σ represents the standard deviation. Therefore, for two adjacent pixels x and y in a local area, if the two pixels belong to the diffuse and highlight areas, respectively, a significant difference exists between $I(x)$ and $I(y)$, so $W(x,y) \approx 0$; if the two pixels belong to the same reflection area, the weight $W(x,y)$ is larger.
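As an illustration, the weighting function (26) can be evaluated for each pixel of a grayscale image and its immediate right/down neighbors with a few lines of NumPy; the image shape, the σ value, and the function name are assumptions for this sketch.

```python
import numpy as np

def neighbor_weights(I, sigma=0.1):
    """Evaluate W = exp(-(I(x)-I(y))^2 / (2*sigma^2)) for each pixel
    and its right/down neighbor of a grayscale image I. Weights are
    ~1 within a uniform reflection region and ~0 across a
    diffuse/highlight boundary. sigma is an assumed value."""
    dx = np.diff(I, axis=1, append=I[:, -1:])   # horizontal forward difference
    dy = np.diff(I, axis=0, append=I[-1:, :])   # vertical forward difference
    Wx = np.exp(-dx**2 / (2.0 * sigma**2))
    Wy = np.exp(-dy**2 / (2.0 * sigma**2))
    return Wx, Wy
```

On a synthetic image with a sharp diffuse/highlight step, the weights collapse to zero exactly along the boundary column, which is what allows the regularizer to preserve that edge.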
To simplify, the matrix forms of the weighting regularization term and the weighting function (26) can be expressed as $\|W \circ (D \otimes I_d)\|_1$ and $W = e^{-\sum_{i \in \{r,g,b\}} (D \otimes I_i)^2 / (2\sigma^2)}$, where D is the first-order forward difference operator, $\circ$ represents the element-wise (Hadamard) product, $\otimes$ represents convolution, and i indexes the color channels. Therefore, the problem of weighted regularization information compensation can be converted into the following energy function minimization problem:
$$\arg\min_{I_d} \frac{\gamma}{2}\|I_d - \tilde{I}_d\|_2^2 + \|W \circ (D \otimes I_d)\|_1$$
where γ is a weighting factor and $\tilde{I}_d$ is the initial highlight removal result. The above energy function optimization problem can be solved quickly by the variable splitting method. Introducing an intermediate variable u, (27) can be converted into:
$$\frac{\gamma}{2}\|I_d - \tilde{I}_d\|_2^2 + \|W \circ u\|_1 + \frac{\beta}{2}\|u - (D \otimes I_d)\|_2^2$$
where β is a penalty weighting factor. Obviously, (28) converges to the optimal solution of (27) as $\beta \to \infty$.
For (28), when β is fixed, the minimization problem can be solved by alternating optimization of u and $I_d$: first fix $I_d$ and compute the optimal u; then fix u and compute the optimal $I_d$. The process continues until convergence. The detailed optimization process is as follows:
(1)
Fixing I d and optimizing u
In (28), for a given $I_d$, the energy function to be minimized is:
$$\|W \circ u\|_1 + \frac{\beta}{2}\|u - D \otimes I_d\|_2^2$$
It can be converted into:
$$\min_x |\omega \cdot x| + \frac{\beta}{2}(x - \alpha)^2$$
where ω, α, and β are all known and can be obtained directly. The closed-form solution is given by soft-thresholding, namely:
$$u^* = x_{opt} = \max\left(|\alpha| - \frac{\omega}{\beta},\ 0\right) \cdot \mathrm{sign}(\alpha)$$
here $\mathrm{sign}(\cdot)$ is the sign function.
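The u-update above is the standard weighted soft-thresholding (shrinkage) operator and can be sketched in one line; the function name is hypothetical.

```python
import numpy as np

def shrink(alpha, omega, beta):
    """Weighted soft-thresholding: the closed-form minimizer of
    |omega * x| + (beta / 2) * (x - alpha)^2, i.e.
    u* = max(|alpha| - omega/beta, 0) * sign(alpha)."""
    return np.maximum(np.abs(alpha) - omega / beta, 0.0) * np.sign(alpha)
```

Values of α with magnitude below ω/β are set to zero, which is exactly how the L1 term suppresses small, noise-like differences while keeping strong edges.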
(2)
Fixing u and optimizing I d
We regard the calculated u as a fixed value and solve for the optimal $I_d$. The objective function is:
$$\frac{\gamma}{2}\|I_d - \tilde{I}_d\|_2^2 + \frac{\beta}{2}\|u - (D \otimes I_d)\|_2^2$$
Notice that (35) is quadratic in the variable $I_d$, so we can obtain the optimal value of $I_d$ by taking the partial derivative of (35), which gives:
$$\frac{\gamma}{\beta}(I_d - \tilde{I}_d) + D^T \otimes (D \otimes I_d - u)$$
where $D^T$ is the transpose (adjoint) of D. Setting (36) to zero yields the optimal $I_d$:
$$\frac{\gamma}{\beta}(I_d - \tilde{I}_d) + D^T \otimes (D \otimes I_d - u) = 0$$
$$\frac{\gamma}{\beta} I_d + D^T \otimes D \otimes I_d = D^T \otimes u + \frac{\gamma}{\beta} \tilde{I}_d$$
Since convolution is computationally expensive in the spatial domain, we use the 2D FFT (Fast Fourier Transform) to calculate the optimal value of $I_d$ in the frequency domain. Thus, we have:
$$\frac{\gamma}{\beta} F(I_d) + \overline{F(D)} \circ F(D) \circ F(I_d) = \overline{F(D)} \circ F(u) + \frac{\gamma}{\beta} F(\tilde{I}_d)$$
$$F(I_d) = \frac{\frac{\gamma}{\beta} F(\tilde{I}_d) + \overline{F(D)} \circ F(u)}{\frac{\gamma}{\beta} + \overline{F(D)} \circ F(D)}$$
where $F(\cdot)$ represents the FFT and $\overline{(\cdot)}$ denotes the complex conjugate. Finally, the optimal value of $I_d$, denoted $I_d^*$, is obtained by the Inverse Fast Fourier Transform (IFFT):
$$I_d^* = F^{-1}\left(\frac{\frac{\gamma}{\beta} F(\tilde{I}_d) + \overline{F(D)} \circ F(u)}{\frac{\gamma}{\beta} + \overline{F(D)} \circ F(D)}\right)$$
where F 1 ( · ) represents IFFT.
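A minimal single-channel, single-operator sketch of this frequency-domain update might look as follows, assuming D is a difference kernel zero-padded to the image size and convolution is circular; the full algorithm applies this per color channel. The function name and parameters are illustrative.

```python
import numpy as np

def solve_Id(Id_tilde, u, D_kernel, gamma, beta):
    """Closed-form frequency-domain update for I_d:
    F(I_d) = [(gamma/beta) F(Id_tilde) + conj(F(D)) * F(u)]
           / [(gamma/beta) + conj(F(D)) * F(D)]
    D_kernel: difference filter zero-padded to the image size;
    convolution is assumed circular (a simplifying assumption)."""
    FD = np.fft.fft2(D_kernel, s=Id_tilde.shape)
    num = (gamma / beta) * np.fft.fft2(Id_tilde) + np.conj(FD) * np.fft.fft2(u)
    den = (gamma / beta) + np.conj(FD) * FD
    return np.real(np.fft.ifft2(num / den))
```

As a sanity check, when u already equals the circular convolution of D with the initial result, the update returns the initial result unchanged, since the objective is then already at its minimum.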

4. Experiments and Results

To verify the effectiveness and wide applicability of our proposed specular highlight removal algorithm, we conducted experiments on different fruits with smooth skin and little texture. Six existing algorithms, namely Mallick [18], Shen [19], Shen [21], Akashi [23], Yamamoto [49] and Fu [26], are selected for comparison with our proposed algorithm. Since the fruit target is delivered to the image acquisition room by conveyor belt in the automatic quality inspection system shown in Figure 1, we cannot be completely sure of the position and quantity of the fruit targets. Therefore, the distribution of highlights on the fruit surface naturally differs, and multiple fruits may be captured in one image acquisition. We thus selected three representative fruit targets with different quantities and highlight distributions: the first target is one apple with a concentrated highlight, the second is one lemon with a dispersed highlight distribution, and the third is three oranges with dispersed highlights.

4.1. Experimental Data Acquisition

Since the imager we designed in Section 3.1 is still at the theoretical research stage, we use the LUCID PHX0505 camera to acquire the target images; the size of the captured images is 2448 × 2048. In total, we captured images of 50 different scenes, and as mentioned before, three groups of representative experimental results are presented in the manuscript. Due to the nature of spatial sampling, the image captured by our designed imager is a mosaic image containing all the required information in three wavebands and three polarization angles. The mosaic images of the three selected representative fruit targets are shown in Figure 7(a1,b1,c1). The demosaicing method [11] is used to reconstruct color images at three polarization angles: 0°, 45°, and 90°. Figure 7(a2–a4) in the first row shows the reconstruction results of an apple; similarly, the reconstruction results of a lemon and three oranges are shown in Figure 7(b2–b4) and Figure 7(c2–c4), respectively.

4.2. Objective Evaluation Results

To objectively evaluate the quality of highlight removal results, three objective evaluation indexes: average gradient ( A G ), angular second moment ( A S M ) and inverse difference moment ( I D M ) [50] are chosen to quantitatively analyze the specular highlight removal results.
The AG measures the gradient amplitude in the horizontal and vertical directions of the image, which reflects the detail contrast and texture variation in the highlight removal results. A larger AG value means richer image detail information. The expression of AG is:
$$AG = \frac{1}{(M-1)(N-1)} \sum_{i=1}^{M-1} \sum_{j=1}^{N-1} \sqrt{\frac{\left(\frac{\partial I(i,j)}{\partial i}\right)^2 + \left(\frac{\partial I(i,j)}{\partial j}\right)^2}{2}}$$
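A direct NumPy sketch of the AG formula, approximating the partial derivatives with forward differences over the first (M−1) × (N−1) pixels, might read as follows; the function name is illustrative.

```python
import numpy as np

def average_gradient(I):
    """Average gradient (AG) of a grayscale image, approximating the
    partial derivatives with forward differences over the first
    (M-1) x (N-1) pixels."""
    gi = I[1:, :-1] - I[:-1, :-1]   # derivative along i (rows)
    gj = I[:-1, 1:] - I[:-1, :-1]   # derivative along j (columns)
    return float(np.mean(np.sqrt((gi**2 + gj**2) / 2.0)))
```

A constant image gives AG = 0, while a unit-slope ramp gives AG = 1/√2, matching the formula term by term.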
The ASM is the sum of the squares of the elements of the gray-level co-occurrence matrix, which indicates the uniformity of the gray-value distribution and the fineness of texture in the highlight removal results. A larger ASM value means richer image texture information and a more uniform local distribution, indicating a better highlight removal effect. The calculation formula of ASM is:
$$ASM = \sum_i \sum_j P(i,j)^2$$
The IDM reflects the regularity of texture in the highlight removal results. A smaller IDM value means richer image texture, more complex structure, and higher retention of detail information. The expression of IDM is:
$$IDM = \sum_i \sum_j \frac{1}{1 + (i-j)^2} P(i,j)$$
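For illustration, ASM and IDM can be computed from a simple gray-level co-occurrence matrix built over a single horizontal neighbor offset; the quantization into 8 gray levels and the single offset are simplifying assumptions (libraries such as scikit-image provide full GLCM implementations).

```python
import numpy as np

def glcm(I, levels=8):
    """Normalized gray-level co-occurrence matrix P for the horizontal
    neighbor offset (one pixel to the right); I is assumed in [0, 1]."""
    q = np.clip((I * levels).astype(int), 0, levels - 1)
    P = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        P[a, b] += 1.0
    return P / P.sum()

def asm(P):
    """Angular second moment: sum of squared GLCM entries."""
    return float(np.sum(P**2))

def idm(P):
    """Inverse difference moment of the GLCM."""
    i, j = np.indices(P.shape)
    return float(np.sum(P / (1.0 + (i - j)**2)))
```

A perfectly uniform image concentrates all co-occurrences in one GLCM cell (ASM = IDM = 1), while strong alternating texture spreads them across off-diagonal cells, lowering both values.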
The objective evaluation results of the three scenarios are shown in Table 1, Table 2 and Table 3, respectively. The best results are shown in bold. The average values and standard deviation of quantitative evaluation results are listed in Table 4.

4.3. Specular Highlight Removal Results

We conduct experiments on three scenarios to verify the effectiveness of our proposed method. Six existing algorithms (Mallick [18], Shen [19], Shen [21], Akashi [23], Yamamoto [49] and Fu [26]) are compared with our algorithm for visual evaluation. It should be noted that for these compared algorithms, the 0° polarization image is selected as the input image for subsequent highlight removal. The specular highlight removal results are shown in Figure 8, Figure 9 and Figure 10.

4.4. Quality Inspection Results

To further verify the effectiveness of our proposed specular highlight removal algorithm, we conduct fruit quality inspection based on the specular highlight removal results obtained in Section 4.3. Firstly, we choose a relatively simple method based on edge detection [51] for fruit damage detection. The detection results of the three targets are shown in Figure 11, Figure 12 and Figure 13.
Secondly, fruit quality inspection can be realized by calculating the proportion (%) of the damaged area to the total image area [52]. The standard is: if the proportion of the damaged area to the total image area is less than 2%, the fruit target is graded “Good”; otherwise, it is graded “Damage”. The quality inspection results are shown in Table 5, Table 6 and Table 7.
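This grading rule is straightforward to implement; the sketch below assumes a binary damage mask produced by the edge-detection step and applies the 2% threshold described above. The function name and mask representation are illustrative assumptions.

```python
import numpy as np

def grade_fruit(damage_mask, threshold=0.02):
    """Grade a fruit image from a binary damage mask: 'Good' if the
    damaged area covers less than 2% of the total image area,
    otherwise 'Damage' (threshold per the criterion above)."""
    proportion = float(damage_mask.sum()) / damage_mask.size
    label = "Good" if proportion < threshold else "Damage"
    return label, proportion
```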

5. Discussion

In this study, we propose a specular highlight removal method for the quality inspection of fruits. The effectiveness of the proposed method is verified through (1) objective evaluation indexes (AG, ASM, IDM), (2) visual effects and (3) fruit quality inspection results. A detailed discussion of the experimental results is presented in this section.
The quantitative evaluation results are shown in Table 1, Table 2 and Table 3. The three objective evaluation indexes mainly judge how well texture details are preserved, so in Table 1, the results of Shen [21] are the worst since serious data loss occurs after highlight removal. The results of the other compared methods are not ideal since they all fail to remove the specular highlight completely. Our proposed method separates the highlight more accurately and then performs information compensation, so our results are the best. The results of Mallick are the worst in Table 2 because effective color space conversion cannot be performed in the specular highlight areas. The results of Shen [19] are the worst in Table 3, which is also due to the information loss caused by specular highlight removal. In conclusion, our proposed algorithm outperforms the other six compared algorithms, and its objective evaluation results are the best in Table 1, Table 2 and Table 3.
For the first fruit target (an apple), the reconstructed polarization images and the highlight removal results of our proposed algorithm and the six compared algorithms are shown in Figure 8. It can be seen clearly that Shen's [19] method suffers serious information loss after removing the specular highlight. The removal results of Mallick, Shen [21], Akashi, Yamamoto [49] and Fu have different degrees of highlight residue. Besides, a small amount of data is lost in Akashi's, Yamamoto's and Fu's results. In summary, the six compared methods cannot effectively remove the specular highlight, and serious color distortion or data voids occur. In contrast, our proposed method has a much better highlight removal effect and higher reliability. The reason is that the proposed algorithm is based on constraints derived from physical feature analysis, and the missing details are compensated based on spatial color consistency regularization after the specular highlight removal. As shown in Figure 8(b7), our removal result has a clearer texture on the target surface and better color fidelity, while the edge detail information is well preserved. The visual effect of our method is better than that of the other algorithms.
For the 2nd scene (a lemon), the specular highlight has a dispersed distribution. The highlight removal results are shown in Figure 9. Highlight residue exists in both Mallick's and Shen's [21] results, as shown in Figure 9(b1,b3), respectively. The specular highlights are completely removed in the results of Shen [19], Akashi, Yamamoto [49] and Fu, but it can be seen clearly that data loss degrades the image quality. In contrast, the input of our algorithm is multidimensional information including intensity, polarization and spectral information, and the richness of the input information gives the proposed method higher accuracy in specular highlight detection, removal and the subsequent compensation of lost information, so our proposed method has a relatively better visual effect. Although a very small amount of specular highlight remains, the original color, texture and other detailed information of the target surface is well retained.
For the 3rd scene (oranges), the surface is a greasy biological epidermis with a single surface color and simple texture, and the specular highlight area is large and dispersed in the target image. The highlight removal results are shown in Figure 10. Since the color space conversion cannot be effectively performed in the specular highlight area, the highlight in Mallick's result cannot be accurately separated, and data voids also exist. The result of Shen [19] contains a large area of data voids. The same problems exist in the results of Shen [21], Akashi [23], Yamamoto [49] and Fu [26]: blurred and distorted edges, and data loss. In the results of the first six methods, most of the information in the highlight area is lost and a black circle appears in each removal result, which will certainly affect the subsequent quality inspection results. In comparison, our proposed method has the best visual effect, as shown in Figure 10(b7). Similarly, the combination of multidimensional information input and the compensation of missing details guarantees that our method has better visual performance. The specular highlight is removed thoroughly, while the original color, texture and other detailed information of the target surface is efficiently retained.
Figure 11, Figure 12 and Figure 13 show the damage detection results of the three scenarios. It can be seen that the highlight removal results of the other compared algorithms are detected as damaged, since these algorithms fail to remove the specular highlight accurately and completely. Wrong damage detection results lead to erroneous quality inspection results. By contrast, our damage detection results, shown in Figure 11(7), Figure 12(7) and Figure 13(7), are accurate, and the quality inspection results in Table 5, Table 6 and Table 7 are correct as well. This further illustrates the effectiveness of our proposed specular highlight removal algorithm and proves that our proposed method is of great importance for improving the accuracy of fruit detection.
The running times of the compared algorithms and our proposed method are shown in Table 8. Our proposed method is not the fastest; there are three reasons for the large gap in running time between Shen's two methods, Mallick's method and ours. Firstly, the computational cost of our proposed algorithm is relatively high, since it includes three procedures: the detection of specular highlight based on the joint multi-band-polarization characteristic vector constraint; the removal of specular highlight based on the proposed Max-Min multi-band-polarization differencing scheme combined with an ergodic least-squares separation; and the compensation of missing details based on chromaticity consistency regularization. Secondly, the image we acquire in real time is a mosaic image containing all the required information in three wavebands and three polarization angles, so demosaicing is required to reconstruct the information, which takes additional time. Last but not least, the input becomes multiple color polarization images after demosaicing, so the processing time is inevitably longer than that of the first three methods, which are all single-image-input methods. However, the accuracy of our algorithm is much higher than that of these three algorithms. Our algorithm is currently implemented in MATLAB, and it can be accelerated in the future by porting it to a platform such as an FPGA.

6. Conclusions

In this paper, we propose a specular highlight removal algorithm based on multi-band polarization imaging for the quality inspection of fresh fruits. In the proposed algorithm, we first design a new multi-band polarization imager to realize real-time image acquisition. Then, we propose a specular highlight removal method to accomplish high-precision highlight removal. Experimental results demonstrate that the proposed method is highly efficient and robust, and it can effectively remove the specular highlight of fruits with smooth skin and little texture. The visual effect, image quality and objective evaluation indices are all better than those of existing algorithms. In addition, our proposed algorithm can be directly integrated into existing quality inspection equipment, which is of great importance for improving its performance.

Author Contributions

Conceptualization, J.H. and Y.Z.; methodology, J.H. and Q.P.; software, J.H. and Q.P.; validation, Y.Z.; formal analysis, J.H. and Q.P.; investigation, J.H. and Q.P.; resources, J.H.; data curation, J.H.; writing—original draft preparation, J.H.; writing—review and editing, J.H. and Y.Z.; visualization, J.H. and Y.Z.; supervision, Y.Z.; project administration, Y.Z.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No.61771391), the Key R&D of Shaanxi Province (2020ZDLGY07-11), Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20170815162956949 and JCYJ20180306171146740), and the Fundamental Research Funds for the Central Universities.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editors and anonymous reviewers for their valuable comments on this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Srivastava, S.; Sadistap, S. Non-destructive sensing methods for quality assessment of on-tree fruits: A review. J. Food Meas. Charact. 2018, 12, 497–526. [Google Scholar] [CrossRef]
  2. Qin, J.; Chao, K.; Kim, M.S.; Lu, R.; Burks, T.F. Hyperspectral and multispectral imaging for evaluating food safety and quality. J. Food Eng. 2013, 118, 157–171. [Google Scholar] [CrossRef]
  3. Sanaeifar, A.; Mohtasebi, S.S.; Ghasemi-Varnamkhasti, M.; Ahmadi, H. Application of MOS based electronic nose for the prediction of banana quality properties. Measurement 2016, 82, 105–114. [Google Scholar] [CrossRef]
  4. Maniwara, P.; Nakano, K.; Ohashi, S.; Boonyakiat, D.; Seehanam, P.; Theanjumpol, P.; Poonlarp, P. Evaluation of NIRS as non-destructive test to evaluate quality traits of purple passion fruit. Sci. Hortic. 2019, 257, 108712. [Google Scholar] [CrossRef]
  5. Vanakovarayan, S.; Prasanna, S.; Thulasidass, S.; Mathavan, V. Non-Destructive Classification of Fruits by Using Machine Learning Techniques. In Proceedings of the International Conference on System, Computation, Automation and Networking (ICSCAN), Pondicherry, India, 3–4 July 2021; pp. 1–5. [Google Scholar]
  6. Mohapatra, A.; Shanmugasundaram, S.; Malmathanraj, R. Grading of ripening stages of red banana using dielectric properties changes and image processing approach. Comput. Electron. Agric. 2017, 31, 100–110. [Google Scholar] [CrossRef]
  7. Li, Z.; Cui, G.; Chang, S.; Ning, X.; Wang, L. Application of computer vision technology in agriculture. J. Agric. Mech. Res. 2009, 31, 228–232. [Google Scholar]
  8. Artusi, A.; Banterle, F.; Chetverikov, D. A Survey of Specularity Removal Methods. Comput. Graph. Forum 2011, 30, 2208–2230. [Google Scholar] [CrossRef]
  9. Martinsen, P.; Schaare, P. Measuring soluble solids distribution in kiwifruit using near-infrared imaging spectroscopy. Postharvest Biol. Tec. 1998, 14, 271–281. [Google Scholar] [CrossRef]
  10. Nguyen-Do-Trong, N.; Keresztes, J.C.; De Ketelaere, B.; Saeys, W. Cross-polarised VNIR hyperspectral reflectance imaging system for agrifood products. Biosyst. Eng. 2016, 151, 152–157. [Google Scholar] [CrossRef]
  11. Wen, S.; Zheng, Y.; Lu, F.; Zhao, Q. Convolutional demosaicing network for joint chromatic and polarimetric imagery. Opt. Lett. 2019, 44, 5646–5649. [Google Scholar] [CrossRef]
  12. Shan, W.; Xu, C.; Feng, B. Image Highlight Removal based on Double Edge-preserving Filter. In Proceedings of the IEEE 5th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 3–5 July 2020. [Google Scholar]
  13. Shafer, S.A. Using color to separate reflection components. Color Res. Appl. 1985, 10, 210–218. [Google Scholar] [CrossRef] [Green Version]
  14. Attard, L.; Debono, C.J.; Valentino, G.; Castro, M.D. Specular Highlights Detection Using a U-Net Based Deep Learning Architecture. In Proceedings of the 2020 Fourth International Conference on Multimedia Computing, Networking and Applications (MCNA), Valencia, Spain, 19–22 October 2020; pp. 4–9. [Google Scholar]
  15. Bajcsy, R.; Lee, S.; Leonardis, A. Detection of diffuse and specular interface reflections and inter-reflections by color image segmentation. Int. J. Comput. Vis. 1996, 17, 241–272. [Google Scholar] [CrossRef] [Green Version]
  16. Tan, R.T.; Ikeuchi, K. Separating reflection components of textured surfaces using a single image. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 178–193. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Yang, Q.; Wang, S.; Ahuja, N. Real-time specular highlight removal using bilateral filtering. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010. [Google Scholar]
  18. Mallick, S.P.; Zickler, T.; Belhumeur, P.; Kriegman, D. Dichromatic separation: Specularity removal and editing. In Proceedings of the ACM SIGGRAPH 2006 Sketches, Boston, MA, USA, 30 July–3 August 2006; p. 166. [Google Scholar]
  19. Shen, H.; Zhang, H.; Shao, S.; Xin, J. Chromaticity-based separation of reflection components in a single image. Pattern Recognit. 2008, 41, 2461–2469. [Google Scholar] [CrossRef]
  20. Yoon, K.J.; Choi, Y.; Kweon, I.S. Fast Separation of Reflection Components using a Specularity-Invariant Image Representation. In Proceedings of the IEEE International Conference on Image Processing, Atlanta, GA, USA, 8–11 October 2006; pp. 973–976. [Google Scholar]
  21. Shen, H.L.; Zheng, Z.H. Real-time highlight removal using intensity ratio. Appl. Opt. 2013, 52, 4483. [Google Scholar] [CrossRef] [Green Version]
  22. Kim, H.; Jin, H.; Hadap, S.; Kweon, I. Specular reflection separation using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1460–1467. [Google Scholar]
  23. Akashi, Y.; Okatani, T. Separation of reflection components by sparse non-negative matrix factorization. Comput. Vis. Image Underst. 2016, 146, 77–85. [Google Scholar] [CrossRef] [Green Version]
  24. Suo, J.; An, D.; Ji, X.; Wang, H.; Dai, Q. Fast and high quality highlight removal from a single image. IEEE Trans. Image Process. 2016, 25, 5441–5454. [Google Scholar] [CrossRef] [Green Version]
  25. Ren, W.; Tian, J.; Tang, Y. Specular reflection separation with color-lines constraint. IEEE Trans. Image Process. 2017, 26, 2327–2337. [Google Scholar] [CrossRef]
  26. Fu, G.; Zhang, Q.; Song, C.; Lin, Q.; Xiao, C. Specular highlight removal for real world images. Comput. Graph. Forum 2019, 38, 253–263. [Google Scholar] [CrossRef]
  27. Boyer, J.; Keresztes, J.C.; Saeys, W.; Koshel, J. An automated imaging BRDF polarimeter for fruit quality inspection. In Proceedings of the Novel Optical Systems Design and Optimization XIX, San Diego, CA, USA, 28 August 2016; pp. 82–90. [Google Scholar]
  28. Wen, S.; Zheng, Y.; Lu, F. Polarization Guided Specular Reflection Separation. IEEE Trans. Image Process. 2021, 30, 7280–7291. [Google Scholar] [CrossRef]
  29. Jian, S.; Yue, D.; Hao, S.; Yu, S.X. Learning non-lambertian object intrinsics across shapenet categories. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1685–1694. [Google Scholar]
  30. Yi, R.; Tan, P.; Lin, S. Leveraging multiview image sets for unsupervised intrinsic image decomposition and highlight separation. In Proceedings of the Association for the Advance of Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 12685–12692. [Google Scholar]
  31. Nayar, S.K.; Fang, X.S.; Boult, T. Separation of reflection components using color and polarization. Int. J. Comput. Vis. 1997, 21, 163–186. [Google Scholar] [CrossRef]
  32. Nayar, S.K.; Fang, X.S.; Boult, T. Fast separation of direct and global components of a scene using high frequency illumination. ACM Trans. Graph. 2006, 25, 935–944. [Google Scholar] [CrossRef]
  33. Umeyama, S.; Godin, G. Separation of diffuse and specular components of surface reflection by use of polarization and statistical analysis of images. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 639–647. [Google Scholar] [CrossRef] [PubMed]
  34. Fan, W.; Ainouz, S.; Petitjean, C.; Bensrhair, A. Specularity removal: A global energy minimization approach based on polariza tion imaging. Comput. Vis. Image Underst. 2017, 158, 31–39. [Google Scholar]
  35. Yang, Y.; Wang, L.; Huang, M.; Zhu, Q.; Wang, R. Polarization imaging based bruise detection of nectarine by using ResNet-18 and ghost bottleneck. Postharvest Biol. Tec. 2022, 189, 111916. [Google Scholar] [CrossRef]
  36. Lin, S.; Li, Y.; Kang, S.; Tong, X.; Shum, H. Diffuse specular separation and depth recovery from image sequences. In Proceedings of the European Conference on Computer Vision, Copenhagen, Denmark, 28–31 May 2002. [Google Scholar]
  37. Lin, S.; Shum, H.Y. Separation of diffuse and specular reflection in color images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001. [Google Scholar]
  38. Guo, X.; Cao, X.; Ma, Y. Robust separation of reflection from multiple images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2187–2194. [Google Scholar]
  39. Nguyen-Do-Trong, N.; Dusabumuremyi, J.C.; Saeys, W. Cross-polarized VNIR hyperspectral reflectance imaging for non-destructive quality evaluation of dried banana slices, drying process monitoring and control. J. Food Eng. 2018, 238, 85–94. [Google Scholar] [CrossRef]
  40. Hao, J.; Zhao, Y.; Liu, W.; Kong, S.G.; Liu, G. A Micro-Polarizer Array Configuration Design Method for Division of Focal Plane Imaging Polarimeter. IEEE Sens. J. 2020, 21, 1. [Google Scholar] [CrossRef]
  41. Alenin, A.S.; Vaughn, I.J.; Tyo, J.S. Optimal bandwidth micropolarizer arrays. Opt. Lett. 2017, 42, 458. [Google Scholar] [CrossRef] [Green Version]
  42. Bai, C.; Li, J.; Lin, Z.; Yu, J. Automatic design of color filter arrays in the frequency domain. IEEE Trans. Image Process. 2016, 25, 1793–1807. [Google Scholar] [CrossRef]
  43. Zhao, X.; Lu, X.; Abubakar, A.; Bermak, A. Novel micro-polarizer array patterns for CMOS polarization image sensors. In Proceedings of the 2016 5th International Conference on Electronic Devices, Systems and Applications (ICEDSA), Ras Al Khaimah, United Arab Emirates, 6–8 December 2016; pp. 4994–5007. [Google Scholar]
  44. Bayer, B.E. Color Imaging Array. U.S. Patent 3,971,065, 20 July 1976. [Google Scholar]
  45. Shurcliff, W.A. Polarized Light: Production and Use; Harvard U. P.: Cambridge, MA, USA, 1962. [Google Scholar]
  46. Zhao, Y.; Peng, Q.; Xue, J.; Kong, S.G. Specular reflection removal using local structural similarity and chromaticity consistency. In Proceedings of the IEEE International Conference on Image Processing, Quebec City, QC, Canada, 27–30 September 2015. [Google Scholar]
  47. Afonso, M.V.; Bioucas-Dias, J.M.; Figueiredo, M.A. Fast Image Recovery Using Variable Splitting and Constrained Optimization. IEEE Trans. Image Process. 2010, 19, 2345–2356. [Google Scholar] [CrossRef] [Green Version]
  48. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005. [Google Scholar]
  49. Yamamoto, T.; Kitajima, T.; Kawauchi, R. Efficient improvement method for separation of reflection components based on an energy function. In Proceedings of the IEEE International Conference on Image Processing, Beijing, China, 17–20 September 2017. [Google Scholar]
  50. Haralick, R.M. Textural Features for Image Classification. IEEE T. Syst. Man Cybern. 1973, 3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  51. Ali, M.; Thai, K.W. Automated fruit grading system. In Proceedings of the IEEE International Symposium in Robotics and Manufacturing Automation, Kuala Lumpur, Malaysia, 19–21 September 2017; pp. 1–6. [Google Scholar]
  52. Ji, Y.; Zhao, Q.; Bi, S.; Shen, T. Apple Grading Method Based on Features of Color and Defect. In Proceedings of the 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 5364–5368. [Google Scholar]
Figure 1. The schematic diagram of the automatic fruit quality inspection system where our proposed specular highlight removal algorithm can be integrated.
Figure 2. Fruits with smooth skin and less texture.
Figure 3. Architecture diagram of our proposed algorithm, including image acquisition and specular highlight removal.
Figure 4. Statistical analysis of spectral and polarization characteristics differences between diffuse reflection and specular highlight components.
Figure 5. The detection result of specular highlight area.
Figure 6. The detection result of specular highlight area.
Figure 7. Experimental data pre-processing: (a1,b1,c1) in the first column are mosaic images captured by the multi-band polarization imager. (a2–a4) are reconstructed color images of the 1st scenario (an apple) in three polarization angles. (b2–b4) and (c2–c4) are reconstructed color images of the 2nd scenario (a lemon) and 3rd scenario (three oranges) in three polarization angles, respectively.
Figure 7. Experimental data pre-processing: (a1,b1,c1) in the first column are mosaic images captured by the multi-band polarization imager. (a2a4) are reconstructed color images of the 1st scenario (an apple) in three polarization angles. (b2b4) and (c2c4) are reconstructed color images of the 2nd scenario (a lemon) and 3rd scenario (three oranges) in three polarization angles, respectively.
Remotesensing 14 03215 g007
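The scenes in Figure 7 are reconstructed at three polarization angles. For reference, from such a triplet of analyzer measurements the linear Stokes parameters and the degree of linear polarization (DoLP), on which polarization-based highlight detection typically relies, can be computed as sketched below. This is a minimal sketch under assumed analyzer angles of 0°, 45°, and 90°; it is not the authors' exact pipeline.

```python
import numpy as np

def stokes_from_triplet(i0, i45, i90):
    """Linear Stokes parameters from intensities at analyzer angles 0/45/90 deg."""
    s0 = i0 + i90          # total intensity
    s1 = i0 - i90          # preference for 0 deg over 90 deg
    s2 = 2.0 * i45 - s0    # preference for 45 deg over 135 deg
    return s0, s1, s2

def dolp(i0, i45, i90, eps=1e-8):
    """Degree of linear polarization in [0, 1]; specular highlights score high."""
    s0, s1, s2 = stokes_from_triplet(i0, i45, i90)
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
```

Unpolarized light gives equal intensities at all three angles and a DoLP near 0, while a strong specular highlight is highly polarized and pushes the DoLP toward 1, which is what makes a polarization cue useful for highlight detection.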
Figure 8. The specular highlight removal results of the 1st scenario (an apple). (a1–a3): reconstructed color images of the target in different polarization angles. (b1–b6) are the highlight removal results of Mallick [18], Shen [19], Shen [21], Akashi [23], Yamamoto [49] and Fu [26], respectively. (b7) is the specular highlight removal result of our proposed method. For ease of observation, an enlarged view of the detail region is shown below each result image.
Figure 9. The specular highlight removal results of the 2nd scenario (a lemon). (a1–a3): reconstructed color images of the target in different polarization angles. (b1–b6) are the highlight removal results of Mallick [18], Shen [19], Shen [21], Akashi [23], Yamamoto [49] and Fu [26], respectively. (b7) is the specular highlight removal result of our proposed method. For ease of observation, an enlarged view of the detail region is shown below each result image.
Figure 10. The specular highlight removal results of the 3rd scenario (oranges). (a1–a3): reconstructed color images of the target in different polarization angles. (b1–b6) are the highlight removal results of Mallick [18], Shen [19], Shen [21], Akashi [23], Yamamoto [49] and Fu [26], respectively. (b7) is the specular highlight removal result of our proposed method. For ease of observation, an enlarged view of the detail region is shown below each result image.
Figure 11. The damage detection results of the 1st scenario (an apple). (1–6) are the damage detection results of Mallick [18], Shen [19], Shen [21], Akashi [23], Yamamoto [49] and Fu [26], respectively. (7) is the damage detection result of our proposed method.
Figure 12. The damage detection results of the 2nd scenario (a lemon). (1–6) are the damage detection results of Mallick [18], Shen [19], Shen [21], Akashi [23], Yamamoto [49] and Fu [26], respectively. (7) is the damage detection result of our proposed method.
Figure 13. The damage detection results of the 3rd scenario (oranges). (1–6) are the damage detection results of Mallick [18], Shen [19], Shen [21], Akashi [23], Yamamoto [49] and Fu [26], respectively. (7) is the damage detection result of our proposed method.
Table 1. The Quantitative Evaluation of Highlight Removal Results of 1st Scenario.

| Metric | Mallick [18] | Shen [19] | Shen [21] | Akashi [23] | Yamamoto [49] | Fu [26] | Proposed |
|---|---|---|---|---|---|---|---|
| AG | 0.2792 | 0.2622 | 0.2047 | 0.2282 | 0.2457 | 0.2523 | 0.3101 |
| ASM | 0.6290 | 0.6001 | 0.5986 | 0.6131 | 0.5869 | 0.6020 | 0.6335 |
| IDM | 0.9953 | 0.9969 | 0.9973 | 0.9958 | 0.9989 | 0.9965 | 0.9941 |
Table 2. The Quantitative Evaluation of Highlight Removal Results of 2nd Scenario.

| Metric | Mallick [18] | Shen [19] | Shen [21] | Akashi [23] | Yamamoto [49] | Fu [26] | Proposed |
|---|---|---|---|---|---|---|---|
| AG | 0.2468 | 0.2515 | 0.2499 | 0.2582 | 0.2601 | 0.2597 | 0.2826 |
| ASM | 0.7002 | 0.7065 | 0.7021 | 0.7030 | 0.7055 | 0.7064 | 0.7093 |
| IDM | 0.9971 | 0.9969 | 0.9963 | 0.9958 | 0.9957 | 0.9956 | 0.9948 |
Table 3. The Quantitative Evaluation of Highlight Removal Results of 3rd Scenario.

| Metric | Mallick [18] | Shen [19] | Shen [21] | Akashi [23] | Yamamoto [49] | Fu [26] | Proposed |
|---|---|---|---|---|---|---|---|
| AG | 0.2452 | 0.2285 | 0.2447 | 0.2382 | 0.2449 | 0.2503 | 0.2714 |
| ASM | 0.7164 | 0.7065 | 0.7119 | 0.7131 | 0.7187 | 0.7174 | 0.7182 |
| IDM | 0.9959 | 0.9979 | 0.9964 | 0.9969 | 0.9966 | 0.9957 | 0.9953 |
Table 4. Average Values and Standard Deviation of Quantitative Evaluation (Average Values/Standard Deviation).

| Metric | Mallick [18] | Shen [19] | Shen [21] | Akashi [23] | Yamamoto [49] | Fu [26] | Proposed |
|---|---|---|---|---|---|---|---|
| AG | 0.2605/0.0249 | 0.2487/0.0222 | 0.2235/0.0209 | 0.2381/0.0245 | 0.2488/0.0267 | 0.2583/0.0278 | 0.2921/0.0194 |
| ASM | 0.6196/0.0373 | 0.6725/0.0368 | 0.6085/0.0357 | 0.6780/0.0341 | 0.6120/0.0387 | 0.6753/0.0402 | 0.6855/0.0358 |
| IDM | 0.9951/0.0010 | 0.9973/0.0012 | 0.9964/0.0009 | 0.9967/0.0010 | 0.9950/0.0012 | 0.9956/0.0014 | 0.9945/0.0009 |
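Tables 1–4 report three standard no-reference measures: the average gradient (AG, a sharpness/detail measure) and two gray-level co-occurrence matrix (GLCM) texture features, angular second moment (ASM) and inverse difference moment (IDM). A minimal NumPy sketch of these measures follows; the quantization level, co-occurrence offset, and exact gradient definition are our assumptions and may differ from the authors' implementation.

```python
import numpy as np

def average_gradient(img):
    """AG: mean magnitude of local gradients; higher means more recovered detail."""
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal finite differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical finite differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def glcm_features(img, levels=8):
    """ASM and IDM from a horizontally adjacent gray-level co-occurrence matrix."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantize a [0,1] image
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()                   # normalize to a joint probability matrix
    i, j = np.indices(p.shape)
    asm = np.sum(p ** 2)                    # ASM: large for uniform textures
    idm = np.sum(p / (1.0 + (i - j) ** 2))  # IDM: large for smooth, homogeneous images
    return asm, idm
```

Under these definitions a perfectly flat image scores AG = 0 and ASM = IDM = 1, which matches the tables' pattern of IDM values near 1 and small AG values on the mostly smooth fruit surfaces.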
Table 5. The Quality Inspection Results of 1st Scenario.

| | Mallick [18] | Shen [19] | Shen [21] | Akashi [23] | Yamamoto [49] | Fu [26] | Proposed |
|---|---|---|---|---|---|---|---|
| Results | Damage | Damage | Damage | Damage | Damage | Damage | Good |
Table 6. The Quality Inspection Results of 2nd Scenario.

| | Mallick [18] | Shen [19] | Shen [21] | Akashi [23] | Yamamoto [49] | Fu [26] | Proposed |
|---|---|---|---|---|---|---|---|
| Results | Damage | Damage | Damage | Good | Good | Damage | Good |
Table 7. The Quality Inspection Results of 3rd Scenario.

| | Mallick [18] | Shen [19] | Shen [21] | Akashi [23] | Yamamoto [49] | Fu [26] | Proposed |
|---|---|---|---|---|---|---|---|
| Results | Damage | Damage | Damage | Damage | Damage | Damage | Good |
Table 8. Running Time of Each Algorithm (Unit: seconds).

| | Mallick [18] | Shen [19] | Shen [21] | Akashi [23] | Yamamoto [49] | Fu [26] | Proposed |
|---|---|---|---|---|---|---|---|
| Running time | 21.31 | 5.74 | 1.18 | 81.14 | 94.57 | 345.03 | 55.67 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Hao, J.; Zhao, Y.; Peng, Q. A Specular Highlight Removal Algorithm for Quality Inspection of Fresh Fruits. Remote Sens. 2022, 14, 3215. https://doi.org/10.3390/rs14133215

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.