Article

Hair Removal Combining Saliency, Shape and Color

National Research Council (CNR), Institute for the Applications of Calculus, 80131 Naples, Italy
Appl. Sci. 2021, 11(1), 447; https://doi.org/10.3390/app11010447
Submission received: 30 November 2020 / Revised: 26 December 2020 / Accepted: 29 December 2020 / Published: 5 January 2021
(This article belongs to the Special Issue Advanced Image Analysis and Processing for Biomedical Applications)

Featured Application

Hair removal is a preliminary and often necessary step in the automatic processing of dermoscopic images since hair can negatively affect or compromise the distinction of a lesion region from the normal surrounding healthy skin. A featured application is skin lesion segmentation.

Abstract

In a computer-aided system for skin cancer diagnosis, hair removal is one of the main challenges to face before applying a process of automatic skin lesion segmentation and classification. In this paper, we propose a straightforward method to detect and remove hair from dermoscopic images. Preliminarily, the regions to consider as candidate hair regions and the border/corner components located on the image frame are automatically detected. Then, the hair regions are determined using information regarding the saliency, shape and image colors. Finally, the detected hair regions are restored by a simple inpainting method. The method is evaluated on a publicly available dataset, comprising 340 images in total, extracted from two commonly used public databases, and on an available specific dataset including 13 images already used by other authors for evaluation and comparison purposes. We also propose a method for qualitative and quantitative evaluation of a hair removal method. The results of the evaluation are promising as the detection of the hair regions is accurate, and the performance results are satisfactory in comparison to other existing hair removal methods.

1. Introduction

In almost every specialist area of medicine, including dermatology, image analysis is transforming diagnostic methods. In particular, computer-aided diagnosis systems for dermoscopic images have proven to be useful tools to significantly improve common dermoscopic diagnostic practice, which is usually characterized by limited accuracy and is mainly based on visual inspection. Indeed, to differentiate melanoma from other pigmented skin lesions, these systems display morphological features not easily perceptible by the naked eye and support the assessment process of the human expert [1,2]. Typically, a computer-aided system is structured in four main consecutive steps: preprocessing, segmentation, feature extraction, and classification, each playing a key role in enabling correct diagnosis [3]. During the preprocessing, the dermoscopic image is subjected to noise removal, image enhancement, color quantization, and artifact removal processes [4,5]. Noise removal and image enhancement techniques are employed to minimize the effects due to different illumination conditions and poor resolution of the acquisition process [6]. Color quantization [7,8,9,10] is a technique for reducing the total number of unique colors in the image, often used as a preprocessing step for many applications that are carried out more efficiently on a numerically smaller color set. For example, color quantization is employed effectively as a preliminary computation phase for skin lesion segmentation [11,12,13,14,15]. Methods for removing artifacts, such as bubbles, hair, shadows and reflections, aim to eliminate their negative effect and disturbance on the diagnostic examination of the area of interest (i.e., the skin lesion) [16].
In particular, if the area of the skin lesion is partially occluded by hair, although such occlusions may not be critical for human investigation, their presence poses major challenges for automatic image analysis methods such as segmentation and classification. Indeed, hair removal (HR) methods and skin lesion segmentation (SLS) methods are highly correlated. Usually, SLS methods can determine the skin lesion region without the need to apply preliminary hair removal since these methods can include, explicitly or implicitly, hair removal operations. However, it is appropriate to consider that (a) in any case, even partial hair removal facilitates and increases the efficiency of the segmentation step; (b) the hair presence can lead to errors in lesion detection in some situations, especially when there is a massive presence of hair (see Figure 1); (c) for diagnostic and therapeutic purposes, HR is helpful at least to visualize the hair-free lesion to the expert.
To address the hair issue, several hair removal methods have been proposed. HR methods usually consist of two steps: (a) the detection of occluding hair and generation of the hair binary mask; (b) the removal of the detected hair. Typically, hair detection is accomplished through object detection methods enucleating thin items, while hair removal is obtained through standard inpainting methods. As reported in [17], at least six main hair removal methods are widely used in the literature [18,19,20,21,22,23].
The method proposed in [18] by Lee et al., also known as Dullrazor, consists of four steps. The hair regions are initially detected through the morphological closing operator on each RGB color channel separately and with three structuring elements having different directions (step 1). To generate the binary mask, a thresholding process is applied to the absolute difference between the original color channel and the image generated by the closing (step 2). The mask pixels undergo a bilinear interpolation between two nearby not-mask pixels (step 3). Finally, to the resulting image, an adaptive median filter is applied (step 4).
The method [19] by Xie et al. also consists of four steps. The hair area is improved using a morphological closing top-hat operator (step 1). The binary image is obtained through a statistical thresholding process (step 2). To extract the hairs, the elongate feature property of connected regions is employed (step 3). To restore the information occluded by the hair, they apply the image inpainting method based on partial differential equation (PDE), which realizes the diffusion of information through the difference between pixels (step 4).
In the method proposed in [20] by Abbas et al., there are three computational steps. In the CIELab uniform color space, the hairs are detected by a derivative of Gaussian (DoG) (step 1). Morphological techniques to link broken hair segments, eliminate small and circular objects, and fill the gaps between lines are applied (step 2). The adopted inpainting method is based on coherence transport (step 3).
The method in [21] by Huang et al. comprises three steps. To the grayscale version of the image, a multiscale curvilinear matched filter is applied (step 1). To detect the hair regions, hysteresis thresholding is employed (step 2). Then, region growing and the linear discriminant analysis (LDA) technique, based on the pixel color information in the CIELab color space, are applied to recover the missing information left by the removed hair (step 3).
The method [22] by Toossi et al. includes four steps. The image is converted to a grayscale image via a principal component analysis (PCA), and the noise is filtered with a Wiener filter (step 1). Hair is detected by using an adaptive canny edge detector (step 2). A refining process with morphological operators to eliminate unwanted objects and obtain a smooth hair mask is then applied (step 3). The inpainting process is carried out by a multi-resolution transport inpainting method based on wavelets (step 4).
As with [20,21], in [23] by Bibiloni et al., hair removal is made up of three steps. The contrast of the luminance of the image is improved with the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm (step 1). The hair is detected using soft color morphology operators in the CIELab color space (step 2). The inpainting phase is based on the arithmetic mean of the modified opening and closing morphological transformations to recover the missing pixels (step 3). The common element of these HR methods and most of the other existing methods, e.g., [24,25], is the employment of morphological operations and, to a minor extent, of information derived from color. On the other hand, although deep learning has been used successfully to solve many difficult computer vision problems, inexplicably, to the best of our knowledge, only two very recent HR methods relying on neural network architecture exist [26,27].
Despite the sufficiently wide variety of existing papers, the problem of hair removal has not yet been solved satisfactorily. The main critical points are the failure to identify hair accurately and undesirable effects such as unremoved thin hair and color alteration.
We address the HR problem using information regarding the saliency, shape, and color of the image objects. These three elements have proved to be extremely useful because each of them captures a fundamental aspect of the problem at hand. Indeed, besides the shape aspects, detectable by mathematical morphology properties, it is also appropriate to perform the hair detection based on information related to the significant image elements, detectable by their saliency and color properties. In the following, we refer to the proposed method as saliency shape color for hair removal, abbreviated as HR-SSC or simply SSC.
As described in Section 2, HR-SSC consists of five steps. The core of the method is step 4, named hair object detection, in which the hair regions are determined. The innovative elements of this step, whose success also depends on the correctness of the results obtained in the three previous steps, are related mainly to how the initial candidate hair components are considered (see Section 2 for more details). In the last step, hair removal is performed using a standard inpainting method.
The method is evaluated and compared extensively with other existing methods through a detailed quantitative and qualitative analysis on two publicly available databases, PH2 [28] and ISIC2016 [29], commonly used in dermoscopic image processing.
The experimental results confirm (a) the effectiveness and utility of employing saliency, shape, and color information for HR; (b) that HR-SSC achieves good quantitative results with an adequate balance and has a competitive and satisfactory performance with respect to other existing HR methods; (c) that the HR-SSC implementation is simple and rather fast since it does not require a large amount of computational power, a high number of parameters, or labeled training images.
Additional contributions of this work are (a) the availability of appropriate datasets to be used for testing and comparing each new method; (b) the proposal of a method for qualitative and quantitative evaluation of an HR method.
The paper is organized as follows: in Section 2, we describe the method HR-SSC, detailing its main steps; in Section 3, we provide a quantitative and qualitative evaluation of experimental results, also highlighting the pros and cons; finally, discussion and conclusions are drawn in Section 4.

2. The Proposed Method

The proposed method, as mentioned above, is based on three elements: the notion of visual saliency, shape, and color. Indeed, since the saliency of an item is the quality by which it stands out from its neighbors [30], its use allows us to enucleate the most relevant subsets and to focus especially on the hair regions. Moreover, since hair regions have a well-defined structure, the shape-oriented operations of mathematical morphology, which “simplify image data, preserving their essential shape characteristics and eliminating irrelevancies” [31], lend themselves well to detecting the hair object. On the other hand, the properties of the color model can prove essential to distinguish between no-hair and hair regions when information related to saliency and shape is not enough to manage ambiguous cases [11].
The method consists of five main steps, as described in the diagram shown in Figure 2. The step “Hair object detection” is the main step and is preceded by three preliminary steps, called “Size reduction”, “Pseudo-Hair detection” and “Border and corner component detection”, aimed respectively at reducing the image size, determining the initial candidate regions to consider as hair regions, called pseudo-hair, and determining the components located on the frame of the image. The “Hair object detection” step is followed by the hair removal and resizing step, called “Inpainting and rescaling”. Specifically, the method can be briefly described as follows.
Step 1. 
Size reduction—The first step is devoted to limiting the computational burden of the successive steps by reducing the size of the input image with a scale factor s equal to the ratio between a fixed value, say Maxdim, and the number of columns. To perform this, we resort to classical and widely used bicubic downsampling, implemented by the Matlab command imresize with the bicubic option and scale factor s. The size reduction step is optional but highly recommended since it significantly limits the computation time.
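As an illustration, the scale factor and the reduced size can be computed as follows. This is a sketch in Python rather than the paper's Matlab; the function name is ours, and rounding the reduced dimensions to the nearest integer is an assumption.

```python
def reduction_size(n_rows, n_cols, maxdim=500):
    """Return the scale factor s = maxdim / n_cols and the reduced image size."""
    s = maxdim / n_cols
    return s, (round(n_rows * s), round(n_cols * s))

# e.g., a PH2 image of 760 x 560 pixels with Maxdim = 500 (the setting of Section 2)
s, new_size = reduction_size(560, 760)
```

With these values, the image is scaled so that its width matches Maxdim while the aspect ratio is preserved.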
Step 2. 
Pseudo-Hair detection—This step is based on the top-hat transformation, i.e., a morphological operator capable of extracting small elements and details from a grayscale image, commonly used for feature extraction, background equalization, and other enhancement operations. There are two types of transformation: the white top-hat transformation, defined as the difference between the original image and its opening by a structuring element, and the black top-hat transformation (or bottom-hat transformation), defined dually as the difference between the closing by a structuring element and the original image [32,33]. Following [19,34], to obtain the binarized version HR initially containing the pseudo-hair components, we apply a bottom-hat filter to the red band R of the RGB image and then the Otsu threshold method [35] via the Matlab command imbinarize. Then, if HR is not empty, the actual hair regions are determined during the successive steps 3–5.
Indeed, the components currently detected in HR (i.e., the so-called pseudo-hair components) can correspond to hair regions but can also correspond to portions of other types of artifacts that survived this preliminary treatment, such as marker ink signs, dark spots belonging to the lesion, colored marker disks [34], and wrongly identified regions. Regions not corresponding to hair are called no-hair regions in the following, and if they exist, they are detected and eliminated in the successive steps. In Figure 3b, some examples of pseudo-hair are shown, where the no-hair regions are approximately indicated by a red arrow.
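To make the binarization of step 2 concrete, the following minimal pure-Python sketch reproduces Otsu's criterion (the method behind Matlab's graythresh/imbinarize): it selects the gray level that maximizes the between-class variance of the histogram of the bottom-hat response. The function name and the histogram-based formulation are our own.

```python
def otsu_threshold(pixels, levels=256):
    """Return the Otsu threshold for a flat sequence of integer gray values."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]              # background (<= t) pixel count
        if w_b == 0:
            continue
        w_f = total - w_b           # foreground (> t) pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mu_b = sum_b / w_b          # background mean
        mu_f = (sum_all - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (mu_b - mu_f) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels of the bottom-hat response brighter than the returned threshold would be assigned to the pseudo-hair foreground HR.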
Step 3. 
Border and corner component detection—The border components are detected based on their saliency and proximity to the image frame by applying the following process, named border detection, already used in [14,15]. The saliency map (SM) with well-defined boundaries of salient objects is computed by the method proposed in [36]. Successively, SM is enhanced by increasing the contrast in the following way: the values of the input intensity image are mapped to new values obtained by saturating the bottom 1% and the top 1% of all pixel values, via the Matlab command imadjust. Then, the saliency map SM is binarized by assigning to the foreground all pixels with a saliency value greater than the average saliency value. The connected components of SM including pixels of the image frame are considered as border components and stored in the bidimensional array named BC.
Moreover, the image corner components, usually much darker than the image center, are detected following the same procedure proposed in [34]. Specifically, the representation of the input image in the HSV color space is examined; the channel V undergoes a thresholding process by a predefined threshold value δ. Then, the components of the thresholded V covering most of the frame or the corner area of the image are considered as image corner components and are stored in the bidimensional array, named CC (see Figure 3c).
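The mean-threshold binarization and the frame test of step 3 can be sketched as follows. This is a pure-Python formulation of our own (the actual method operates on the saliency map of [36] and stores the result in the array BC): pixels above the mean form the foreground, and a flood fill seeded at frame pixels collects the components touching the image frame.

```python
from collections import deque

def border_components(sm):
    """Binarize a saliency map by its mean value, then return the set of
    (row, col) foreground pixels belonging to components touching the frame."""
    rows, cols = len(sm), len(sm[0])
    mean = sum(map(sum, sm)) / (rows * cols)
    fg = {(r, c) for r in range(rows) for c in range(cols) if sm[r][c] > mean}
    # Seed the search with foreground pixels lying on the image frame.
    seeds = [(r, c) for (r, c) in fg
             if r in (0, rows - 1) or c in (0, cols - 1)]
    bc, queue = set(seeds), deque(seeds)
    while queue:  # flood fill restricted to the foreground
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (nr, nc) in fg and (nr, nc) not in bc:
                bc.add((nr, nc))
                queue.append((nr, nc))
    return bc
```

Salient components in the image interior (e.g., the lesion itself) are left out of the returned set, as intended for BC.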
Step 4. 
Hair object detection—Preliminarily, the no-hair regions are detected and stored in the bidimensional array named NR, as follows. NR is initially computed as the product S.*V and binarized by the Otsu method. Then, the salient pixels not belonging to HR and BC are included in NR, the pseudo-hair regions currently detected in HR are removed from NR, and small holes in NR are filled. Successively, if NR has a significant extension (area), the detected no-hair regions are removed from HR. If the current HR is not empty, border components are suitably considered and possibly removed from HR, taking into account also the gray version Ig of the input image and a fixed gray value, say Δ, indicating a minimum reference gray value for the hair component. Finally, corner components and any remaining components corresponding to colored disks are eliminated from HR.
At the end of this step, the regions in HR are located in correspondence with the detected hair objects and form a binary hair-mask on which to perform the next reconstruction step. See the Matlab pseudocode given below for more details. In Figure 3d, examples of detected hair are given.
Step 4. Hair object detection
% No-hair regions detection
NR = S.*V; % initial no-hair regions construction and storing in NR
NR = imbinarize(NR, graythresh(NR)); % Otsu binarization
NR(SM > 0 & HR == 0 & BC == 0) = 1; % insertion in NR of salient pixels not belonging to HR and BC
NR(HR > 0) = 0; % pseudo-hair elimination from NR
NR = imfill(NR, 'holes'); % holes filling
% end of no-hair regions detection
if (area(NR) is significant)
    HR(NR > 0) = 0; % no-hair regions removal from HR
    HR = imfill(HR, 'holes'); % holes filling
    if (HR is not empty)
        if (BC is not empty) % border and corner components management
            NB = BC; % copy of BC
            NB(NR > 0 & Ig > Δ) = 0; % generation of NB without no-hair regions and regions too light to be hair
            HR(NB > 0 & SM > 0) = 0; % elimination of salient pixels of NB from HR
            CR = (BC > 0 & NB > 0); % regions common to BC and NB
            HR(CR > 0) = 0; % elimination from HR of the regions common to BC and NB
            CR(CR > 0 & CC > 0) = 0; % corner regions elimination from CR
            BN = border_detection(CR); % border components detection in CR (as done in step 3)
            if (BN is not empty)
                HR(BN > 0 & Ig > Δ) = 0; % elimination of clear border regions of CR from HR
            end
        end
        HR((NR > 0 & HR > 0) | (BC > 0 & Ig > Δ)) = 0; % colored disk and clear border component removal
    end
end
Step 5. 
Inpainting and rescaling—If HR is empty, the image is considered hairless; otherwise, the reconstruction process is applied. After a preliminary enlargement of HR by n steps of dilation, the inpainting is carried out by calling the Matlab function regionfill on each image channel separately, using HR as the hair mask, and then joining the resulting channels. If the size reduction step has been performed, the image is rescaled to its original size using the Matlab function imresize with the bicubic option. In Figure 3e, examples of the resulting image are given.
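The per-channel reconstruction can be emulated with a simple iterative neighbor-averaging fill. This Python sketch is a crude stand-in for Matlab's regionfill (which interpolates smoothly inward from the mask boundary); the function name, the initialization choice, and the fixed iteration count are our own assumptions.

```python
def inpaint_channel(img, mask, n_iter=50):
    """Fill masked pixels of one channel by repeatedly averaging 4-neighbors,
    a rough approximation of smooth interpolation from the mask boundary."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    # Initialize masked pixels with the mean of the unmasked ones.
    known = [out[r][c] for r in range(rows) for c in range(cols) if not mask[r][c]]
    init = sum(known) / len(known)
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                out[r][c] = init
    for _ in range(n_iter):  # Jacobi-style relaxation over the masked pixels
        for r in range(rows):
            for c in range(cols):
                if mask[r][c]:
                    nbrs = [out[nr][nc]
                            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                            if 0 <= nr < rows and 0 <= nc < cols]
                    out[r][c] = sum(nbrs) / len(nbrs)
    return out
```

Running it once per color channel and recombining the channels mirrors the per-channel structure of the step described above.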
Different parameter settings have been explored to achieve a trade-off between quality and performance. The best parameter values resulting from this analysis are Maxdim = 500, δ = 0.4, Δ = 100, n = 3. The experimental results shown in this paper are obtained with this setting. The method is implemented in Matlab and run on an Intel® Core™ i7-6600U CPU at 2.60 GHz with 8 GB of RAM and a 64-bit Windows 10 operating system.

3. Experimental Results

This section describes the image datasets and the evaluation of the experimental results in qualitative and quantitative terms. In fact, evaluating the performance of the proposed method and comparing it with other methods are very hard tasks due to the lack of publicly available source code for the existing methods, the limited literature, and the different evaluation methodologies, often employing not well-specified datasets and different quality measures. To overcome these critical issues, (a) we select some adequate datasets (see Section 3.1); (b) we perform qualitative evaluations/comparisons from different points of view (see Section 3.2); (c) following [17,37], we perform quantitative evaluations and comparisons by generating synthetic hair, in a controlled way, on skin lesion images originally hair-free (see Section 3.3). Note that the controlled hair introduction modality offers the advantage that the added hair regions are known and constitute a reference image, i.e., a ground truth. Accordingly, since the quantitative evaluation of the performance of an HR method requires a reference image, this modality is the only way to evaluate the results by comparing the added hair regions in the reference image (ground truth) with the detected hair regions in the binary mask.

3.1. Datasets

We test our method by considering images available on two publicly available databases of dermoscopic images: PH2 [28] and ISIC2016 [29]. PH2 is a dermoscopic image database acquired at the Dermatology Service of Hospital Pedro Hispano to support comparative studies on segmentation/classification methods. This database includes clinical/histological diagnosis, medical annotation, and the evaluation of many dermoscopic criteria. It provides 200 dermoscopic RGB images and the corresponding ground truth, including 80 atypical nevi, 80 common nevi, and 40 melanomas. All the images are 8-bit RGB and have resolution 760 × 560 pixels. ISIC2016 is one of the largest databases of dermoscopic images of skin lesions with quality control held by the International Symposium on Biomedical Imaging (ISBI) to improve melanoma diagnosis. It includes images representative of both benign and malignant skin lesions. For each image, the ground truth is also available. ISIC2016 consists of 397 (75 melanomas) and 900 (173 melanomas) annotated images as testing and training data, respectively. The images are 8-bit RGB and have a size ranging from 542 × 718 to 2848 × 4288. PH2 and ISIC2016 databases contain numerous images with complex backgrounds and complicated skin conditions with the presence of hair and other artifacts/aberrations.
Since in PH2 and ISIC2016 hairless and hairy images are not distinguished, it is not possible to evaluate the performance of an HR method on each total dataset, and it is necessary to separate them preliminarily. Hence, from PH2 and ISIC2016 we extract two datasets, denoted as H-data and NH-data, each constituted by 170 images, which respectively contain images with evident hair and images without hair. These images are selected randomly and subdivided into the two datasets according to a human visual inspection. These datasets, totally comprising 340 images, are available at the Github link indicated in the section Data Availability Statement.
To accurately and comprehensively validate the goodness of hair detection and, at the same time, to make a deeper comparison with the published results of the existing methods [18,19,20,21,22,23], which in the following we indicate with the name of the first author (i.e., Lee, Xie, Huang, Abbas, Toossi, Bibiloni), we also consider a specific dataset available in [37]. This dataset, here called NH13-data and shown in Figure 4, consists of 13 images without hair. We also consider the hairy images obtained starting from NH13-data by the GAN method [38] and the HairSim method [39], which, starting from a hair-free dermoscopic image, provide a hair-occluded image and the corresponding binary hair mask. These datasets are available in [37], are denoted as H13GAN-data and H13Sim-data, and are shown in Figure 5 and Figure 6, respectively.
Moreover, to validate the performance of the method on a larger dataset, we simulate the presence of hair on NH-data using the HairSim method, generating the HSim-data set. Note that for HSim-data and H-data, only the methods Lee, Xie, and HR-SSC are considered. The choice of these methods is based on the fact that, first, they are the only methods, including the deep learning class of methods, with available source code; second, they are widely used in the literature; and third, they have higher quality measures in [37]. We consider all images of H-data and HSim-data but, given the high number of images, we limit ourselves to showing the results for a sample of 13 images from each dataset, here named sH-data and sHSim-data, respectively. To favor the visual comparison of the results, the sNH-data from which sHSim-data are generated are shown in Figure 7, while in Figure 8 and Figure 9, sHSim-data and sH-data, together with the corresponding added hair, are respectively shown. Additionally, all of these datasets are available at the above Github link to support the possibility of comparison by other authors.

3.2. Qualitative Evaluation

To perform a qualitative evaluation, we check, for each method under consideration, if the set of images selected as hairy images by a method is (almost) equal to the corresponding original dataset, and we verify if the appearance of the inpainted image is good. For this purpose, we consider all the images (with and without hair) belonging to both H-data and NH-data, and we perform the following evaluations.
(a)
We check whether, in most cases, the hair determination is successful or not, i.e., that the images are re-confirmed as belonging to H-data and NH-data, respectively. This allows us to determine for the different considered methods how much the resulting sets belonging to H-data or NH-data are equal to the initial ones.
(b)
We verify if the appearance of the hairless resulting image is, according to human subjective judgment, compatible with a hairless and good quality version of it. Moreover, we test whether the presence of the hair can preclude or alter a subsequent step of skin lesion segmentation.
(c)
We visually compare the obtained results by the proposed method and those directly available in [37] or by the available implementation of Lee and Xie on H13GAN-data, H13Sim-data, HSim-data, and H-data to determine their overall performance.
In regard to the assessment of point (a), we find that the classification error is within 25%, 65%, and 10% for Lee, Xie, and HR-SSC, respectively. As concerns the assessment of point (b), the visual inspection of the results shows that the resulting perceptual quality is in accordance with the percentages obtained for point (a). To verify the effectiveness of the hair removal methods, a recent SLS method [14,15] is applied. The segmentation results show that hair removal applied before the segmentation process yields an improvement of about 70%, 20%, and 90% for Lee, Xie, and HR-SSC, respectively. The results of the visual comparison on the various datasets of point (c) are given in Figure 10 and Figure 11 on H13GAN-data and H13Sim-data, respectively. To give greater visual evidence and to facilitate the comparison, in Figure 12 and Figure 13, the results on sHSim-data and the corresponding final mask are respectively shown. The same holds for Figure 14 and Figure 15, where results on sH-data with the corresponding final mask are shown.
In summary, in relation to the qualitative evaluation, from the visual examination of the resulting images of each method available in [37] and of HR-SSC on H13GAN-data and H13Sim-data (see Figure 10 and Figure 11), it appears that evident hair regions are not detected by Abbas and Toossi. Limiting the comparison to the three methods Lee, Xie, and HR-SSC, evident hair regions are not detected by Xie on HSim-data and, to a lesser extent, on H-data. See the results on the sample sHSim-data in Figure 12 and on the sample sH-data in Figure 14. Note that HR-SSC is also able to remove the ruler marks that can be mistaken for hair (see Figure 14 and Figure 15).

3.3. Quantitative Evaluation

We quantitatively evaluate the resulting images on the hairless image datasets to which hair has been added (see Section 3.1) by considering the original image as ground truth and expressing a quantitative evaluation in terms of the following:
-
eleven popular quality measures: MSE, PSNR, MSE3, PSNR3, SSIM, MSSIM, VSNR, VIFP, UQI, NQM, WSNR [40,41];
-
area of the detected hair regions;
-
true/false discovery rate (see the definition in Section 3.3.3).
Although the above quality measures are related to human perception only to a small extent, and defining adequate metrics for the performance evaluation of color image processing methods remains an open and widely studied problem [41,42,43,44,45], these measures are most often employed to evaluate the performance of many types of image analysis methods, including HR methods [17,37]. In turn, we see these quality measure values as valid indicators since they contribute to delineating the performance trend of an HR method, while at the same time considering them not sufficient alone to give evidence of its effectiveness. To overcome this gap, since the determination of the effective hair area and the true/false rate are the major critical points for the quantitative evaluation of HR methods, we extend the performance evaluation by measuring the hair area and the true/false rate (see Section 3.3.2 and Section 3.3.3, respectively). As mentioned above, following [17,37], we consider the images in which, in a controlled way, the hair regions are introduced on input hair-free images by using suitable hair insertion methods [38,39] that provide a hair-occluded image and the corresponding binary hair mask. The resulting binary mask is used as ground truth to quantitatively evaluate the performance by computing the detected area and the false discovery rate/true discovery rate (FDR/TDR).
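For reference, two of the listed measures, MSE and PSNR, can be computed as follows. This is a minimal Python sketch on flat grayscale sequences; in the multichannel variants (MSE3/PSNR3), the error is computed over the three color channels.

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length flat pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference
    (infinite for identical images)."""
    e = mse(a, b)
    return float('inf') if e == 0 else 10 * math.log10(peak ** 2 / e)
```

Here the reference sequence would be a ground-truth hair-free image and the other the inpainted result produced by an HR method.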
Note that we use the hairy images used in [17] and those available at [37]. Then, we extend the controlled hair simulation to a larger dataset and, to allow comparison with other HR methods on the same image dataset, we make it available at the already mentioned Github link. Indeed, a direct comparison with the results shown in another paper is currently impossible in practice since the experimental results are given for not well-specified datasets. Accordingly, we think that having a shared dataset with the corresponding ground truth is a useful tool favoring comparison and increasing the quality of performance evaluation.

3.3.1. Quantitative Evaluation Based on Quality Measures

To carry out a quantitative evaluation based on quality measures, we consider the results obtained by all methods (see Figure 10 and Figure 11) on the image datasets H13GAN-data and H13Sim-data (see Figure 5 and Figure 6), and we compute the metric values by considering the original images in NH13-data as ground truths (see Figure 4). The metric values are shown in Table 1 and Table 2. Moreover, since the cardinality of NH13-data is too limited, we repeat this quantitative quality evaluation also on HSim-data to verify whether, by varying the insertion of the hairs and increasing the cardinality of the set of reference data, we obtain a similar result. This quality evaluation is performed by limiting the considered methods to Lee, Xie, and HR-SSC. For the sake of brevity, in Table 3, we show the metric values only for sHSim-data by considering the corresponding resulting images (see Figure 12) and the sNH-data (see Figure 7). In Table 4, we report the average quality measures referring to H13GAN-data, H13Sim-data, sHSim-data, and HSim-data. The quantitative metrics for the set HSim-data, including 170 images, are also available at the mentioned Github link since they would require too much editing space.
Based on the quantitative analysis using the nine metrics, the trend of the various methods turns out to be completely different on H13GAN-data and H13Sim-data compared with the trends on the larger-cardinality set HSim-data and on its 13-image sample sHSim-data. This is highlighted in Figure 16 and Figure 17, which show the trends of each quality measure on the two 13-image datasets generated by the same HairSim hair simulation method, i.e., H13Sim-data and sHSim-data.
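As a minimal sketch of how full-reference quality measures such as MSE and PSNR can be computed between a restored image and its hair-free ground truth (illustrative NumPy code; the function and variable names are ours, not taken from the paper's implementation):

```python
import numpy as np

def mse(reference, restored):
    """Mean squared error between two images of equal shape."""
    ref = reference.astype(np.float64)
    res = restored.astype(np.float64)
    return float(np.mean((ref - res) ** 2))

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the ground truth."""
    err = mse(reference, restored)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)

# Hypothetical example: a hair-free ground truth vs. a restored image
ground_truth = np.full((4, 4), 100, dtype=np.uint8)
restored = ground_truth.copy()
restored[0, 0] = 110  # a small restoration error on one pixel
print(mse(ground_truth, restored), round(psnr(ground_truth, restored), 3))
```

The remaining measures used in the paper (SSIM, MSSIM, VSNR, VIFP, UQI, NQM, WSNR) follow the same full-reference pattern, each comparing the restored image against the hair-free original.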

3.3.2. Quantitative Evaluation Based on the Area of the Detected Hair Regions

To perform a quantitative evaluation based on area, we compare the area values obtained by the methods Lee, Xie, and HR-SSC on HSim-data, that is, the dataset in which hair is added in a controlled way. In detail, for each image I, we calculate the hair area introduced by the HairSim method, denoted AI, and we compare this value with the area identified by each method, denoted AL, AX, and AR for Lee, Xie, and HR-SSC, respectively. For the sake of brevity, in Table 5, we show the resulting area values for sHSim-data. Moreover, we compare the average hair area <AI> introduced in HSim-data by the HairSim method with the average hair area detected by each method (Table 6).
Since in our experiment <AI> = 42,648, it can be observed from Table 6 that the average hair area computed by HR-SSC is the closest to <AI>, while the average hair area computed by Xie is by far the most distant. This evaluation trend in terms of area on HSim-data and sHSim-data confirms the trend indicated in Section 3.3.1.
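The area criterion can be sketched as follows: the hair area of a binary mask is simply its count of hair pixels, and each method's detected area is compared against the area of the mask introduced by the simulator (a minimal NumPy illustration with toy masks, not data from the actual datasets):

```python
import numpy as np

def hair_area(mask):
    """Area of a binary hair mask, in pixels."""
    return int(np.count_nonzero(mask))

# Toy masks: 1 marks a hair pixel
inserted = np.zeros((5, 5), dtype=np.uint8)  # mask produced by the hair simulator
inserted[1, 1:4] = 1                         # simulated hair area A_I = 3 pixels
detected = np.zeros((5, 5), dtype=np.uint8)  # mask detected by an HR method
detected[1, 1:3] = 1                         # detected hair area = 2 pixels

# The closer the detected area is to A_I, the better the method's hair coverage
print(hair_area(inserted), hair_area(detected),
      abs(hair_area(inserted) - hair_area(detected)))
```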

3.3.3. Quantitative Evaluation in Terms of True/False Discovery Rate

We also evaluate the quality of the resulting images in terms of the true discovery rate (TDR) and the false discovery rate (FDR), defined as follows:
FDR = FP / (FP + TP),   TDR = 1 − FDR
where FP and TP denote the numbers of false positive and true positive pixels, respectively. For the sake of brevity, in Table 7, we show the resulting FDR and TDR values only for sHSim-data. Moreover, the average <FDR> and <TDR> values of each method on HSim-data are shown in Table 8. From Table 7 and Table 8, it can be observed that HR-SSC attains the lowest FDR and the highest TDR, Lee intermediate values of both, and Xie the highest FDR and the lowest TDR. With respect to Lee, HR-SSC reports percentage improvements of TDR and FDR equal to 35% and 27%, respectively, on HSim-data, and equal to 33% and 27%, respectively, on sHSim-data. This evaluation trend in terms of FDR/TDR on HSim-data and sHSim-data confirms the trend indicated in Section 3.3.1.
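Given the simulator's ground-truth mask and a method's detected mask, FDR and TDR follow directly from the pixel-wise counts of false and true positives. A minimal NumPy sketch (toy masks and illustrative names of our own):

```python
import numpy as np

def fdr_tdr(gt_mask, detected_mask):
    """False/true discovery rate of a detected hair mask w.r.t. a ground-truth mask."""
    gt = gt_mask.astype(bool)
    det = detected_mask.astype(bool)
    tp = np.count_nonzero(det & gt)    # detected pixels that are real hair
    fp = np.count_nonzero(det & ~gt)   # detected pixels that are not hair
    fdr = fp / (fp + tp) if (fp + tp) > 0 else 0.0
    return fdr, 1.0 - fdr              # TDR = 1 - FDR

# Toy example: 2 ground-truth hair pixels; detection hits one and adds one false pixel
gt = np.zeros((4, 4), dtype=np.uint8)
gt[0, 0] = gt[0, 1] = 1
det = np.zeros((4, 4), dtype=np.uint8)
det[0, 1] = det[2, 2] = 1              # TP = 1, FP = 1
print(fdr_tdr(gt, det))
```

A lower FDR (and correspondingly higher TDR) means fewer detected pixels are spurious, which is the criterion used to rank the methods in Table 7 and Table 8.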

4. Discussion and Conclusions

In this paper, we propose the method HR-SSC, based on the combined use of saliency, shape, and color. Initially, the computational burden of the hair removal process can optionally be lowered by reducing the size of the image. Then, pseudo-hair regions and border/corner components are determined and employed in the subsequent hair mask detection. Finally, the image is restored by an inpainting process. A further contribution of this paper is the proposal of a method for the qualitative and quantitative evaluation of an HR method, together with the availability of appropriate datasets for testing and comparison by others. According to the proposed evaluation method, we perform a detailed quantitative and qualitative analysis of the experimental results on these datasets. Specifically, we qualitatively evaluate the performance of the proposed method and six state-of-the-art methods, and we quantitatively evaluate the HR methods under examination using a hair simulation technique applied to available dermoscopic image datasets, nine commonly adopted quality measures, an area criterion, and the FDR/TDR indicators.
Based on the experimental results and the performance evaluation, HR-SSC detects and removes hair from dermoscopic images while preserving the image features needed for the subsequent segmentation process. Moreover, HR-SSC performs competitively and satisfactorily with respect to the other considered methods, as its probability of missing hair regions and/or detecting false hair regions is low. This is visually evident from the evaluation carried out, although to a lesser extent if the analysis is restricted to NH13-data. Indeed, as also reported in [17], the quantitative results on H13GAN-data and H13Sim-data (see Table 1 and Table 2) indicate that the method Xie statistically outperforms the other methods under consideration, including HR-SSC. However, this experimental evidence does not match the qualitative/quantitative results obtained on the larger dataset HSim-data and on its sample, which, on the contrary, indicate a better performance of the proposed method. This trend is also validated by the quantitative evaluations based on area and on TDR/FDR, reported in Section 3.3.2 and Section 3.3.3, respectively.
In summary, according to the performance evaluation, HR-SSC achieves good, well-balanced qualitative and quantitative results. Moreover, it detects hair regions rapidly through processes of limited complexity. The results also demonstrate the effectiveness and utility of saliency, shape, and color information for the hair removal problem. Finally, the implementation does not require extensive learning based on a high number of parameters and labeled training images, and its execution time is quite fast.
In future investigations, there is room to extend the comparative studies to other existing methods and to improve this work by applying more efficient and effective inpainting methods to further increase the performance quality.

Funding

This work was supported by the GNCS (Gruppo Nazionale per il Calcolo Scientifico) of INdAM (Istituto Nazionale di Alta Matematica).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available at the following GitHub link: https://github.com/gramella/HR.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Okur, E.; Turkan, M. A survey on automated melanoma detection. Eng. Appl. Artif. Intell. 2018, 73, 50–67. [Google Scholar] [CrossRef]
  2. Oliveira, R.B.; Papa, J.P.; Pereira, A.S.; Tavares, J.M.R.S. Computational methods for pigmented skin lesion classification in images: Review and future trends. Neural Comput. Appl. 2018, 29, 613–636. [Google Scholar] [CrossRef] [Green Version]
  3. Masood, A.; Jumaily, A.A. A Computer Aided Diagnostic Support System for Skin Cancer: A Review of Techniques and Algorithms. Int. J. Biom. Imag. 2013. [Google Scholar] [CrossRef] [PubMed]
  4. Vocaturo, E.; Zumpano, E.; Veltri, P. Image pre-processing in computer vision systems for melanoma detection. In Proceedings of the 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, 6 December 2018; pp. 2117–2124. [Google Scholar]
  5. Kavitha, N.; Vayelapelli, M. A Study on Pre-Processing Techniques for Automated Skin Cancer Detection. Smart Technologies in Data Science and Communication; Fiaidhi, J., Bhattacharyya, D., Rao, N., Eds.; Lecture Notes in Networks and Systems; Springer: Berlin/Heidelberg, Germany, 2020; Volume 105, pp. 145–153. [Google Scholar]
  6. Michailovich, O.V.; Tannenbaum, A. Despeckling of medical ultrasound images. IEEE Trans. Ultras. Ferroelect. Freq. Control 2006, 53, 64–78. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Ramella, G.; Sanniti di Baja, G. A new technique for color quantization based on histogram analysis and clustering. Int. J. Patt. Recog. Art. Intell. 2013, 27, 13600069. [Google Scholar] [CrossRef]
  8. Bruni, V.; Ramella, G.; Vitulano, D. Automatic Perceptual Color Quantization of Dermoscopic Images. In VISAPP 2015; Scitepress Science and Technology Publications: Setúbal, Portugal, 2015; Volume 1, pp. 323–330. [Google Scholar]
  9. Ramella, G.; Sanniti di Baja, G. A new method for color quantization. In Proceedings of the 12th International Conference on Signal Image Technology & Internet-Based Systems—SITIS 2016, Naples, Italy, 28 November–1 December 2016; pp. 1–6. [Google Scholar]
  10. Bruni, V.; Ramella, G.; Vitulano, D. Perceptual-Based Color Quantization. Image Analysis and Processing—ICIAP 2017; Lecture Notes in Computer Science 10484; Springer: Berlin/Heidelberg, Germany, 2017; pp. 671–681. [Google Scholar]
  11. Premaladha, J.; Lakshmi Priya, M.; Sujitha, S.; Ravichandran, K.S. A Survey on Color Image Segmentation Techniques for Melanoma Diagnosis. Indian J. Sci. Technol. 2015, 8, IPL0265. [Google Scholar]
  12. Ramella, G.; Sanniti di Baja, G. Image Segmentation Based on Representative Colors and Region Merging in Pattern Recognition; Lecture Notes in Computer Science 7914; Springer: Berlin/Heidelberg, Germany, 2013; pp. 175–184. [Google Scholar]
  13. Ramella, G.; Sanniti di Baja, G. From color quantization to image segmentation. In Proceedings of the 2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Naples, Italy, 28 November–1 December 2016; IEEE: Piscataway Township, NJ, USA, 2016; pp. 798–804. [Google Scholar]
  14. Ramella, G. Automatic Skin Lesion Segmentation based on Saliency and Color. In VISAPP 2020; Scitepress Science and Technology Publications: Setúbal, Portugal, 2020; Volume 4, pp. 452–459. [Google Scholar]
  15. Ramella, G. Saliency-based segmentation of dermoscopic images using color information. arXiv 2020, arXiv:2011.13179. [Google Scholar]
  16. Celebi, M.E.; Wen, Q.; Iyatomi, H.; Shimizu, K.; Zhou, H.; Schaefer, G. A state-of-the-art survey on lesion border detection in dermoscopy images. In Dermoscopy Image Analysis; Celebi, M.E., Mendonca, T., Marques, J.S., Eds.; CRC Press: Boca Raton, FL, USA, 2016; pp. 97–129. [Google Scholar]
  17. Talavera-Martinez, L.; Bibiloni, P.; Gonzalez-Hidalgo, M. Comparative Study of Dermoscopic Hair Removal Methods. In Proceedings of the ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing, Porto, Portugal, 16–18 October 2019; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  18. Lee, T.; Ng, V.; Gallagher, R.; Coldman, A.; McLean, D. Dullrazor: A software approach to hair removal from images. Comput. Biol. Med. 1997, 27, 533–543. [Google Scholar] [CrossRef]
  19. Xie, F.-Y.; Qin, S.-Y.; Jiang, Z.-G.; Meng, R.-S. PDE-based unsupervised repair of hair-occluded information in dermoscopy images of melanoma. Comput. Med. Imaging Graph. 2009, 33, 275–282. [Google Scholar] [CrossRef]
  20. Abbas, Q.; Celebi, M.E.; Fondón García, I. Hair removal methods: A comparative study for dermoscopy images. Biomed. Signal Process. Control. 2011, 6, 395–404. [Google Scholar] [CrossRef]
  21. Huang, A.; Kwan, S.-Y.; Chang, W.-Y.; Liu, M.-Y.; Chi, M.-H.; Chen, G.-S. A robust hair segmentation and removal approach for clinical images of skin lesions. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), Osaka, Japan, 3–7 July 2013; pp. 3315–3318. [Google Scholar]
  22. Toossi MT, B.; Pourreza, H.R.; Zare, H.; Sigari, M.H.; Layegh, P.; Azimi, A. An effective hair removal algorithm for dermoscopy images. Skin Res. Technol. 2013, 19, 230–235. [Google Scholar] [CrossRef] [PubMed]
  23. Bibiloni, P.; Gonzàlez-Hidalgo, M.; Massanet, S. Skin Hair Removal in Dermoscopic Images Using Soft Color Morphology. In AIME 2017; Lecture Notes in Artificial Intelligence 10259; Springer: Berlin/Heidelberg, Germany, 2017; pp. 322–326. [Google Scholar]
  24. Koehoorn, J.; Sobiecki, A.; Rauber, P.; Jalba, A.; Telea, A. Efficient and Effective Automated Digital Hair Removal from Dermoscopy Images. Math. Morphol. Theory Appl. 2016, 1, 1–17. [Google Scholar]
  25. Zaqout, I.S. An efficient block-based algorithm for hair removal in dermoscopic images. Comput. Optics. 2017, 41, 521–527. [Google Scholar] [CrossRef]
  26. Attia, M.; Hossny, M.; Zhou, H.; Nahavandi, S.; Asadi, H.; Yazdabadi, A. Digital hair segmentation using hybrid convolutional and recurrent neural networks architecture. Comput. Methods Programs Biomed. 2019, 177, 17–30. [Google Scholar] [CrossRef] [PubMed]
  27. Talavera-Martínez, L.; Bibiloni, P.; Gonzalez-Hidalgo, M. An Encoder-Decoder CNN for Hair Removal in Dermoscopic Images. arXiv 2020, arXiv:2010.05013v1. [Google Scholar]
  28. Mendonca, T.; Ferreira, P.M.; Marques, J.S.; Marcal, A.R.; Rozeira, J. PH2–A public database for the analysis of dermoscopic images. In Dermoscopy Image Analysis; Celebi, M.E., Mendonca, T., Marques, J.S., Eds.; CRC Press: Boca Raton, FL, USA, 2015; pp. 419–439. [Google Scholar]
  29. ISIC 2016. ISIC Archive: The International Skin Imaging Collaboration: Melanoma Project, ISIC. Available online: https://isic-archive.com/# (accessed on 5 January 2016).
  30. Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Patt. Anal. Mach. Intell. 1998, 20, 1254–1259. [Google Scholar] [CrossRef] [Green Version]
  31. Haralick, R.; Sternberg, S.R.; Huang, X. Image Analysis Using Mathematical Morphology. IEEE Trans. PAMI 1987, 4, 532–550. [Google Scholar] [CrossRef]
  32. Soille, P. Morphological Image Analysis: Principles and Applications; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  33. Serra, J.; Vincent, L. An Overview of Morphological Filtering. Circuits Systems Signal Process. 1992, 11, 47–108. [Google Scholar] [CrossRef] [Green Version]
  34. Guarracino, M.R.; Maddalena, L. SDI+: A Novel Algorithm for Segmenting Dermoscopic Images. IEEE J. Biomed. Health Inf. 2019, 23, 481–488. [Google Scholar] [CrossRef]
  35. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Systems Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  36. Achanta, R.; Hemami, S.; Estrada, F.; Susstrunk, S. Frequency-tuned salient region detection. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway Township, NJ, USA, 2009; pp. 1597–1604. [Google Scholar]
  37. Dermaweb. Available online: http://dermaweb.uib.es/ (accessed on 26 November 2020).
  38. Attia, M.; Hossny, M.; Zhou, H.; Yazdabadi, A.; Asadi, H.; Nahavandi, S. Realistic Hair Simulator for Skin lesion Images Using Conditional Generative Adversarial Network. Preprints 2018, 2018100756. [Google Scholar] [CrossRef]
  39. HairSim by Hengameh Mirzaalian. Available online: http://creativecommons.org/licenses/by-nc-sa/3.0/deed.en_US (accessed on 26 November 2020).
  40. Mitsa, T.; Varkur, K.L. Evaluation of contrast sensitivity functions for the formulation of quality measures incorporated in halftoning algorithms. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Minneapolis, MN, USA, 27–30 April 1993; pp. 301–304. [Google Scholar]
  41. Ramella, G. Evaluation of quality measures for color quantization. arXiv 2020, arXiv:2011.12652. [Google Scholar]
  42. Chandler, D.M. Seven Challenges in Image Quality Assessment: Past, Present, and Future Research. ISRN Signal Process. 2013, 2013, 1–53. [Google Scholar] [CrossRef]
  43. Lee, D.; Plataniotis, K.N. Towards a Full-Reference Quality Assessment for Color Images Using Directional Statistics. IEEE Trans. Image Process. 2015, 24, 3950–3965. [Google Scholar] [CrossRef] [PubMed]
  44. Lin, W.; Kuo, C.-C.J. Perceptual visual quality metrics: A survey. J. Vis. Commun. Image Represent. 2011, 22, 297–312. [Google Scholar] [CrossRef]
  45. Liu, M.; Gu, K.; Zhai, G.; Le Callet, P.; Zhang, W. Perceptual Reduced-Reference Visual Quality Assessment for Contrast Alteration. IEEE Trans. Broadcast. 2016, 63, 71–81. [Google Scholar] [CrossRef] [Green Version]
Figure 1. (a) Examples of images (IMD003 and ISIC_0002871) with a massive presence of hair; (b) the resulting image after the application of saliency shape color for hair removal (HR-SSC).
Figure 2. Flowchart of the proposed method (HR-SSC).
Figure 3. Some examples of results obtained in the main steps of HR-SSC: (a) input image; (b) detected pseudo-hair components; (c) border/corner components; (d) detected hair; (e) resulting image.
Figure 4. Image dataset NH13-data proposed in [37].
Figure 5. Image dataset H13GAN-data generated by applying the GAN method [38] to NH13-data and published in [37].
Figure 6. Image dataset H13Sim-data generated by applying the HairSim method [39] to NH13-data and published in [37].
Figure 7. Image dataset sNH-data selected randomly from NH-data.
Figure 8. Image dataset sHSim-data with the hair mask produced by applying the HairSim method to sNH-data.
Figure 9. Image dataset sH-data selected randomly from H-data.
Figure 10. (a) Results of methods Lee, Xie, Abbas, Huang available in [37], rows 1–4, on H13GAN-data. (b) Results of methods Toossi, Bibiloni available in [37], rows 1–2, and results of HR-SSC, row 3, on H13GAN-data.
Figure 11. (a) Results of methods Lee, Xie, Abbas, Huang available in [37], rows 1–4, on H13Sim-data. (b) Results of methods Toossi, Bibiloni available in [37], rows 1–2, and results of HR-SSC, row 3, on H13Sim-data.
Figure 12. Results of methods Lee, Xie, and HR-SSC on sHSim-data.
Figure 13. Resulting mask of the HairSim method and the resulting masks of methods Lee, Xie, and HR-SSC on sHSim-data.
Figure 14. Results of methods Lee, Xie, and HR-SSC on sH-data.
Figure 15. Resulting masks of methods Lee, Xie, and HR-SSC on sH-data.
Figure 16. Trends of quality measures on H13Sim-data for the methods Lee, Xie, and HR-SSC.
Figure 17. Trends of quality measures on sHSim-data for the methods Lee, Xie, and HR-SSC.
Table 1. Quality evaluation of the results on the H13GAN-data—best results are in bold.

Img | Met. | MSE | PSNR | MSE3 | PSNR3 | SSIM | MSSIM | VSNR | VIFP | UQI | NQM | WSNR
IMD006 | Lee | 13.626 | 36.787 | 20.918 | 34.926 | 0.888 | 0.956 | 24.401 | 0.403 | 0.650 | 23.151 | 40.255
 | Xie | 12.610 | 37.124 | 19.831 | 35.157 | 0.891 | 0.957 | 25.355 | 0.412 | 0.653 | 22.687 | 40.286
 | Abbas | 57.555 | 30.530 | 64.073 | 30.064 | 0.856 | 0.898 | 15.260 | 0.354 | 0.608 | 12.372 | 28.927
 | Huang | 24.283 | 34.278 | 33.481 | 32.883 | 0.860 | 0.926 | 19.601 | 0.301 | 0.534 | 16.305 | 33.955
 | Toossi | 55.748 | 30.668 | 62.440 | 30.176 | 0.853 | 0.897 | 15.370 | 0.342 | 0.591 | 12.586 | 29.143
 | Bibiloni | 19.653 | 35.197 | 28.070 | 33.648 | 0.867 | 0.943 | 21.588 | 0.328 | 0.589 | 19.411 | 36.740
 | HR-SSC | 19.669 | 35.193 | 27.078 | 33.805 | 0.861 | 0.941 | 21.101 | 0.323 | 0.561 | 20.950 | 37.581
IMD010 | Lee | 44.373 | 31.660 | 52.734 | 30.910 | 0.855 | 0.939 | 16.990 | 0.352 | 0.659 | 18.189 | 33.261
 | Xie | 46.305 | 31.475 | 55.898 | 30.657 | 0.859 | 0.931 | 15.853 | 0.364 | 0.665 | 14.288 | 30.576
 | Abbas | 88.070 | 28.683 | 98.376 | 28.202 | 0.838 | 0.907 | 14.944 | 0.330 | 0.636 | 14.129 | 28.725
 | Huang | 42.985 | 31.798 | 52.674 | 30.915 | 0.818 | 0.905 | 17.330 | 0.236 | 0.510 | 17.169 | 32.186
 | Toossi | 90.161 | 28.581 | 100.956 | 28.089 | 0.832 | 0.905 | 15.007 | 0.320 | 0.618 | 13.841 | 28.495
 | Bibiloni | 40.550 | 32.051 | 51.019 | 31.054 | 0.857 | 0.937 | 16.699 | 0.354 | 0.660 | 16.755 | 32.390
 | HR-SSC | 55.952 | 30.653 | 66.185 | 29.923 | 0.827 | 0.920 | 15.203 | 0.293 | 0.579 | 17.837 | 31.972
IMD017 | Lee | 18.625 | 35.430 | 24.645 | 34.213 | 0.881 | 0.957 | 29.130 | 0.445 | 0.711 | 27.359 | 38.837
 | Xie | 16.228 | 36.028 | 22.790 | 34.553 | 0.884 | 0.955 | 30.411 | 0.455 | 0.714 | 25.465 | 38.191
 | Abbas | 61.528 | 30.240 | 67.983 | 29.807 | 0.847 | 0.911 | 21.698 | 0.390 | 0.662 | 17.373 | 28.555
 | Huang | 29.318 | 33.459 | 35.611 | 32.615 | 0.854 | 0.937 | 25.118 | 0.366 | 0.601 | 20.446 | 32.682
 | Toossi | 62.801 | 30.151 | 68.982 | 29.743 | 0.840 | 0.907 | 21.591 | 0.374 | 0.636 | 17.371 | 28.516
 | Bibiloni | 24.312 | 34.273 | 30.787 | 33.247 | 0.867 | 0.948 | 26.935 | 0.406 | 0.684 | 23.497 | 35.660
 | HR-SSC | 31.007 | 33.216 | 36.943 | 32.455 | 0.850 | 0.934 | 25.550 | 0.371 | 0.632 | 23.808 | 34.512
IMD018 | Lee | 53.072 | 30.882 | 58.624 | 30.450 | 0.865 | 0.960 | 27.433 | 0.353 | 0.581 | 22.039 | 35.006
 | Xie | 18.253 | 35.517 | 23.648 | 34.393 | 0.863 | 0.956 | 28.108 | 0.352 | 0.578 | 25.368 | 40.353
 | Abbas | 112.129 | 27.634 | 117.432 | 27.433 | 0.850 | 0.913 | 17.475 | 0.344 | 0.570 | 14.966 | 28.474
 | Huang | 65.664 | 29.958 | 71.773 | 29.571 | 0.845 | 0.939 | 22.471 | 0.286 | 0.476 | 18.580 | 32.252
 | Toossi | 108.744 | 27.767 | 113.634 | 27.576 | 0.849 | 0.915 | 17.689 | 0.340 | 0.563 | 15.181 | 28.639
 | Bibiloni | 54.371 | 30.777 | 60.419 | 30.319 | 0.861 | 0.957 | 26.734 | 0.343 | 0.574 | 21.670 | 34.753
 | HR-SSC | 34.432 | 32.761 | 39.240 | 32.194 | 0.853 | 0.955 | 24.135 | 0.327 | 0.537 | 22.638 | 36.055
IMD019 | Lee | 40.674 | 32.038 | 49.993 | 31.142 | 0.882 | 0.942 | 24.415 | 0.414 | 0.696 | 21.020 | 35.377
 | Xie | 41.599 | 31.940 | 51.373 | 31.023 | 0.886 | 0.938 | 24.315 | 0.427 | 0.700 | 19.620 | 34.346
 | Abbas | 80.579 | 29.069 | 90.699 | 28.555 | 0.856 | 0.908 | 18.686 | 0.372 | 0.657 | 16.024 | 30.163
 | Huang | 60.703 | 30.299 | 71.582 | 29.583 | 0.827 | 0.904 | 20.229 | 0.273 | 0.507 | 18.011 | 32.313
 | Toossi | 81.503 | 29.019 | 91.594 | 28.512 | 0.846 | 0.904 | 18.662 | 0.355 | 0.634 | 16.105 | 30.222
 | Bibiloni | 49.196 | 31.212 | 59.452 | 30.389 | 0.868 | 0.934 | 23.136 | 0.375 | 0.672 | 19.580 | 34.043
 | HR-SSC | 56.414 | 30.617 | 65.961 | 29.938 | 0.858 | 0.930 | 21.562 | 0.363 | 0.642 | 18.991 | 33.110
IMD020 | Lee | 54.462 | 30.770 | 61.126 | 30.269 | 0.842 | 0.958 | 23.801 | 0.362 | 0.653 | 22.282 | 34.661
 | Xie | 23.642 | 34.394 | 29.561 | 33.424 | 0.846 | 0.957 | 26.239 | 0.373 | 0.660 | 25.905 | 40.384
 | Abbas | 132.080 | 26.922 | 138.996 | 26.701 | 0.803 | 0.883 | 15.764 | 0.306 | 0.604 | 14.074 | 26.834
 | Huang | 62.554 | 30.168 | 69.851 | 29.689 | 0.815 | 0.936 | 21.317 | 0.303 | 0.564 | 19.108 | 32.275
 | Toossi | 125.759 | 27.135 | 132.544 | 26.907 | 0.796 | 0.884 | 16.111 | 0.295 | 0.581 | 14.456 | 27.197
 | Bibiloni | 59.850 | 30.360 | 67.234 | 29.855 | 0.828 | 0.951 | 22.518 | 0.329 | 0.630 | 20.985 | 33.649
 | HR-SSC | 43.098 | 31.786 | 48.657 | 31.259 | 0.817 | 0.949 | 22.203 | 0.316 | 0.590 | 23.641 | 36.222
IMD030 | Lee | 50.827 | 31.070 | 57.665 | 30.522 | 0.864 | 0.952 | 19.959 | 0.398 | 0.660 | 18.341 | 32.244
 | Xie | 16.425 | 35.976 | 23.671 | 34.389 | 0.869 | 0.959 | 26.493 | 0.413 | 0.669 | 25.476 | 39.857
 | Abbas | 70.112 | 29.673 | 76.885 | 29.272 | 0.838 | 0.917 | 18.528 | 0.352 | 0.623 | 15.447 | 29.300
 | Huang | 50.556 | 31.093 | 57.620 | 30.525 | 0.847 | 0.941 | 20.743 | 0.347 | 0.599 | 19.632 | 32.935
 | Toossi | 72.517 | 29.526 | 79.211 | 29.143 | 0.830 | 0.912 | 18.330 | 0.337 | 0.600 | 15.616 | 29.428
 | Bibiloni | 45.920 | 31.511 | 53.062 | 30.883 | 0.855 | 0.949 | 21.692 | 0.372 | 0.647 | 19.680 | 33.360
 | HR-SSC | 59.202 | 30.407 | 66.783 | 29.884 | 0.786 | 0.904 | 17.204 | 0.252 | 0.468 | 19.009 | 31.509
IMD033 | Lee | 29.071 | 33.496 | 36.028 | 32.564 | 0.858 | 0.948 | 23.146 | 0.348 | 0.623 | 20.946 | 37.195
 | Xie | 20.680 | 34.975 | 27.696 | 33.707 | 0.874 | 0.954 | 26.280 | 0.389 | 0.646 | 20.928 | 38.139
 | Abbas | 87.505 | 28.710 | 94.489 | 28.377 | 0.825 | 0.894 | 16.762 | 0.302 | 0.574 | 12.081 | 28.385
 | Huang | 33.270 | 32.910 | 40.516 | 32.055 | 0.836 | 0.926 | 21.584 | 0.280 | 0.523 | 18.372 | 34.461
 | Toossi | 86.218 | 28.775 | 93.278 | 28.433 | 0.815 | 0.890 | 16.783 | 0.286 | 0.547 | 12.324 | 28.584
 | Bibiloni | 38.722 | 32.251 | 45.577 | 31.543 | 0.839 | 0.932 | 20.980 | 0.296 | 0.588 | 18.178 | 34.092
 | HR-SSC | 56.659 | 30.598 | 64.342 | 30.046 | 0.785 | 0.894 | 18.590 | 0.207 | 0.451 | 16.917 | 32.624
IMD044 | Lee | 47.457 | 31.368 | 53.355 | 30.859 | 0.863 | 0.943 | 17.641 | 0.421 | 0.738 | 16.903 | 29.594
 | Xie | 18.590 | 35.438 | 27.085 | 33.804 | 0.887 | 0.964 | 25.971 | 0.494 | 0.774 | 23.188 | 36.594
 | Abbas | 72.923 | 29.502 | 78.288 | 29.194 | 0.867 | 0.926 | 16.276 | 0.457 | 0.750 | 12.369 | 25.924
 | Huang | 105.875 | 27.883 | 110.178 | 27.710 | 0.745 | 0.835 | 12.993 | 0.210 | 0.484 | 12.174 | 23.933
 | Toossi | 73.452 | 29.471 | 78.674 | 29.172 | 0.859 | 0.923 | 16.110 | 0.440 | 0.736 | 12.445 | 25.910
 | Bibiloni | 96.722 | 28.276 | 101.895 | 28.049 | 0.763 | 0.867 | 13.711 | 0.242 | 0.561 | 13.030 | 24.693
 | HR-SSC | 176.341 | 25.667 | 179.590 | 25.588 | 0.673 | 0.734 | 11.356 | 0.143 | 0.345 | 9.157 | 20.718
IMD050 | Lee | 35.787 | 32.594 | 41.721 | 31.927 | 0.880 | 0.954 | 18.895 | 0.335 | 0.578 | 17.676 | 31.938
 | Xie | 22.695 | 34.572 | 28.814 | 33.535 | 0.881 | 0.953 | 22.280 | 0.337 | 0.578 | 19.716 | 34.639
 | Abbas | 49.806 | 31.158 | 55.624 | 30.678 | 0.859 | 0.898 | 18.094 | 0.314 | 0.554 | 12.763 | 28.703
 | Huang | 22.682 | 34.574 | 29.144 | 33.485 | 0.852 | 0.932 | 22.328 | 0.232 | 0.453 | 18.434 | 34.242
 | Toossi | 49.658 | 31.171 | 55.452 | 30.692 | 0.856 | 0.897 | 18.151 | 0.300 | 0.536 | 12.899 | 28.805
 | Bibiloni | 24.683 | 34.207 | 30.624 | 33.270 | 0.876 | 0.949 | 20.898 | 0.335 | 0.575 | 19.338 | 33.924
 | HR-SSC | 32.032 | 33.075 | 38.428 | 32.284 | 0.863 | 0.944 | 21.815 | 0.269 | 0.507 | 19.705 | 34.589
IMD061 | Lee | 200.137 | 25.118 | 206.683 | 24.978 | 0.818 | 0.913 | 16.492 | 0.340 | 0.655 | 14.269 | 24.901
 | Xie | 31.045 | 33.211 | 39.240 | 32.194 | 0.853 | 0.947 | 28.493 | 0.401 | 0.701 | 24.189 | 35.438
 | Abbas | 162.905 | 26.011 | 170.740 | 25.807 | 0.774 | 0.862 | 19.898 | 0.262 | 0.585 | 17.621 | 27.216
 | Huang | 118.122 | 27.408 | 125.806 | 27.134 | 0.824 | 0.930 | 19.472 | 0.341 | 0.641 | 18.026 | 28.664
 | Toossi | 160.195 | 26.084 | 168.469 | 25.866 | 0.759 | 0.856 | 19.691 | 0.248 | 0.552 | 17.752 | 27.250
 | Bibiloni | 132.368 | 26.913 | 139.485 | 26.686 | 0.816 | 0.918 | 18.707 | 0.319 | 0.651 | 15.911 | 26.804
 | HR-SSC | 123.964 | 27.198 | 132.643 | 26.904 | 0.730 | 0.877 | 21.104 | 0.222 | 0.459 | 19.312 | 28.617
IMD063 | Lee | 73.087 | 29.492 | 78.056 | 29.207 | 0.868 | 0.949 | 16.458 | 0.395 | 0.638 | 18.376 | 27.193
 | Xie | 15.333 | 36.274 | 23.189 | 34.478 | 0.871 | 0.955 | 30.726 | 0.405 | 0.644 | 27.486 | 38.241
 | Abbas | 75.404 | 29.357 | 82.451 | 28.969 | 0.849 | 0.914 | 19.238 | 0.372 | 0.619 | 16.239 | 25.970
 | Huang | 82.506 | 28.966 | 87.606 | 28.705 | 0.846 | 0.931 | 15.781 | 0.328 | 0.563 | 16.786 | 25.992
 | Toossi | 74.619 | 29.402 | 81.598 | 29.014 | 0.847 | 0.914 | 19.316 | 0.364 | 0.608 | 16.268 | 25.989
 | Bibiloni | 74.387 | 29.416 | 80.223 | 29.088 | 0.861 | 0.948 | 16.314 | 0.374 | 0.626 | 18.402 | 27.149
 | HR-SSC | 38.011 | 32.332 | 44.806 | 31.617 | 0.852 | 0.945 | 25.076 | 0.355 | 0.586 | 21.399 | 30.795
IMD075 | Lee | 98.538 | 28.195 | 107.530 | 27.816 | 0.860 | 0.948 | 16.524 | 0.354 | 0.624 | 15.793 | 27.594
 | Xie | 18.342 | 35.496 | 29.230 | 33.473 | 0.866 | 0.952 | 26.945 | 0.373 | 0.635 | 24.586 | 38.198
 | Abbas | 73.194 | 29.486 | 82.107 | 28.987 | 0.845 | 0.926 | 18.078 | 0.336 | 0.606 | 16.732 | 28.717
 | Huang | 112.433 | 27.622 | 122.341 | 27.255 | 0.821 | 0.917 | 15.514 | 0.251 | 0.486 | 14.620 | 26.485
 | Toossi | 71.889 | 29.564 | 80.600 | 29.067 | 0.841 | 0.926 | 18.156 | 0.325 | 0.590 | 16.955 | 28.867
 | Bibiloni | 93.923 | 28.403 | 103.663 | 27.975 | 0.855 | 0.948 | 17.089 | 0.344 | 0.618 | 15.988 | 27.807
 | HR-SSC | 45.132 | 31.586 | 53.234 | 30.869 | 0.814 | 0.925 | 20.687 | 0.256 | 0.491 | 19.150 | 31.981
Table 2. Quality evaluation of the results on the H13Sim-data—best results are in bold.

Img | Met. | MSE | PSNR | MSE3 | PSNR3 | SSIM | MSSIM | VSNR | VIFP | UQI | NQM | WSNR
IMD006 | Lee | 5.443 | 40.773 | 6.400 | 40.069 | 0.978 | 0.985 | 25.044 | 0.873 | 0.938 | 23.320 | 40.853
 | Xie | 6.105 | 40.274 | 8.712 | 38.730 | 0.998 | 0.971 | 22.402 | 0.898 | 0.944 | 18.362 | 36.300
 | Abbas | 146.175 | 26.482 | 149.923 | 26.372 | 0.920 | 0.836 | 9.709 | 0.747 | 0.877 | 6.777 | 23.447
 | Huang | 14.654 | 36.471 | 17.000 | 35.826 | 0.964 | 0.971 | 20.263 | 0.817 | 0.905 | 16.575 | 34.655
 | Toossi | 147.390 | 26.446 | 151.565 | 26.325 | 0.913 | 0.831 | 9.707 | 0.715 | 0.857 | 6.726 | 23.421
 | Bibiloni | 14.087 | 36.642 | 19.991 | 35.123 | 0.937 | 0.960 | 21.615 | 0.536 | 0.830 | 18.990 | 36.561
 | HR-SSC | 17.983 | 35.582 | 22.252 | 34.657 | 0.882 | 0.961 | 21.630 | 0.373 | 0.637 | 20.652 | 37.497
IMD010 | Lee | 44.462 | 31.651 | 45.490 | 31.552 | 0.960 | 0.969 | 15.108 | 0.844 | 0.935 | 14.718 | 30.440
 | Xie | 7.632 | 39.304 | 10.396 | 37.962 | 0.999 | 0.979 | 20.231 | 0.924 | 0.967 | 19.061 | 35.559
 | Abbas | 93.505 | 28.422 | 96.605 | 28.281 | 0.935 | 0.894 | 12.548 | 0.769 | 0.904 | 10.231 | 25.941
 | Huang | 21.230 | 34.861 | 22.192 | 34.669 | 0.962 | 0.967 | 20.111 | 0.819 | 0.919 | 17.995 | 33.880
 | Toossi | 94.868 | 28.360 | 99.195 | 28.166 | 0.926 | 0.889 | 12.520 | 0.734 | 0.882 | 10.253 | 25.956
 | Bibiloni | 49.427 | 31.191 | 55.047 | 30.723 | 0.973 | 0.971 | 16.917 | 0.850 | 0.953 | 14.449 | 30.158
 | HR-SSC | 54.625 | 30.757 | 61.890 | 30.215 | 0.856 | 0.937 | 15.234 | 0.353 | 0.666 | 17.575 | 31.736
IMD017 | Lee | 9.535 | 38.338 | 9.855 | 38.194 | 0.981 | 0.988 | 29.519 | 0.905 | 0.967 | 28.643 | 39.534
 | Xie | 11.082 | 37.684 | 16.277 | 36.015 | 0.997 | 0.986 | 27.386 | 0.955 | 0.979 | 21.083 | 32.729
 | Abbas | 64.666 | 30.024 | 67.694 | 29.825 | 0.941 | 0.934 | 20.432 | 0.758 | 0.902 | 15.536 | 26.897
 | Huang | 13.747 | 36.749 | 14.357 | 36.560 | 0.978 | 0.986 | 26.004 | 0.896 | 0.944 | 21.100 | 33.753
 | Toossi | 66.238 | 29.920 | 69.393 | 29.718 | 0.930 | 0.928 | 20.432 | 0.714 | 0.871 | 15.469 | 26.880
 | Bibiloni | 17.438 | 35.716 | 19.820 | 35.160 | 0.950 | 0.972 | 26.695 | 0.648 | 0.904 | 23.420 | 35.196
 | HR-SSC | 28.231 | 33.624 | 31.105 | 33.202 | 0.881 | 0.954 | 26.097 | 0.437 | 0.727 | 24.092 | 34.710
IMD018 | Lee | 41.735 | 31.926 | 42.896 | 31.807 | 0.982 | 0.988 | 28.284 | 0.899 | 0.944 | 21.864 | 34.994
 | Xie | 31.009 | 33.216 | 43.166 | 31.779 | 0.993 | 0.987 | 28.252 | 0.895 | 0.942 | 21.820 | 34.947
 | Abbas | 43.583 | 31.738 | 44.880 | 31.610 | 0.972 | 0.983 | 27.268 | 0.829 | 0.921 | 21.782 | 34.845
 | Huang | 41.980 | 31.900 | 43.166 | 31.779 | 0.981 | 0.987 | 28.252 | 0.895 | 0.942 | 21.820 | 34.947
 | Toossi | 246.045 | 24.221 | 251.091 | 24.132 | 0.936 | 0.831 | 11.292 | 0.841 | 0.911 | 9.424 | 23.138
 | Bibiloni | 44.101 | 31.686 | 45.648 | 31.537 | 0.972 | 0.983 | 27.226 | 0.792 | 0.930 | 21.523 | 34.748
 | HR-SSC | 29.831 | 33.384 | 31.740 | 33.115 | 0.886 | 0.978 | 25.032 | 0.396 | 0.652 | 23.172 | 36.610
IMD019 | Lee | 34.227 | 32.787 | 37.125 | 32.434 | 0.954 | 0.968 | 23.690 | 0.770 | 0.908 | 21.101 | 35.203
 | Xie | 30.434 | 33.297 | 42.538 | 31.843 | 0.986 | 0.939 | 21.001 | 0.863 | 0.930 | 17.517 | 32.120
 | Abbas | 311.608 | 23.195 | 326.342 | 22.994 | 0.888 | 0.797 | 10.687 | 0.636 | 0.835 | 8.627 | 22.541
 | Huang | 44.906 | 31.608 | 47.752 | 31.341 | 0.945 | 0.960 | 20.896 | 0.739 | 0.874 | 17.755 | 32.540
 | Toossi | 314.691 | 23.152 | 330.173 | 22.943 | 0.873 | 0.790 | 10.693 | 0.587 | 0.799 | 8.581 | 22.515
 | Bibiloni | 51.861 | 30.982 | 56.227 | 30.631 | 0.921 | 0.946 | 21.807 | 0.561 | 0.846 | 18.834 | 32.594
 | HR-SSC | 46.419 | 31.464 | 49.458 | 31.188 | 0.897 | 0.960 | 22.956 | 0.454 | 0.754 | 20.611 | 34.265
IMD020 | Lee | 39.130 | 32.206 | 41.418 | 31.959 | 0.974 | 0.988 | 24.516 | 0.883 | 0.951 | 22.135 | 34.724
 | Xie | 5.025 | 41.120 | 7.099 | 39.619 | 0.998 | 0.981 | 26.952 | 0.933 | 0.965 | 23.517 | 38.794
 | Abbas | 151.333 | 26.331 | 157.032 | 26.171 | 0.919 | 0.864 | 13.518 | 0.709 | 0.879 | 11.620 | 24.574
 | Huang | 40.210 | 32.087 | 42.343 | 31.863 | 0.976 | 0.986 | 23.483 | 0.900 | 0.947 | 19.376 | 32.999
 | Toossi | 157.470 | 26.159 | 163.807 | 25.987 | 0.905 | 0.859 | 13.468 | 0.661 | 0.848 | 11.482 | 24.465
 | Bibiloni | 46.316 | 31.474 | 49.423 | 31.192 | 0.956 | 0.978 | 22.775 | 0.695 | 0.922 | 20.401 | 33.376
 | HR-SSC | 39.242 | 32.193 | 41.779 | 31.921 | 0.857 | 0.967 | 22.515 | 0.383 | 0.693 | 22.956 | 36.159
IMD030 | Lee | 40.468 | 32.060 | 40.968 | 32.006 | 0.970 | 0.979 | 19.777 | 0.861 | 0.945 | 18.713 | 32.452
 | Xie | 4.286 | 41.810 | 6.163 | 40.233 | 0.998 | 0.987 | 26.467 | 0.927 | 0.969 | 23.432 | 37.889
 | Abbas | 81.085 | 29.041 | 83.086 | 28.936 | 0.938 | 0.915 | 16.712 | 0.743 | 0.897 | 12.585 | 26.889
 | Huang | 36.536 | 32.504 | 36.838 | 32.468 | 0.982 | 0.986 | 21.820 | 0.911 | 0.960 | 20.087 | 33.643
 | Toossi | 84.981 | 28.838 | 87.182 | 28.727 | 0.925 | 0.908 | 16.518 | 0.697 | 0.867 | 12.657 | 26.939
 | Bibiloni | 36.763 | 32.477 | 39.569 | 32.157 | 0.954 | 0.971 | 21.777 | 0.675 | 0.913 | 19.905 | 33.500
 | HR-SSC | 63.601 | 30.096 | 68.802 | 29.755 | 0.782 | 0.901 | 16.937 | 0.245 | 0.462 | 17.583 | 30.554
IMD033 | Lee | 24.454 | 34.247 | 26.046 | 33.973 | 0.947 | 0.968 | 21.830 | 0.757 | 0.907 | 20.112 | 36.196
 | Xie | 22.402 | 34.628 | 31.323 | 33.172 | 0.995 | 0.958 | 20.109 | 0.915 | 0.952 | 13.498 | 30.916
 | Abbas | 189.799 | 25.348 | 194.689 | 25.237 | 0.901 | 0.855 | 12.471 | 0.662 | 0.853 | 6.665 | 23.295
 | Huang | 18.819 | 35.385 | 19.675 | 35.192 | 0.956 | 0.966 | 22.401 | 0.828 | 0.918 | 18.761 | 35.305
 | Toossi | 169.486 | 25.839 | 174.635 | 25.709 | 0.890 | 0.852 | 12.952 | 0.624 | 0.823 | 7.391 | 23.923
 | Bibiloni | 69.269 | 29.725 | 73.560 | 29.464 | 0.902 | 0.910 | 16.689 | 0.466 | 0.815 | 11.935 | 28.549
 | HR-SSC | 50.811 | 31.071 | 55.613 | 30.679 | 0.816 | 0.921 | 19.280 | 0.264 | 0.552 | 17.666 | 33.302
IMD044 | Lee | 49.270 | 31.205 | 47.786 | 31.338 | 0.927 | 0.943 | 16.197 | 0.684 | 0.890 | 14.864 | 27.824
 | Xie | 7.508 | 39.376 | 10.183 | 38.052 | 0.998 | 0.980 | 23.052 | 0.924 | 0.973 | 18.344 | 32.600
 | Abbas | 39.651 | 32.148 | 39.209 | 32.197 | 0.955 | 0.945 | 17.653 | 0.790 | 0.934 | 13.447 | 27.293
 | Huang | 73.354 | 29.477 | 71.510 | 29.587 | 0.891 | 0.901 | 14.120 | 0.594 | 0.813 | 12.372 | 25.252
 | Toossi | 44.653 | 31.632 | 44.317 | 31.665 | 0.941 | 0.938 | 17.253 | 0.734 | 0.911 | 13.250 | 26.963
 | Bibiloni | 124.034 | 27.195 | 127.439 | 27.078 | 0.764 | 0.829 | 12.416 | 0.221 | 0.562 | 11.719 | 22.787
 | HR-SSC | 214.183 | 24.823 | 214.147 | 24.824 | 0.671 | 0.718 | 10.498 | 0.142 | 0.342 | 8.445 | 19.567
IMD050 | Lee | 21.355 | 34.836 | 21.807 | 34.745 | 0.976 | 0.984 | 20.082 | 0.860 | 0.930 | 19.013 | 33.138
 | Xie | 5.561 | 40.680 | 7.855 | 39.179 | 0.998 | 0.970 | 25.606 | 0.874 | 0.934 | 19.433 | 36.204
 | Abbas | 121.210 | 27.295 | 122.357 | 27.255 | 0.930 | 0.838 | 12.242 | 0.790 | 0.895 | 7.594 | 23.528
 | Huang | 10.580 | 37.886 | 11.262 | 37.615 | 0.965 | 0.979 | 23.658 | 0.821 | 0.915 | 19.415 | 35.929
 | Toossi | 120.022 | 27.338 | 121.174 | 27.297 | 0.927 | 0.837 | 12.359 | 0.769 | 0.886 | 7.647 | 23.580
 | Bibiloni | 37.044 | 32.444 | 40.803 | 32.024 | 0.969 | 0.971 | 18.740 | 0.817 | 0.920 | 16.150 | 30.032
 | HR-SSC | 28.878 | 33.525 | 32.355 | 33.031 | 0.893 | 0.964 | 22.466 | 0.344 | 0.632 | 20.099 | 34.948
IMD061 | Lee | 199.214 | 25.138 | 197.816 | 25.168 | 0.918 | 0.934 | 15.841 | 0.705 | 0.881 | 14.097 | 24.753
 | Xie | 13.371 | 36.869 | 18.984 | 35.347 | 0.994 | 0.973 | 25.671 | 0.915 | 0.962 | 21.197 | 32.801
 | Abbas | 231.503 | 24.485 | 238.370 | 24.358 | 0.869 | 0.848 | 17.243 | 0.568 | 0.811 | 12.035 | 22.788
 | Huang | 92.491 | 28.470 | 92.932 | 28.449 | 0.965 | 0.979 | 19.645 | 0.871 | 0.939 | 17.979 | 28.910
 | Toossi | 230.406 | 24.506 | 237.676 | 24.371 | 0.853 | 0.839 | 17.117 | 0.526 | 0.778 | 11.960 | 22.729
 | Bibiloni | 125.200 | 27.155 | 127.409 | 27.079 | 0.919 | 0.937 | 18.203 | 0.551 | 0.875 | 15.407 | 26.464
 | HR-SSC | 120.609 | 27.317 | 126.001 | 27.127 | 0.760 | 0.895 | 21.122 | 0.254 | 0.535 | 19.579 | 28.808
IMD063 | Lee | 62.239 | 30.190 | 60.750 | 30.295 | 0.984 | 0.977 | 16.040 | 0.902 | 0.959 | 18.381 | 27.253
 | Xie | 6.407 | 40.064 | 9.620 | 38.299 | 0.996 | 0.982 | 27.252 | 0.931 | 0.968 | 22.772 | 33.216
 | Abbas | 88.233 | 28.674 | 89.376 | 28.619 | 0.961 | 0.937 | 15.611 | 0.820 | 0.927 | 14.729 | 24.462
 | Huang | 67.139 | 29.861 | 65.432 | 29.973 | 0.975 | 0.975 | 15.731 | 0.871 | 0.937 | 16.992 | 26.398
 | Toossi | 94.791 | 28.363 | 95.625 | 28.325 | 0.954 | 0.934 | 15.164 | 0.784 | 0.909 | 14.397 | 24.068
 | Bibiloni | 63.394 | 30.110 | 63.387 | 30.111 | 0.972 | 0.977 | 16.033 | 0.778 | 0.937 | 18.206 | 27.103
 | HR-SSC | 37.731 | 32.364 | 41.061 | 31.997 | 0.880 | 0.963 | 25.120 | 0.395 | 0.671 | 20.784 | 30.350
IMD075 | Lee | 84.834 | 28.845 | 86.173 | 28.777 | 0.979 | 0.983 | 16.386 | 0.880 | 0.951 | 15.894 | 27.734
 | Xie | 4.125 | 41.977 | 5.986 | 40.359 | 0.999 | 0.986 | 27.081 | 0.932 | 0.969 | 23.102 | 37.372
 | Abbas | 123.562 | 27.212 | 123.997 | 27.197 | 0.951 | 0.917 | 14.234 | 0.803 | 0.920 | 11.946 | 24.537
 | Huang | 92.738 | 28.458 | 93.963 | 28.401 | 0.970 | 0.977 | 15.607 | 0.851 | 0.934 | 14.744 | 26.916
 | Toossi | 131.867 | 26.929 | 132.950 | 26.894 | 0.942 | 0.913 | 13.898 | 0.762 | 0.897 | 11.703 | 24.251
 | Bibiloni | 77.560 | 29.234 | 79.014 | 29.154 | 0.976 | 0.982 | 16.953 | 0.825 | 0.949 | 16.059 | 28.034
 | HR-SSC | 42.993 | 31.797 | 46.676 | 31.440 | 0.829 | 0.944 | 21.012 | 0.286 | 0.539 | 19.158 | 32.117
Table 3. Quality evaluation of the results on the sHSim-data—best results are in bold.
| Img | Met. | MSE | PSNR | MSE3 | PSNR3 | SSIM | MSSIM | VSNR | VIFP | UQI | NQM | WSNR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ISIC_0000040 | Lee | 284.167 | 23.595 | 378.463 | 22.351 | 0.976 | 0.952 | 8.877 | 0.712 | 0.916 | 0.702 | 19.995 |
| | Xie | 184.397 | 25.473 | 295.277 | 23.429 | 0.977 | 0.874 | 3.327 | 0.562 | 0.923 | 2.318 | 19.575 |
| | HR-SSC | 114.313 | 27.550 | 158.448 | 26.132 | 0.982 | 0.985 | 8.688 | 0.697 | 0.611 | 15.389 | 31.447 |
| ISIC_0000096 | Lee | 8.604 | 38.784 | 13.237 | 36.913 | 0.997 | 0.983 | 16.692 | 0.905 | 0.956 | 14.830 | 36.493 |
| | Xie | 164.330 | 25.974 | 236.750 | 24.388 | 0.985 | 0.917 | 3.736 | 0.818 | 0.950 | 2.842 | 21.224 |
| | HR-SSC | 5.160 | 41.005 | 12.589 | 37.131 | 0.996 | 0.961 | 20.198 | 0.716 | 0.614 | 16.606 | 39.021 |
| ISIC_0000184 | Lee | 26.720 | 33.862 | 39.355 | 32.181 | 0.995 | 0.959 | 20.266 | 0.815 | 0.915 | 16.444 | 31.995 |
| | Xie | 391.069 | 22.208 | 580.441 | 20.493 | 0.966 | 0.848 | 9.115 | 0.711 | 0.903 | 4.434 | 17.907 |
| | HR-SSC | 12.844 | 37.044 | 19.339 | 35.267 | 0.995 | 0.966 | 24.759 | 0.728 | 0.801 | 19.208 | 34.899 |
| ISIC_0000257 | Lee | 14.885 | 36.403 | 20.021 | 35.116 | 0.993 | 0.974 | 14.336 | 0.750 | 0.922 | 10.323 | 32.190 |
| | Xie | 244.908 | 24.241 | 326.325 | 22.994 | 0.966 | 0.854 | 1.574 | 0.528 | 0.911 | 1.219 | 17.999 |
| | HR-SSC | 1.094 | 47.739 | 1.910 | 45.320 | 0.998 | 0.994 | 31.166 | 0.845 | 0.652 | 23.113 | 46.466 |
| ISIC_0000410 | Lee | 11.893 | 37.378 | 16.306 | 36.007 | 0.989 | 0.986 | 14.863 | 0.831 | 0.957 | 12.455 | 33.225 |
| | Xie | 138.029 | 26.731 | 186.638 | 25.421 | 0.975 | 0.923 | 3.274 | 0.690 | 0.953 | 3.516 | 21.231 |
| | HR-SSC | 2.346 | 44.427 | 4.869 | 41.257 | 0.971 | 0.976 | 22.008 | 0.620 | 0.354 | 20.210 | 41.823 |
| ISIC_0010503 | Lee | 14.655 | 36.471 | 21.322 | 34.843 | 0.987 | 0.970 | 18.175 | 0.771 | 0.923 | 15.295 | 33.283 |
| | Xie | 209.160 | 24.926 | 297.527 | 23.396 | 0.961 | 0.877 | 5.627 | 0.644 | 0.913 | 3.655 | 19.156 |
| | HR-SSC | 3.716 | 42.430 | 5.489 | 40.736 | 0.992 | 0.987 | 27.291 | 0.673 | 0.646 | 21.392 | 39.927 |
| ISIC_0006982 | Lee | 4.379 | 41.717 | 6.318 | 40.125 | 0.997 | 0.991 | 17.597 | 0.896 | 0.968 | 17.799 | 39.600 |
| | Xie | 78.818 | 29.165 | 106.018 | 27.877 | 0.990 | 0.950 | 3.448 | 0.840 | 0.957 | 4.028 | 23.575 |
| | HR-SSC | 15.341 | 36.272 | 23.827 | 34.360 | 0.990 | 0.947 | 10.670 | 0.479 | 0.342 | 8.234 | 32.009 |
| ISIC_0007693 | Lee | 11.965 | 37.352 | 16.412 | 35.979 | 0.988 | 0.986 | 11.822 | 0.786 | 0.959 | 10.848 | 33.922 |
| | Xie | 137.389 | 26.751 | 187.715 | 25.396 | 0.974 | 0.926 | 1.848 | 0.595 | 0.949 | 2.385 | 21.840 |
| | HR-SSC | 3.157 | 43.138 | 4.986 | 41.153 | 0.983 | 0.981 | 15.156 | 0.527 | 0.298 | 18.922 | 40.392 |
| ISIC_0009993 | Lee | 33.238 | 32.914 | 44.902 | 31.608 | 0.989 | 0.969 | 21.622 | 0.765 | 0.909 | 10.981 | 31.791 |
| | Xie | 567.510 | 20.591 | 757.976 | 19.334 | 0.949 | 0.823 | 11.120 | 0.566 | 0.890 | 1.901 | 17.733 |
| | HR-SSC | 3.807 | 42.325 | 5.871 | 40.444 | 0.996 | 0.983 | 26.823 | 0.831 | 0.801 | 18.435 | 40.775 |
| ISIC_0010182 | Lee | 41.090 | 31.993 | 55.472 | 30.690 | 0.963 | 0.947 | 14.732 | 0.596 | 0.885 | 7.949 | 29.978 |
| | Xie | 523.315 | 20.943 | 703.673 | 19.657 | 0.930 | 0.771 | 4.603 | 0.423 | 0.879 | 0.568 | 16.851 |
| | HR-SSC | 1.097 | 47.727 | 1.875 | 45.401 | 0.992 | 0.992 | 29.604 | 0.909 | 0.690 | 22.147 | 46.114 |
| ISIC_0010226 | Lee | 52.791 | 30.905 | 68.491 | 29.774 | 0.983 | 0.945 | 12.075 | 0.719 | 0.908 | 4.659 | 29.058 |
| | Xie | 565.808 | 20.604 | 729.467 | 19.501 | 0.952 | 0.791 | 4.894 | 0.556 | 0.898 | 0.354 | 17.064 |
| | HR-SSC | 2.754 | 43.731 | 4.069 | 42.035 | 0.992 | 0.989 | 27.022 | 0.680 | 0.526 | 23.257 | 49.042 |
| ISIC_0010584 | Lee | 11.216 | 37.632 | 15.213 | 36.309 | 0.994 | 0.979 | 15.329 | 0.879 | 0.944 | 12.649 | 34.171 |
| | Xie | 201.006 | 25.099 | 267.899 | 23.851 | 0.974 | 0.889 | 2.207 | 0.771 | 0.936 | 1.850 | 19.196 |
| | HR-SSC | 2.362 | 44.397 | 3.562 | 42.613 | 0.997 | 0.987 | 25.250 | 0.855 | 0.855 | 18.605 | 41.457 |
| ISIC_0011323 | Lee | 30.361 | 33.308 | 40.709 | 32.034 | 0.984 | 0.968 | 14.807 | 0.809 | 0.938 | 11.530 | 30.180 |
| | Xie | 263.791 | 23.918 | 353.511 | 22.647 | 0.961 | 0.872 | 5.395 | 0.686 | 0.935 | 3.013 | 19.455 |
| | HR-SSC | 1.922 | 45.294 | 2.683 | 43.844 | 0.985 | 0.993 | 30.951 | 0.775 | 0.666 | 27.842 | 47.033 |
Table 4. Average quality evaluation of the results on H13GAN-data, H13Sim-data, sH13Sim-data, and HSim-data—best results are in bold.
| Dataset | Met. | MSE | PSNR | MSE3 | PSNR3 | SSIM | MSSIM | VSNR | VIFP | UQI | NQM | WSNR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| H13GAN-data | Lee | 58.441 | 31.454 | 65.314 | 30.752 | 0.863 | 0.948 | 21.176 | 0.379 | 0.651 | 19.719 | 32.927 |
| | Xie | 23.211 | 34.802 | 30.925 | 33.445 | 0.872 | 0.952 | 25.959 | 0.400 | 0.663 | 22.993 | 37.326 |
| | HR-SSC | 59.378 | 31.161 | 66.453 | 30.521 | 0.813 | 0.912 | 20.430 | 0.284 | 0.527 | 19.424 | 32.329 |
| H13Sim-data | Lee | 50.490 | 32.631 | 51.118 | 32.486 | 0.964 | 0.973 | 20.947 | 0.833 | 0.931 | 19.450 | 32.700 |
| | Xie | 11.919 | 38.485 | 16.792 | 36.968 | 0.996 | 0.975 | 24.728 | 0.914 | 0.958 | 20.241 | 34.727 |
| | HR-SSC | 59.626 | 31.384 | 63.298 | 31.012 | 0.838 | 0.928 | 20.762 | 0.333 | 0.605 | 19.413 | 32.356 |
| sH13Sim-data | Lee | 41.997 | 34.793 | 56.632 | 33.379 | 0.987 | 0.970 | 15.476 | 0.787 | 0.931 | 12.147 | 31.991 |
| | Xie | 282.272 | 24.356 | 386.863 | 22.953 | 0.966 | 0.870 | 4.628 | 0.645 | 0.923 | 2.833 | 19.447 |
| | HR-SSC | 13.070 | 41.775 | 19.194 | 39.669 | 0.990 | 0.980 | 23.045 | 0.718 | 0.604 | 19.489 | 40.800 |
| HSim-data | Lee | 56.748 | 32.693 | 77.344 | 31.310 | 0.984 | 0.959 | 16.709 | 0.773 | 0.918 | 11.184 | 30.038 |
| | Xie | 376.331 | 23.049 | 509.629 | 21.717 | 0.958 | 0.853 | 7.000 | 0.650 | 0.911 | 3.202 | 18.552 |
| | HR-SSC | 24.525 | 38.001 | 35.450 | 36.237 | 0.988 | 0.972 | 21.952 | 0.714 | 0.677 | 16.212 | 35.725 |
Table 5. Hair area on sHSim-data—best results are in bold.
| Img | A_I | A_L | A_x | A_R |
|---|---|---|---|---|
| ISIC_0000040 | 47,608 | 77,001 | 1,665,451 | 51,985 |
| ISIC_0000096 | 66,463 | 86,920 | 3,106,080 | 78,198 |
| ISIC_0000184 | 32,827 | 42,115 | 733,964 | 33,815 |
| ISIC_0000257 | 26,069 | 33,456 | 768,308 | 28,057 |
| ISIC_0000410 | 86,133 | 110,826 | 4,541,149 | 105,109 |
| ISIC_0000503 | 27,411 | 33,822 | 724,917 | 28,063 |
| ISIC_0006982 | 73,066 | 110,461 | 6,010,208 | 106,147 |
| ISIC_0007693 | 100,975 | 131,136 | 5,984,212 | 135,523 |
| ISIC_0009993 | 35,991 | 47,248 | 766,116 | 38,432 |
| ISIC_0010182 | 36,257 | 44,818 | 764,856 | 38,127 |
| ISIC_0010226 | 33,226 | 41,208 | 770,008 | 34,677 |
| ISIC_0010584 | 21,630 | 27,634 | 774,687 | 22,792 |
| ISIC_0011323 | 20,309 | 27,615 | 774,768 | 22,041 |
Table 6. Average hair area values on the HSim-data, to compare with <A_I> = 42,648—best results are in bold.
| Dataset | <A_L> | <A_x> | <A_R> |
|---|---|---|---|
| HSim-data | 61,045 | 1,594,363 | 48,830 |
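The areas in Tables 5 and 6 are pixel counts of the detected binary hair masks, compared against the reference area A_I. A minimal sketch of how such per-image and dataset-average areas could be tallied (the toy masks below are hypothetical placeholders):

```python
import numpy as np

def hair_area(mask):
    """Area of a binary hair mask, measured as the number of nonzero pixels."""
    return int(np.count_nonzero(mask))

def mean_hair_area(masks):
    """Average detected hair area over a dataset of masks (as in Table 6)."""
    return sum(hair_area(m) for m in masks) / len(masks)

# Toy masks with 3 and 5 hair pixels, respectively.
m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 1, 1], [1, 1, 0]])
print(hair_area(m1))             # 3
print(mean_hair_area([m1, m2]))  # 4.0
```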
Table 7. False discovery rate (FDR) and true discovery rate (TDR) on sHSim-data—best results are in bold.
| Img | Met. | FDR | TDR |
|---|---|---|---|
| ISIC_0000040 | Lee | 0.326 | 0.674 |
| | Xie | 0.980 | 0.020 |
| | HR-SSC | 0.202 | 0.798 |
| ISIC_0000096 | Lee | 0.303 | 0.697 |
| | Xie | 0.985 | 0.015 |
| | HR-SSC | 0.206 | 0.794 |
| ISIC_0000184 | Lee | 0.293 | 0.707 |
| | Xie | 0.988 | 0.012 |
| | HR-SSC | 0.217 | 0.783 |
| ISIC_0000257 | Lee | 0.311 | 0.689 |
| | Xie | 0.987 | 0.013 |
| | HR-SSC | 0.211 | 0.789 |
| ISIC_0000410 | Lee | 0.299 | 0.701 |
| | Xie | 0.989 | 0.011 |
| | HR-SSC | 0.243 | 0.757 |
| ISIC_0000503 | Lee | 0.281 | 0.719 |
| | Xie | 0.991 | 0.009 |
| | HR-SSC | 0.276 | 0.724 |
| ISIC_0006982 | Lee | 0.654 | 0.346 |
| | Xie | 0.992 | 0.008 |
| | HR-SSC | 0.464 | 0.536 |
| ISIC_0007693 | Lee | 0.651 | 0.349 |
| | Xie | 0.992 | 0.008 |
| | HR-SSC | 0.469 | 0.531 |
| ISIC_0009993 | Lee | 0.641 | 0.359 |
| | Xie | 0.992 | 0.008 |
| | HR-SSC | 0.456 | 0.544 |
| ISIC_0010182 | Lee | 0.631 | 0.369 |
| | Xie | 0.992 | 0.008 |
| | HR-SSC | 0.444 | 0.556 |
| ISIC_0010226 | Lee | 0.628 | 0.372 |
| | Xie | 0.992 | 0.008 |
| | HR-SSC | 0.439 | 0.561 |
| ISIC_0010584 | Lee | 0.618 | 0.382 |
| | Xie | 0.992 | 0.008 |
| | HR-SSC | 0.427 | 0.573 |
| ISIC_0011323 | Lee | 0.608 | 0.392 |
| | Xie | 0.992 | 0.008 |
| | HR-SSC | 0.416 | 0.584 |
Table 8. Average FDR and TDR on the HSim-data—best results are in bold.
| Met. | <FDR> | <TDR> |
|---|---|---|
| Lee | 0.503 | 0.497 |
| Xie | 0.990 | 0.010 |
| HR-SSC | 0.360 | 0.640 |
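In Tables 7 and 8, FDR and TDR sum to one for each method, which is consistent with both rates being computed over the set of detected hair pixels, i.e., FDR = FP/(TP+FP) and TDR = TP/(TP+FP). A sketch under that assumption (the formulas are inferred from the reported numbers, not stated in this excerpt):

```python
import numpy as np

def discovery_rates(detected, truth):
    """FDR and TDR of a detected binary hair mask against a ground-truth mask.

    Assumes FDR = FP/(TP+FP) and TDR = TP/(TP+FP), so FDR + TDR = 1
    whenever at least one pixel is detected.
    """
    detected = np.asarray(detected, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.count_nonzero(detected & truth)   # detected pixels that are hair
    fp = np.count_nonzero(detected & ~truth)  # detected pixels that are not hair
    if tp + fp == 0:
        return 0.0, 0.0  # nothing detected
    return fp / (tp + fp), tp / (tp + fp)

# Toy masks: 3 detected pixels, 2 of which are true hair pixels.
det = np.array([[1, 1], [1, 0]])
gt = np.array([[1, 0], [1, 0]])
fdr, tdr = discovery_rates(det, gt)
print(round(fdr, 3), round(tdr, 3))  # 0.333 0.667
```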
Share and Cite

MDPI and ACS Style

Ramella, G. Hair Removal Combining Saliency, Shape and Color. Appl. Sci. 2021, 11, 447. https://doi.org/10.3390/app11010447
