Article

Segmentation of Melanocytic Lesion Images Using Gamma Correction with Clustering of Keypoint Descriptors

by Damilola Okuboyejo and Oludayo O. Olugbara *
ICT and Society Research Group, South Africa Luban Workshop, Durban University of Technology, Durban 4000, South Africa
*
Author to whom correspondence should be addressed.
Diagnostics 2021, 11(8), 1366; https://doi.org/10.3390/diagnostics11081366
Submission received: 30 April 2021 / Revised: 26 July 2021 / Accepted: 27 July 2021 / Published: 29 July 2021
(This article belongs to the Special Issue Imaging Diagnosis for Melanoma)

Abstract:
The early detection of skin cancer, especially through the examination of lesions with malignant characteristics, has been reported to significantly decrease potential fatalities. Segmentation of the regions that contain the actual lesions is one of the most widely used steps for achieving an automated diagnostic process of skin lesions. However, accurate segmentation of skin lesions has proven to be a challenging task in medical imaging because of intrinsic factors such as the existence of undesirable artifacts and the complexity surrounding the seamless acquisition of lesion images. In this paper, we introduce a novel algorithm based on gamma correction with clustering of keypoint descriptors for accurate segmentation of lesion areas in dermoscopy images. The algorithm was tested on dermoscopy images acquired from the publicly available dataset of the Pedro Hispano hospital to achieve compelling equidistant sensitivity, specificity, and accuracy scores of 87.29%, 99.54%, and 96.02%, respectively. Moreover, the validation of the algorithm on a subset of heavily noised skin lesion images collected from the public dataset of the International Skin Imaging Collaboration yielded equidistant sensitivity, specificity, and accuracy scores of 80.59%, 100.00%, and 94.98%, respectively. These results are promising when compared to those obtained with existing modern algorithms on the same standard benchmark datasets and performance evaluation indices.

1. Introduction

1.1. Background

Melanocytic lesions typically refer to the proliferation of melanin-producing, neural crest-derived melanocytic cells in human skin. These lesions can either be benign (innocuous) or malignant (cancerous). The three most prevalent malignant skin lesions are basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and melanoma [1]. Melanoma is a skin cancer that is often caused by an unpredictable disorder in the melanocytic cells, which eventually leads to the improper synthesis of melanin. It is currently the seventh most commonly occurring cancer in young adults aged 15–29 [1]. Moreover, it is one of the largest contributors to the disability-adjusted life years (DALYs) of many countries within North America, West Europe, and Australia [2]. The socio-economic impacts for patients with melanoma have increased in certain geographical areas. A recent report has indicated a relationship between unemployment and impaired health-related quality of life (HRQoL) in melanoma patients in a geographical region of China [3].
The need for early detection of skin cancer cannot be overemphasized because a 5-year survival rate of up to 95% has been reported for melanoma diagnosed at an early stage [4,5]. Dermatologists mostly rely on excision biopsy of lesion areas, subjecting the extracted lesions to clinical examination for successful early detection. Concerns with frequent lesion excision include blisters, infections, bleeding, and sometimes nerve damage. The literature indicates a low diagnostic confidence of less than 40% when excision is performed to detect cancerous lesions [6]. This inherent challenge has necessitated the drive to espouse non-invasive and automated diagnosis for classifying a lesion as benign or malignant. Segmentation is a crucial preprocessing step for successfully distinguishing between benign and malignant lesions, reducing avoidable excisions that are costly economically, emotionally, and mentally [1,7]. However, several challenges face the accurate segmentation of skin lesions, including the presence of noise artifacts, low contrast between lesions and the surrounding skin, fuzzy borders, and the irregular structures that characterize lesion images. A good number of dermatologists rely heavily on manual tumor tracing for lesion localization, which is prone to human error and gives room for inconsistencies [1,7]. Several lesion segmentation methods have been proposed in the literature to address these concerns and aid better lesion diagnosis.
The extraction of a set of promising features that can help to localize lesion areas is considered an essential component of effective lesion boundary tracing. A feature can be described as a piece of information relevant to solving a specific problem of data analytics. In addition, features can be used to reference interesting points in an image. These interesting points typically have well-defined positions and are often invariant to transformations such as scaling and rotation, as well as to illumination changes. Features can be based on low-level image primitives, such as intensity (color), shape (edge, region, contour), and texture (co-occurrence matrix, Fourier). The connotation of keypoint features generally refers to spatial locations that define interesting points in an image. These interesting points can be of varying characteristics, such as edge, corner, line, and blob, and their properties can be useful for detecting lesion areas. The extraction of appropriate discriminating features that can aid effective lesion segmentation is often influenced by the conditions surrounding the acquisition of lesion images, such as non-uniform illumination that can affect image contrast. This is one application domain where preprocessing techniques, such as gamma correction, can be applied profitably. The method of gamma correction, which is sometimes called gamma adjustment, can be applied to lesion images to obtain the desired contrast features for effective lesion boundary tracing.
The common challenges often associated with many of the existing image segmentation methods include difficulty of reproducibility, computational intensiveness, and performance bottlenecks. The purpose of this study is to introduce a simple, reproducible, and effective novel method based on gamma correction with clustering of keypoint descriptors to improve the segmentation performance of melanocytic lesions. The performance of the proposed lesion segmentation algorithm has been compared with a set of modern methods on two standard publicly available benchmark datasets using popular evaluation metrics. This study has contributed to the research on skin lesion segmentation through the development of a novel segmentation algorithm with the following unique contributions:
  • The application of keypoint descriptors with a data clustering technique for accurate segmentation of skin lesion areas, which recorded improved segmentation performance compared to existing algorithms for the same task.
  • The transformation of the identified keypoint descriptors by the clustering of valid neighboring image data points to identify the true lesion area of interest.
  • The experimental comparison of the proposed image segmentation algorithm with other prominent image segmentation algorithms reported in the literature to demonstrate the effectiveness of the proposed algorithm.
The remainder of this paper is succinctly organized as follows. Section 1.2 highlights the related works, Section 2 describes the introduced method, Section 3 discusses the experimental results, and Section 4 gives concluding remarks.

1.2. Related Works

The literature on image processing and computer vision has reported advances in the use of several lesion segmentation approaches, such as edge-based [8], region-based [7,9], contour-based [10], texture-aware [11], thresholding [12,13], clustering [14,15], and, recently, deep learning [16,17,18]. The edge-based image segmentation methods typically rely on edge operators such as Laplacian-of-Gaussian (LoG) and Canny to retrieve relevant edge information that can assist in boundary tracing. However, edge-based image segmentation customarily suffers from the use of dynamic programming on lesions comprising several tumor areas [1]. Region-based segmentation algorithms mostly group lesion areas using collective image characteristics [19,20]. However, they can sometimes lead to over-segmentation, especially for lesion images comprising multi-colored areas [1]. The contour-based lesion segmentation provides the ability to use either region or edge information to estimate lesion boundaries [19,20]. Texture-based segmentation relies on textural properties, such as co-occurrence matrices of lesion images, to suggest possible lesion boundaries. Thresholding techniques can be useful for lesion segmentation by assigning pixels below or above a threshold value. However, one major drawback of thresholding is its unpredictability when applied to images with noise such as vignettes. Most methods of segmentation by clustering follow the pattern of thresholding in multidimensional space to trace lesion boundaries in an unsupervised manner. The study reported in [13] utilized an amalgam of methods comprising histogram intensity equalization, thresholding, morphological operations, and the GrabCut algorithm to segment lesion areas in dermoscopic images. The thresholding technique was used for the first-level segmentation in the CIELAB color model, while GrabCut was used to perform the second-stage semi-automatic segmentation.
In recent years, there has been traction in the application of deep learning methods for the segmentation of skin lesions. Deep learning algorithms are well suited for semantic segmentation of images, pixel-wise labeling, and automatic feature construction, which have all yielded remarkable performances [21,22,23,24,25,26,27,28,29]. Deep learning methods for skin lesion segmentation reported in the literature are mostly based on the convolutional neural network (CNN) because of its inherent ability to leverage large datasets to extract discriminative object features. The CNN is a special type of deep neural network (DNN) where each neuron in the network layers has multiple high-dimensional filters that are convolved with the input of the current or previous layer. Liu et al. [23] proposed a deep learning method based on CNN with auxiliary information to segment skin lesions, achieving up to 88.76% sensitivity. Similarly, Phan et al. [25] utilized two auxiliary tasks integrated into the decoder of a U-Net architecture for skin lesion segmentation to report accuracy of up to 94.55%. Khan et al. [30] implemented a hybrid approach that uses maximum mutual information to fuse results from a CNN model and an improved high-dimension contrast transform (HDCT)-based saliency segmentation. They proposed a 16-layered CNN architecture that consisted of five convolutional layers, four rectified linear unit (ReLU) layers, three max-pooling layers, one transpose layer, one softmax layer, and one pixel classification layer. The authors concluded that they extracted more discriminant features using the localized lesion regions when compared to extracting features from the original images. However, one of the main challenges of using deep learning methods is their tendency to produce overfitted models.
The application of keypoints to segment regions of interest in images has gained momentum in recent years. Lin et al. [31] applied keypoint contexts to detect region duplication within an image. Heinly et al. [32] reported the performance of popular feature detectors and descriptors for perspective, affine, and non-geometric image transforms. The strength of keypoint-based segmentation lies in the effectiveness of the identified keypoint features and, possibly, their corresponding descriptors. Feature detection is the process of identifying interesting points that can uniquely characterize an image. Several detectors of keypoint features have been reported in the literature, including good features to track (GFTT) [33], the Harris detector [34], level curve curvature, features from accelerated segment test (FAST) [35], adaptive and generic accelerated segment test (AGAST) [36], maximally stable extremal regions (MSER), Laplacian-of-Gaussian (LoG), difference-of-Gaussian (DoG), and determinant-of-Hessian (DoH). In the context of lesion images, a feature detector refers to an algorithm that can detect interest points within a given image. Single-scale feature detectors, such as the Harris and FAST detectors, have a single representation for the detected features, whereas multi-scale feature detectors, such as LoG and DoG, have multiple representations. A patch feature refers to a square pixel region that represents a neighborhood around a given interest point.
Feature description is the process of representing the characteristics of a given set of image features, and a feature descriptor refers to an algorithm that represents unique image characteristics. A descriptor typically encodes feature points into a series of numbers that can be used as a numerical fingerprint to distinguish one feature from another. Some of the frequently used descriptors include binary robust independent elementary features (BRIEF) [37,38] and fast retina keypoint (FREAK) [39]. In addition, the literature has recorded several algorithms with the capability to both detect and describe keypoints. In this category are speeded-up robust features (SURF) [40,41], scale-invariant feature transform (SIFT) [42], oriented FAST and rotated BRIEF (ORB) [43], binary robust invariant scalable keypoints (BRISK) [44], KAZE [45], and accelerated KAZE (AKAZE).
Previous studies have compared the effectiveness of various feature descriptors and their invariance to scale, rotation, and viewpoint changes [46,47,48,49,50]. However, their success can be undermined when they are applied to poorly acquired images and moderately to heavily noised lesion images. Due to the undesired results emanating from poorly acquired images, there has been a growing need for some form of correction that can compensate for this intrinsic limitation. Gamma correction is one such adjustment, influencing the intensity of a given image before the required morphological operations are performed. Baptiste et al. [51] proposed a new edge detector based on anisotropic linear filtering, local maximization, and gamma correction. Their method boasts the ability to detect edges in parts of a given image where objects are either under-exposed or over-exposed. In this study, gamma correction is applied together with keypoint descriptors to effectively segment lesions in dermoscopy images.

2. Materials and Methods

2.1. Materials

Publicly available experimental datasets from two prominent sources were used to test the performance of the proposed segmentation algorithm. All 200 images of the dataset from the dermatology service of the Pedro Hispano hospital (PH2), consisting of 160 benign melanocytic nevi (MN) and 40 malignant melanomas (MM), were used for comparison of the proposed algorithm with existing non-deep learning algorithms. In addition, 5400 images selected from the International Skin Imaging Collaboration (ISIC) dataset were used for comparison, comprising 4014 melanocytic nevi (MN), 399 seborrheic keratoses (SK), 19 actinic keratoses (AK), 50 dermatofibromas (DF), 40 vascular lesions (VL), 13 unknown benign types (UNB), 657 melanomas (MM), 101 basal cell carcinomas (BCC), 44 squamous cell carcinomas (SCC), and 63 unknown malignant types (UNM). Table 1 describes the experimental images used to test the performance of the proposed algorithm for the segmentation of skin lesions. The images used for experimentation are characterized by moderate to heavy noise, such as air bubbles, hair occlusion, ruler markings, and vignettes. It should be noted that these images were featured in the ISIC challenges of the years 2016 to 2019.

2.2. Proposed Method

A novel method termed gamma-adjusted skin image segmentation using keypoints (GASISUK) is proposed to precisely segment skin lesion areas in melanocytic images. The GASISUK algorithm is invariant to scale, orientation, and rotation transformations. The technique of gamma correction is used to obtain the desired contrast features for a given lesion image. The essential keypoints are then extracted from the gamma-corrected image as discriminating features relative to lesion areas and surrounding non-lesion areas. The extracted keypoints are then clustered in a way that fosters an appropriate and inexpensive segmentation of lesion areas. Publicly available source code implementations of modern non-deep learning methods were used to evaluate the segmentation results. The code implementations used for both efficient graph-based image segmentation [52] and statistical region merging [53] were provided in [54]. To evaluate the saliency detection method [55], we relied on the application provided in [56]. The source code implementations for the rest of the non-deep learning methods were from the study reported in [57]. The essential phases of the proposed algorithm are image preprocessing, gamma correction, and clustering of keypoint features.

2.2.1. Image Preprocessing

Image preprocessing was performed on a given image $I$ with a domain of definition $D$ to increase the efficiency of the lesion segmentation algorithm, because the possible presence of noise in $I$ might negatively affect the segmentation result. The edge-preserving image smoothing function proposed by Ambrosio and Tortorelli [58] was applied to reduce the effect of unwanted artifacts such as salt-and-pepper noise particles in the lesion image. Ambrosio and Tortorelli [58] validated in their work that an active contour based on the Mumford–Shah functional can be derived by computing the limit of the energy functional $E(J, z, \varepsilon)$ given by Equation (1), in which the boundary $B$ is replaced by a continuous function $z$ whose magnitude indicates the presence of a boundary. The image smoothing operation can equally assist in highlighting possible hair occlusion while blurring the image background, which was particularly beneficial for identifying hair shaft noise in lesion images.
\[
E(J, z, \varepsilon) = C \int_{D} \big( I(x) - J(x) \big)^{2} \, dx + A \int_{D} z^{2}(x) \, \lvert \nabla J(x) \rvert^{2} \, dx + B \int_{D} \Big( \varepsilon \lvert \nabla z(x) \rvert^{2} + \varepsilon^{-1} \varphi^{2}\big(z(x)\big) \Big) \, dx \tag{1}
\]
where φ is a potential function with the following possible solutions:
\[ \varphi_{1}(z) = \tfrac{1}{2}(1 - z), \quad z \in [0, 1] \]
\[ \varphi_{2}(z) = \sqrt{3}\,(1 - z), \quad z \in [0, 1] \]
A fast line detector (FLD) [59] was used to detect hair shaft noise in the given input lesion image, and restricting the threshold length of detectable lines to 20 improved the performance results in this study. FLD is a recognition algorithm that uses a vocabulary tree built with mean standard deviation line descriptors to find candidate matches. The morphology black-hat operation was then applied to the results of the FLD to further eliminate possible noise, such as hair shafts and ruler markings, from the lesion image. Different structuring kernel sizes were used depending on the size of the detected lines. The actual removal of the identified lines was performed using the digital inpainting of the fast marching method (FMM) [60,61]. This ensures that a lesion image is free from noise such as hair shafts and ruler markings that often confuse most segmentation methods. Algorithm 1 summarizes the essential steps performed on each lesion image during the preprocessing stage to achieve effective segmentation results; a code sketch of this chain follows the algorithm.
Algorithm 1. Preprocessing.
Ia = smoothing of the input image I using the Ambrosio–Tortorelli [58] minimizer
Ig = gray level of Ia
Lf = fast line detections using a length threshold of 20
Len(Lf) = size of detected Lf
if Len(Lf) ≥ 1:
  create a 2-D kernel kbm based on Len(Lf):
    kbm = 3 × 3 if 20 ≥ Len(Lf) ≥ 1
    kbm = 7 × 7 if 35 ≥ Len(Lf) > 20
    kbm = 11 × 11 if 50 ≥ Len(Lf) > 35
    kbm = 15 × 15 if Len(Lf) > 50
  Ibm = morphology black-hat of Ig using kbm
  Ibt = morphology binary threshold of Ibm
  Ip = fast marching inpainting [61] using mask Ibt
else:
  Ip = initialized as Ia
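The following Python sketch, built on OpenCV, illustrates one plausible implementation of the Algorithm 1 chain. It is a minimal sketch rather than the authors' exact code: OpenCV ships no Ambrosio–Tortorelli minimizer, so an edge-preserving filter stands in for the smoothing step, the binary threshold value of 10 is an assumption, and interpreting Len(Lf) as the number of detected lines is one reading of Algorithm 1. The length threshold of 20 and the kernel sizes follow Algorithm 1, and Telea's method [61] implements the fast marching inpainting.

```python
import cv2

def preprocess(image_bgr):
    # Stand-in for the Ambrosio-Tortorelli smoothing step (assumption).
    smoothed = cv2.edgePreservingFilter(image_bgr)
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)

    # Fast line detector (opencv-contrib) with a length threshold of 20.
    fld = cv2.ximgproc.createFastLineDetector(20)
    lines = fld.detect(gray)
    if lines is None:
        return smoothed  # no hair shaft or ruler marking candidates

    # Kernel size grows with the detected lines, as in Algorithm 1.
    n = len(lines)
    size = 3 if n <= 20 else 7 if n <= 35 else 11 if n <= 50 else 15
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (size, size))

    # Black-hat highlights thin dark structures such as hair shafts,
    # which are then thresholded into an inpainting mask.
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)

    # Fast-marching (Telea) inpainting removes the masked structures.
    return cv2.inpaint(smoothed, mask, 3, cv2.INPAINT_TELEA)
```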

2.2.2. Gamma Correction

The application of gamma correction to lesion images can help to ensure that low contrast images are properly adjusted to reduce the effect of local shadow and suppress noise interference on the images. Moreover, it can help to optimize the usage of image bits by taking advantage of humans’ non-linear perception of light and color. Due to the complexity surrounding the acquisition of lesion images, digital images can have undesirable quality. Gamma correction has been applied in this study to obtain the desired contrast features for a given image and assist in enhancing lesion thresholding [62,63].
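As an illustration, a common lookup-table formulation of gamma correction in Python is sketched below. The exponent convention varies across implementations ($v^{\gamma}$ versus $v^{1/\gamma}$), so whether a given factor brightens or darkens an image depends on the convention adopted; Algorithm 2 assumes gamma factors of 0.75 for predominantly light images and 1.7 for predominantly dark ones.

```python
import numpy as np
import cv2

def gamma_correct(image, gamma):
    # Map each 8-bit value v to 255 * (v / 255) ** gamma via a lookup
    # table; with this convention, gamma < 1 brightens and gamma > 1
    # darkens the image.
    table = (((np.arange(256) / 255.0) ** gamma) * 255).astype("uint8")
    return cv2.LUT(image, table)
```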

2.2.3. Clustering of Keypoint Descriptors

Image features can be categorized appositely as flat, edge, corner, or blob features in computer vision. Flat areas within an image refer to regions where pixel intensities tend to be homogeneous. Edges refer to boundaries between regions of an image where there is a discontinuity in pixel values. Corner features represent points in an image where two edges intersect, revealing the regions with the maximum variation in intensity when a window is moved in all directions within the image. Blobs refer to dark-on-bright regions or bright-on-dark regions within an image. When a window is moved vertically on an image, a flat region yields the same result, but edges and corners might produce different results. Similarly, when a window is moved horizontally on an image, both the flat and edge regions produce repeatable results parallel to the direction. However, corner regions would probably produce an irreproducible result when such a window is moved horizontally, making them valuable for identifying discriminating features within an image. Consequently, the GASISUK algorithm uses corner keypoints as discriminating features relative to the lesion areas and surrounding non-lesion areas. In addition, it uses ORB [43] to perform keypoint feature detection and description because ORB can detect keypoints at each pyramid level, giving it a scale invariance advantage. Moreover, ORB advances the success of BRIEF, which uses binary strings to represent feature points, by adding rotation-invariant capability at a much cheaper computational cost. Furthermore, the feature detector in ORB leverages the achievement of the FAST feature detector using a multiscale representation of a single image at different resolutions, thereby adding scale-invariant capability, and it adds orientation capability to the FAST algorithm so that interest points are detected at a much quicker speed. An orientation is assigned to each keypoint depending on the level of intensity change around it.
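A minimal sketch of the ORB detection step is shown below, using the parameter values reported in Section 3.3; the input image path is hypothetical and stands for a gamma-corrected grayscale lesion image.

```python
import cv2

# ORB tuned with the values reported in Section 3.3: at most 3000
# keypoints, a pyramid scale factor of 1.2, 8 pyramid levels, and an
# edge threshold and patch size of 9 for the oriented BRIEF descriptors.
orb = cv2.ORB_create(nfeatures=3000, scaleFactor=1.2, nlevels=8,
                     edgeThreshold=9, patchSize=9)

# Hypothetical path to a gamma-corrected grayscale lesion image.
gray_gamma = cv2.imread("lesion_gamma.png", cv2.IMREAD_GRAYSCALE)
keypoints, descriptors = orb.detectAndCompute(gray_gamma, None)
```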
The image pixels of lesion areas were observed to be typically situated in close proximity. This suggests that features representing lesion areas are likely to form well-defined density-connected components that can aid appropriate lesion boundary tracing. Density-based spatial clustering of applications with noise (DBSCAN) is a non-parametric clustering algorithm based on pixel density [64,65]. It marks a point as a cluster outlier if it lies in a low-density region where its nearest neighbors are far apart. The clustering algorithm is particularly advantageous over partition-based counterparts, such as the k-means or fuzzy c-means algorithms, because of its ability to find arbitrarily shaped clusters without requiring a user to specify the number of clusters a priori. The application of DBSCAN helps to ensure that noise particles masquerading as features are trapped as outliers. In this study, we have clustered the identified keypoint features from lesion images using the DBSCAN algorithm with the Euclidean distance metric in a memory-efficient way. In addition, we have ensured that groups with fewer than two contiguous members are automatically discarded as potential noise. The density-connectedness and density-reachability mechanisms were computed as detailed in the previous study [64].
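The clustering step can be sketched with scikit-learn's DBSCAN as follows. The neighborhood radius eps is illustrative (an assumption, since its value depends on the image scale), while min_samples = 2 mirrors the rule of discarding groups with fewer than two contiguous members.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Cluster the (x, y) positions of the ORB keypoints detected above.
coords = np.array([kp.pt for kp in keypoints])

# eps is an assumed neighborhood radius in pixels; min_samples = 2
# discards singleton groups as potential noise.
labels = DBSCAN(eps=25.0, min_samples=2, metric="euclidean").fit_predict(coords)

# DBSCAN labels outliers as -1; keep only densely connected keypoints.
lesion_points = coords[labels != -1]
```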

2.3. Algorithmic Description

Algorithm 2 describes the GASISUK algorithm proposed in this paper to effectively segment lesion areas of interest from the surrounding regions. The brightness of each lesion image was classified as predominantly light or dark relative to its dominant color, and the dominant color was computed using a flattened array bin count, as sketched below. The algorithm showcases two-level contour filtering based on well-defined conditions. Because morphology operations such as erosion are applied in the first-level contour filtering, the surface area of the identified lesion boundary can potentially shrink. This informs the condition of the second-level contour filtering, which ensures that the identified contours have a surface area above a pre-defined minimum value.
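The following is a hedged sketch of the dominant color computation; packing each BGR pixel into a single integer before counting is one plausible reading of the paper's flattened array bin count, not necessarily the authors' exact procedure.

```python
import numpy as np

def dominant_color(image_bgr):
    # Pack each BGR pixel into a single integer so that np.bincount
    # can rank the colors of the flattened image.
    flat = image_bgr.reshape(-1, 3).astype(np.uint32)
    packed = (flat[:, 0] << 16) | (flat[:, 1] << 8) | flat[:, 2]
    top = int(np.bincount(packed).argmax())
    return ((top >> 16) & 255, (top >> 8) & 255, top & 255)

# Per Algorithm 2, an image whose dominant color exceeds (20, 20, 20)
# is labeled predominantly light; otherwise, predominantly dark.
```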
Algorithm 2. GASISUK Algorithm.
Let Ip = preprocessed image (see Section 2.2.1)
Let ow and oh be the original width and height of Ip, respectively
Let č = dominant color of Ip
if č > (20, 20, 20):
  Br = predominantly light
else:
  Br = predominantly dark
if Br is predominantly light:
  assume gamma factor gf of 0.75
  Iγ = gamma correction of Ip
  Iγg = gray level of Iγ
  Iγt = binary threshold of Iγg
  Iγm = morphology opening of Iγt using a 7 × 7 kernel
  Iγb = initial segmentation using bitwise AND of Iγm and Ip
  if Iγb == [0], depicting a black image:
    Iγb = Ip
else:
  assume gamma factor gf of 1.7
  Iγb = gamma correction of Ip
kp = keypoint features of Iγb using ORB
kpc = clustering of keypoint features using DBSCAN
Ci = contour sketch of the clustered keypoints
Iγf1 = first-level contour filtering:
  filter off contours satisfying any of the characteristics below:
    ∗ contour is identified as an inner contour
    ∗ contour has an area of less than 32,000
    ∗ width and height of contour are less than the minimum width (variable value of 0.2·ow or fixed value of 200) and minimum height (variable value of 0.2·oh or fixed value of 200)
  morphology erosion of the filtered contours
Iγf2 = second-level contour filtering:
  filter off contours satisfying the characteristic below:
    ∗ contour has an area of less than 25,000
Is = segmented lesion image using the filled mask of Iγf2
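The contour tracing and filtering in Algorithm 2 can be sketched as follows. Outlining each DBSCAN cluster with its convex hull is one plausible reading of the contour sketch step rather than the authors' exact procedure, and the fallback from the 0.2·ow × 0.2·oh minimum to a fixed 200 × 200 is simplified here; the 32,000 area threshold follows Algorithm 2, and the second-level filtering with the 25,000 threshold would be applied analogously after erosion.

```python
import numpy as np
import cv2

def clusters_to_mask(coords, labels, image_shape, ow, oh):
    # Build a binary lesion mask from the DBSCAN keypoint clusters.
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    min_w, min_h = 0.2 * ow, 0.2 * oh  # fixed 200x200 fallback omitted

    for lbl in set(labels) - {-1}:  # skip DBSCAN outliers
        pts = coords[labels == lbl].astype(np.int32)
        hull = cv2.convexHull(pts)
        x, y, w, h = cv2.boundingRect(hull)
        # First-level filtering: discard small candidate contours.
        if cv2.contourArea(hull) < 32000 or (w < min_w and h < min_h):
            continue
        cv2.drawContours(mask, [hull], -1, 255, thickness=cv2.FILLED)

    # Erosion precedes the second-level contour filtering in Algorithm 2.
    return cv2.erode(mask, np.ones((3, 3), np.uint8))
```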

3. Results and Discussion

This section presents the experimental evaluation of the GASISUK algorithm on the PH2 database and subsets of the ISIC database containing lesion images with heavy noise, such as vignettes, hair follicles, ruler marking, and air bubbles. The qualitative and quantitative test results of the algorithm against modern segmentation algorithms are also presented and discussed in this section.

3.1. Qualitative Result

Table 2 shows the qualitative results computed by GASISUK against some of the modern non-deep learning algorithms. Out of the 5400 images in the ISIC dataset and 200 images in the PH2 dataset used in this study, we selected 60 image subsets comprising 15 benign ISIC images, 15 benign PH2 images, 15 malignant ISIC images, and 15 malignant PH2 images for qualitative evaluation. The modern algorithms evaluated include morphology active contour without edge (morph_cv_ls) [10], morphology geodesic active contour (morph_gac_ls) [10], saliency detection (saliency_map) [55], simple linear iterative clustering (slic_clust) [66], and statistical region merging (srm_obj) [53]. The proposed method of this study shows compelling visual results and a clear advantage over the compared methods.
Most of the existing modern algorithms failed to properly trace the lesion boundary when it was occluded with artifacts such as hair or ruler markings, as seen in the sample comparisons of ISIC_0000043, ISIC_0000095, and PH2_IMD003. As detailed in Section 2.2.1, our algorithm resolved many of the challenging noise artifacts by applying preprocessing operations, such as the Ambrosio and Tortorelli [58] minimizer, the fast line detector (FLD) [59], the fast marching method (FMM) [60], and morphological operations such as black-hat. Some of the lesion images, such as ISIC_0000004, ISIC_0000030, ISIC_0000147, ISIC_0000179, ISIC_0000554, ISIC_0001108, ISIC_0001118, ISIC_0001142, PH2_IMD168, PH2_IMD349, and PH2_IMD435, exhibit multiple shades of intensity that could easily be confused as lesion areas in the segmentation procedure. The vignette noise was another artifact that negatively influenced the outcome of most of the evaluated algorithms, as seen in Table 2 for ISIC_0000125, ISIC_0000247, and ISIC_0000249, Table 3 for PH2_IMD010, PH2_IMD048, and PH2_IMD375, Table 4 for ISIC_0000004, ISIC_0000030, and ISIC_0001142, and Table 5 for PH2_IMD064 and PH2_IMD348. In our proposed algorithm, however, the application of gamma correction to each lesion image assisted in obtaining the desired contrast effect. This mechanism contributed greatly towards the superior outcome of our proposed algorithm, as detailed in Table 2, Table 3, Table 4 and Table 5, on both benign and malignant lesions using both PH2 images and heavily noised ISIC images.

3.2. Quantitative Result

The standard statistical evaluation metrics recommended in the literature were used to quantitatively assess the results computed by the proposed algorithm. For the geometric evaluation, we have used the median (Med) value for each of the following metrics: Med-sensitivity, Med-specificity, Med-accuracy, Med-Jaccard-index, and Med-Dice-coefficient. This is particularly beneficial given the resilience of equidistant values to outliers and ease of computation. Sensitivity measures the degree of correctly identified lesion areas. Specificity measures the degree of correctly identified non-lesion areas. The Jaccard index compares similarity and diversity between predicted lesion areas and actual ground truth. The Dice coefficient compares the pixel-wise similarity between the predicted lesion areas and actual ground truth. Accuracy measures the statistical bias of the lesion segmentation.
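In terms of the pixel-wise confusion counts between a predicted lesion mask and the ground truth, namely true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN), these metrics take their standard definitions:
\[ \mathrm{Sensitivity} = \frac{TP}{TP + FN}, \qquad \mathrm{Specificity} = \frac{TN}{TN + FP}, \qquad \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \]
\[ \mathrm{Jaccard} = \frac{TP}{TP + FP + FN}, \qquad \mathrm{Dice} = \frac{2\,TP}{2\,TP + FP + FN} \]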
The proposed algorithm has been compared against several algorithms from the literature, and its median results were observed to be remarkable. The algorithms used for the quantitative evaluation are active contour without edge (ACWE), tagged cv_ls [67], morphology ACWE (morph_cv_ls) [10], morphology geodesic active contour (Morphology GAC), tagged morph_gac_ls [10], adaptive thresholding (adaptive_thresh) [68], ISODATA thresholding (isodata_thresh) [69], mean thresholding (mean_thresh) [70], triangle thresholding (triangle_thresh) [71], Otsu thresholding (otsu_thresh) [72], saliency detection (saliency_map) [55], statistical region merging (srm_obj) [53], efficient graph-based image segmentation (egbs_obj) [52], and simple linear iterative clustering (slic_clust) [66]. Results with a Jaccard index ≥ 0.6 and a Dice coefficient ≥ 0.6 are generally considered acceptable in the literature and are therefore used as benchmarks in this study. In Table 6, adaptive thresholding (adaptive_thresh) [68] and Morphology GAC [10] recorded the best results, with sensitivity scores of 97.95% and 93.66%, respectively, over the benign ISIC dataset, reflecting the possibility of capturing most of the lesion interest points from the surrounding skin area. Considering the results of adaptive_thresh across the other metrics, however, it performed poorly, as its median Jaccard index and Dice coefficient are both below 0.6. The proposed GASISUK method and morphology ACWE (morph_cv_ls) [10] recorded the best specificity score of 100% over the same dataset, followed by statistical region merging (srm_obj) [53] with a specificity score of 99.96%. The competing specificity result of morph_cv_ls can be attributed to its use of curvature morphological operators. The GASISUK algorithm recorded better results in accuracy (95.17%), Jaccard index (0.80), and Dice coefficient (0.89), showing its superiority over the other algorithms.
The evaluation on benign PH2 images in Table 7 shows srm_obj [53] to have recorded the best sensitivity score, followed by Morphology GAC (morph_gac_ls) [10]. The proposed algorithm equally recorded superior results in accuracy (96.76%), Jaccard index (0.85), and Dice coefficient (0.92). In Table 8 and Table 9, Morphology ACWE (morph_cv_ls) [10] shows the best specificity result, while adaptive thresholding (adaptive_thresh) [68] shows promising results based on sensitivity score.
In Table 8, the proposed algorithm has displayed its superiority over the other algorithms when considering the accuracy (93.45%), Jaccard index (0.78), and Dice coefficient (0.88) results on the malignant ISIC dataset. Similar results were recorded for the proposed algorithm in Table 9 with accuracy (80.94%), Jaccard index (0.70), and Dice coefficient (0.82).
The srm_obj [53], as illustrated in Table 9, performed best with a sensitivity score of 99.66% over the malignant PH2 dataset, trailed by adaptive thresholding (adaptive_thresh) [68]. Similar to the result obtained over the malignant ISIC dataset, Morphology ACWE [10] recorded the best specificity score of 100% over the malignant PH2 dataset, closely trailed by the proposed algorithm with a value of 99.50%. However, the accuracy (80.94%), Jaccard index (0.70), and Dice coefficient (0.82) place the proposed algorithm at the top of the performance chart.
Deep learning methods for the segmentation of lesion images have reported commendable results in recent years. Table 10 highlights some of the recent modern deep learning algorithms on the ISIC 2017 segmentation task. The authors of the deep learning works reported performance results over 600 images from the ISIC datasets. Phan et al. [25] recorded the best accuracy, Jaccard index, and Dice coefficient, which could be attributed to the two auxiliary tasks of boundary distance map regression and corresponding contour detection. The result by Shan et al. [27] recorded the highest specificity because it applied a dual-path network as a replacement for fully convolved DenseNets. The combined evaluation of our algorithm over the 5400 benign and malignant ISIC images, however, showed a promising result of 100.00% specificity, 94.98% accuracy, a 0.79 Jaccard index, and a 0.89 Dice coefficient, thus displaying favorable skin lesion generalization.

3.3. Discussion

Due to the variation in the degree of noise per image, data preprocessing was found to improve the segmentation result of most of the test images. Noise artifacts such as hair follicles and ruler markings typically affect the segmentation results, necessitating some form of initial removal of such artifacts. A fast line detector [59] with a line threshold of 20 was used to estimate the number of possible hair follicles and ruler markings represented as lines within a lesion image. The black-hat morphological operation was used to assist in estimating the traces of hair follicles or similar noise. The lengths of the detected lines were used to determine the kernel matrix for effective computation of the black-hat operation. The mask generated from the black-hat operation was then used to perform fast marching inpainting using the neighboring pixels. The black-hat and inpainting operations were restricted to lesion images with at least one detected line to avoid unnecessary preprocessing.
The keypoint feature detection process using the ORB detector was limited to 3000 keypoints to reduce the possibility of noise masquerading as valid features. The ORB parameters were tuned to have a scale factor for pyramid decimation of 1.2, and the number of pyramid levels was limited to 8. The patch size and edge threshold for the oriented BRIEF descriptors were both set to 9. The minimum acceptable contour area was set to 32,000 and 25,000 for the first-level and second-level contour filtering, respectively. To ensure that noise artifacts are filtered off during the first-level contour filtering, any contour smaller than 0.2w × 0.2h, depicting 20% of the original width and 20% of the original height, was discarded. The dimension assumption automatically falls back to a fixed minimum of 200 × 200 if the initial minimum required dimension fails to yield an acceptable segmentation result. Image intensity dominance was computed using a 2D-array bin count over the RGB color range of (256, 256, 256). If the computed dominant color is greater than (20, 20, 20), the image is labeled as predominantly light; otherwise, it is labeled as a dark image. The brightness label further determines how the gamma correction of the image is performed. For light-labeled images, Otsu binary thresholding is performed, and if it does not yield the desired result, the procedure automatically falls back to adaptive Gaussian binary thresholding. This is subsequently followed by gamma correction of the image intensity for the desired contrast. The usage of the ORB feature detector and descriptor in the proposed method ensures that the detected keypoints are invariant to basic transformations such as rotation, scale, and orientation. The DBSCAN clustering of the identified keypoints was performed to filter out detected keypoints that do not form part of the lesion areas, and groups with fewer than two contiguous members were discarded.
In this study, we tested our algorithm on the entire 200 images of the PH2 dataset and 5400 moderately to heavily noised lesion images from the ISIC dataset. Testing on the PH2 benign lesion images yielded equidistant results of 89.42% sensitivity, 99.55% specificity, 96.76% accuracy, a 0.85 Jaccard index, and a 0.92 Dice coefficient. Compellingly, over 90% of the tested benign lesion images achieved a Jaccard index of at least 0.6. Equidistant results of 70.39% sensitivity, 99.50% specificity, 80.94% accuracy, a 0.70 Jaccard index, and a 0.82 Dice coefficient were recorded over the malignant PH2 lesion images, of which 72.50% achieved a Jaccard index of at least 0.6. Consequently, this yielded an overall equidistant result of 87.29% sensitivity, 99.54% specificity, 96.02% accuracy, a 0.83 Jaccard index, and a 0.91 Dice coefficient over the entire 200 images from the PH2 dataset. Similarly, a convincing equidistant result of 100.00% specificity, 95.17% accuracy, a 0.80 Jaccard index, and a 0.89 Dice coefficient was recorded after testing our algorithm on the 4535 benign lesion images from the ISIC dataset, of which 99.63% achieved at least a 0.6 Jaccard index. For the malignant lesion images from the ISIC dataset, our algorithm achieved 80.02% sensitivity, 99.97% specificity, 93.45% accuracy, a 0.78 Jaccard index, and a 0.88 Dice coefficient, with up to 94.22% of the tested images achieving a Jaccard index of at least 0.6. Overall, our algorithm yielded a compelling result of 80.59% sensitivity, 100.00% specificity, 94.98% accuracy, a 0.79 Jaccard index, and a 0.89 Dice coefficient over the 5400 lesion images selected from the ISIC dataset.
The receiver operating characteristic (ROC) curves in Figure 1a–d illustrate the potential and effectiveness of our proposed algorithm. It achieved a compelling area under the ROC curve (AUC) of 1.00 on benign lesion images from both the PH2 and ISIC datasets. Similarly, AUC values of 0.99 and 0.98 were recorded for the PH2 and ISIC malignant lesion images, respectively. As illustrated in [74], we considered a minimum AUC value of 0.70 acceptable and an AUC between 0.80 and 0.90 excellent, while results over 0.90 are considered outstanding. The top-performing algorithms with a minimum AUC of 0.90 on both the PH2 and ISIC datasets were Morphology ACWE, Morphology GAC, and the proposed GASISUK algorithm. Both the Otsu and mean thresholding algorithms performed the worst on PH2 malignant lesion images, with an approximate AUC of 0.22. Similarly, both Otsu and triangle thresholding reported the least-performing results for PH2 benign lesion images. The ISODATA thresholding, mean thresholding, triangle thresholding, Otsu thresholding, and simple linear iterative clustering performed the worst on both ISIC malignant and benign lesion images. Statistical region merging (SRM) performed well on both PH2 and ISIC benign lesion images, with AUC values of 0.90 and 0.95, respectively, though it performed poorly when evaluated over the malignant PH2 dataset. The efficient graph-based image segmentation (EGBS) appeared able to trace the lesion boundaries of most ISIC images, reporting AUC values of 0.97 and 0.98 for malignant and benign lesion images, respectively. EGBS performed excellently on benign PH2 images but poorly on malignant PH2 lesion images, similar to the behavior seen in SRM. The saliency detection method gave good results on benign ISIC lesion images and performed excellently on malignant ISIC lesion images.
Considering the 5400 images used for the evaluation experiments, we believe our method generalizes well for skin lesion segmentation. It should also be noted that while deep learning methods have shown good promise in object classification challenges because of their ability to learn feature sets, recent literature reports have suggested that their accuracy in the domain of medical image segmentation needs further improvement [75]. Deep learning segmentation methods have also been reported to lack pixel-level accuracy without further processing [76,77], primarily because most of them work at the feature level rather than the pixel level for image segmentation. In addition, the use of deep learning methods for image segmentation is currently impaired by factors such as the need for more datasets for continuous training, the lack of memory-efficient models for both training and inference [76], limited reference information for accurate validation [78], and the possibility of overfitted results [79].

4. Conclusions

The novel application of gamma correction, keypoint descriptors, and data clustering has been demonstrated for the effective segmentation of melanocytic lesion images. The scaling of images to a standard dimension of 200 × 150 during processing contributed towards increasing the execution speed of the proposed algorithm; once the segmentation process is complete, the image is rescaled to the desired dimension without loss of information. The proposed algorithm is highly effective at identifying multiple lesion areas within a given image: it can filter out potential inner lesion areas when the outer areas have already been selected, thereby avoiding duplicate segmentation of the same lesion areas. This mechanism ensures that the proposed algorithm can effectively perform multiple segmentations of lesion areas without duplicating the segmented regions, such as segmented inner regions found within another segmented region. The proposed lesion segmentation algorithm has proven to be compelling when compared to some modern segmentation algorithms, and it would be interesting to see how it contributes to the effective diagnosis of lesion images in clinical settings.

Author Contributions

Conceptualization by D.O. and O.O.O.; methodology by D.O.; data curation by D.O.; writing—review and editing by O.O.O.; supervision by O.O.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available ISIC datasets used in this study can be found at https://www.isic-archive.com/#!/topWithHeader/onlyHeaderTop/gallery?filter=%5B%5D (accessed on 29 July 2021). Similarly, publicly available PH2 datasets used can be found at https://www.dropbox.com/s/k88qukc20ljnbuo/PH2Dataset.rar (accessed on 29 July 2021). Description of datasets used in this study can also be found at https://github.com/dokuboyejo/gasisuk-pub (accessed on 29 July 2021).

Acknowledgments

The authors would like to sincerely thank the anonymous reviewers for their constructive comments and useful suggestions that have led to significant improvements in the quality and presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Okuboyejo, D.A.; Olugbara, O.O. A review of prevalent methods for automatic skin lesion diagnosis. Open Dermatol. J. 2018, 12, 14–53. [Google Scholar] [CrossRef]
  2. Karimkhani, C.; Green, A.; Nijsten, T.; Weinstock, M.; Dellavalle, R.; Naghavi, M.; Fitzmaurice, C. The global burden of melanoma: Results from the global burden of disease study 2015. Br. J. Dermatol. 2017, 177, 134–140. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Guo, Y.; Shen, M.; Zhang, X.; Xiao, Y.; Zhao, S.; Yin, M.; Bu, W.; Wang, Y.; Chen, X.; Su, J. Unemployment and health-related quality of life in melanoma patients during the COVID-19 pandemic. Front. Public Health 2021, 9, 630620. [Google Scholar] [CrossRef] [PubMed]
  4. Jones, O.T.; Ranmuthu, C.K.I.; Hall, P.N.; Funston, G.; Walter, F. Recognising skin cancer in primary care. Adv. Ther. 2019, 37, 603–616. [Google Scholar] [CrossRef] [Green Version]
  5. Janda, M.; Horsham, C.; Koh, U.; Gillespie, N.; Loescher, L.J.; Vagenas, D.; Soyer, H.P. Redesigning skin cancer early detection and care using a new mobile health application: Protocol of the SKIN research project, a randomised controlled trial. Dermatology 2018, 235, 11–18. [Google Scholar] [CrossRef] [Green Version]
  6. Baade, P.D.; Youl, P.H.; Janda, M.; Whiteman, D.C.; Del Mar, C.B.; Aitken, J.F. Factors associated with the number of lesions excised for each skin cancer: A study of primary care physicians in Queensland, Australia. Arch. Dermatol. 2008, 144, 1468–1476. [Google Scholar] [CrossRef] [Green Version]
  7. Oliveira, R.B.; Filho, M.E.; Ma, Z.; Papa, J.P.; Pereira, A.S.; Tavares, J.M.R.S. Computational methods for the image segmentation of pigmented skin lesions: A review. Comput. Methods Programs Biomed. 2016, 131, 127–141. [Google Scholar] [CrossRef] [Green Version]
  8. Abbas, Q.; Celebi, M.E.; García, I.F. Skin tumour area extraction using an improved dynamic programming approach. Skin Res. Technol. 2012, 18, 133–142. [Google Scholar] [CrossRef]
  9. Olugbara, O.O.; Taiwo, T.B.; Heukelman, D. Segmentation of melanoma skin lesion using perceptual color difference saliency with morphological analysis. Math. Probl. Eng. 2018, 2018, 1524286. [Google Scholar] [CrossRef]
  10. Marquez-Neila, P.; Baumela, L.; Alvarez, L. A morphological approach to curvature-based evolution of curves and surfaces. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 2–17. [Google Scholar] [CrossRef] [PubMed]
  11. Serrano, C.; Acha, B. Pattern analysis of dermoscopic images based on Markov random fields. Pattern Recognit. 2009, 42, 1052–1057. [Google Scholar] [CrossRef]
  12. Zortea, M.; Flores, E.; Scharcanski, J. A simple weighted thresholding method for the segmentation of pigmented skin lesions in macroscopic images. Pattern Recognit. 2017, 64, 92–104. [Google Scholar] [CrossRef]
  13. Okuboyejo, D.A.; Olugbara, O.O.; Odunaike, S.A. CLAHE inspired segmentation of dermoscopic images using mixture of methods. In Transactions on Engineering Technologies; Haeng-Kon, K., Sio-Iong, A., Mahyar, A.A., Eds.; Springer: Dordrecht, The Netherlands, 2014; pp. 355–365. [Google Scholar]
  14. Sadri, A.R.; Zekri, M.; Sadri, S.; Gheissari, N.; Mokhtari, M.; Kolahdouzan, F. Segmentation of dermoscopy images using wavelet networks. IEEE Trans. Biomed. Eng. 2013, 60, 1134–1141. [Google Scholar] [CrossRef] [PubMed]
  15. Lemon, J.; Kockara, S.; Halic, T.; Mete, M. Density-based parallel skin lesion border detection with webCL. BMC Bioinform. 2015, 16, S5. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Saha, A.; Prasad, P.; Thabit, A. Leveraging adaptive color augmentation in convolutional neural networks for deep skin lesion segmentation. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020. [Google Scholar]
  17. Topiwala, A.; Al-Zogbi, L.; Fleiter, T.; Krieger, A. Adaptation and evaluation of deep learning techniques for skin segmentation on novel abdominal dataset. In Proceedings of the 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), Athens, Greece, 28–30 October 2019; pp. 752–759. [Google Scholar]
  18. Youssef, A.; Bloisi, D.D.; Muscio, M.; Pennisi, A.; Nardi, D.; Facchiano, A. Deep convolutional pixel-wise labeling for skin lesion image segmentation. In Proceedings of the 2018 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Rome, Italy, 11–13 June 2018; pp. 1–6. [Google Scholar]
  19. Tang, J. A multi-direction GVF snake for the segmentation of skin cancer images. Pattern Recognit. 2009, 42, 1172–1179. [Google Scholar] [CrossRef]
  20. Mete, M.; Sirakov, N.M. Dermoscopic diagnosis of melanoma in a 4D space constructed by active contour extracted features. Comput. Med. Imaging Graph. 2012, 36, 572–579. [Google Scholar] [CrossRef]
  21. Bi, L.; Kim, J.; Ahn, E.; Kumar, A.; Feng, D.; Fulham, M. Step-wise integration of deep class-specific learning for dermoscopic image segmentation. Pattern Recognit. 2019, 85, 78–89. [Google Scholar] [CrossRef] [Green Version]
  22. Sarker, M.M.K.; Rashwan, H.A.; Akram, F.; Banu, S.F.; Saleh, A.; Singh, V.K.; Chowdhury, F.U.H.; Abdulwahab, S.; Romani, S.; Radeva, P.; et al. SLSDeep: Skin lesion segmentation based on dilated residual and pyramid pooling networks. In Proceedings of the Medical Image Computing and Computer Assisted Intervention (MICCAI 2018), Granada, Spain, 16–20 September 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer: Berlin, Germany, 2018; pp. 21–29. [Google Scholar]
  23. Liu, L.; Tsui, Y.; Mandal, M. Skin lesion segmentation using deep learning with auxiliary task. J. Imaging 2021, 7, 67. [Google Scholar] [CrossRef]
  24. Al-Masni, M.A.; Al-Antari, M.A.; Choi, M.-T.; Han, S.-M.; Kim, T.-S. Skin lesion segmentation in dermoscopy images via deep full resolution convolutional networks. Comput. Methods Programs Biomed. 2018, 162, 221–231. [Google Scholar] [CrossRef] [PubMed]
  25. Phan, T.-D.; Kim, S.-H.; Yang, H.-J.; Lee, G.-S. Skin lesion segmentation by u-net with adaptive skip connection and structural awareness. Appl. Sci. 2021, 11, 4528. [Google Scholar] [CrossRef]
  26. Yuan, Y.; Lo, Y.-C. Improving dermoscopic image segmentation with enhanced convolutional-deconvolutional networks. IEEE J. Biomed. Health Inform. 2017, 23, 519–526. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Shan, P.; Wang, Y.; Fu, C.; Song, W.; Chen, J. Automatic skin lesion segmentation based on FC-DPN. Comput. Biol. Med. 2020, 123, 103762. [Google Scholar] [CrossRef] [PubMed]
  28. Tang, P.; Liang, Q.; Yan, X.; Xiang, S.; Sun, W.; Zhang, D.; Coppola, G. Efficient skin lesion segmentation using separable-Unet with stochastic weight averaging. Comput. Methods Programs Biomed. 2019, 178, 289–301. [Google Scholar] [CrossRef]
  29. Khan, M.A.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R. Skin lesion segmentation and multiclass classification using deep learning features and improved moth flame optimization. Diagnostics 2021, 11, 811. [Google Scholar] [CrossRef] [PubMed]
  30. Khan, M.A.; Muhammad, K.; Sharif, M.; Akram, T.; de Albuquerque, V.H.C. Multi-class skin lesion detection and classification via teledermatology. IEEE J. Biomed. Health Inform. 2021, 1. [Google Scholar] [CrossRef]
  31. Lin, C.; Lu, W.; Sun, W.; Zeng, J.; Xu, T.; Lai, J.-H. Region duplication detection based on image segmentation and keypoint contexts. Multimed. Tools Appl. 2017, 77, 14241–14258. [Google Scholar] [CrossRef]
  32. Heinly, J.; Dunn, E.; Frahm, J.-M. Comparative evaluation of binary features. In Proceedings of the Computer Vision (ECCV 2012), Florence, Italy, 7–13 October 2012; Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C., Eds.; Springer: Berlin, Germany, 2012; pp. 759–773. [Google Scholar]
  33. Shi, J. Good features to track. In Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 593–600. [Google Scholar]
  34. Harris, C.; Stephens, M. A Combined corner and edge detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988. [Google Scholar]
  35. Rosten, E.; Drummond, T. Machine learning for high-speed corner detection. In Proceedings of the Computer Vision (ECCV 2006), Graz, Austria, 7–13 May 2006; Leonardis, A., Bischof, H., Pinz, A., Eds.; Springer: Berlin, Germany, 2006; pp. 430–443. [Google Scholar]
  36. Mair, E.; Hager, G.D.; Burschka, D.; Suppa, M.; Hirzinger, G. Adaptive and generic corner detection based on the accelerated segment test. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010; Daniilidis, K., Maragos, P., Paragios, N., Eds.; Springer: Berlin, Germany, 2010; pp. 183–196. [Google Scholar]
  37. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary robust independent elementary features. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010; Daniilidis, K., Maragos, P., Paragios, N., Eds.; Springer: Berlin, Germany, 2010; pp. 778–792. [Google Scholar]
  38. Calonder, M.; Lepetit, V.; Ozuysal, M.; Trzcinski, T.; Strecha, C.; Fua, P. BRIEF: Computing a local binary descriptor very fast. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1281–1298. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Alahi, A.; Ortiz, R.; Vandergheynst, P. FREAK: Fast retina keypoint. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 510–517. [Google Scholar]
  40. Bay, H.; Tuytelaars, T.; Gool, L.V. SURF: Speeded up robust features. In Proceedings of the Computer Vision (ECCV 2006), Graz, Austria, 7–13 May 2006; Leonardis, A., Bischof, H., Pinz, A., Eds.; Springer: Berlin, Germany, 2006; pp. 404–417. [Google Scholar]
  41. Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L.V. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  42. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  43. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
  44. Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary robust invariant scalable keypoints. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555. [Google Scholar]
  45. Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE features. In Proceedings of the Computer Vision (ECCV 2012), Florence, Italy, 7–13 October 2012; Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C., Eds.; Springer: Berlin, Germany, 2012; pp. 214–227. [Google Scholar]
  46. Tareen, S.A.K.; Saleem, Z. A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 3–4 March 2018; pp. 1–10. [Google Scholar]
  47. Chien, H.; Chuang, C.; Chen, C.; Klette, R. When to use what feature? SIFT, SURF, ORB, or A-KAZE features for monocular visual odometry. In Proceedings of the 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), Palmerston North, New Zealand, 21–22 November 2016; pp. 1–6. [Google Scholar]
  48. Tafti, A.P.; Baghaie, A.; Kirkpatrick, A.B.; Holz, J.D.; Owen, H.A.; D’Souza, R.M.; Yu, Z. A comparative study on the application of SIFT, SURF, BRIEF and ORB for 3D surface reconstruction of electron microscopy images. Comput. Methods Biomech. Biomed. Eng. 2018, 6, 17–30. [Google Scholar] [CrossRef]
49. Karami, E.; Prasad, S.; Shehata, M. Image matching using SIFT, SURF, BRIEF and ORB: Performance comparison for distorted images. arXiv 2017, arXiv:1710.02726. [Google Scholar]
  50. Hidalgo, F.; Bräunl, T. Evaluation of several feature detectors/extractors on underwater images towards vSLAM. Sensors 2020, 20, 4343. [Google Scholar] [CrossRef]
51. Magnier, B.; Montesinos, P.; Diep, D. Fast anisotropic edge detection using gamma correction in color images. In Proceedings of the 7th International Symposium on Image and Signal Processing and Analysis (ISPA 2011), Dubrovnik, Croatia, 4–6 September 2011. [Google Scholar]
  52. Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient graph-based image segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
  53. Nock, R.; Nielsen, F. Statistical region merging. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1452–1458. [Google Scholar] [CrossRef] [PubMed]
  54. Bastan, M. Segment-Py. 2020. Available online: https://Github.Com/Mubastan/Segment-Py (accessed on 6 June 2021).
  55. Hou, X.; Zhang, L. Saliency detection: A spectral residual approach. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8. [Google Scholar]
  56. Beyeler, M. OpenCV with Python Blueprints; Packt Publishing: Birmingham, UK, 2015; ISBN 978-1-78528-269-0. [Google Scholar]
57. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
58. Ambrosio, L.; Tortorelli, V.M. Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence. Commun. Pure Appl. Math. 1990, 43, 999–1036. [Google Scholar] [CrossRef]
  59. Lee, J.H.; Lee, S.; Zhang, G.; Lim, J.; Chung, W.K.; Suh, I.H. Outdoor place recognition in urban environments using straight lines. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 5550–5557. [Google Scholar]
  60. Huan, X.; Murali, B.; Ali, A.L. Image restoration based on the fast marching method and block based sampling. Comput. Vis. Image Underst. 2010, 114, 847–856. [Google Scholar] [CrossRef]
  61. Telea, A.A. An image inpainting technique based on the fast-marching method. J. Graph. Tools 2004, 9, 23–34. [Google Scholar] [CrossRef]
  62. Kassem, R.; Chehade, W.E.H.; El-Zaart, A. Bimodal skin cancer image segmentation based on different parameter shapes of gamma distribution. In Proceedings of the 2019 Third International Conference on Intelligent Computing in Data Sciences (ICDS), Marrakech, Morocco, 28–30 October 2019; pp. 1–5. [Google Scholar]
63. Rawas, S.; El-Zaart, A. HCET-G2: Dermoscopic skin lesion segmentation via hybrid cross entropy thresholding using Gaussian and gamma distributions. In Proceedings of the 2019 Third International Conference on Intelligent Computing in Data Sciences (ICDS), Marrakech, Morocco, 28–30 October 2019; pp. 1–7. [Google Scholar]
  64. Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, 2–4 August 1996; pp. 226–231. [Google Scholar]
  65. Schubert, E.; Sander, J.; Ester, M.; Kriegel, H.P.; Xu, X. DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN. ACM Trans. Database Syst. 2017, 42, 1–21. [Google Scholar] [CrossRef]
  66. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [Green Version]
  67. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Gonzalez, R.C.; Woods, R.E.; Barry, R. Digital image processing. J. Biomed. Opt. 2009, 14, 029901. [Google Scholar] [CrossRef]
  69. Ridler, T.W.; Calvard, S. Picture thresholding using an iterative selection method. IEEE Trans. Syst. Man Cybern. 1978, 8, 630–632. [Google Scholar]
  70. Glasbey, C.A. An analysis of histogram-based thresholding algorithms. CVGIP Graph. Models Image Process. 1993, 55, 532–537. [Google Scholar] [CrossRef]
  71. Zack, G.; Rogers, W.; Latt, S. Automatic measurement of sister chromatid exchange frequency. J. Histochem. Cytochem. 1977, 25, 741–753. [Google Scholar] [CrossRef]
  72. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  73. Kumar, A.; Hamarneh, G.; Drew, M.S. Illumination-based transformations improve skin lesion segmentation in dermoscopic images. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 3132–3141. [Google Scholar]
  74. Mandrekar, J.N. Receiver operating characteristic curve in diagnostic test assessment. J. Thorac. Oncol. 2010, 5, 1315–1316. [Google Scholar] [CrossRef] [Green Version]
  75. Liu, X.; Song, L.; Liu, S.; Zhang, Y. A review of deep-learning-based medical image segmentation methods. Sustainability 2021, 13, 1224. [Google Scholar] [CrossRef]
  76. Fu, Y.; Lei, Y.; Wang, T.; Curran, W.J.; Liu, T.; Yang, X. A review of deep learning-based methods for medical image multi-organ segmentation. Phys. Med. 2021, 85, 107–122. [Google Scholar] [CrossRef]
  77. Wang, Z. Deep learning for image segmentation: Veritable or overhyped? arXiv 2020, arXiv:1904.08483. [Google Scholar]
  78. Minaee, S.; Boykov, Y.Y.; Porikli, F.; Plaza, A.J.; Kehtarnavaz, N.; Terzopoulos, D. Image segmentation using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 1. [Google Scholar] [CrossRef] [PubMed]
  79. Zhou, T.; Ruan, S.; Canu, S. A review: Deep learning for medical image segmentation using multi-modality fusion. Array 2019, 3–4, 100004. [Google Scholar] [CrossRef]
Figure 1. (a) ROC over PH2 Malignant; (b) ROC over PH2 Benign; (c) ROC over ISIC Malignant; (d) ROC over ISIC Benign.
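The curves in Figure 1 plot pixel-level true-positive rate against false-positive rate per dataset and class subset. The sketch below illustrates one way such ROC curves and their areas can be computed with scikit-learn [57]; it is an assumed reconstruction rather than the authors' published evaluation script, and the mask/score arrays are synthetic placeholders for the real PH2 and ISIC data.

```python
# Hedged sketch: pooled pixel-level ROC from binary ground-truth masks and
# continuous foreground score maps (synthetic stand-ins, not real data).
import numpy as np
from sklearn.metrics import roc_curve, auc

def pooled_roc(gt_masks, score_maps):
    # Flatten and pool all pixels of one subset (e.g., PH2 malignant).
    y_true = np.concatenate([m.ravel() for m in gt_masks])
    y_score = np.concatenate([s.ravel() for s in score_maps])
    fpr, tpr, _ = roc_curve(y_true, y_score)  # sweep over all score thresholds
    return fpr, tpr, auc(fpr, tpr)

# Synthetic placeholder masks/scores; substitute real segmentation outputs.
rng = np.random.default_rng(0)
gt = [rng.integers(0, 2, size=(64, 64)) for _ in range(4)]
scores = [np.clip(g + rng.normal(0.0, 0.4, g.shape), 0.0, 1.0) for g in gt]
fpr, tpr, roc_auc = pooled_roc(gt, scores)
print(f"Pooled pixel-level AUC: {roc_auc:.3f}")
```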
Table 1. Experimental datasets. Benign subclasses: MN, SK, AK, DF, VL, UNB; malignant subclasses: MM, BCC, SCC, UNM.

| Database | MN | SK | AK | DF | VL | UNB | MM | BCC | SCC | UNM | Total |
|----------|------|-----|----|----|----|-----|-----|-----|-----|-----|-------|
| PH2 | 160 | – | – | – | – | – | 40 | – | – | – | 200 |
| ISIC | 4014 | 399 | 19 | 50 | 40 | 13 | 657 | 101 | 44 | 63 | 5400 |
Table 2. Qualitative evaluation on a subset of ISIC dataset (benign). Columns: ID, Original Image, Ground Truth, GASISUK, MORPH_CV_LS [10], MORPH_GAC_LS [10], SALIENCY_MAP [55], SLIC_CLUST [66], SRM_OBJ [53].
[Image grid omitted; rows show the segmentation results for ISIC_0000042, ISIC_0000090, ISIC_0000095, ISIC_0000125, ISIC_0000138, ISIC_0000179, ISIC_0000189, ISIC_0000214, ISIC_0000219, ISIC_0000223, ISIC_0000229, ISIC_0000247, ISIC_0000249, ISIC_0000250, and ISIC_0000258.]
Table 3. Qualitative evaluation on a subset of PH2 dataset (benign). Columns: ID, Original Image, Ground Truth, GASISUK, MORPH_CV_LS [10], MORPH_GAC_LS [10], SALIENCY_MAP [55], SLIC_CLUST [66], SRM_OBJ [53].
[Image grid omitted; rows show the segmentation results for PH2_IMD002, PH2_IMD003, PH2_IMD010, PH2_IMD015, PH2_IMD019, PH2_IMD048, PH2_IMD049, PH2_IMD101, PH2_IMD166, PH2_IMD304, PH2_IMD305, PH2_IMD306, PH2_IMD372, PH2_IMD375, and PH2_IMD393.]
Table 4. Qualitative evaluation on a subset of ISIC dataset (malignant). Columns: ID, Original Image, Ground Truth, GASISUK, MORPH_CV_LS [10], MORPH_GAC_LS [10], SALIENCY_MAP [55], SLIC_CLUST [66], SRM_OBJ [53].
[Image grid omitted; rows show the segmentation results for ISIC_0000004, ISIC_0000030, ISIC_0000042, ISIC_0000043, ISIC_0000147, ISIC_0000547, ISIC_0000555, ISIC_0001100, ISIC_0001108, ISIC_0001118, ISIC_0001119, ISIC_0001124, and ISIC_0001142.]
Table 5. Qualitative evaluation on a subset of PH2 dataset (malignant). Columns: ID, Original Image, Ground Truth, GASISUK, MORPH_CV_LS [10], MORPH_GAC_LS [10], SALIENCY_MAP [55], SLIC_CLUST [66], SRM_OBJ [53].
[Image grid omitted; rows show the segmentation results for PH2_IMD063, PH2_IMD064, PH2_IMD168, PH2_IMD211, PH2_IMD284, PH2_IMD285, PH2_IMD348, PH2_IMD349, PH2_IMD403, PH2_IMD404, PH2_IMD407, PH2_IMD409, PH2_IMD425, PH2_IMD426, and PH2_IMD435.]
Table 6. Quantitative evaluation on a subset of ISIC dataset (benign).

| Methods | Med-Sensitivity (%) | Med-Specificity (%) | Med-Accuracy (%) | Med-Jaccard Index (Ji) | Med-Dice Coefficient (Dc) |
|---|---|---|---|---|---|
| CV_LS [67] | 75.26 | 82.29 | 79.88 | 0.48 | 0.65 |
| MORPH_CV_LS [10] | 76.92 | 100.00 | 93.68 | 0.75 | 0.85 |
| MORPH_GAC_LS [10] | 93.66 | 0.51 | 61.63 | 0.40 | 0.55 |
| ADAPTIVE_THRESH [68] | 97.95 | 0.67 | 25.39 | 0.25 | 0.39 |
| ISODATA_THRESH [69] | 24.07 | 0.22 | 7.33 | 0.06 | 0.11 |
| MEAN_THRESH [70] | 6.00 | 7.25 | 9.08 | 0.02 | 0.03 |
| TRIANGLE_THRESH [71] | 9.53 | 1.89 | 4.82 | 0.02 | 0.05 |
| OTSU_THRESH [72] | 23.72 | 0.23 | 7.32 | 0.06 | 0.11 |
| SALIENCY_MAP [55] | 75.37 | 89.94 | 81.10 | 0.48 | 0.65 |
| SRM_OBJ [53] | 79.05 | 99.96 | 93.04 | 0.73 | 0.84 |
| EGBS_OBJ [52] | 80.61 | 99.62 | 83.91 | 0.43 | 0.61 |
| SLIC_CLUST [66] | 19.70 | 0.35 | 6.76 | 0.05 | 0.10 |
| GASISUK (Proposed) | 80.68 | 100.00 | 95.17 | 0.80 | 0.89 |
Table 7. Quantitative evaluation on a subset of PH2 dataset (benign).

| Methods | Med-Sensitivity (%) | Med-Specificity (%) | Med-Accuracy (%) | Med-Jaccard Index (Ji) | Med-Dice Coefficient (Dc) |
|---|---|---|---|---|---|
| CV_LS [67] | 14.00 | 53.66 | 47.16 | 0.04 | 0.08 |
| MORPH_CV_LS [10] | 77.49 | 99.70 | 93.40 | 0.72 | 0.84 |
| MORPH_GAC_LS [10] | 95.61 | 42.50 | 51.91 | 0.28 | 0.44 |
| ADAPTIVE_THRESH [68] | 91.13 | 3.91 | 21.22 | 0.18 | 0.31 |
| ISODATA_THRESH [69] | 27.28 | 11.24 | 14.85 | 0.06 | 0.12 |
| MEAN_THRESH [70] | 7.94 | 20.67 | 19.43 | 0.02 | 0.04 |
| TRIANGLE_THRESH [71] | 16.88 | 13.50 | 14.83 | 0.04 | 0.08 |
| OTSU_THRESH [72] | 26.57 | 11.29 | 14.76 | 0.06 | 0.12 |
| SALIENCY_MAP [55] | 66.96 | 85.62 | 77.66 | 0.36 | 0.53 |
| SRM_OBJ [53] | 100.00 | 3.45 | 23.43 | 0.21 | 0.34 |
| EGBS_OBJ [52] | 89.48 | 75.01 | 50.95 | 0.15 | 0.26 |
| SLIC_CLUST [66] | 20.94 | 15.94 | 17.13 | 0.07 | 0.13 |
| GASISUK (Proposed) | 89.42 | 99.55 | 96.76 | 0.85 | 0.92 |
Table 8. Quantitative evaluation on ISIC dataset (malignant).

| Methods | Med-Sensitivity (%) | Med-Specificity (%) | Med-Accuracy (%) | Med-Jaccard Index (Ji) | Med-Dice Coefficient (Dc) |
|---|---|---|---|---|---|
| CV_LS [67] | 60.67 | 79.87 | 73.36 | 0.40 | 0.57 |
| MORPH_CV_LS [10] | 73.78 | 100.00 | 91.27 | 0.72 | 0.84 |
| MORPH_GAC_LS [10] | 93.23 | 54.89 | 67.68 | 0.48 | 0.65 |
| ADAPTIVE_THRESH [68] | 96.85 | 0.10 | 31.27 | 0.31 | 0.47 |
| ISODATA_THRESH [69] | 28.04 | 1.73 | 11.73 | 0.09 | 0.16 |
| MEAN_THRESH [70] | 13.25 | 12.29 | 14.36 | 0.05 | 0.09 |
| TRIANGLE_THRESH [71] | 20.40 | 2.37 | 9.12 | 0.06 | 0.11 |
| OTSU_THRESH [72] | 27.44 | 1.87 | 11.66 | 0.08 | 0.16 |
| SALIENCY_MAP [55] | 62.97 | 92.12 | 77.95 | 0.46 | 0.63 |
| SRM_OBJ [53] | 84.22 | 99.66 | 87.92 | 0.64 | 0.78 |
| EGBS_OBJ [52] | 78.23 | 99.34 | 81.52 | 0.48 | 0.65 |
| SLIC_CLUST [66] | 24.62 | 2.25 | 11.50 | 0.08 | 0.15 |
| GASISUK (Proposed) | 80.02 | 99.97 | 93.45 | 0.78 | 0.88 |
Table 9. Quantitative evaluation on PH2 dataset (malignant).

| Methods | Med-Sensitivity (%) | Med-Specificity (%) | Med-Accuracy (%) | Med-Jaccard Index (Ji) | Med-Dice Coefficient (Dc) |
|---|---|---|---|---|---|
| CV_LS [67] | 26.22 | 45.06 | 34.55 | 0.21 | 0.34 |
| MORPH_CV_LS [10] | 53.86 | 100.00 | 71.74 | 0.54 | 0.70 |
| MORPH_GAC_LS [10] | 81.67 | 83.20 | 74.87 | 0.68 | 0.81 |
| ADAPTIVE_THRESH [68] | 92.70 | 3.72 | 62.59 | 0.62 | 0.76 |
| ISODATA_THRESH [69] | 24.93 | 21.22 | 23.72 | 0.18 | 0.30 |
| MEAN_THRESH [70] | 28.31 | 20.72 | 25.69 | 0.18 | 0.31 |
| TRIANGLE_THRESH [71] | 21.59 | 23.01 | 21.31 | 0.14 | 0.24 |
| OTSU_THRESH [72] | 23.07 | 21.22 | 23.16 | 0.17 | 0.29 |
| SALIENCY_MAP [55] | 36.16 | 87.15 | 59.08 | 0.32 | 0.49 |
| SRM_OBJ [53] | 99.66 | 10.82 | 57.50 | 0.55 | 0.71 |
| EGBS_OBJ [52] | 8.89 | 96.52 | 36.46 | 0.09 | 0.16 |
| SLIC_CLUST [66] | 49.10 | 22.22 | 43.14 | 0.40 | 0.57 |
| GASISUK (Proposed) | 70.39 | 99.50 | 80.94 | 0.70 | 0.82 |
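The Med-* columns in Tables 6–9 are medians, taken over the test images, of per-image scores derived from the pixel-level confusion matrix: sensitivity = TP/(TP + FN), specificity = TN/(TN + FP), accuracy = (TP + TN)/(TP + TN + FP + FN), Jaccard index Ji = TP/(TP + FP + FN), and Dice coefficient Dc = 2TP/(2TP + FP + FN). The following minimal sketch (an illustration assuming binary ground-truth and predicted lesion masks, not the authors' exact evaluation code) shows one way to compute these indices:

```python
# Hedged sketch: per-image segmentation indices and their medians, assuming
# each mask contains both lesion and background pixels (no zero divisions).
import numpy as np

def lesion_metrics(gt, pred):
    gt, pred = gt.astype(bool), pred.astype(bool)
    tp = np.sum(gt & pred)    # lesion pixels correctly segmented
    tn = np.sum(~gt & ~pred)  # background pixels correctly rejected
    fp = np.sum(~gt & pred)   # background pixels wrongly marked as lesion
    fn = np.sum(gt & ~pred)   # lesion pixels missed
    return {
        "sensitivity": 100.0 * tp / (tp + fn),
        "specificity": 100.0 * tn / (tn + fp),
        "accuracy": 100.0 * (tp + tn) / (tp + tn + fp + fn),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2.0 * tp / (2 * tp + fp + fn),
    }

def median_scores(gt_masks, pred_masks):
    # Aggregate per-image scores into the Med-* values reported in the tables.
    per_image = [lesion_metrics(g, p) for g, p in zip(gt_masks, pred_masks)]
    return {k: float(np.median([m[k] for m in per_image])) for k in per_image[0]}
```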
Table 10. Illustrative analysis of deep learning algorithms on the ISIC 2017 testing dataset segmentation task.

| Methods | Sensitivity (%) | Specificity (%) | Accuracy (%) | Jaccard Index (Ji) | Dice Coefficient (Dc) |
|---|---|---|---|---|---|
| Bi et al. [21] | 86.20 | 96.71 | 94.08 | 0.78 | 0.86 |
| Sarker et al. [22] | 81.60 | 98.30 | 93.60 | 0.78 | 0.88 |
| Liu et al. [23] | 88.76 | 96.51 | 94.32 | 0.79 | 0.87 |
| Al-masni et al. [24] | 85.40 | 96.69 | 94.03 | 0.77 | 0.87 |
| Phan et al. [25] | – | – | 94.55 | 0.80 | 0.88 |
| Abhishek et al. [73] | 87.06 | 95.16 | 92.20 | 0.76 | 0.84 |
| Yuan et al. [26] | 82.50 | 97.50 | 93.40 | 0.77 | 0.85 |
| Shan et al. [27] | 83.82 | 98.65 | 93.71 | 0.76 | 0.85 |
| Tang et al. [28] | 89.53 | 96.32 | 94.31 | 0.79 | 0.87 |