Article

Melanoma Detection Using XGB Classifier Combined with Feature Extraction and K-Means SMOTE Techniques

Chih-Chi Chang, Yu-Zhen Li, Hui-Ching Wu and Ming-Hseng Tseng

1 Department of Medical Informatics, Chung Shan Medical University, Taichung 402, Taiwan
2 Department of Medical Sociology and Social Work, Chung Shan Medical University, Taichung 402, Taiwan
3 Information Technology Office, Chung Shan Medical University Hospital, Taichung 402, Taiwan
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Diagnostics 2022, 12(7), 1747; https://doi.org/10.3390/diagnostics12071747
Submission received: 4 July 2022 / Revised: 14 July 2022 / Accepted: 18 July 2022 / Published: 19 July 2022

Abstract
Melanoma, a very severe form of skin cancer, spreads quickly and has a high mortality rate if not treated early. Recently, machine learning, deep learning, and other related technologies have been successfully applied to computer-aided diagnostic tasks of skin lesions. However, some issues in terms of image feature extraction and imbalanced data need to be addressed. Based on a method for manually annotating image features by dermatologists, we developed a melanoma detection model with four improvement strategies, including applying the transfer learning technique to automatically extract image features, adding gender and age metadata, using an oversampling technique for imbalanced data, and comparing machine learning algorithms. According to the experimental results, the improved strategies proposed in this study have statistically significant performance improvement effects. In particular, our proposed ensemble model can outperform previous related models.

1. Introduction

Malignant melanoma (MM) is the most severe form of skin cancer; although rare, it has a high mortality rate. According to GLOBOCAN statistics, there were approximately 325,000 new cases of melanoma worldwide in 2020, accounting for 1.7% of all cancer diagnoses globally. The global age-standardized incidence rate is 3.8 per 100,000 for males and 3.0 per 100,000 for females, and the cumulative lifetime risk is 0.42% for males and 0.33% for females [1]. MM readily metastasizes throughout the body, for example to the brain, liver, and kidneys; once it has spread, the survival rate falls below 50%. If discovered early and resected, the 5-year survival rate is as high as 90 to 99%; if detected late, it drops to approximately 15 to 20%. The actual cause of MM is unclear, although the medical community recognizes ultraviolet exposure as a risk factor. MM is rare in Asians, yet in this population it occurs more often on the palms and soles, which are not sun-exposed and hence unrelated to ultraviolet exposure. Recently, the incidence and mortality of MM have been increasing. Unlike most other cancers, MM mortality is common in young age groups. Furthermore, a delay in medical treatment worsens the prognosis, leading to metastasis and even death; therefore, early diagnosis and treatment are essential [2,3].
In clinical diagnosis, it is difficult for dermatologists to distinguish early MM from a mole. South Queensland, Australia, has the highest MM incidence in the world. Edith Cowan University in Australia developed a blood test for MM antibodies with an accuracy of 79% [4]; however, it still has cost and turnaround-time limitations. Dermoscopy is a key technique for the initial diagnosis of MM. Therefore, artificial intelligence (AI) models for computer-aided diagnosis (CAD) that help dermatologists interpret dermoscopy images could reduce medical costs.
Machine learning (ML) classifiers have been employed for the automatic diagnosis of skin lesions. Before modeling, these classifiers take as input a set of handcrafted image features, such as the lesion characteristics that dermatologists attend to. Recently, in most computer vision tasks, deep learning (DL) convolutional neural networks (CNNs) have been able to automatically extract high-level image features and significantly improve classification performance. Therefore, CNN-based CAD systems have recently been used to detect various diseases [5,6].
According to the latest review paper [7] on the use of neural networks to detect melanoma, the relevant architectures published in 2018–2021 fall into four categories: (1) a single convolutional neural network; (2) multiple convolutional neural networks; (3) a convolutional neural network combined with other classifiers; and (4) other techniques, such as combining the ABCDE rules with traditional ML algorithms.
To test the performance of AI models on dermoscopy images, many researchers use public databases such as PH2, MED-NODE, and ISIC. We reviewed a total of 25 recent articles on MM CAD published between 2016 and 2022 and list the lowest and highest values of six evaluation indicators in Table 1.
For example, Warsi et al. [10] used a 3D color-texture feature (CTF) and a multilayer neural network for binary MM classification on the 200 dermoscopy images of the PH2 dataset. Based on the holdout method, 70% of the images were used as the training set, 15% as the validation set, and 15% as the test set. Their model achieved the best performance on the PH2 dataset, reaching 97.5% accuracy (ACC), 98.1% sensitivity (SEN), and 93.84% specificity (SPE). Iqbal et al. [14] proposed a new deep convolutional neural network (DCNN) model with multiple filter sizes: the classification of the skin lesions network (CSLNet) architecture. Through data pre-processing and data augmentation of ISIC-17, ISIC-18, and ISIC-19 images, it achieved 96.4% AUC, 93.25% ACC, 93.25% SEN, 90.64% SPE, 93.97% precision (PRE), and 94.47% F1 on the ISIC 2017 dataset, using the 7:1:2 holdout method.
To evaluate the performance of the MM prediction models using an oversampling technique, Kalwa et al. [28] used 200 dermoscopy images for the MM binary classification. By combining image feature extraction (FE), SVM, and synthetic minority oversampling technique (SMOTE) methods, the AUC was increased from 0.720 to 0.850. Magalhaes et al. [29] used 287 infrared thermography skin images for MM binary classification. Using an ensemble model of image FE, random forest (RF), SVM, and SMOTE methods, the recall increased from 0.473 to 0.696.
The contributions of this study are listed as follows:
  • A total of 2299 dermoscopy images were used for MM CAD, a dermatologist handcrafted feature method was used as the comparison baseline, and four strategies for improving classification performance were proposed: (1) a comparison of different transfer learning techniques for automatic image FE; (2) the addition of gender and age metadata; (3) a comparison of different oversampling techniques for balancing the classes of the training data; and (4) a comparison of the classification performance of different ML algorithms. According to the experimental results, the four proposed strategies yield statistically significant improvements in MM detection;
  • We combined DL and ML methods to automatically extract features directly from the dermoscopy images and perform benign versus MM diagnosis. The experimental results show that our proposed model, which combines metadata, K-means SMOTE, and an extreme gradient boosting (XGB) classifier, achieves better classification and predictive performance than using only the MELA-CNN feature extractor.

2. Methods

2.1. MM Dataset

In this study, we integrated the ISIC Challenge 2018 (ISIC2018) and ISIC Challenge 2019 (ISIC2019) datasets [30,31,32,33] for the binary classification of benign lesions and MM. The ISIC2018 dataset contains five handcrafted features provided by dermatologists: pigment networks; negative networks; streaks; globules; and milia-like cysts. The ISIC2019 dataset contains two pieces of basic patient data: age and gender. The combined dataset has 2299 records, comprising 1849 benign and 450 MM cases. Because the classes are imbalanced, oversampling techniques are applied in subsequent processing.

2.2. FE Techniques

FE is a preprocessing procedure in data mining. To evaluate the impact of the dermatologist handcrafted features [30] and automatic DL FE [34] on the classification performance of an ML algorithm for predicting MM, we compared the following five FE techniques.
(1)
Handcraft: We employed five handcrafted characteristics provided by dermatologists [30]: pigment networks; negative networks; streaks; globules; and milia-like cysts. A pigment network is a grid comprising many brown lines crossing each other; a negative network is a curve formed by many hyperpigmented cell connections; a streak comprises pigmented projections surrounding a melanocytic lesion; a globule comprises multiple brown circles; a milia-like cyst comprises many white, yellowish circles or ovals;
(2)
VGG16: VGG16 is a DL CNN model proposed by Simonyan and Zisserman [35], who trained it on the one-million-image ImageNet dataset to classify one thousand classes. VGG16 takes 224 × 224 RGB images as input and comprises 13 convolutional layers and 3 fully connected layers, with the rectified linear unit (ReLU) as the nonlinear activation function. All convolutional layers use small 3 × 3 kernels to limit the number of parameters. This DL model can automatically extract 512 features from the dermoscopy images;
(3)
InceptionV3: InceptionV3 is a CNN-based DL model of the Inception series, which includes InceptionV1, InceptionV2, InceptionV3, InceptionV4, and the InceptionResNet variants. InceptionV3 was proposed by Szegedy et al. [36] as an improvement of InceptionV2 and was likewise trained on the one-million-image ImageNet dataset to classify one thousand classes. InceptionV3 takes 299 × 299 RGB images as input and comprises 47 layers. In addition, this model adopts the batch normalization of InceptionV2 to accelerate training. This DL model can automatically extract 2048 features from dermoscopy images;
(4)
InceptionResNetV2: InceptionResNetV2 is an Inception module-based DL model. It takes 299 × 299 RGB images as input and introduces residual (ResNet) connections into the Inception modules A, B, and C to accelerate training [37]. This DL model can automatically extract 1536 features from dermoscopy images;
(5)
MELA-CNN: Based on the transfer learning technique [34], we used the InceptionResNetV2 architecture as the backbone to develop MELA-CNN (Figure 1). After retrieving the feature maps of the average pooling layer of InceptionResNetV2, a fully connected layer of 256 nodes is added, and ReLU is used. Further, batch normalization and Sigmoid layers are introduced, and MELA-CNN trained weights are obtained after the fine-tuning process using the target dataset. This DL model can automatically extract 256 features from dermoscopy images.
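To make the MELA-CNN construction concrete, the following is a minimal sketch assuming TensorFlow/Keras; the layer sizes (256-node dense layer, sigmoid output) follow the text above, while the optimizer choice and layer names are illustrative assumptions, not the paper's exact training configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import InceptionResNetV2

# ImageNet-pretrained backbone; pooling="avg" exposes the average-pooling
# feature vector that MELA-CNN builds on.
backbone = InceptionResNetV2(include_top=False, weights="imagenet",
                             input_shape=(299, 299, 3), pooling="avg")

x = layers.Dense(256, activation="relu", name="mela_features")(backbone.output)
x = layers.BatchNormalization()(x)
out = layers.Dense(1, activation="sigmoid")(x)  # benign vs. MM

model = tf.keras.Model(backbone.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
# ... fine-tune `model` on the target dermoscopy dataset here ...

# After fine-tuning, the 256-dim dense-layer activations serve as the
# automatically extracted image features.
feature_extractor = tf.keras.Model(backbone.input,
                                   model.get_layer("mela_features").output)
```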

2.3. SMOTE

Because our data come from the medical field, a considerable imbalance between the numbers of negative and positive samples is common. Therefore, we employed a data oversampling method to correct the class imbalance and avoid biasing the classifier during training. Chawla et al. [38] proposed SMOTE, which synthesizes new minority-class samples by interpolating between each minority sample and its randomly selected k-nearest neighbors until the minority class reaches the size of the majority class. Because SMOTE is prone to generating noise that degrades classifier prediction performance, Douzas et al. [39] proposed K-means SMOTE, which combines k-means clustering with SMOTE. First, the data are clustered using the k-means method, and the clusters in which minority-class samples account for more than 50% are selected. Then, the number of samples to be generated is calculated, with more samples assigned to clusters where the samples are sparse. Finally, SMOTE is performed within each selected cluster, increasing the number of minority samples to match the majority class; this solves the data imbalance problem while mitigating SMOTE's tendency to generate noise.
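A minimal sketch of K-means SMOTE in practice, assuming the imbalanced-learn package; the toy data roughly mimic the 4:1 benign-to-MM ratio of Section 2.1, and the lowered cluster_balance_threshold is an illustrative assumption so that the toy data always yield eligible clusters.

```python
from collections import Counter

from imblearn.over_sampling import KMeansSMOTE
from sklearn.datasets import make_classification

# Toy data with roughly the 4:1 class imbalance described in Section 2.1.
X, y = make_classification(n_samples=2299, n_features=258, n_informative=20,
                           weights=[0.8, 0.2], random_state=42)

# cluster_balance_threshold controls which k-means clusters are considered
# minority-dense enough to receive synthetic samples.
sampler = KMeansSMOTE(cluster_balance_threshold=0.1, random_state=42)
X_res, y_res = sampler.fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))  # minority upsampled to majority size
```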

2.4. XGB

XGB, proposed by Chen and Guestrin [40], is based on the gradient boosting decision tree (GBDT) concept. GBDT is a gradient boosting algorithm built on decision trees, and gradient boosting is an ensemble learning method that trains multiple weak classifiers and assembles them into a stronger classifier. The goal is to minimize the loss function: by computing negative gradients, each iteration concentrates on the examples the current ensemble misclassifies, improving the next round of training.
Compared with GBDT, XGB adds a regularization term that smooths the loss function, reduces model complexity, and helps avoid overfitting. In addition, an approximation algorithm is used to find optimal split points, optimizing the gradient boosting and increasing efficiency and scalability. Further, missing or sparse values are handled by routing them to a learned default branch, improving the algorithm's efficiency. Finally, to accelerate training, XGB supports parallel computation and early stopping: when the validation performance stops improving, tree construction can be halted early. These refinements also improve the model's classification accuracy.
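A minimal sketch showing the regularization, parallelism, and early-stopping features just described, assuming a recent xgboost version (1.6 or later, where early_stopping_rounds is a constructor argument); all hyperparameter values here are illustrative assumptions, not the paper's tuned settings.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2299, n_features=258, random_state=42)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=42)

model = XGBClassifier(
    n_estimators=500,
    learning_rate=0.1,
    max_depth=6,
    reg_lambda=1.0,            # L2 penalty on leaf weights (regularization)
    n_jobs=-1,                 # parallel tree construction
    early_stopping_rounds=20,  # stop when validation loss stops improving
    eval_metric="logloss",
)
model.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], verbose=False)
print("best iteration:", model.best_iteration)
```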

2.5. Evaluation Metrics

To evaluate the performance of the different models for binary classification, we employed the confusion matrix to calculate the true positive (TP), true negative (TN), false positive (FP), and false negative (FN), as well as deriving the following five evaluation indicators:
Accuracy (ACC): The proportion of correct diagnoses in all of the samples.
$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN}$
Precision (PRE): The proportion of truly positive individuals among those diagnosed with the disease.
$\mathrm{PRE} = \frac{TP}{TP + FP}$
Recall (REC): The proportion of actual positive cases that are correctly diagnosed, also called the true positive rate (TPR).
$\mathrm{REC} = \frac{TP}{TP + FN}$
F1-score: The harmonic mean of PRE and REC.
$\frac{2}{F1} = \frac{1}{\mathrm{PRE}} + \frac{1}{\mathrm{REC}}$
AUC: The area under the receiver operating characteristic (ROC) curve of TPR versus the false positive rate (FPR), where FPR is the proportion of actually disease-free individuals who test positive.
$\mathrm{FPR} = \frac{FP}{TN + FP}$
The higher the value of each of the above five indicators, the better the classification performance of the model. However, when ACC and PRE are used to evaluate a class-imbalanced dataset, the scores can be misleading, since a model may score well despite producing numerous FNs. In this study, we aimed to develop a model that can effectively detect patients with MM; therefore, we used REC, F1-score, and AUC as the main evaluation criteria for model performance.
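A minimal sketch computing the five indicators from a confusion matrix, assuming scikit-learn; the small label and score vectors are illustrative placeholders.

```python
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score,
                             roc_auc_score)

y_true  = [0, 0, 0, 0, 1, 1, 1, 0]
y_pred  = [0, 0, 1, 0, 1, 1, 0, 0]
y_score = [0.1, 0.2, 0.6, 0.3, 0.9, 0.8, 0.4, 0.2]  # predicted probabilities

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("ACC:", accuracy_score(y_true, y_pred))   # (TP+TN)/(TP+TN+FP+FN)
print("PRE:", precision_score(y_true, y_pred))  # TP/(TP+FP)
print("REC:", recall_score(y_true, y_pred))     # TP/(TP+FN)
print("F1 :", f1_score(y_true, y_pred))         # harmonic mean of PRE, REC
print("AUC:", roc_auc_score(y_true, y_score))   # area under the ROC curve
print("FPR:", fp / (tn + fp))
```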

2.6. Stratified K-Fold Validation

We employed the stratified K-fold method for 10-fold stratified cross-validation, an improvement of K-fold cross-validation. The K-fold method divides the data into k mutually exclusive groups of equal size and then repeats training and testing k times; each time, one group is used as the test data and the others as the training data. Finally, the average of the k accuracies is reported as the final result. The refinement in the stratified K-fold method is that each fold is sampled according to the class ratio. Because this ensures that the proportion of the two categories in each fold matches the original dataset, the method is suitable for imbalanced data classification.
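A minimal sketch of stratified 10-fold cross-validation, assuming scikit-learn; the logistic-regression classifier and toy data are illustrative placeholders for the models evaluated in this study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=2299, weights=[0.8, 0.2], random_state=42)
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

scores = []
for train_idx, test_idx in skf.split(X, y):
    # Each fold preserves the original benign/MM class ratio.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))
print("mean F1 over 10 folds:", np.mean(scores))
```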

2.7. Paired T-Test

To evaluate whether the difference in the MM detection ability using the proposed enhancement strategy is statistically significant, we used the paired t-test to compare the predictive performance of the two models:
$t = \frac{\bar{d}}{S_d / \sqrt{n}}$
where $\bar{d}$ denotes the mean of the differences between paired observations, $S_d$ denotes the standard deviation of those differences, and $n$ denotes the number of pairs. The null hypothesis is that the mean difference in the 10-fold REC or F1-score between the two models is 0. When the resulting p-value satisfies p < 0.05, the difference in classification performance between the two models is statistically significant.
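A minimal sketch of this test on two sets of 10-fold scores, assuming SciPy; the two score lists are illustrative, not the paper's fold results.

```python
from scipy import stats

# F1-scores of two models over the same 10 cross-validation folds.
f1_model_a = [0.66, 0.72, 0.81, 0.77, 0.82, 0.69, 0.77, 0.71, 0.81, 0.80]
f1_model_b = [0.83, 0.79, 0.83, 0.73, 0.80, 0.81, 0.76, 0.81, 0.82, 0.83]

t_stat, p_value = stats.ttest_rel(f1_model_a, f1_model_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# p < 0.05 rejects the null hypothesis of zero mean fold-wise difference.
```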

3. Proposed Framework

In this study, we integrated the ISIC2018 dermoscopy image data [30,31] and the ISIC2019 patient age and gender basic data [32,33] to form a research dataset for developing an MM detection model. The overall research architecture is shown in Figure 2. First, the five FE methods were implemented on the dermoscopy images—VGG16, InceptionResNetV2, Inception V3, MELA-CNN, and the dermatologist handcrafted method. Then, we merged the optimal image features and metadata. Finally, we compared different oversampling techniques with different ML algorithms to find the optimal MM detection model.
The proposed model architecture is shown in Figure 3. Based on the transfer learning technique [34], MELA-CNN was developed to automatically extract image features. In Figure 3, the first, second, and third block diagrams depict InceptionResNetV2, MELA-CNN, and the optimal MM detection model proposed in this study, respectively. The overall architecture uses InceptionResNetV2 as the backbone and applies a fine-tuning process to train MELA-CNN for automatic image FE. Then, the two metadata fields are combined with the optimal image features, yielding 258 features. In addition, we used K-means SMOTE for class balancing. Finally, we employed XGB for MM detection.
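The following is a minimal end-to-end sketch of this pipeline (image features plus metadata, then K-means SMOTE, then XGB), assuming imbalanced-learn and xgboost; the synthetic 258-feature matrix stands in for the 256 MELA-CNN features and the two ISIC2019 metadata fields, and the lowered cluster_balance_threshold is an assumption for the toy data.

```python
from imblearn.over_sampling import KMeansSMOTE
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Stand-in for the 256 MELA-CNN image features plus the 2 metadata fields;
# in the real pipeline these come from the fine-tuned extractor and ISIC2019.
X, y = make_classification(n_samples=2299, n_features=258, n_informative=20,
                           weights=[0.8, 0.2], random_state=42)

# Balance the classes (applied only to training data in the real protocol).
X_res, y_res = KMeansSMOTE(cluster_balance_threshold=0.1,
                           random_state=42).fit_resample(X, y)

clf = XGBClassifier(eval_metric="logloss").fit(X_res, y_res)
print("melanoma probability:", clf.predict_proba(X[:1])[0, 1])
```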

4. Experimental Result

In this study, 2299 dermatologist-annotated images from the ISIC2018 and ISIC2019 datasets were used to train and test the classification models. The stratified K-fold method was used for 10-fold cross-validation, with data assigned to folds according to the class proportions. We then performed 10 rounds of training and testing and obtained the following results.

4.1. FE Techniques

Five techniques were used for the FE of dermoscopy images—the dermatologist handcrafted method, VGG16, InceptionResNetV2, Inception V3, and MELA-CNN—and the numbers of extracted features were 5, 512, 1536, 2048, and 256, respectively.
Table 2 summarizes the results of the five techniques combined with XGB. Clearly, MELA-CNN was the most effective method, with an F1-score of 0.756, whereas the dermatologist handcrafted method performed worst, with an F1-score of only 0.064. The F1-scores of VGG16, InceptionResNetV2, and Inception V3 reached only 0.282, 0.309, and 0.295, respectively. Figure 4 compares the F1-scores of the five FE techniques combined with XGB; MELA-CNN significantly outperforms the other techniques.

4.2. Metadata

To evaluate how adding metadata (age and gender) to the dermoscopy image features affects the predictive ability of the diagnostic model, we employed XGB for model training and used the F1-score as the main evaluation metric. The results in Table 3 show that with the five dermatologist handcrafted features, adding metadata raised the F1-score to 0.415, an increase of 0.351 over the result without metadata. For the 256 image features extracted by MELA-CNN, adding metadata increased the F1-score from 0.756 to 0.800, an increase of 0.044. Figure 5 compares the F1-scores of the 5- and 256-feature models with and without metadata and clearly shows the relevance of the metadata: the classification performance of both the dermatologist handcrafted method and MELA-CNN improved.

4.3. SMOTE

Because of the class imbalance in our dataset, we used 10 oversampling techniques to balance the classes of the binary training data and assessed the performance differences with XGB. The oversampling techniques were applied only to the training set; the test set retained its original composition. Table 4 compares the test-set performance of models trained with the 10 oversampling techniques against the original, unsampled training data. Without oversampling, the F1-score was only 0.800; with K-means SMOTE, the optimal oversampling method, it reached 0.861. The F1-scores of the other techniques were as follows: RandomOverSampler, 0.840; SMOTE, 0.839; SVMSMOTE, 0.835; SMOTETomek, 0.835; BorderlineSMOTE, 0.834; SMOTE-RandomUnderSampler, 0.831; SMOTENC, 0.830; SMOTEENN, 0.822; and ADASYN, 0.814. Figure 6 compares the F1-scores of the 258-feature models under the 10 oversampling techniques; K-means SMOTE shows the clearest improvement over the original imbalanced training data.

4.4. ML Algorithms (Classifiers)

In this study, we compared the performance of 13 ML algorithms for MM classification on the optimal feature set balanced with K-means SMOTE: the XGB classifier, histogram-based gradient boosting (HistGB classifier), SVM, gradient boosting, RF, multilayer perceptron (MLP), Gaussian naive Bayes (Gaussian NB), logistic regression, bagging classifier, stochastic gradient descent logistic regression (SGD-LR), adaptive boosting (AdaBoost), decision tree, and K-neighbors classifier. Table 5 summarizes the results of the 13 ML algorithms for MM diagnosis. The F1-score of XGB is 0.861, the best classification performance. Figure 7 compares the F1-scores of the algorithms; XGB significantly outperforms all of the other ML algorithms.
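A minimal sketch of this kind of classifier comparison, assuming scikit-learn and xgboost; the model list is a subset of the 13 algorithms named above, with default hyperparameters as an illustrative assumption.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2299, n_features=258, weights=[0.8, 0.2],
                           random_state=42)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

models = {
    "XGB": XGBClassifier(eval_metric="logloss"),
    "GradientBoosting": GradientBoostingClassifier(),
    "RandomForest": RandomForestClassifier(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```

To reproduce the protocol of Section 4.3, where oversampling is applied only to the training folds, the sampler and classifier can be wrapped together in an imbalanced-learn Pipeline, which fits the sampler on each training fold only.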

5. Discussion

5.1. Effect of FE and Metadata

In this study, the dermatologist handcrafted method was used as a comparison basis to discuss the differences in the improvement of the performance for MM detection by four strategies: (1) using the automatic image FE method; (2) adding metadata; (3) using SMOTE; and (4) using different ML algorithms. Figure 8 and Figure 9 show the ROC and PRE–REC (PR) curves for four comparison FE techniques.
MELA-CNN is an automatic image FE technique; it could achieve the optimal classification performance because it performed the fine-tuning process on the target dataset. Compared with the handcrafted method, MELA-CNN has 31.9% and 47.2% increases in the AUC and the PR curve area, respectively. The results show that using MELA-CNN improves the predictability of the model compared with the handcrafted method.
Moreover, adding the metadata to the 256 features further improved the predictability of the model: the AUC and PR curve area increased to 0.865 and 0.827, respectively. Age and gender provided good discriminative power for MM diagnosis. Finally, K-means SMOTE was used to tackle the prediction bias caused by the imbalanced classes, and the AUC improved to as much as 0.970. These results again show that the proposed improvement strategies can effectively improve MM predictability.

5.2. Effect of Oversampling Techniques

As mentioned above, using MELA-CNN for FE and adding the metadata to obtain 258 features improves the prediction performance. Therefore, this configuration is used as the baseline, and K-means SMOTE, SMOTE, and RandomOverSampler are compared as representative oversampling techniques. Unlike SMOTE, which interpolates among randomly selected k-nearest neighbors without any grouping, K-means SMOTE oversamples within clusters where the minority class is dense. K-means SMOTE yields the best results; its AUC and PR curve area reach 0.970 and 0.924, respectively. Figure 10 and Figure 11 depict the ROC and PR curves for the performance comparisons of the three oversampling methods. The results show that K-means SMOTE has the best classification ability and discernibility.

5.3. Effect of ML Algorithms (Classifiers)

As mentioned above, using 258 features with K-means SMOTE technology can achieve an optimal performance. Based on K-means SMOTE, we compared the differences in the prediction performance of different classifiers. Figure 12 and Figure 13 depict the ROC and PR curves for the performance comparison of three classifiers. Clearly, XGB has the best predictability and discernibility.
XGB is an ensemble learning classifier that uses boosting: weak learners are trained sequentially, with each new tree fitted to correct the errors of the current ensemble, and their predictions are combined to make the final prediction. Compared with the Gaussian NB and K-neighbors classifiers, XGB has the best performance, with an AUC of 0.970 and a PR curve area of 0.924.

5.4. Significance Test for Performance Improvement

To assess whether the four improvement strategies proposed in this study are statistically significant, we employed XGB with the stratified 10-fold cross-validation method on four feature sets: the five handcrafted features provided by dermatologists, the 256 features extracted by MELA-CNN, the 258 features obtained after adding the age and gender metadata, and the 258 features balanced with K-means SMOTE.
Table 6, Table 7 and Table 8 summarize the REC paired t-test results for 5 versus 256 features, 256 versus 258 features, and 258 features versus 258 features with K-means SMOTE, respectively. In addition, Table 9, Table 10 and Table 11 list the corresponding F1-score paired t-test results. The null hypothesis is that the difference in the 10-fold REC or F1-score between two models is 0. From Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11, a significant p-value of less than 0.05 was obtained for all of the tests, confirming three findings: (1) the MELA-CNN feature extractor significantly improves performance over the dermatologist handcrafted method; (2) adding the age and gender metadata yields a further statistically significant improvement in classification performance; and (3) using K-means SMOTE provides a statistically significant improvement in the predictive power of the model.

5.5. Performance Comparison with Previous Related Studies

We compared five evaluation metrics between our results and two previous studies [28,29], as shown in Table 12 and Table 13. Using the 7:3 holdout method, Kalwa et al. [28] performed binary MM classification on only 200 dermoscopy images; by combining image FE, SVM, and SMOTE, their AUC increased from 0.720 to 0.850. In this study, binary MM classification was performed on 2299 dermoscopy images, and the proposed model, built by combining MELA-CNN, metadata, K-means SMOTE, and XGB, was compared with the five handcrafted image features provided by dermatologists, the 1536 features extracted by traditional transfer learning, the 256 features extracted by MELA-CNN, and the 258 features obtained by adding metadata. As a result, the AUC increased from 0.585 to 0.971, the F1-score from 0.056 to 0.890, and the REC from 0.030 to 0.867. Using the 8:2 holdout method, Magalhaes et al. [29] performed binary MM classification on 287 infrared thermography images; after combining image FE with an ensemble of RF and SVM plus SMOTE, their REC increased from 0.473 to 0.696. Under the same 8:2 holdout setting, our AUC increased from 0.621 to 0.981, the F1-score from 0.063 to 0.905, and the REC from 0.033 to 0.878. The results in Table 12 and Table 13 confirm that the proposed model achieves better classification and predictability than previous related models.
We also compared the performance of our proposed model with other work in the literature. Based on the dataset, the imbalance ratio (IR) of non-melanoma (Non-Me) to melanoma (Me) samples, the classification method, the validation method, and the test-set performance, a comparative summary of these techniques is provided in Table 14. Since different studies use different datasets and performance metrics, direct comparisons are difficult; nevertheless, the method proposed in this study exhibits excellent performance.

6. Conclusions

Recently, the incidence of skin cancer has increased globally. The accurate classification of skin lesions directly influences the accurate and prompt diagnosis of skin cancer. MM is a highly lethal skin cancer that can rapidly metastasize, and eventually cause death if not detected early and treated properly.
Based on the method of expert manual annotation, an AI model for CAD of MM was developed for 2299 dermoscopy images in this study. We proposed four improvement strategies: (1) comparing different transfer learning techniques for automatic image FE; (2) adding the metadata of gender and age; (3) comparing different oversampling techniques for the class balancing of training data; and (4) comparing the classification performance of different ML algorithms. According to the experimental results, the proposed improvement strategies have a statistically significant effect on performance improvement.
After the analysis and comparison of the experimental results, we showed an effective combination of DL and ML methods to automatically extract features from dermoscopy images and perform benign and MM diagnoses. The experimental results also show that the proposed model, using the MELA-CNN feature extractor plus metadata, combined with K-means SMOTE and XGB, can obtain a better classification and prediction ability than the previous related models. Both the statistics and tests performed in this study confirmed that the proposed MM detection model has excellent classification performance.
However, to meet the needs of future clinical applications, the detection capability must be further tested on larger case datasets and more categories of skin lesions to optimize the AI model. Based on the method proposed in this study, the goal of the next stage is to develop a melanoma CAD system with a user-friendly interface that supports the clinical practice of dermatologists and provides an interpretation mechanism after automatic diagnosis.

Author Contributions

Conceptualization, M.-H.T.; Data curation, C.-C.C. and Y.-Z.L.; Funding acquisition, H.-C.W. and M.-H.T.; Methodology, M.-H.T.; Project administration, H.-C.W.; Software, C.-C.C. and Y.-Z.L.; Visualization, C.-C.C. and Y.-Z.L.; Writing–original draft, C.-C.C., Y.-Z.L. and M.-H.T.; Writing–review and editing, H.-C.W. and M.-H.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Ministry of Science and Technology, Taiwan, grant number MOST 110-2121-M-040-001.

Institutional Review Board Statement

The study protocol was approved by the Ethics Committee for Human Genome and Gene Analysis at Nagasaki University (#120221).

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saginala, K.; Barsouk, A.; Aluru, J.S.; Rawla, P.; Barsouk, A. Epidemiology of Melanoma. Med. Sci. 2021, 9, 63.
  2. Rigel, D.S.; Carucci, J.A. Malignant melanoma: Prevention, early detection and treatment in the 21st century. CA Cancer J. Clin. 2000, 50, 215–236.
  3. Carr, S.; Smith, C.; Wernberg, J. Epidemiology and risk factors of melanoma. Surg. Clin. N. Am. 2020, 100, 1–12.
  4. Zaenker, P.; Lo, J.; Pearce, R.; Cantwell, P.; Cowell, L.; Lee, M.; Quirk, C.; Law, H.; Gray, E.; Ziman, M. A diagnostic autoantibody signature for primary cutaneous melanoma. Oncotarget 2018, 9, 30539–30551.
  5. Wong, T.Y.; Bressler, N.M. Artificial intelligence with deep learning technology looks into diabetic retinopathy screening. JAMA 2016, 316, 2366–2367.
  6. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H. Artificial intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510.
  7. Popescu, D.; El-Khatib, M.; El-Khatib, H.; Ichim, L. New Trends in Melanoma Detection Using Neural Networks: A Systematic Review. Sensors 2022, 22, 496.
  8. Adjed, F.; Safdar Gardezi, S.J.; Ababsa, F.; Faye, I.; Chandra Dass, S. Fusion of structural and textural features for melanoma recognition. IET Comput. Vis. 2018, 12, 185–195.
  9. Salido, J.A.A.; Ruiz, C.R. Using Deep Learning for Melanoma Detection in Dermoscopy Images. Int. J. Mach. Learn. Comput. 2018, 8, 61–68.
  10. Warsi, F.; Khanam, R.; Kamya, S.; Suárez-Araujo, C.P. An efficient 3D color-texture feature and neural network technique for melanoma detection. Inform. Med. Unlocked 2019, 17, 100176.
  11. El-Khatib, H.; Popescu, D.; Ichim, L. Deep Learning-Based Methods for Automatic Diagnosis of Skin Lesions. Sensors 2020, 20, 1753.
  12. Al-masni, M.A.; Kim, D.H.; Kim, T.S. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Comput. Methods Progr. Biomed. 2020, 190, 105351.
  13. Li, Y.; Shen, L. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network. Sensors 2018, 18, 556.
  14. Iqbal, I.; Younus, M.; Walayat, K.; Kakar, M.U.; Ma, J.W. Automated multi-class classification of skin lesions through deep convolutional neural network with dermoscopic images. Comput. Med. Imaging Graph. 2021, 88.
  15. Li, X.; Wu, J.; Jiang, H.; Chen, E.Z.; Dong, X.; Rong, R. Skin Lesion Classification Via Combining Deep Learning Features and Clinical Criteria Representations. bioRxiv 2018.
  16. Gessert, N.; Sentker, T.; Madesta, F.; Schmitz, R.; Kniep, H.; Baltruschat, I.; Werner, R.; Schlaefer, A. Skin Lesion Diagnosis using Ensembles, Unscaled Multi-Crop Evaluation and Loss Weighting. arXiv 2018, arXiv:1808.01694.
  17. Bissoto, A.; Perez, F.; Ribeiro, V.; Fornaciali, M.; Avila, S.; Valle, E. Deep-Learning Ensembles for Skin-Lesion Segmentation, Analysis, Classification: RECOD Titans at ISIC Challenge 2018. arXiv 2018, arXiv:1808.08480.
  18. Zhuang, J.; Li, W.; Manivannan, S.; Wang, R.; Zhang, J.; Liu, J.; Pan, J.; Jiang, G.; Yin, Z. Skin Lesion Analysis Towards Melanoma Detection Using Deep Neural Network Ensemble. ISIC Chall. 2018, 1–6.
  19. Almaraz-Damian, J.A.; Ponomaryov, V.; Sadovnychiy, S.; Castillejos-Fernandez, H. Melanoma and Nevus Skin Lesion Classification Using Handcraft and Deep Learning Feature Fusion via Mutual Information Measures. Entropy 2020, 22, 484.
  20. Gong, A.; Yao, X.; Lin, W. Classification for Dermoscopy Images Using Convolutional Neural Networks Based on the Ensemble of Individual Advantage and Group Decision. IEEE Access 2020, 8, 155337–155351.
  21. Lucius, M.; De All, J.; De All, J.A.; Belvisi, M.; Radizza, L.; Lanfranconi, M.; Lorenzatti, V.; Galmarini, C.M. Deep Neural Frameworks Improve the Accuracy of General Practitioners in the Classification of Pigmented Skin Lesions. Diagnostics 2020, 10, 969.
  22. Adegun, A.; Viriri, S. Deep Learning Model for Skin Lesion Segmentation: Fully Convolutional Network; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 232–242.
  23. Alfi, I.A.; Rahman, M.M.; Shorfuzzaman, M.; Nazir, A. A Non-Invasive Interpretable Diagnosis of Melanoma Skin Cancer Using Deep Learning and Ensemble Stacking of Machine Learning Models. Diagnostics 2022, 12, 726.
  24. Abbes, W.; Sellami, D. Deep Neural Network for Fuzzy Automatic Melanoma Diagnosis; Science and Technology Publications: Setúbal, Portugal, 2019; pp. 47–56.
  25. Abbas, Q.; Celebi, M.E. DermoDeep—A classification of melanoma-nevus skin lesions using multi-feature fusion of visual features and deep neural network. Multimed. Tools Appl. 2019, 78, 23559–23580.
  26. Nasr-Esfahani, E.; Samavi, S.; Karimi, N.; Soroushmehr, S.M.; Jafari, M.H.; Ward, K.; Najarian, K. Melanoma detection by analysis of clinical images using convolutional neural network. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1373–1376.
  27. Harangi, B. Skin lesion classification with ensembles of deep convolutional neural networks. J. Biomed. Inform. 2018, 86, 25–32.
  28. Kalwa, U.; Legner, C.; Kong, T.; Pandey, S. Skin cancer diagnostics with an all-inclusive smartphone application. Symmetry 2019, 11, 790.
  29. Magalhaes, C.; Tavares, J.M.R.; Mendes, J.; Vardasca, R. Comparison of machine learning strategies for infrared thermography of skin cancer. Biomed. Signal Process. Control 2021, 69, 102872.
  30. Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (ISIC). arXiv 2019, arXiv:1902.03368.
  31. Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161.
  32. Codella, N.C.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC). In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 168–172.
  33. Combalia, M.; Codella, N.C.; Rotemberg, V.; Helba, B.; Vilaplana, V.; Reiter, O.; Carrera, C.; Barreiro, A.; Halpern, A.C.; Puig, S. BCN20000: Dermoscopic lesions in the wild. arXiv 2019, arXiv:1908.02288.
  34. Fan, J.; Lee, J.; Lee, Y. A Transfer Learning Architecture Based on a Support Vector Machine for Histopathology Image Classification. Appl. Sci. 2021, 11, 6380.
  35. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015.
  36. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
  37. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
  38. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357.
  39. Douzas, G.; Bacao, F.; Last, F. Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE. Inf. Sci. 2018, 465, 1–20.
  40. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
  41. Bisla, D.; Choromanska, A.; Berman, R.S.; Stein, J.A.; Polsky, D. Towards Automated Melanoma Detection with Deep Learning: Data Purification and Augmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 2720–2728.
  42. Daghrir, J.; Tlig, L.; Bouchouicha, M.; Sayadi, M. Melanoma skin cancer detection using deep learning and classical machine learning techniques: A hybrid approach. In Proceedings of the 2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), Sfax, Tunisia, 2–5 September 2020; pp. 1–5.
Figure 1. MELA-CNN network architecture.
Figure 2. Research architecture.
Figure 3. Proposed model architecture.
Figure 4. Comparison of F1-scores of the five feature extraction techniques.
Figure 5. F1-score comparison with and without metadata.
Figure 6. Comparison of F1-scores using 10 oversampling techniques.
Figure 7. Comparison of F1-scores of 13 classifiers with K-means SMOTE.
Figure 8. Comparison of ROC curves with different feature extractors.
Figure 9. Comparison of PR curves with different feature extractors.
Figure 10. Comparison of ROC curves with different oversampling techniques.
Figure 11. Comparison of PR curves with different oversampling techniques.
Figure 12. Comparison of ROC curves with different classifiers.
Figure 13. Comparison of PR curves with different classifiers.
Table 1. Performance evaluation test results on the models' melanoma binary classification.

| Authors | Dataset | AUC | ACC | SEN | SPE | PRE | F1 |
|---|---|---|---|---|---|---|---|
| [8,9,10] | PH2 | NA | 0.861~0.975 | 0.790~0.981 | 0.925~0.938 | NA | NA |
| [11] | Subset of PH2 | NA | 0.950 | 0.925 | 0.966 | NA | NA |
| [12] | ISIC 2016 | 0.766 | 0.818 | 0.818 | 0.714 | NA | 0.826 |
| [12,13,14] | ISIC 2017 | 0.870~0.964 | 0.857~0.933 | 0.490~0.933 | 0.872~0.961 | 0.940 | 0.813~0.935 |
| [14,15,16,17,18,19,20,21] | ISIC 2018 | 0.847~0.989 | 0.803~0.931 | 0.484~0.888 | 0.957~0.978 | 0.860~0.905 | 0.491~0.891 |
| [22,23] | Subset of ISIC 2018 | 0.970 | 0.880~0.910 | 0.920~0.960 | NA | 0.840~0.910 | 0.880~0.910 |
| [14,20] | ISIC 2019 | 0.919~0.991 | 0.896~0.924 | 0.483~0.896 | 0.976~0.977 | 0.907 | 0.488~0.898 |
| [11] | Subset of ISIC 2019 | NA | 0.930 | 0.925 | 0.933 | NA | NA |
| [16,17,24,25] | Combined | 0.880~0.960 | 0.803~0.950 | 0.851~0.930 | 0.844~0.950 | NA | NA |
| [26] | MED-NODE | 0.810 | NA | 0.810 | 0.800 | NA | NA |
| [27] | Subset of ISBI 2017 | 0.891 | 0.866 | 0.556 | 0.785 | NA | NA |

NA = metric not reported in the paper.
Table 2. Performance evaluation of five feature extraction techniques.

| Feature Extractor | Features | ACC | PRE | REC | AUC | F1 |
|---|---|---|---|---|---|---|
| Handcrafted | 5 | 0.800 | 0.401 | 0.036 | 0.613 | 0.064 |
| MELA-CNN | 256 | 0.913 | 0.837 | 0.693 | 0.830 | 0.756 |
| VGG16 | 512 | 0.814 | 0.569 | 0.189 | 0.738 | 0.282 |
| InceptionResNetV2 | 1536 | 0.822 | 0.655 | 0.204 | 0.752 | 0.309 |
| Inception V3 | 2048 | 0.819 | 0.641 | 0.198 | 0.746 | 0.295 |
Table 3. Performance evaluation of adding metadata.

| Features | ACC | PRE | REC | AUC | F1 |
|---|---|---|---|---|---|
| 5 | 0.800 | 0.401 | 0.036 | 0.613 | 0.064 |
| 7 | 0.821 | 0.582 | 0.327 | 0.789 | 0.415 |
| 256 | 0.913 | 0.837 | 0.693 | 0.830 | 0.756 |
| 258 | 0.926 | 0.844 | 0.764 | 0.865 | 0.800 |
Table 4. Performance evaluation of 10 oversampling techniques.

| Oversampling Technique | ACC | PRE | REC | AUC | F1 |
|---|---|---|---|---|---|
| Original | 0.926 | 0.844 | 0.764 | 0.864 | 0.800 |
| K-Means SMOTE | 0.946 | 0.873 | 0.853 | 0.970 | 0.861 |
| RandomOverSampler | 0.939 | 0.862 | 0.822 | 0.964 | 0.840 |
| SMOTE | 0.937 | 0.833 | 0.849 | 0.966 | 0.839 |
| SVMSMOTE | 0.934 | 0.825 | 0.851 | 0.967 | 0.835 |
| SMOTETomek | 0.934 | 0.829 | 0.844 | 0.967 | 0.835 |
| BorderlineSMOTE | 0.933 | 0.811 | 0.862 | 0.967 | 0.834 |
| SMOTE-RandomUnderSampler | 0.933 | 0.821 | 0.844 | 0.966 | 0.831 |
| SMOTENC | 0.932 | 0.820 | 0.849 | 0.968 | 0.830 |
| SMOTEENN | 0.924 | 0.770 | 0.889 | 0.967 | 0.822 |
| ADASYN | 0.924 | 0.788 | 0.847 | 0.966 | 0.814 |
Table 5. Performance evaluation of 13 classifiers with K-means SMOTE.

| Classifier | ACC | PRE | REC | AUC | F1 |
|---|---|---|---|---|---|
| XGB Classifier | 0.946 | 0.873 | 0.853 | 0.970 | 0.861 |
| Logistic Regression | 0.941 | 0.841 | 0.864 | 0.969 | 0.852 |
| Gradient Boosting | 0.940 | 0.851 | 0.842 | 0.965 | 0.845 |
| Bagging Classifier | 0.939 | 0.837 | 0.851 | 0.965 | 0.845 |
| SVM | 0.939 | 0.859 | 0.833 | 0.968 | 0.844 |
| HistGB Classifier | 0.939 | 0.861 | 0.822 | 0.968 | 0.839 |
| Random Forest | 0.936 | 0.837 | 0.842 | 0.964 | 0.838 |
| MLP | 0.937 | 0.862 | 0.811 | 0.963 | 0.834 |
| AdaBoost | 0.929 | 0.806 | 0.844 | 0.961 | 0.823 |
| K-Neighbors Classifier | 0.925 | 0.808 | 0.816 | 0.922 | 0.809 |
| SGD-LR | 0.922 | 0.783 | 0.836 | 0.956 | 0.806 |
| Decision Tree | 0.911 | 0.759 | 0.804 | 0.871 | 0.780 |
| Gaussian NB | 0.766 | 0.452 | 0.867 | 0.846 | 0.593 |
Table 6. Paired t-test of recall for 5 features vs. 256 features.

| Fold | 5 Features REC | 256 Features REC | Difference |
|---|---|---|---|
| 1 | 0.022 | 0.578 | 0.556 |
| 2 | 0.111 | 0.622 | 0.511 |
| 3 | 0.044 | 0.756 | 0.712 |
| 4 | 0.044 | 0.644 | 0.600 |
| 5 | 0.044 | 0.800 | 0.756 |
| 6 | 0.044 | 0.600 | 0.556 |
| 7 | 0.000 | 0.733 | 0.733 |
| 8 | 0.022 | 0.689 | 0.667 |
| 9 | 0.000 | 0.756 | 0.756 |
| 10 | 0.022 | 0.756 | 0.734 |

Average difference in REC: 0.658. Paired t-test: p = 1.81 × 10⁻⁹.
Table 7. Paired t-test of recall for 256 features vs. 258 features.

| Fold | 256 Features REC | 258 Features REC | Difference |
|---|---|---|---|
| 1 | 0.578 | 0.844 | 0.267 |
| 2 | 0.622 | 0.756 | 0.133 |
| 3 | 0.756 | 0.778 | 0.022 |
| 4 | 0.644 | 0.644 | 0.000 |
| 5 | 0.800 | 0.778 | −0.022 |
| 6 | 0.600 | 0.756 | 0.156 |
| 7 | 0.733 | 0.733 | 0.000 |
| 8 | 0.689 | 0.778 | 0.089 |
| 9 | 0.756 | 0.733 | −0.022 |
| 10 | 0.756 | 0.844 | 0.089 |

Average difference in REC: 0.071. Paired t-test: p = 2.03 × 10⁻².
Table 8. Paired t-test of recall for 258 features with/without K-Means SMOTE.

| Fold | 258 Features REC | 258 Features with K-Means SMOTE REC | Difference |
|---|---|---|---|
| 1 | 0.844 | 0.933 | 0.089 |
| 2 | 0.756 | 0.867 | 0.111 |
| 3 | 0.778 | 0.778 | 0.000 |
| 4 | 0.644 | 0.844 | 0.200 |
| 5 | 0.778 | 0.911 | 0.133 |
| 6 | 0.756 | 0.889 | 0.133 |
| 7 | 0.733 | 0.844 | 0.111 |
| 8 | 0.778 | 0.800 | 0.022 |
| 9 | 0.733 | 0.800 | 0.067 |
| 10 | 0.844 | 0.867 | 0.022 |

Average difference in REC: 0.089. Paired t-test: p = 7.07 × 10⁻⁴.
Table 9. Paired t-test of F1-score for 5 features vs. 256 features.

| Fold | 5 Features F1 | 256 Features F1 | Difference |
|---|---|---|---|
| 1 | 0.042 | 0.658 | 0.616 |
| 2 | 0.185 | 0.718 | 0.533 |
| 3 | 0.083 | 0.810 | 0.727 |
| 4 | 0.083 | 0.773 | 0.690 |
| 5 | 0.083 | 0.818 | 0.735 |
| 6 | 0.077 | 0.692 | 0.615 |
| 7 | 0.000 | 0.767 | 0.767 |
| 8 | 0.042 | 0.713 | 0.671 |
| 9 | 0.000 | 0.810 | 0.810 |
| 10 | 0.043 | 0.800 | 0.757 |

Average difference in F1: 0.692. Paired t-test: p = 4.56 × 10⁻¹⁰.
Table 10. Paired t-test of F1-score for 256 features vs. 258 features.

| Fold | 256 Features F1 | 258 Features F1 | Difference |
|---|---|---|---|
| 1 | 0.658 | 0.826 | 0.168 |
| 2 | 0.718 | 0.791 | 0.073 |
| 3 | 0.810 | 0.833 | 0.024 |
| 4 | 0.773 | 0.734 | −0.039 |
| 5 | 0.818 | 0.795 | −0.023 |
| 6 | 0.692 | 0.810 | 0.117 |
| 7 | 0.767 | 0.759 | −0.009 |
| 8 | 0.713 | 0.814 | 0.101 |
| 9 | 0.810 | 0.815 | 0.005 |
| 10 | 0.800 | 0.826 | 0.026 |

Average difference in F1: 0.040. Paired t-test: p = 3.40 × 10⁻².
Table 11. Paired t-test of F1-score for 258 features with/without K-Means SMOTE.

| Fold | 258 Features F1 | 258 Features with K-Means SMOTE F1 | Difference |
|---|---|---|---|
| 1 | 0.826 | 0.913 | 0.087 |
| 2 | 0.791 | 0.813 | 0.022 |
| 3 | 0.833 | 0.843 | 0.010 |
| 4 | 0.734 | 0.874 | 0.139 |
| 5 | 0.795 | 0.891 | 0.096 |
| 6 | 0.810 | 0.870 | 0.060 |
| 7 | 0.759 | 0.817 | 0.059 |
| 8 | 0.814 | 0.857 | 0.043 |
| 9 | 0.815 | 0.857 | 0.042 |
| 10 | 0.826 | 0.876 | 0.050 |

Average difference in F1: 0.061. Paired t-test: p = 3.35 × 10⁻⁴.
Table 12. Performance comparison with Kalwa et al. [28].

| Metric | Kalwa et al. [28], Original | Kalwa et al. [28], SMOTE | Handcrafted | DL-TL | DL-FE | DL-FE + Metadata | K-Means SMOTE |
|---|---|---|---|---|---|---|---|
| Number of samples | 200 | 200 | 2299 | 2299 | 2299 | 2299 | 2299 |
| Number of features | 4 | 4 | 5 | 1536 | 256 | 258 | 258 |
| ACC | 0.860 | 0.880 | 0.804 | 0.836 | 0.914 | 0.923 | 0.958 |
| AUC | 0.720 | 0.850 | 0.585 | 0.780 | 0.936 | 0.948 | 0.971 |
| PRE | 0.125 | 0.667 | 0.500 | 0.720 | 0.806 | 0.820 | 0.914 |
| REC | 0.500 | 0.800 | 0.030 | 0.267 | 0.741 | 0.778 | 0.867 |
| F1 | 0.200 | 0.727 | 0.056 | 0.389 | 0.772 | 0.798 | 0.890 |

Kalwa et al. (2019) [28] used an SVM (RBF kernel); the proposed model uses the XGB classifier. Both were evaluated with the 7:3 holdout method.
Table 13. Performance comparison with Magalhaes et al. [29].

| Metric | Magalhaes et al. [29], Original | Magalhaes et al. [29], SMOTE | Handcrafted | DL-TL | DL-FE | DL-FE + Metadata | K-Means SMOTE |
|---|---|---|---|---|---|---|---|
| Number of samples | 287 | 287 | 2299 | 2299 | 2299 | 2299 | 2299 |
| Number of features | 40 | 40 | 5 | 1536 | 256 | 258 | 258 |
| ACC | 0.426 | 0.585 | 0.807 | 0.839 | 0.904 | 0.930 | 0.965 |
| AUC | 0.558 | 0.542 | 0.621 | 0.774 | 0.937 | 0.953 | 0.981 |
| PRE | 0.565 | 0.672 | 0.600 | 0.767 | 0.774 | 0.837 | 0.974 |
| REC | 0.473 | 0.696 | 0.033 | 0.256 | 0.722 | 0.800 | 0.878 |
| F1 | 0.515 | 0.684 | 0.063 | 0.383 | 0.747 | 0.818 | 0.905 |

Magalhaes et al. (2021) [29] used an SVM + Random Forest ensemble; the proposed model uses the XGB classifier. Both were evaluated with the 8:2 holdout method.
Table 14. A comparative summary of the existing techniques for melanoma binary classification.

| Year | Author | Dataset | Non-Me:Me (IR) | Method | Validation | Test Results |
|---|---|---|---|---|---|---|
| 2016 | Nasr-Esfahani et al. [26] | MED-NODE | 100:70 (1.429) | DL | Holdout (8:2), full: 7650 | ACC: 0.810; SE: 0.810; SP: 0.800 |
| 2018 | Adjed et al. [8] | PH2 | 160:40 (4) | Multiresolution technique + ML | Holdout (7:3), repeated 1000 times, full: 200 | ACC: 0.861; SE: 0.790; SP: 0.933 |
| 2018 | Li et al. [15] | ISIC 2018 | 8902:1113 (7.998) | DL + ML | Holdout (7:1:2), full: 10015 | ACC: 0.853; PRE: 0.860; REC: 0.850; F1: 0.860 |
| 2019 | Bisla et al. [41] | Combination of ISIC 2017, Edinburgh data, ISIC 2018, PH2 | 3063:919 (3.333) | DL | Holdout (85:15), full: 3982 | AUC: 0.880 |
| 2019 | Warsi et al. [10] | PH2 | 160:40 (4) | 3D color-texture feature (CTF) + DL | Holdout (70:15:15), full: 200 | ACC: 0.970; SE: 0.981; SP: 0.925 |
| 2019 | Abbes et al. [24] | Combination of DermQuest and DermIS | 87:119 (0.731) | FCM + DL | Holdout (NA), full: 206 | ACC: 0.875; SE: 0.901; SP: 0.844 |
| 2019 | Abbas et al. [25] | Subset combining Skin-EDRA, ISIC 2018, DermNet, PH2 | 1420:1380 (1.029) | DL + ML | Holdout (1:1), full: 2800 | ACC: 0.950; AUC: 0.960; SE: 0.930; SP: 0.950 |
| 2020 | Almaraz-Damian et al. [19] | ISIC 2018 | 8902:1113 (7.998) | DL + ML | Holdout (75:25), full: 10015 | ACC: 0.897 |
| 2020 | Daghrir et al. [42] | Subset of ISIC archive | NA | DL + ML | Holdout (8:2), full: 640 | ACC: 0.884 |
| 2022 | Alfi et al. [23] | Subset of ISIC 2018 | 1800:1497 (1.202) | DL and ML | Holdout (8:2), full: 3297 | DL: ACC: 0.910; PRE: 0.910; REC: 0.920; AUC: 0.970; F1: 0.910. ML: ACC: 0.880; PRE: 0.840; REC: 0.920; F1: 0.880 |
| 2022 | Our approach (Holdout 8:2) | Subset combining ISIC 2018 and ISIC 2019 | 1849:450 (4.109) | DL + ML | Holdout (8:2), full: 2299 | ACC: 0.965; PRE: 0.974; REC: 0.878; AUC: 0.981; F1: 0.905 |
| 2022 | Our approach (Stratified 10-fold cross-validation) | Subset combining ISIC 2018 and ISIC 2019 | 1849:450 (4.109) | DL + ML | Stratified 10-fold cross-validation, full: 2299 | ACC: 0.941; PRE: 0.870; REC: 0.822; AUC: 0.968; F1: 0.844 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
