Systematic Review

Skin Cancer Image Classification Using Artificial Intelligence Strategies: A Systematic Review

by Ricardo Vardasca 1,2,*, Joaquim Gabriel Mendes 2,3 and Carolina Magalhaes 2,3

1 ISLA Santarem, Rua Teixeira Guedes 31, 2000-029 Santarem, Portugal
2 Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Universidade do Porto, 4099-002 Porto, Portugal
3 Faculdade de Engenharia, Universidade do Porto, 4099-002 Porto, Portugal
* Author to whom correspondence should be addressed.
J. Imaging 2024, 10(11), 265; https://doi.org/10.3390/jimaging10110265
Submission received: 26 August 2024 / Revised: 26 September 2024 / Accepted: 17 October 2024 / Published: 22 October 2024
(This article belongs to the Section AI in Imaging)

Abstract:
The increasing incidence of and resulting deaths associated with malignant skin tumors are a public health problem that can be minimized if detection strategies are improved. Currently, diagnosis is heavily based on physicians’ judgment and experience, which can occasionally lead to the worsening of the lesion or needless biopsies. Several non-invasive imaging modalities, e.g., confocal scanning laser microscopy or multiphoton laser scanning microscopy, have been explored for skin cancer assessment, which have been aligned with different artificial intelligence (AI) strategies to assist in the diagnostic task, based on several image features, thus making the process more reliable and faster. This systematic review concerns the implementation of AI methods for skin tumor classification with different imaging modalities, following the PRISMA guidelines. In total, 206 records were retrieved and qualitatively analyzed. Diagnostic potential was found for several techniques, particularly for dermoscopy images, with strategies yielding classification results close to perfection. Learning approaches based on support vector machines and artificial neural networks seem to be preferred, with a recent focus on convolutional neural networks. Still, detailed descriptions of training/testing conditions are lacking in some reports, hampering reproduction. The use of AI methods in skin cancer diagnosis is an expanding field, with future work aiming to construct optimal learning approaches and strategies. Ultimately, early detection could be optimized, improving patient outcomes, even in areas where healthcare is scarce.

1. Introduction

Skin cell mutations are common when the human body is repeatedly exposed to external hazardous situations. Ultraviolet radiation is the primary trigger for uncontrolled cell division, which can ultimately lead to the appearance of skin cancers, e.g., melanoma, squamous cell carcinoma (SCC), and basal cell carcinoma (BCC) [1,2]. The prevalence and incidence numbers associated with malignant skin tumors continue to increase, along with deaths, a tendency that can be reversed if changes are made to current detection strategies [3].
The diagnosis of skin tumors is based mainly on heuristic methods, as dermatologists try to improve their diagnostic skills through the assessment of multiple skin lesion patterns over time [4]. To assist in the visualization of characteristic features, physicians usually utilize dermoscopy devices to highly amplify the skin lesion and display features invisible to the unaided eye [5]. If a suspicion of malignancy is present, histopathological tests are prescribed and the lesion is excised/incised to reach a final conclusion [6]. Apart from being painful and sometimes risky for the patient, the current detection strategy is highly biased, being affected by inter- and intra-observer variability [7]. To counteract this, different non-invasive imaging techniques have appeared throughout the years, in the hope of increasing the accuracy of early diagnosis while avoiding needless biopsies. Microscopy-based techniques, such as confocal scanning laser microscopy and multiphoton laser scanning microscopy, are among the preferred alternatives to dermoscopy equipment. The former scans samples with a single-photon laser, producing a three-dimensional (3D) image from a compilation of multiple two-dimensional (2D) images retrieved at different depths. The latter utilizes multiple photons to excite fluorescent molecules, delivering 3D images of deeper structures with great resolution. Still, implementations of technologies based on infrared thermal (IRT) imaging, where temperature maps are produced based on the emission of infrared radiation, and ultrasonography, where images are produced based on the reflection of soundwaves, can also be found [8,9].
While the use of alternative techniques can provide valuable and unique knowledge, interpreting the collected data can be not only tedious but also extremely difficult, owing to the number of data points generated, especially with a growing patient flow.
To address these challenges, artificial intelligence (AI) offers promising solutions.
AI is a relevant strategy for improving image analysis, reducing the time required and improving sensitivity and accuracy [10]. Thus, many AI strategies based on machine learning (ML) methods have been developed. These strategies deliver a final diagnosis based on a set of input features that can potentially help domain experts to decide on the best course of action; this ultimately contributes to better patient care in a faster and cheaper way [11,12].
This work presents a systematic review focused on the critical evaluation of the implementation of AI methods for skin tumor classification, offering insights into their effectiveness across different imaging modalities.

2. Methods

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) 2020 guidelines were followed to perform the present systematic review [13]. Studies focused on the application of AI classifiers for skin cancer classification through imaging data were meticulously chosen and evaluated, with the collected data partially employed to produce conclusions based on statistical methods. This review was registered on the PROSPERO registration system with the reference number 584122.

2.1. Information Sources

Three abstract and citation databases were used during the literature search process: PubMed, ISI Web of Science, and Scopus. The specific combinations of keywords and Boolean operators implemented were as follows: (classification methods [Title/Abstract]) AND (imaging [Title/Abstract]) AND (artificial intelligence) AND (skin cancer [Title/Abstract]) OR (skin neoplasm [Title/Abstract]); TOPIC: (classification methods) AND TOPIC: (imaging) AND TOPIC: (artificial intelligence) AND TOPIC: (skin cancer OR (skin neoplasms)); TITLE-ABS-KEY (classification methods) AND TITLE-ABS-KEY (imaging) AND TITLE-ABS-KEY (artificial intelligence) AND TITLE-ABS-KEY (skin cancer OR (skin neoplasms)). These sources were consulted until June 2024. The main goal was to gather a large number of articles referring to the implementation of classification methods for skin cancer identification with a given imaging modality by using simple and broad terms. The Boolean operator OR was used for the similar expressions “skin cancer” and “skin neoplasm”, since different terminology is applied when referring to skin tumors. Duplicate articles were removed after bibliographic research.

2.2. Eligibility Criteria and Screening

After searching the reference sources, the titles and abstracts of the retrieved records were screened to eliminate studies that did not focus on the topic of interest. Seven elimination criteria were established. Articles that performed classification using methods that did not make use of ML were excluded, as well as studies that implemented AI methods only for image analysis tasks. The third eligibility criterion removed reviews and meeting abstracts. Papers written in languages other than English or those focused on the diagnosis of other skin diseases were also not considered. Additionally, only articles that reported the performance metrics of classification methods were retained; reports focused on the development of smartphone apps were also eliminated.

3. Results

3.1. Study Selection

The PRISMA flow diagram in Figure 1 illustrates the selection process [13]. The database search yielded 1406 publications, from which 437 duplicate records were removed. The title and abstract screening resulted in the exclusion of 486 records because there was no mention of AI classification methods being used for skin cancer images, and 22 records could not be retrieved. An additional 255 studies were eliminated for the following reasons: making no use of ML algorithms for classification (n = 63), the implementation of AI strategies only for image analysis (n = 53), being meeting abstracts or literature reviews (n = 38), being written in a language other than English (n = 19), focusing on the diagnosis of other skin diseases (n = 30), not presenting any type of classification metrics (n = 25), and focusing on the development of smartphone apps (n = 27).
The remaining 206 records were subjected to qualitative assessment and divided into groups based on the imaging modality implemented for skin tumor assessment, as follows: digital dermoscopy (n = 153), microscopy (n = 19), spectroscopy (n = 12), macroscopy (n = 17), and other imaging modalities (n = 5). Table 1 summarizes the records encountered, according to image modality.

3.2. Qualitative Synthesis

3.2.1. Digital Dermoscopy Imaging

The adoption of dermoscopy as the gold standard for assessing melanocytic lesions has drawn increasing attention from researchers to the development of computer-aided diagnostic tools. Thus, most studies found on the subject of ML applications for skin cancer are based on this imaging modality.
The support vector machines (SVM) algorithm is consistently used for a fast and effective interpretation of data in melanoma detection studies, combined with different image analysis strategies and training stages [14,15,16,17,18,19,20,21,22]. Vasconcelos et al. proposed a methodology that extracts 660 color descriptors and detects melanoma skin cancer using an SVM with a 3rd order polynomial kernel [23]. The use of textural features has also been reported, with an emphasis on local binary patterns (LBP) and the gray level co-occurrence matrix [24]. Khan et al. developed an algorithm to improve lesion segmentation accuracy by enhancing the contrast between the lesion’s foreground and background. Feature selection was performed using a genetic algorithm that best differentiated melanoma and melanocytic nevus with an SVM learner [25]. Excellent results had also been previously achieved by the same group after performing feature selection by calculating the Bhattacharyya distance and variance, with subsequent lesion classification by a multi-class SVM (accuracy (ACC) = 98.3%, sensitivity (SN) = 97.85%, specificity (SP) = 100%) [26]. High ACC was also achieved by Rehman et al., who constructed a method with pixel-based seed segmentation and the concatenation of multilevel features that allowed for the detection of malignant melanocytic lesions with a 98.2% success rate [27]. Masood et al. followed a different strategy and upgraded the training stage with a self-advised SVM to learn from data that are not linearly separable, which outperformed common SVM learners [28]. The same concept was applied further, using a self-advised SVM in parallel with three independent classifiers to boost the results, but with slightly lower metrics than those previously attained (ACC = 89%, SN = 90%, SP = 88.3%) [29]. Frequently, the success of the proposed methodology is hampered by the use of datasets that misrepresent one or more classes.
Thus, some techniques to deal with class imbalance are often encountered, such as random undersampling, random undersampling boosting, and the synthetic minority oversampling technique, with the latter being favored [30,31,32]. Recently, support vector learners have also been applied for the detection of other skin lesion types, such as BCC and seborrheic keratosis (SK), as well as for the identification of lesions as candidates for excision or not [33,34,35,36]. Two-stage classification is also reported. At a primary research stage, Joseph et al. distinguished benign from abnormal skin lesions, with the latter being classified as dysplastic nevi or melanoma, with equally good results [37]. The successful differentiation of melanocytic and non-melanocytic tumors was reported by Suganya, followed by separating non-melanocytic lesions into BCC and actinic keratosis (AK) [38]. Some authors have demonstrated that the use of SVM ensemble methods provides superior results to those of a single learner for this type of classification task, with a two-stage classification approach yielding better results than a single multi-classification stage [39,40,41,42,43,44,45]. The use of different feature subsets as inputs for the learning algorithms was also proven to boost classification results [40,43], with majority voting, decision template combination, and sum rules as examples of methods to determine the final consensus [39,40,42,44].
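As an illustration of the two ideas above, the sketch below pairs a 3rd-order polynomial-kernel SVM with a hand-rolled, SMOTE-like minority interpolation step. Everything here is a stand-in: the synthetic feature vectors play the role of extracted dermoscopy descriptors, and the oversampler is a toy, not the library implementation used in any reviewed study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def smote_like_oversample(X, y, minority_label, n_new):
    """Toy SMOTE: interpolate between random pairs of minority samples."""
    Xm = X[y == minority_label]
    idx_a = rng.integers(0, len(Xm), n_new)
    idx_b = rng.integers(0, len(Xm), n_new)
    lam = rng.random((n_new, 1))
    X_new = Xm[idx_a] + lam * (Xm[idx_b] - Xm[idx_a])
    return (np.vstack([X, X_new]),
            np.concatenate([y, np.full(n_new, minority_label)]))

# Imbalanced stand-in for dermoscopy feature vectors (90% benign, 10% malignant)
X, y = make_classification(n_samples=600, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class in the training split only, then train the SVM
n_new = (y_tr == 0).sum() - (y_tr == 1).sum()
X_bal, y_bal = smote_like_oversample(X_tr, y_tr, minority_label=1, n_new=n_new)
clf = SVC(kernel="poly", degree=3).fit(X_bal, y_bal)  # 3rd-order polynomial kernel
print(round(clf.score(X_te, y_te), 3))
```

Note that the test set is left untouched, so the reported accuracy still reflects the original class distribution.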
Artificial neural networks (ANNs) for skin cancer classification have gained particular attention recently [46,47,48,49]. Messadi et al. detected melanoma tumors with a 76.76% ACC through the implementation of a simple ANN with a hidden layer comprising four neurons [50]. The use of shape and several textural parameters allowed the attainment of good results, similar to those of healthcare physicians, with a very straightforward approach. With a more elaborate view, Grochowski et al. used an ensemble of 10 neural networks to evaluate handcrafted features retrieved from melanocytic tumors. The results were combined using a single ANN, obtaining ACC, SN, and SP values of 84.4%, 90.8%, and 78%, respectively [51]. To simultaneously classify seven skin lesion types, Samsudin et al. decomposed images using a multi-resolution empirical mode into several bidimensional intrinsic mode functions (BIMFs) [52]. The BIMFs were combined with LBP features retrieved from the lesion area and fed into an ANN learner, achieving an extremely high ACC value (98.9%) for multiclassification with a single learner. The search for excellent metrics has been noted in recent years, with attention shifting towards the use of deep learning techniques [53,54], particularly those focused on convolutional neural networks [55,56,57,58,59,60,61,62,63], with applications ranging from the detection of melanocytic lesions [64,65,66] and non-pigmented tumors [67] to multiple skin neoplasm types [68]. This tendency has grown markedly during the last year [69,70,71,72,73,74,75,76,77,78,79]. Li et al. tackled the problem of class imbalance, using generative adversarial networks (GANs) to produce high-quality images of unbalanced classes [80]. The classification stage included an ensemble of convolutional neural network (CNN) models that performed multilayer feature fusion based on a fuzzy rank.
Additionally, the focal and cross-entropy loss functions were combined to decrease the distance among samples that belonged to the same class, thereby improving the recognition rate of the model (ACC = 95.82%). Because these networks are usually very elaborate, Albahar proposed the use of a regularizer technique, based on the standard deviation of weights, to control the classifier’s complexity [81]. On the other hand, Kaur et al. built a CNN from scratch, and they suggested a network with multiple connect blocks organized in a manner that allowed large feature information to pass easily through the network [82]. With the need for extensive training data to properly train a network of this type, most authors chose pretrained models and performed small changes when needed [83,84,85,86,87,88]. This approach saves time, reduces computation power, and usually delivers more accurate and effective models. Hosni et al. served as an example, as the theory of transfer learning was successfully applied to a CNN with the small replacement of the classification layer with a softmax layer, which allowed for multi-classification with increased ACC values [89]. Lu et al. proposed XceptionNet with a swish activation function and depthwise separable convolutions [90]. The altered model reached the maximum ACC for the differentiation of multiple skin neoplasm types available in a public database. Recent studies have mostly focused on the use of public databases to develop different classification strategies [91,92,93,94,95,96,97,98,99]. The availability of these databases also eliminates the problem related to the shortage of large amounts of training data for the implementation of DL networks and eases comparisons between works and state-of-the-art methods. 
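The transfer-learning recipe mentioned above (keep a pretrained feature extractor frozen and retrain only a replacement softmax classification layer) can be sketched in a toy form. Here, a fixed random ReLU projection stands in for the frozen CNN backbone, and scikit-learn's multinomial logistic regression plays the softmax head; all data and names are illustrative, not taken from any reviewed study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Stand-in for a frozen, pretrained convolutional backbone: a fixed nonlinear
# projection into a 64-dimensional feature space (weights are never updated).
W = rng.standard_normal((30, 64))
def frozen_backbone(X):
    return np.maximum(X @ W, 0.0)  # ReLU features

X, y = make_classification(n_samples=900, n_features=30, n_informative=12,
                           n_classes=3, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# Only the replacement softmax head is trained on the new (multi-class) task.
head = LogisticRegression(max_iter=2000).fit(frozen_backbone(X_tr), y_tr)
print(round(head.score(frozen_backbone(X_te), y_te), 3))
```

In a real pipeline, the fixed projection would be the convolutional layers of a network pretrained on a large image corpus; the training cost then reduces to fitting the small head, which is why this approach saves time and computation.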
Another reported option for the improvement of CNN models is the combination of patient and image data, as demonstrated by Pacheco and Krohling, which secured an improvement of 7% in ACC compared with the single use of lesion features as inputs [100]. Based on these results, the authors used metadata to support lesion classification, emphasizing the most important feature maps extracted from images in different CNN models [101]. This strategy outperformed other approaches that did not implement metadata. Ningrum et al. compared the classification ACC of a CNN with image data input and a CNN combined with an ANN to handle image data and patients’ metadata, respectively [102]. The latter delivered better performance metrics (ACC = 92.34%) while still running on a medium-class computer. The results of other reports support the integration of clinical data as a means of reducing diagnostic uncertainty [103,104,105,106]. Despite their intricacy, these models are valuable when dealing with equally complex tasks, such as the differentiation of multiple skin tumor types [107,108]. An attention cost-sensitive DL-based meta-classifier was proposed by Ravi with the goal of effectively detecting the minimal portions of skin affected by a tumor and retrieving information from them to successfully detect and classify seven types of skin neoplasms (ACC = 99.0%) [109]. A different multiclass classification approach was proposed by Zhang et al. based on the extraction and input of rotation invariant features that better represent the lesion in question [110]. The proposed approach was embedded in a CNN, and although the classification results were higher than those of other rotation invariance methods, further work is still needed.
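A minimal sketch of the metadata-fusion idea is to concatenate patient metadata with image-derived features before classification and compare the two feature sets under cross-validation. The age and anatomical-site columns below are synthetic, illustrative stand-ins, and no improvement is guaranteed on toy data; the sketch only shows the mechanics of the fusion.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Image-derived features stand in for CNN feature vectors
X_img, y = make_classification(n_samples=600, n_features=24, n_informative=8,
                               random_state=2)
# Synthetic patient metadata: weakly class-correlated age, random site code
age = (10 * y + rng.normal(50, 12, size=len(y))).reshape(-1, 1)
site = rng.integers(0, 4, size=(len(y), 1)).astype(float)
X_fused = np.hstack([X_img, age, site])  # image features + metadata columns

def make_clf():
    return make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(32,),
                                       max_iter=1500, random_state=2))

acc_img = cross_val_score(make_clf(), X_img, y, cv=3).mean()
acc_fused = cross_val_score(make_clf(), X_fused, y, cv=3).mean()
print(round(acc_img, 3), round(acc_fused, 3))
```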
In addition to classification, CNNs have also been implemented to assist with additional tasks such as segmentation, feature extraction, and feature selection [111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]. However, new optimization ideas and applications are constantly being studied [128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148].
The detection of melanocytic lesions using dermoscopy images has also been successfully documented for classifiers based on decision trees [149]. Oliveira et al. extracted texture and color features from different color spaces, obtaining ACC, SN, and SP values of 91.6%, 87%, and 96.2%, respectively, using an optimum-path forest classifier. The good predictive performance of random forest (RF) classifiers has enabled them to outperform ensemble approaches, as proven by Rastgoo et al. [150] (SN = 94%, SP = 92%) with RF vs. weighted ensemble classifiers and Grzesiak-Kopéc et al. [151] (SN = 85.1%, SP = 89.6%) with bagging with decision trees vs. vote ensemble classifiers. The detection of BCC lesions was accomplished with an ACC of 82.4% by an RF learner after the implementation of a novel segmentation method, based on an independent component analysis, by Kharazmi et al. [152]. The same group of authors aimed to improve the classification metrics by automatically detecting the vascular features that are characteristic of this type of lesion (SN = 90.4%, SP = 89.3%) [153].
When there is doubt regarding the best learner for this task, the comparison of different ML algorithms is an option [154,155]. La Torre et al. evaluated the recognition skills of k-nearest neighbor (kNN), SVM, and ANN learners. The top choice was an SVM with a radial basis function kernel, which delivered SN and SP values of 100% and 99.47%, respectively. Another example of SVM’s advantage was reported in [156], where asymmetry features were collected and used for melanoma detection in dermoscopy images. There, an SVM with a 3rd-order polynomial kernel performed better than kNN, naïve Bayes, boosting, and random forest. The same learner exceeded both kNN and random forest in a later study [157]. In the differentiation of benign and melanoma lesions, conducted by Narasimhan et al. [158], RF outperformed kNN, naïve Bayes, and SVM with the use of wavelet-based energy features, and it outperformed kNN, SVM, ANN, and decision trees in a study by Janney and Roslin [159]. For the challenging task of distinguishing between melanoma and dysplastic lesions, RF was also considered the top learner, achieving SN and SP values of 98.46% and 70%, respectively [160]. Other methods, such as kNN [161,162] and AdaBoost [163], are also preferred. To deal with a complex and extensive dataset, HAM10000, Arshad et al. tested lesion classification with 10 different ML methods after augmentation, feature extraction, and fusion using two fine-tuned DL models (ResNet-50 and ResNet-101) [164]. A CNN with Harris hawks-optimized segmentation yielded the best outcomes (ACC = 97.3%, SN = 96.1%, SP = 98.6%) at the expense of other CNNs, ANNs, and SVM methods. Figure 2 shows the percentage of records that used SVM, ANN, DL approaches, and other ML techniques with dermoscopy images.
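Such head-to-head comparisons of learners are straightforward to reproduce with cross-validation. The sketch below uses scikit-learn's built-in breast-cancer dataset purely as a stand-in benign/malignant problem (none of the reviewed skin datasets are used), and scales the features for the distance- and margin-based learners.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in benign/malignant data
learners = {
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "SVM-RBF": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "RF": RandomForestClassifier(random_state=0),
    "NB": GaussianNB(),
}
# Mean 5-fold cross-validated accuracy per learner, then pick the best
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in learners.items()}
best = max(scores, key=scores.get)
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.3f}")
```

On a real study, the same loop would also collect sensitivity and specificity, since accuracy alone hides class-wise behavior.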

3.2.2. Microscopy Imaging Techniques

AI classifiers have shown potential in the diagnosis of melanoma through the extraction of features from various microscopy imaging techniques [165,166,167]. The first report is of a kNN learner with 24 neighbors for a two-stage classification process involving the distinction of benign, malignant, and dysplastic nevi (SN = 87%, SP = 92%) [168]. Odeh et al. also used this ML algorithm, varying k over odd numbers from one to nine to select the best possible result [169]. After achieving maximum metrics with k = 1, the authors compared the kNN learner to a k-nearest neighbor with a genetic algorithm (GA), an ANN with GA, and an adaptive neuro-fuzzy inference system, with the kNN–GA (k = 1) delivering the best performance [170]. The implementation of decision trees using images acquired by confocal laser scanning microscopy was also well documented by the same research group. Wavelet transform features were extracted and used for classification, obtaining SN and SP values of 97.59% and 96.32%, respectively [171,172]. Later, the same group tested a similar approach with a different training and testing set, but with unsatisfactory results (SN = 44.32% and SP = 53.29%) [173]. X-polarized and transillumination epiluminescence microscopy images enabled the extraction of textural information for the identification of malignant lesions with 70% ACC, using an SVM with a 4th order polynomial kernel [174]. The implementation of collaborative methods also yielded interesting results. Masood et al. [175] constructed a deep belief NN in parallel with an advised SVM, combining their results with least squares estimation weighting to achieve ACC, SN, and SP values of 91.6%, 95%, and 93%, respectively. Ruiz et al. [176] also combined the outcomes of different classifiers (kNN, ANN, and naïve Bayes) using a vote-based system. The best result was obtained with k = 7 and a hidden layer with seven neurons for the kNN and ANN learners, respectively. Noroozi et al.
focused on the distinction of SCC from other lesions, first using features extracted from intensity profiles (naïve Bayes, ACC = 83.3%, SN = 84.6%, SP = 81.8%) [177] and then using Z-transform coefficients (SVM, ACC = 85.18%, SN = 91.66%, SP = 80%) [178]. The use of clinical data, e.g., patient age or lesion location, to assist skin cancer diagnosis using microscopy was reported by Hohn et al. [179]. The authors tested the implementation of a CNN to investigate which strategy delivered the best results: the use of individual patient or image data, or the fusion of both. In most cases, CNN performance with only the microscopy image information was sufficient to obtain a top performance. A CNN with gradient-weighted class activation mapping (Grad–CAM) outperformed 20 pathologists (ACC = 93.3% vs. ACC = 73.2%). The inclusion of Grad–CAM improved physicians’ trust in classification results because the saliency map showed the exact areas of the images where features were extracted, which matched normally accepted histological features. Finally, some recent reports have described the implementation of Raman confocal microscopy or hyperspectral microscopy imaging, retrieving data from different image sections that are then fed to a CNN-based classifier, e.g., ResNet50 [180] or U-Net [181], or to an RF learner [182]. Figure 3 presents the percentage of records that used SVM, ANN, DL approaches, and other ML techniques with microscopy images.
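The odd-k selection strategy attributed to Odeh et al. can be sketched as a simple cross-validated sweep; the dataset here is again scikit-learn's built-in stand-in, not the original microscopy data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in binary lesion data

results = {}
for k in range(1, 10, 2):  # odd k from 1 to 9
    model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    results[k] = cross_val_score(model, X, y, cv=5).mean()

best_k = max(results, key=results.get)  # keep the k with the best CV accuracy
print(best_k, round(results[best_k], 3))
```

Odd values of k avoid tied votes in binary classification, which is why sweeps like this conventionally skip the even values.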

3.2.3. Spectroscopy Imaging Techniques

The extraction of features from spectroscopy data for effective skin lesion characterization and classification has also been documented [183,184,185,186,187,188,189]. Several learners, namely an ANN (with six hidden units, computed as half the number of inputs plus half the number of outputs), a kNN (k = 3), and naïve Bayes, were compared by Li et al. in the diagnosis of melanoma using a spectroscopy device. Statistical features were computed based on image pixel intensities, and naïve Bayes proved to be the most suited for the classification task [190]. Likewise, Maciel et al. chose to test a kNN with three neighbors to achieve the best possible distinction between BCC, psoriasis, Bowen’s disease, and skin exposed to the sun. The authors attested that the SN and SP values did not vary greatly with an increase in the number of neighbors, indicating the classifier’s stability [191]. Tomatis et al. presented a multispectral approach that collects features at different wavelengths. The first study delivered results of SN = 78% and SP = 76% with a backpropagation neural network, while the second delivered SN and SP values of 80.4% and 90%, respectively, with a multilayer perceptron, mostly due to major improvements in the image analysis stage [192,193]. The use of an ensemble classifier for spectroscopy was mentioned by only one author [194]. Electrical impedance spectra and clinical information were used as inputs for ANN, kNN, SVM, and partial least squares classifiers, leading to high SN but low SP values. Figure 4 presents the percentage of records that used SVM, ANN, DL approaches, and other ML techniques with spectroscopy images.
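A minimal sketch of this style of pipeline computes per-image statistical features from pixel intensities (mean, spread, extremes, entropy) and feeds them to naïve Bayes and a kNN with three neighbors. The single-band "spectral" images below are synthetic stand-ins; the feature list is illustrative, not the exact set used by Li et al.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)

def intensity_features(img):
    """Simple per-image statistics of pixel intensities in [0, 1]."""
    p = img.ravel()
    hist, _ = np.histogram(p, bins=16, range=(0, 1))
    prob = hist / hist.sum()
    entropy = -np.sum(prob[prob > 0] * np.log2(prob[prob > 0]))
    return [p.mean(), p.std(), p.min(), p.max(), entropy]

# Synthetic stand-ins for single-band images of two lesion classes
imgs = [np.clip(rng.normal(0.4 + 0.15 * c, 0.1 + 0.05 * c, (32, 32)), 0, 1)
        for c in (0, 1) for _ in range(80)]
y = np.array([c for c in (0, 1) for _ in range(80)])
X = np.array([intensity_features(im) for im in imgs])

for name, clf in [("NB", GaussianNB()), ("kNN(3)", KNeighborsClassifier(n_neighbors=3))]:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```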

3.2.4. Macroscopy Imaging

While there is a great tendency toward the development of AI systems for the classification of images acquired using handheld devices, some authors have focused their research on the development of CAD tools for the characterization of melanoma lesions using macro images. As in dermoscopy, SVM-based algorithms are preferred, with SN metrics ranging from 80% [195] to 100% [196], and most previous research has focused on feature extraction strategies [197]. Tabatabaie et al. [198] used an independent component analysis to extract features that described the structure and color of tumors and used a radial basis function kernel (γ = 8) for classification. The same kernel was selected by Gautam (C = 2.3784, γ = 3.3636) in an approach that counteracted the nonuniform illumination caused during image collection (ACC = 76.58%, SN = 82%, SP = 70.33%) [199]. The linear kernel preferred by Przystalski et al. [200] represented each lesion in four different color spaces to achieve an ACC of 97.44%. Similarly, Amelard et al. [201] obtained ACC, SN, and SP values of 83.59%, 96.64%, and 73.45% with the same kernel function, using high-level intuitive features to describe skin lesions. Other feature extraction strategies for SVM classification include the top ABCD features, LBP, and 2D histograms [202,203,204,205,206]. The implementation of neural networks on macroscopy images was performed by Abbes and Sellami [207] using a random 66%/34% training/testing split, obtaining ACC, SN, and SP values of 94%, 92.85%, and 95.45%, respectively. Jafari et al. [208] constructed a set of color features based on pigmentation intensity and fuzzy c-means clustering based on color variations. The results were good but required improvements. A set of decision tree and k-nearest neighbor learners were combined to allow melanoma detection with classification metrics close to 95% with standard camera images.
Only one neighbor was used with the Euclidean distance, and each tree was specialized in a feature subset [209]. Ensemble strategies have also been used in digital photography. Choudhury et al. [210] exemplified a two-stage classification, with the primary detection of melanoma lesions conducted using an SVM and an extreme learning machine (ACC = 97.1%), and the differentiation between SCC, BCC, AK, and melanoma based on the best classifier of the first stage (94.18%). An ensemble comprising three SVM algorithms (radial basis function kernel, C = 1024, γ = 8.63 × 10⁻⁵) was proposed by Takruri et al. [211]. The results were combined with majority voting and probability averaging fusion to obtain an ACC of 82.5%. Figure 5 shows the percentage of records that used SVM, ANN, DL approaches, and other ML techniques with macroscopy images.
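Majority voting over several SVMs, as in the Takruri et al. ensemble, can be sketched with scikit-learn's `VotingClassifier`. The dataset and hyperparameters below are illustrative stand-ins, not those of the original study.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in binary lesion data

# Three RBF-kernel SVMs with different regularization strengths
members = [(f"svm{i}", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=c)))
           for i, c in enumerate([0.5, 1.0, 1024.0])]
ensemble = VotingClassifier(members, voting="hard")  # hard = majority voting

score = cross_val_score(ensemble, X, y, cv=5).mean()
print(round(score, 3))
```

With `voting="soft"`, class probabilities would be averaged instead, which corresponds to the probability-averaging fusion also mentioned in the study (and requires `SVC(probability=True)`).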

3.2.5. Other Imaging Modalities

The classification of skin tumors has also been performed using other imaging modalities in conjunction with ML. The use of infrared thermal imaging was explored by Magalhaes et al. [212], where thermal features retrieved from static and dynamic thermal images were computed and used as inputs to test the classification ability of different learners. The best results for the distinction of melanoma and nevi were obtained using SVM. Due to the shortage of thermal image data, the same research group experimented with different data augmentation strategies to successfully implement a DL approach and increase the previously achieved diagnostic accuracy [213]. The intensity, standard deviation, and entropy of regions of interest in sonogram images were retrieved and used for the distinction of melanoma, BCC, and benign lesions by Kia et al. [214]. The authors implemented a multilayer perceptron (log-sigmoid transfer function in layer 1 and hyperbolic sigmoid transfer functions in layers 2 and 3) to achieve the maximum SN value (98%), but with a very low SP value (5%). Ding et al. [215] combined 3D and 2D ABCD features of images collected using a photometric stereo device. The SVM with a multilayer classification kernel obtained ACC, SN, and SP results of 87.8%, 94%, and 83.3%, respectively. More recently, Faita et al. [216] presented the differentiation of melanoma and melanocytic nevus using ultrasonographic images. Morphological parameters and texture and color features delivered the best classification results with a k-nearest neighbor learner (ACC = 76.9%, SN = 84%, and SP = 70%), representing another promising non-invasive approach. Figure 6 presents the percentage of records that used SVM, ANN, DL approaches, and other ML techniques with these other imaging modalities.

4. Discussion

It is clear from the results encountered in the qualitative analysis that, regardless of the selected imaging modality, most classification tasks have been performed by SVM and/or ANN algorithms, with a recent focus on CNNs, e.g., [100,101,102,107,108,109,110,111,112,120]. Other learners, such as kNN, decision trees, and AdaBoost, have also been explored, but with less emphasis. When there is doubt regarding the best ML method for the proposed classification task, a comparison of different types of learners is a valid strategy frequently used to yield the best results, e.g., [23,39,151,156,158,160,170,176,190,200]. To improve the accuracy of this prediction, some studies have implemented more elaborate strategies based on ensemble models, to the detriment of single learners, e.g., [39,40,41,42,43,44,45,175,176,194,195]. The use of deep learning-based methods is also common; however, such methods require the availability of a large dataset of labeled data [28,29,36,53,55,64,67,81,100,111,112,120,121,122,125,128,175,204]. In fact, most authors have emphasized the importance of freely available databases (e.g., ISIC 2016, 2017, and 2018, PH2, DermIS, and MED-NODE) and the full disclosure of the dataset used, in order to allow the comparison of the developed classifier and/or image analysis process with those from the literature [25,26,27,30,31,32,34,36,37,51,53,54,55,81,89,111,112,120,121,122,125,150,159,163,192,193].
A detailed description of the classification parameters and training and testing conditions is missing in some studies, likely because standard models available in ML libraries were used. Based on the results, the most popular tool for image analysis and classification was MATLAB, a proprietary software package [22,25,26,28,38,54,111,125,128,165,183,191,196,200,208,215], with only a few authors developing their work on free, open-source platforms. To facilitate the implementation of CAD systems in daily clinical practice, the use of open-source tools should be favored.
Most studies have focused on the optimization and innovation of the feature extraction stage, suggesting that this stage is probably more decisive than the development and/or selection of the optimal classification approach [23,39,45,168,174,195,201,204,209,211]. Different results can be achieved even with the same test samples, depending on the features extracted; thus, further development of this stage appears to be a future trend.
Some studies report only a single parameter representative of a learner's performance, which hampers comparison between studies [27,39,40,46,100,112,174,184,200,210]. Even when several metrics are reported, such comparisons must be interpreted with caution, because the datasets used can differ between studies.
Lastly, it is important to guarantee an even balance between the correct identification of benign cases (SP) and cancerous tumors (SN) [50,55,121,159,194,212,214]. Moreover, the shortcomings of a given imaging modality can be compensated for by merging different types of information. The establishment of standard metrics to evaluate the performance of classifiers is of interest, as it would allow some degree of comparison. Table 2 summarizes the main findings encountered and their respective references.
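The ACC/SN/SP triplet used throughout this review follows directly from a binary confusion matrix. The illustrative sketch below (not drawn from any cited study) also shows why a single metric is insufficient: a degenerate classifier that labels every lesion malignant scores a perfect SN while its SP collapses to zero.

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """ACC, SN (recall on malignant = 1), and SP (recall on benign = 0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    sn = tp / (tp + fn)  # sensitivity
    sp = tn / (tn + fp)  # specificity
    return acc, sn, sp

# Everything predicted malignant: SN looks perfect, SP reveals the failure.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
acc, sn, sp = diagnostic_metrics(y_true, [1] * 8)
```

Reporting all three values (plus the dataset used) is what makes classifiers from different studies at least partially comparable.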

5. Conclusions

The use of AI strategies based on ML methods is increasingly common in skin cancer assessment using characteristic image features. This review covered multiple non-invasive imaging modalities combined with AI. There is a clear preference for SVMs and ANNs, whether implemented alone or in ensemble methods, because of their good results at lower computational cost; deep learning approaches have also gained ground because of the exceptional classification metrics associated with their use. The optimization and innovation of the feature extraction stage is a clear topic for future work in most records, because it appears to be the key to improving diagnostic accuracy. Additional testing and the comparison of different methodologies on the same dataset would also benefit research in this area. Thus, the construction of freely available databases of skin cancer images from different imaging modalities is of great interest for future work. Ultimately, the use of AI methods for skin cancer diagnosis is an expanding field, with accurate early detection being the primary means of improving patient prognosis [39,45,168].

Author Contributions

Conceptualization, R.V. and C.M.; methodology, C.M.; validation, J.G.M.; formal analysis, R.V. and J.G.M.; investigation, R.V. and C.M.; data curation, R.V. and C.M.; writing—original draft preparation, R.V. and C.M.; writing—review and editing, R.V., J.G.M. and C.M.; visualization, C.M.; supervision, R.V. and J.G.M.; project administration, R.V. and J.G.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Project LAETA [grant numbers UIDB/50022/2020, UIDP/50022/2020] and the PhD Scholarship supported by FCT (national funds through Ministério da Ciência, Tecnologia e Ensino Superior (MCTES)) and co-funded by ESF through the Programa Operacional Regional do Norte (NORTE 2020) (EU funds)) [grant number SFRH/BD/144906/2019].

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

SCC: Squamous cell carcinoma
BCC: Basal cell carcinoma
AI: Artificial intelligence
IRT: Infrared thermal
ML: Machine learning
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PROSPERO: International database of prospectively registered systematic reviews in health and social care, welfare, public health, education, crime, justice, and international development, where there is a health-related outcome
ISI: Academic citation indexing and search service, combined with web linking, provided by Thomson Reuters
SVM: Support vector machine
LBP: Local binary patterns
ACC: Accuracy
SN: Sensitivity
SP: Specificity
ANN: Artificial neural network
BIMF: Bidimensional intrinsic mode function
GAN: Generative adversarial network
CNN: Convolutional neural network
DL: Deep learning
RF: Random forest
kNN: k-nearest neighbor
CAD: Computer-aided diagnosis
GA: Genetic algorithm
Grad-CAM: Gradient-weighted class activation mapping

References

  1. Hunter, J.; Savin, J.; Dahl, M. Skin Tumours. In Clinical Dermatology, 3rd ed.; Blackwell Science: Hoboken, NJ, USA, 2002; pp. 253–282. ISBN 0632059168. [Google Scholar]
  2. Crowley, L.V. Neoplastic Disease. In An Introduction to Human Disease: Pathology and Pathophysiology Correlations, 9th ed.; Jones and Bartlett Learning: Burlington, MA, USA, 2013; pp. 192–209. [Google Scholar]
  3. The Global Cancer Observatory. 2022. Available online: https://gco.iarc.fr/ (accessed on 14 August 2024).
  4. Glazer, A.M.; Rigel, D.S.; Winkelmann, R.R.; Farberg, A.S. Clinical Diagnosis of Skin Cancer. Dermatol. Clin. 2017, 35, 409–416. [Google Scholar] [CrossRef] [PubMed]
  5. Massone, C.; Di Stefani, A.; Soyer, H.P. Dermoscopy for skin cancer detection. Curr. Opin. Oncol. 2005, 17, 147–153. [Google Scholar] [CrossRef] [PubMed]
  6. Apalla, Z.; Lallas, A.; Sotiriou, E.; Lazaridou, E.; Ioannides, D. Managing Skin Cancer; Springer: Berlin/Heidelberg, Germany, 2010; Volume 7, ISBN 978-3-540-79346-5. [Google Scholar]
  7. MacFarlane, D.F. Biopsy Techniques and Interpretation. In Skin Cancer Management: A Practical Approach; Springer: Berlin/Heidelberg, Germany, 2010; Chapter 10; pp. 1–8. ISBN 9780387884943. [Google Scholar]
  8. Heibel, H.D.; Hooey, L.; Cockerell, C.J. A Review of Noninvasive Techniques for Skin Cancer Detection in Dermatology. Am. J. Clin. Dermatol. 2020, 21, 513–524. [Google Scholar] [CrossRef] [PubMed]
  9. Narayanamurthy, V.; Padmapriya, P.; Noorasafrin, A.; Pooja, B.; Hema, K.; Firus Khan, A.Y.; Nithyakalyani, K.; Samsuri, F. Skin cancer detection using non-invasive techniques. RSC Adv. 2018, 8, 28095–28130. [Google Scholar] [CrossRef]
  10. Kulkarni, S.; Seneviratne, N.; Baig, M.S.; Khan, A.H.A. Artificial Intelligence in Medicine: Where Are We Now? Acad. Radiol. 2020, 27, 62–70. [Google Scholar] [CrossRef]
  11. Ngiam, K.Y.; Khor, I.W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273. [Google Scholar] [CrossRef]
  12. Cormier, J.; Voss, R.; Woods, T.; Cromwell, K.; Nelson, K. Improving outcomes in patients with melanoma: Strategies to ensure an early diagnosis. Patient Relat. Outcome Meas. 2015, 6, 229–242. [Google Scholar] [CrossRef]
  13. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  14. Tan, T.Y.; Zhang, L.; Jiang, M. An intelligent decision support system for skin cancer detection from dermoscopic images. In Proceedings of the 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Changsha, China, 13–15 August 2016; pp. 2194–2199. [Google Scholar] [CrossRef]
  15. Jaworek-Korjakowska, J. Computer-Aided Diagnosis of Micro-Malignant Melanoma Lesions Applying Support Vector Machines. Biomed Res. Int. 2016, 2016, 4381972. [Google Scholar] [CrossRef]
  16. Poovizhi, S.; Ganesh Babu, T.R. An Efficient Skin Cancer Diagnostic System Using Bendlet Transform and Support Vector Machine. An. Acad. Bras. Cienc. 2020, 92, e20190554. [Google Scholar] [CrossRef]
  17. Sethy, P.K.; Behera, S.K.; Kannan, N. Categorization of Common Pigmented Skin Lesions (CPSL) using Multi-Deep Features and Support Vector Machine. J. Digit. Imaging 2022, 35, 1207–1216. [Google Scholar] [CrossRef] [PubMed]
  18. Yang, S.; Shu, C.; Hu, H.; Ma, G.; Yang, M. Dermoscopic Image Classification of Pigmented Nevus under Deep Learning and the Correlation with Pathological Features. Comput. Math. Methods Med. 2022, 2022, 9726181. [Google Scholar] [CrossRef]
  19. Wei, L.; Pan, S.X.; Nanehkaran, Y.A.; Rajinikanth, V. An Optimized Method for Skin Cancer Diagnosis Using Modified Thermal Exchange Optimization Algorithm. Comput. Math. Methods Med. 2021, 2021, 5527698. [Google Scholar] [CrossRef]
  20. Codella, N.C.F.; Nguyen, Q.B.; Pankanti, S.; Gutman, D.A.; Helba, B.; Halpern, A.C.; Smith, J.R. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J. Res. Dev. 2017, 61, 5:1–5:15. [Google Scholar] [CrossRef]
  21. Sabbaghi Mahmouei, S.; Aldeen, M.; Stoecker, W.V.; Garnavi, R. Biologically Inspired QuadTree Color Detection in Dermoscopy Images of Melanoma. IEEE J. Biomed. Health Inform. 2019, 23, 570–577. [Google Scholar] [CrossRef]
  22. Alamri, A.; Alsaeed, D. On the development of a skin cancer computer aided diagnosis system using support vector machine. Biosci. Biotechnol. Res. Commun. 2019, 12, 297–308. [Google Scholar] [CrossRef]
  23. Vasconcelos, M.J.M.; Rosado, L.; Ferreira, M. A new color assessment methodology using cluster-based features for skin lesion analysis. In Proceedings of the 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 25–29 May 2015; pp. 373–378. [Google Scholar] [CrossRef]
  24. Almansour, E.; Jaffar, M.A. Classification of Dermoscopic Skin Cancer Images Using Color and Hybrid Texture Features. IJCSNS Int. J. Comput. Sci. Netw. Secur. 2016, 16, 135–139. [Google Scholar]
  25. Khan, M.A.; Akram, T.; Sharif, M.; Saba, T.; Javed, K.; Lali, I.U.; Tanik, U.J.; Rehman, A. Construction of saliency map and hybrid set of features for efficient segmentation and classification of skin lesion. Microsc. Res. Tech. 2019, 82, 741–763. [Google Scholar] [CrossRef] [PubMed]
  26. Khan, M.A.; Akram, T.; Sharif, M.; Shahzad, A.; Aurangzeb, K.; Alhussein, M.; Haider, S.I.; Altamrah, A. An implementation of normal distribution based segmentation and entropy controlled features selection for skin lesion detection and classification. BMC Cancer 2018, 18, 638. [Google Scholar] [CrossRef]
  27. Rehman, A.; Khan, M.A.; Mehmood, Z.; Saba, T.; Sardaraz, M.; Rashid, M. Microscopic melanoma detection and classification: A framework of pixel-based fusion and multilevel features reduction. Microsc. Res. Tech. 2020, 83, 410–423. [Google Scholar] [CrossRef]
  28. Masood, A.; Al-Jumaily, A. SA-SVM based automated diagnostic system for skin cancer. In Proceedings of the Sixth International Conference on Graphic and Image Processing (ICGIP 2014), Beijing, China, 24–26 October 2014; SPIE: Sydney, Australia, 2015; Volume 9443. [Google Scholar] [CrossRef]
  29. Masood, A.; Al-Jumaily, A.; Anam, K. Self-supervised learning model for skin cancer diagnosis. In Proceedings of the 2015 7th International IEEE/EMBS Conference on Neural Engineering (NER), Montpellier, France, 22–24 April 2015. [Google Scholar] [CrossRef]
  30. Celebi, M.E.; Kingravi, H.A.; Uddin, B.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imaging Graph. 2007, 31, 362–373. [Google Scholar] [CrossRef] [PubMed]
  31. Tajeddin, N.Z.; Asl, B.M. Melanoma recognition in dermoscopy images using lesion’s peripheral region information. Comput. Methods Programs Biomed. 2018, 163, 143–153. [Google Scholar] [CrossRef]
  32. Kalwa, U.; Legner, C.; Kong, T.; Pandey, S. Skin cancer diagnostics with an all-inclusive smartphone application. Symmetry 2019, 11, 790. [Google Scholar] [CrossRef]
  33. Wahba, M.A.; Ashour, A.S.; Napoleon, S.A.; Abd Elnaby, M.M.; Guo, Y. Combined empirical mode decomposition and texture features for skin lesion classification using quadratic support vector machine. Health Inf. Sci. Syst. 2017, 5, 10. [Google Scholar] [CrossRef]
  34. Chatterjee, S.; Dey, D.; Munshi, S. Integration of morphological preprocessing and fractal based feature extraction with recursive feature elimination for skin lesion types classification. Comput. Methods Programs Biomed. 2019, 178, 201–218. [Google Scholar] [CrossRef]
  35. Gilmore, S.J. Automated decision support in melanocytic lesion management. PLoS ONE 2018, 13, e0203459. [Google Scholar] [CrossRef] [PubMed]
  36. Mahbod, A.; Schaefer, G.; Ellinger, I.; Ecker, R.; Pitiot, A.; Wang, C. Fusing fine-tuned deep features for skin lesion classification. Comput. Med. Imaging Graph. 2019, 71, 19–29. [Google Scholar] [CrossRef] [PubMed]
  37. Joseph, S.; Panicker, J.R. Skin lesion analysis system for melanoma detection with an effective hair segmentation method. In Proceedings of the 2016 International Conference on Information Science (ICIS), Kochi, India, 12–13 August 2016; pp. 91–96. [Google Scholar] [CrossRef]
  38. Suganya, R. An automated computer aided diagnosis of skin lesions detection and classification for dermoscopy images. In Proceedings of the 2016 International Conference on Recent Trends in Information Technology (ICRTIT), Chennai, India, 8–9 April 2016; pp. 1–5. [Google Scholar] [CrossRef]
  39. Rahman, M.M.; Bhattacharya, P. An integrated and interactive decision support system for automated melanoma recognition of dermoscopic images. Comput. Med. Imaging Graph. 2010, 34, 479–486. [Google Scholar] [CrossRef]
  40. Faal, M.; Miran Baygi, M.H.; Kabir, E. Improving the diagnostic accuracy of dysplastic and melanoma lesions using the decision template combination method. Ski. Res. Technol. 2013, 19, 113–122. [Google Scholar] [CrossRef]
  41. Schaefer, G.; Krawczyk, B.; Celebi, M.E.; Iyatomi, H.; Hassanien, A.E. Melanoma Classification Based on Ensemble Classification of Dermoscopy Image Features. In Advanced Machine Learning Technologies and Applications, Proceedings of the AMLTA 2014, Cairo, Egypt, 28–30 November 2014; Communications in Computer and Information Science; Hassanien, A.E., Tolba, M.F., Taher Azar, A., Eds.; Springer: Cham, Switzerland, 2014; pp. 291–298. ISBN 9783319134604. [Google Scholar]
  42. Abbas, Q.; Sadaf, M.; Akram, A. Prediction of Dermoscopy Patterns for Recognition of both Melanocytic and Non-Melanocytic Skin Lesions. Computers 2016, 5, 13. [Google Scholar] [CrossRef]
  43. Xie, F.; Fan, H.; Li, Y.; Jiang, Z.; Meng, R.; Bovik, A. Melanoma classification on dermoscopy images using a neural network ensemble model. IEEE Trans. Med. Imaging 2017, 36, 849–858. [Google Scholar] [CrossRef] [PubMed]
  44. Oliveira, R.B.; Pereira, A.S.; Tavares, J.M.R.S. Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation. Comput. Methods Programs Biomed. 2017, 149, 43–53. [Google Scholar] [CrossRef] [PubMed]
  45. Castillejos-Fernández, H.; López-Ortega, O. An Intelligent System for the Diagnosis of Skin Cancer on Digital Images taken with Dermoscopy. Acta Polytech. Hung. 2017, 14, 169–185. [Google Scholar]
  46. Aswin, R.B.; Jaleel, J.A.; Salim, S. Hybrid genetic algorithm—Artificial neural network classifier for skin cancer detection. In Proceedings of the 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), Kanyakumari, India, 10–11 July 2014; pp. 1304–1309. [Google Scholar] [CrossRef]
  47. Huang, H.; Hsu, B.W.; Lee, C.; Tseng, V.S. Development of a light-weight deep learning model for cloud applications and remote diagnosis of skin cancers. J. Dermatol. 2021, 48, 310–316. [Google Scholar] [CrossRef]
  48. Iqbal, I.; Younus, M.; Walayat, K.; Kakar, M.U.; Ma, J. Automated multi-class classification of skin lesions through deep convolutional neural network with dermoscopic images. Comput. Med. Imaging Graph. 2021, 88, 101843. [Google Scholar] [CrossRef]
  49. Mahmoud, N.M.; Soliman, A.M. Early automated detection system for skin cancer diagnosis using artificial intelligent techniques. Sci. Rep. 2024, 14, 9749. [Google Scholar] [CrossRef]
  50. Messadi, M.; Bessaid, A.; Taleb-Ahmed, A. New characterization methodology for skin tumors classification. J. Mech. Med. Biol. 2010, 10, 467–477. [Google Scholar] [CrossRef]
  51. Grochowski, M.; Mikołajczyk, A.; Kwasigroch, A. Diagnosis of malignant melanoma by neural network ensemble-based system utilising hand-crafted skin lesion features. Metrol. Meas. Syst. 2019, 26, 65–80. [Google Scholar] [CrossRef]
  52. Samsudin, S.S.; Arof, H.; Harun, S.W.; Abdul Wahab, A.W.; Idris, M.Y.I. Skin lesion classification using multi-resolution empirical mode decomposition and local binary pattern. PLoS ONE 2022, 17, e0274896. [Google Scholar] [CrossRef]
  53. Abbas, Q.; Celebi, M.E. DermoDeep-A classification of melanoma-nevus skin lesions using multi-feature fusion of visual features and deep neural network. Multimed. Tools Appl. 2019, 78, 23559–23580. [Google Scholar] [CrossRef]
  54. Divya, D.; Ganeshbabu, T.R. Fitness adaptive deer hunting-based region growing and recurrent neural network for melanoma skin cancer detection. Int. J. Imaging Syst. Technol. 2020, 30, 731–752. [Google Scholar] [CrossRef]
  55. Harangi, B. Skin lesion classification with ensembles of deep convolutional neural networks. J. Biomed. Inform. 2018, 86, 25–32. [Google Scholar] [CrossRef]
  56. Gessert, N.; Sentker, T.; Madesta, F.; Schmitz, R.; Kniep, H.; Baltruschat, I.; Werner, R.; Schlaefer, A. Skin Lesion Classification Using CNNs With Patch-Based Attention and Diagnosis-Guided Loss Weighting. IEEE Trans. Biomed. Eng. 2020, 67, 495–503. [Google Scholar] [CrossRef] [PubMed]
  57. Mahbod, A.; Schaefer, G.; Wang, C.; Dorffner, G.; Ecker, R.; Ellinger, I. Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification. Comput. Methods Programs Biomed. 2020, 193, 105475. [Google Scholar] [CrossRef]
  58. Wu, J.; Hu, W.; Wen, Y.; Tu, W.; Liu, X. Skin Lesion Classification Using Densely Connected Convolutional Networks with Attention Residual Learning. Sensors 2020, 20, 7080. [Google Scholar] [CrossRef]
  59. Hekler, A.; Kather, J.N.; Krieghoff-Henning, E.; Utikal, J.S.; Meier, F.; Gellrich, F.F.; Upmeier zu Belzen, J.; French, L.; Schlager, J.G.; Ghoreschi, K.; et al. Effects of Label Noise on Deep Learning-Based Skin Cancer Classification. Front. Med. 2020, 7, 177. [Google Scholar] [CrossRef]
  60. Qian, S.; Ren, K.; Zhang, W.; Ning, H. Skin lesion classification using CNNs with grouping of multi-scale attention and class-specific loss weighting. Comput. Methods Programs Biomed. 2022, 226, 107166. [Google Scholar] [CrossRef] [PubMed]
  61. Cao, X.; Pan, J.-S.; Wang, Z.; Sun, Z.; ul Haq, A.; Deng, W.; Yang, S. Application of generated mask method based on Mask R-CNN in classification and detection of melanoma. Comput. Methods Programs Biomed. 2021, 207, 106174. [Google Scholar] [CrossRef]
  62. Angeline, J.; Siva Kailash, A.; Karthikeyan, J.; Karthika, R.; Saravanan, V. Automated Prediction of Malignant Melanoma using Two-Stage Convolutional Neural Network. Arch. Dermatol. Res. 2024, 316, 275. [Google Scholar] [CrossRef]
  63. Almufareh, M.F.; Tariq, N.; Humayun, M.; Khan, F.A. Melanoma identification and classification model based on fine-tuned convolutional neural network. Digit. Health 2024, 10, 20552076241253756. [Google Scholar] [CrossRef]
  64. Moura, N.; Veras, R.; Aires, K.; Machado, V.; Silva, R.; Araújo, F.; Claro, M. ABCD rule and pre-trained CNNs for melanoma diagnosis. Multimed. Tools Appl. 2019, 78, 6869–6888. [Google Scholar] [CrossRef]
  65. Sayed, G.I.; Soliman, M.M.; Hassanien, A.E. A novel melanoma prediction model for imbalanced data using optimized SqueezeNet by bald eagle search optimization. Comput. Biol. Med. 2021, 136, 104712. [Google Scholar] [CrossRef]
  66. Ruga, T.; Musacchio, G.; Maurmo, D. An Ensemble Architecture for Melanoma Classification. Stud. Health Technol. Inform. 2024, 314, 183–184. [Google Scholar] [PubMed]
  67. Tschandl, P.; Rosendahl, C.; Akay, B.N.; Argenziano, G.; Blum, A.; Braun, R.P.; Cabo, H.; Gourhant, J.Y.; Kreusch, J.; Lallas, A.; et al. Expert-Level Diagnosis of Nonpigmented Skin Cancer by Combined Convolutional Neural Networks. JAMA Dermatol. 2019, 155, 58–65. [Google Scholar] [CrossRef]
  68. Aladhadh, S.; Alsanea, M.; Aloraini, M.; Khan, T.; Habib, S.; Islam, M. An Effective Skin Cancer Classification Mechanism via Medical Vision Transformer. Sensors 2022, 22, 4008. [Google Scholar] [CrossRef]
  69. Abdelhafeez, A.; Mohamed, H.K.; Maher, A.; Khalil, N.A. A novel approach toward skin cancer classification through fused deep features and neutrosophic environment. Front. Public Health 2023, 11, 1123581. [Google Scholar] [CrossRef]
  70. Pérez, E.; Ventura, S. Progressive growing of Generative Adversarial Networks for improving data augmentation and skin cancer diagnosis. Artif. Intell. Med. 2023, 141, 102556. [Google Scholar] [CrossRef]
  71. Maurya, A.; Stanley, R.J.; Aradhyula, H.Y.; Lama, N.; Nambisan, A.K.; Patel, G.; Saeed, D.; Swinfard, S.; Smith, C.; Jagannathan, S.; et al. Basal Cell Carcinoma Diagnosis with Fusion of Deep Learning and Telangiectasia Features. J. Imaging Inform. Med. 2024, 37, 1137–1150. [Google Scholar] [CrossRef]
  72. Dahou, A.; Aseeri, A.O.; Mabrouk, A.; Ibrahim, R.A.; Al-Betar, M.A.; Elaziz, M.A. Optimal Skin Cancer Detection Model Using Transfer Learning and Dynamic-Opposite Hunger Games Search. Diagnostics 2023, 13, 1579. [Google Scholar] [CrossRef] [PubMed]
  73. Tahir, M.; Naeem, A.; Malik, H.; Tanveer, J.; Naqvi, R.A.; Lee, S.-W. DSCC_Net: Multi-Classification Deep Learning Models for Diagnosing of Skin Cancer Using Dermoscopic Images. Cancers 2023, 15, 2179. [Google Scholar] [CrossRef]
  74. Qasim Gilani, S.; Syed, T.; Umair, M.; Marques, O. Skin Cancer Classification Using Deep Spiking Neural Network. J. Digit. Imaging 2023, 36, 1137–1147. [Google Scholar] [CrossRef] [PubMed]
  75. Wang, Y.; Su, J.; Xu, Q.; Zhong, Y. A Collaborative Learning Model for Skin Lesion Segmentation and Classification. Diagnostics 2023, 13, 912. [Google Scholar] [CrossRef]
  76. Raghavendra, P.V.S.P.; Charitha, C.; Begum, K.G.; Prasath, V.B.S. Deep Learning–Based Skin Lesion Multi-class Classification with Global Average Pooling Improvement. J. Digit. Imaging 2023, 36, 2227–2248. [Google Scholar] [CrossRef] [PubMed]
  77. Anand, V.; Gupta, S.; Altameem, A.; Nayak, S.R.; Poonia, R.C.; Saudagar, A.K.J. An Enhanced Transfer Learning Based Classification for Diagnosis of Skin Cancer. Diagnostics 2022, 12, 1628. [Google Scholar] [CrossRef] [PubMed]
  78. Alam, T.M.; Shaukat, K.; Khan, W.A.; Hameed, I.A.; Almuqren, L.A.; Raza, M.A.; Aslam, M.; Luo, S. An Efficient Deep Learning-Based Skin Cancer Classifier for an Imbalanced Dataset. Diagnostics 2022, 12, 2115. [Google Scholar] [CrossRef]
  79. Montaha, S.; Azam, S.; Rafid, A.K.M.R.H.; Islam, S.; Ghosh, P.; Jonkman, M. A shallow deep learning approach to classify skin cancer using down-scaling method to minimize time and space complexity. PLoS ONE 2022, 17, e0269826. [Google Scholar] [CrossRef]
  80. Li, H.; Li, W.; Chang, J.; Zhou, L.; Luo, J.; Guo, Y. Dermoscopy lesion classification based on GANs and a fuzzy rank-based ensemble of CNN models. Phys. Med. Biol. 2022, 67, 185005. [Google Scholar] [CrossRef]
  81. Albahar, M.A. Skin Lesion Classification Using Convolutional Neural Network with Novel Regularizer. IEEE Access 2019, 7, 38306–38313. [Google Scholar] [CrossRef]
  82. Kaur, R.; GholamHosseini, H.; Sinha, R.; Lindén, M. Melanoma Classification Using a Novel Deep Convolutional Neural Network with Dermoscopic Images. Sensors 2022, 22, 1134. [Google Scholar] [CrossRef]
  83. Afza, F.; Sharif, M.; Mittal, M.; Khan, M.A.; Jude Hemanth, D. A hierarchical three-step superpixels and deep learning framework for skin lesion classification. Methods 2022, 202, 88–102. [Google Scholar] [CrossRef]
  84. Fraiwan, M.; Faouri, E. On the Automatic Detection and Classification of Skin Cancer Using Deep Transfer Learning. Sensors 2022, 22, 4963. [Google Scholar] [CrossRef] [PubMed]
  85. Ghazal, T.M.; Hussain, S.; Khan, M.F.; Khan, M.A.; Said, R.A.T.; Ahmad, M. Detection of Benign and Malignant Tumors in Skin Empowered with Transfer Learning. Comput. Intell. Neurosci. 2022, 2022, 4826892. [Google Scholar] [CrossRef] [PubMed]
  86. Khan, M.A.; Muhammad, K.; Sharif, M.; Akram, T.; de Albuquerque, V.H.C. Multi-Class Skin Lesion Detection and Classification via Teledermatology. IEEE J. Biomed. Health Inform. 2021, 25, 4267–4275. [Google Scholar] [CrossRef] [PubMed]
  87. Vaiyapuri, T.; Balaji, P.; S, S.; Alaskar, H.; Sbai, Z. Computational Intelligence-Based Melanoma Detection and Classification Using Dermoscopic Images. Comput. Intell. Neurosci. 2022, 2022, 2370190. [Google Scholar] [CrossRef] [PubMed]
  88. Wang, Y.; Wang, Y.; Cai, J.; Lee, T.K.; Miao, C.; Wang, Z.J. SSD-KD: A self-supervised diverse knowledge distillation method for lightweight skin lesion classification using dermoscopic images. Med. Image Anal. 2023, 84, 102693. [Google Scholar] [CrossRef]
  89. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE 2019, 14, e0217293. [Google Scholar] [CrossRef]
  90. Lu, X.; Firoozeh Abolhasani Zadeh, Y.A. Deep Learning-Based Classification for Melanoma Detection Using XceptionNet. J. Healthc. Eng. 2022, 2022, 2196096. [Google Scholar] [CrossRef]
  91. Shan, P.; Fu, C.; Dai, L.; Jia, T.; Tie, M.; Liu, J. Automatic skin lesion classification using a new densely connected convolutional network with an SF module. Med. Biol. Eng. Comput. 2022, 60, 2173–2188. [Google Scholar] [CrossRef]
  92. Singh, S.K.; Abolghasemi, V.; Anisi, M.H. Skin Cancer Diagnosis Based on Neutrosophic Features with a Deep Neural Network. Sensors 2022, 22, 6261. [Google Scholar] [CrossRef]
  93. Singh, R.K.; Gorantla, R.; Allada, S.G.R.; Narra, P. SkiNet: A deep learning framework for skin lesion diagnosis with uncertainty estimation and explainability. PLoS ONE 2022, 17, e0276836. [Google Scholar] [CrossRef]
  94. Yao, P.; Shen, S.; Xu, M.; Liu, P.; Zhang, F.; Xing, J.; Shao, P.; Kaffenberger, B.; Xu, R.X. Single Model Deep Learning on Imbalanced Small Datasets for Skin Lesion Classification. IEEE Trans. Med. Imaging 2022, 41, 1242–1254. [Google Scholar] [CrossRef] [PubMed]
  95. Xia, M.; Kheterpal, M.K.; Wong, S.C.; Park, C.; Ratliff, W.; Carin, L.; Henao, R. Lesion identification and malignancy prediction from clinical dermatological images. Sci. Rep. 2022, 12, 15836. [Google Scholar] [CrossRef] [PubMed]
  96. Ali, M.U.; Khalid, M.; Alshanbari, H.; Zafar, A.; Lee, S.W. Enhancing Skin Lesion Detection: A Multistage Multiclass Convolutional Neural Network-Based Framework. Bioengineering 2023, 10, 1430. [Google Scholar] [CrossRef] [PubMed]
  97. Lai, W.; Kuang, M.; Wang, X.; Ghafariasl, P.; Sabzalian, M.H.; Lee, S. Skin cancer diagnosis (SCD) using Artificial Neural Network (ANN) and Improved Gray Wolf Optimization (IGWO). Sci. Rep. 2023, 13, 19377. [Google Scholar] [CrossRef] [PubMed]
  98. Nugroho, E.S.; Ardiyanto, I.; Nugroho, H.A. Boosting the performance of pretrained CNN architecture on dermoscopic pigmented skin lesion classification. Ski. Res. Technol. 2023, 29, e13505. [Google Scholar] [CrossRef]
  99. Abd Elaziz, M.; Dahou, A.; Mabrouk, A.; El-Sappagh, S.; Aseeri, A.O. An Efficient Artificial Rabbits Optimization Based on Mutation Strategy For Skin Cancer Prediction. Comput. Biol. Med. 2023, 163, 107154. [Google Scholar] [CrossRef]
  100. Pacheco, A.G.C.; Krohling, R.A. The impact of patient clinical information on automated skin cancer detection. Comput. Biol. Med. 2020, 116, 103545. [Google Scholar] [CrossRef]
  101. Pacheco, A.G.C.; Krohling, R.A. An Attention-Based Mechanism to Combine Images and Metadata in Deep Learning Models Applied to Skin Cancer Classification. IEEE J. Biomed. Health Inform. 2021, 25, 3554–3563. [Google Scholar] [CrossRef]
  102. Ningrum, D.N.A.; Yuan, S.-P.; Kung, W.-M.; Wu, C.-C.; Tzeng, I.-S.; Huang, C.-Y.; Li, J.Y.-C.; Wang, Y.-C. Deep Learning Classifier with Patient’s Metadata of Dermoscopic Images in Malignant Melanoma Detection. J. Multidiscip. Healthc. 2021, 14, 877–885. [Google Scholar] [CrossRef]
  103. Tognetti, L.; Bonechi, S.; Andreini, P.; Bianchini, M.; Scarselli, F.; Cevenini, G.; Moscarella, E.; Farnetani, F.; Longo, C.; Lallas, A.; et al. A new deep learning approach integrated with clinical data for the dermoscopic differentiation of early melanomas from atypical nevi. J. Dermatol. Sci. 2021, 101, 115–122. [Google Scholar] [CrossRef]
  104. Sun, Q.; Huang, C.; Chen, M.; Xu, H.; Yang, Y. Skin Lesion Classification Using Additional Patient Information. BioMed Res. Int. 2021, 2021, 6673852. [Google Scholar] [CrossRef] [PubMed]
  105. Xing, X.; Song, P.; Zhang, K.; Yang, F.; Dong, Y. ZooME: Efficient Melanoma Detection Using Zoom-in Attention and Metadata Embedding Deep Neural Network. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Online, 1–5 November 2021; pp. 4041–4044. [Google Scholar] [CrossRef]
  106. Kanchana, K.; Kavitha, S.; Anoop, K.J.; Chinthamani, B. Enhancing Skin Cancer Classification using Efficient Net B0-B7 through Convolutional Neural Networks and Transfer Learning with Patient-Specific Data. Asian Pac. J. Cancer Prev. 2024, 25, 1795–1802. [Google Scholar] [CrossRef]
  107. Xin, C.; Liu, Z.; Zhao, K.; Miao, L.; Ma, Y.; Zhu, X.; Zhou, Q.; Wang, S.; Li, L.; Yang, F.; et al. An improved transformer network for skin cancer classification. Comput. Biol. Med. 2022, 149, 105939. [Google Scholar] [CrossRef]
  108. Reis, H.C.; Turk, V.; Khoshelham, K.; Kaya, S. InSiNet: A deep convolutional approach to skin cancer detection and segmentation. Med. Biol. Eng. Comput. 2022, 60, 643–662. [Google Scholar] [CrossRef] [PubMed]
  109. Ravi, V. Attention Cost-Sensitive Deep Learning-Based Approach for Skin Cancer Detection and Classification. Cancers 2022, 14, 5872. [Google Scholar] [CrossRef]
  110. Zhang, Y.; Xie, F.; Song, X.; Zhou, H.; Yang, Y.; Zhang, H.; Liu, J. A rotation meanout network with invariance for dermoscopy image classification and retrieval. Comput. Biol. Med. 2022, 151, 106272. [Google Scholar] [CrossRef]
  111. Khan, M.A.; Akram, T.; Sharif, M.; Javed, K.; Rashid, M.; Bukhari, S.A.C. An integrated framework of skin lesion detection and recognition through saliency method and optimal deep neural network features selection. Neural Comput. Appl. 2019, 32, 15929–15948. [Google Scholar] [CrossRef]
  112. Saba, T.; Khan, M.A.; Rehman, A.; Marie-Sainte, S.L. Region Extraction and Classification of Skin Cancer: A Heterogeneous framework of Deep CNN Features Fusion and Reduction. J. Med. Syst. 2019, 43, 289. [Google Scholar] [CrossRef] [PubMed]
  113. Bakheet, S.; Alsubai, S.; El-Nagar, A.; Alqahtani, A. A Multi-Feature Fusion Framework for Automatic Skin Cancer Diagnostics. Diagnostics 2023, 13, 1474. [Google Scholar] [CrossRef]
  114. Maurya, R.; Mahapatra, S.; Dutta, M.K.; Singh, V.P.; Karnati, M.; Sahu, G.; Pandey, N.N. Skin cancer detection through attention guided dual autoencoder approach with extreme learning machine. Sci. Rep. 2024, 14, 17785. [Google Scholar] [CrossRef]
  115. Naseem, S.; Anwar, M.; Faheem, M.; Fayyaz, M.; Malik, M.S.A. Bayesian-Edge system for classification and segmentation of skin lesions in Internet of Medical Things. Ski. Res. Technol. 2024, 30, e13878. [Google Scholar] [CrossRef] [PubMed]
  116. Attallah, O. Skin-CAD: Explainable deep learning classification of skin cancer from dermoscopic images by feature selection of dual high-level CNNs features and transfer learning. Comput. Biol. Med. 2024, 178, 108798. [Google Scholar] [CrossRef] [PubMed]
  117. Ali, R.; Manikandan, A.; Lei, R.; Xu, J. A novel SpaSA based hyper-parameter optimized FCEDN with adaptive CNN classification for skin cancer detection. Sci. Rep. 2024, 14, 9336. [Google Scholar] [CrossRef]
  118. Zhang, D.; Li, A.; Wu, W.; Yu, L.; Kang, X.; Huo, X. CR-Conformer: A fusion network for clinical skin lesion classification. Med. Biol. Eng. Comput. 2023, 62, 85–94. [Google Scholar] [CrossRef] [PubMed]
  119. Akram, A.; Rashid, J.; Jaffar, M.A.; Faheem, M.; ul Amin, R. Segmentation and classification of skin lesions using hybrid deep learning method in the Internet of Medical Things. Skin Res. Technol. 2023, 29, e13524. [Google Scholar] [CrossRef]
  120. Sarkar, R.; Chatterjee, C.C.; Hazra, A. Diagnosis of melanoma from dermoscopic images using a deep depthwise separable residual convolutional network. IET Image Process. 2019, 13, 2130–2142. [Google Scholar] [CrossRef]
  121. Serte, S.; Demirel, H. Gabor wavelet-based deep learning for skin lesion classification. Comput. Biol. Med. 2019, 113, 103423. [Google Scholar] [CrossRef]
  122. Xie, Y.; Zhang, J.; Xia, Y.; Shen, C. A Mutual Bootstrapping Model for Automated Skin Lesion Segmentation and Classification. IEEE Trans. Med. Imaging 2020, 39, 2482–2493. [Google Scholar] [CrossRef]
  123. Al-masni, M.A.; Kim, D.-H.; Kim, T.-S. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Comput. Methods Programs Biomed. 2020, 190, 105351. [Google Scholar] [CrossRef]
  124. Abdar, M.; Samami, M.; Dehghani Mahmoodabad, S.; Doan, T.; Mazoure, B.; Hashemifesharaki, R.; Liu, L.; Khosravi, A.; Acharya, U.R.; Makarenkov, V.; et al. Uncertainty quantification in skin cancer classification using three-way decision-based Bayesian deep learning. Comput. Biol. Med. 2021, 135, 104418. [Google Scholar] [CrossRef]
  125. Khan, M.A.; Sharif, M.; Akram, T.; Bukhari, S.A.C.; Nayak, R.S. Developed Newton-Raphson based deep features selection framework for skin lesion recognition. Pattern Recognit. Lett. 2020, 129, 293–303. [Google Scholar] [CrossRef]
  126. Khan, M.A.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R. Skin lesion segmentation and multiclass classification using deep learning features and improved moth flame optimization. Diagnostics 2021, 11, 811. [Google Scholar] [CrossRef] [PubMed]
  127. Maqsood, S.; Damaševičius, R. Multiclass skin lesion localization and classification using deep learning based features fusion and selection framework for smart healthcare. Neural Netw. 2023, 160, 238–258. [Google Scholar] [CrossRef] [PubMed]
  128. Zhang, L.; Gao, H.J.; Zhang, J.; Badami, B. Optimization of the Convolutional Neural Networks for Automatic Detection of Skin Cancer. Open Med. 2020, 15, 27–37. [Google Scholar] [CrossRef]
  129. Zunair, H.; Ben Hamza, A. Melanoma detection using adversarial training and deep transfer learning. Phys. Med. Biol. 2020, 65, 135005. [Google Scholar] [CrossRef]
  130. Alenezi, F.; Armghan, A.; Polat, K. A Novel Multi-Task Learning Network Based on Melanoma Segmentation and Classification with Skin Lesion Images. Diagnostics 2023, 13, 262. [Google Scholar] [CrossRef] [PubMed]
  131. Mahum, R.; Aladhadh, S. Skin Lesion Detection Using Hand-Crafted and DL-Based Features Fusion and LSTM. Diagnostics 2022, 12, 2974. [Google Scholar] [CrossRef]
  132. Shinde, R.K.; Alam, M.S.; Hossain, M.B.; Md Imtiaz, S.; Kim, J.; Padwal, A.A.; Kim, N. Squeeze-MNet: Precise Skin Cancer Detection Model for Low Computing IoT Devices Using Transfer Learning. Cancers 2023, 15, 12. [Google Scholar] [CrossRef]
  133. Jain, S.; Singhania, U.; Tripathy, B.; Nasr, E.A.; Aboudaif, M.K.; Kamrani, A.K. Deep Learning-Based Transfer Learning for Classification of Skin Cancer. Sensors 2021, 21, 8142. [Google Scholar] [CrossRef]
  134. Musthafa, M.M.; T R, M.; Kumar V, V.; Guluwadi, S. Enhanced skin cancer diagnosis using optimized CNN architecture and checkpoints for automated dermatological lesion classification. BMC Med. Imaging 2024, 24, 201. [Google Scholar] [CrossRef]
  135. Rasel, M.A.; Abdul Kareem, S.; Kwan, Z.; Yong, S.S.; Obaidellah, U. Bluish veil detection and lesion classification using custom deep learnable layers with explainable artificial intelligence (XAI). Comput. Biol. Med. 2024, 178, 108758. [Google Scholar] [CrossRef] [PubMed]
  136. Arjun, K.P.; Kumar, K.S.; Dhanaraj, R.K.; Ravi, V.; Kumar, T.G. Optimizing time prediction and error classification in early melanoma detection using a hybrid RCNN-LSTM model. Microsc. Res. Tech. 2024, 87, 1789–1809. [Google Scholar] [CrossRef] [PubMed]
  137. Saleh, N.; Hassan, M.A.; Salaheldin, A.M. Skin cancer classification based on an optimized convolutional neural network and multicriteria decision-making. Sci. Rep. 2024, 14, 17323. [Google Scholar] [CrossRef]
  138. Hu, Z.; Mei, W.; Chen, H.; Hou, W. Multi-scale feature fusion and class weight loss for skin lesion classification. Comput. Biol. Med. 2024, 176, 108594. [Google Scholar] [CrossRef]
  139. Kumar, A.; Kumar, M.; Bhardwaj, V.P.; Kumar, S.; Selvarajan, S. A novel skin cancer detection model using modified finch deep CNN classifier model. Sci. Rep. 2024, 14, 11235. [Google Scholar] [CrossRef] [PubMed]
  140. Alsaade, F.W.; Aldhyani, T.H.H.; Al-Adhaileh, M.H. Developing a Recognition System for Diagnosing Melanoma Skin Lesions Using Artificial Intelligence Algorithms. Comput. Math. Methods Med. 2021, 2021, 9998379. [Google Scholar] [CrossRef]
  141. Khan, S.; Khan, A. SkinViT: A transformer based method for Melanoma and Nonmelanoma classification. PLoS ONE 2023, 18, e0295151. [Google Scholar] [CrossRef]
  142. Maron, R.C.; Haggenmüller, S.; von Kalle, C.; Utikal, J.S.; Meier, F.; Gellrich, F.F.; Hauschild, A.; French, L.E.; Schlaak, M.; Ghoreschi, K.; et al. Robustness of convolutional neural networks in recognition of pigmented skin lesions. Eur. J. Cancer 2021, 145, 81–91. [Google Scholar] [CrossRef]
  143. Jojoa Acosta, M.F.; Caballero Tovar, L.Y.; Garcia-Zapirain, M.B.; Percybrooks, W.S. Melanoma diagnosis using deep learning techniques on dermatoscopic images. BMC Med. Imaging 2021, 21, 6. [Google Scholar] [CrossRef]
  144. Rezk, E.; Eltorki, M.; El-Dakhakhni, W. Interpretable Skin Cancer Classification based on Incremental Domain Knowledge Learning. J. Healthc. Inform. Res. 2023, 7, 59–83. [Google Scholar] [CrossRef]
  145. Winkler, J.K.; Blum, A.; Kommoss, K.; Enk, A.; Toberer, F.; Rosenberger, A.; Haenssle, H.A. Assessment of Diagnostic Performance of Dermatologists Cooperating With a Convolutional Neural Network in a Prospective Clinical Study. JAMA Dermatol. 2023, 159, 621. [Google Scholar] [CrossRef] [PubMed]
  146. Adepu, A.K.; Sahayam, S.; Jayaraman, U.; Arramraju, R. Melanoma classification from dermatoscopy images using knowledge distillation for highly imbalanced data. Comput. Biol. Med. 2023, 154, 106571. [Google Scholar] [CrossRef] [PubMed]
  147. Liu, Z.; Xiong, R.; Jiang, T. CI-Net: Clinical-Inspired Network for Automated Skin Lesion Recognition. IEEE Trans. Med. Imaging 2023, 42, 619–632. [Google Scholar] [CrossRef] [PubMed]
  148. Bandy, A.D.; Spyridis, Y.; Villarini, B.; Argyriou, V. Intraclass Clustering-Based CNN Approach for Detection of Malignant Melanoma. Sensors 2023, 23, 926. [Google Scholar] [CrossRef]
  149. Ferris, L.K.; Harkes, J.A.; Gilbert, B.; Winger, D.G.; Golubets, K.; Akilov, O.; Satyanarayanan, M. Computer-aided classification of melanocytic lesions using dermoscopic images. J. Am. Acad. Dermatol. 2015, 73, 769–776. [Google Scholar] [CrossRef]
  150. Rastgoo, M.; Morel, O.; Marzani, F.; Garcia, R. Ensemble approach for differentiation of malignant melanoma. In Proceedings of the Twelfth International Conference on Quality Control by Artificial Vision 2015, Le Creusot, France, 30 April 2015; Volume 9534. [Google Scholar] [CrossRef]
  151. Grzesiak-Kopeć, K.; Ogorzałek, M.; Nowak, L. Computational Classification of Melanocytic Skin Lesions. In Artificial Intelligence and Soft Computing, Proceedings of the 15th International Conference, ICAISC 2016, Zakopane, Poland, 12–16 June 2016; Springer International Publishing: Cham, Switzerland; Volume 9693, pp. 169–178. ISBN 978-3-642-13207-0.
  152. Kharazmi, P.; Lui, H.; Wang, Z.J.; Lee, T.K. Automatic detection of basal cell carcinoma using vascular-extracted features from dermoscopy images. In Proceedings of the 2016 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), Vancouver, BC, Canada, 15–18 May 2016; pp. 1–4. [Google Scholar] [CrossRef]
  153. Kharazmi, P.; AlJasser, M.I.; Lui, H.; Wang, Z.J.; Lee, T.K. Automated Detection and Segmentation of Vascular Structures of Skin Lesions Seen in Dermoscopy, With an Application to Basal Cell Carcinoma Classification. IEEE J. Biomed. Health Inform. 2017, 21, 1675–1684. [Google Scholar] [CrossRef]
  154. Benyahia, S.; Meftah, B.; Lézoray, O. Multi-features extraction based on deep learning for skin lesion classification. Tissue Cell 2022, 74, 101701. [Google Scholar] [CrossRef]
  155. Abdullah, A.; Siddique, A.; Shaukat, K.; Jan, T. An Intelligent Mechanism to Detect Multi-Factor Skin Cancer. Diagnostics 2024, 14, 1359. [Google Scholar] [CrossRef]
  156. Vasconcelos, M.J.M.; Rosado, L.; Ferreira, M. Principal Axes-Based Asymmetry Assessment Methodology for Skin Lesion Image Analysis. In Advances in Visual Computing, Proceedings of the ISVC 2014, Las Vegas, NV, USA, 8–10 December 2014; Springer International Publishing: Cham, Switzerland; pp. 21–31.
  157. Murugan, A.; Nair, S.A.H.; Kumar, K.P.S. Detection of Skin Cancer Using SVM, Random Forest and kNN Classifiers. J. Med. Syst. 2019, 43, 269. [Google Scholar] [CrossRef]
  158. Narasimhan, K.; Elamaran, V. Wavelet-based energy features for diagnosis of melanoma from dermoscopic images. Int. J. Biomed. Eng. Technol. 2016, 20, 243. [Google Scholar] [CrossRef]
  159. Janney, J.B.; Roslin, S.E. Classification of melanoma from Dermoscopic data using machine learning techniques. Multimed. Tools Appl. 2018, 79, 3713–3728. [Google Scholar] [CrossRef]
  160. Rastgoo, M.; Garcia, R.; Morel, O.; Marzani, F. Automatic differentiation of melanoma from dysplastic nevi. Comput. Med. Imaging Graph. 2015, 43, 44–52. [Google Scholar] [CrossRef]
  161. Barata, C.; Ruela, M.; Francisco, M.; Mendonca, T.; Marques, J.S. Two Systems for the Detection of Melanomas in Dermoscopy Images Using Texture and Color Features. IEEE Syst. J. 2014, 8, 965–979. [Google Scholar] [CrossRef]
  162. Giavina-Bianchi, M.; de Sousa, R.M.; de Almeida Paciello, V.Z.; Vitor, W.G.; Okita, A.L.; Prôa, R.; Severino, G.L.d.S.; Schinaid, A.A.; Espírito Santo, R.; Machado, B.S. Implementation of artificial intelligence algorithms for melanoma screening in a primary care setting. PLoS ONE 2021, 16, e0257006. [Google Scholar] [CrossRef] [PubMed]
  163. Pennisi, A.; Bloisi, D.D.; Nardi, D.; Giampetruzzi, A.R.; Mondino, C.; Facchiano, A. Skin lesion image segmentation using Delaunay Triangulation for melanoma detection. Comput. Med. Imaging Graph. 2016, 52, 89–103. [Google Scholar] [CrossRef]
  164. Arshad, M.; Khan, M.A.; Tariq, U.; Armghan, A.; Alenezi, F.; Younus Javed, M.; Aslam, S.M.; Kadry, S. A Computer-Aided Diagnosis System Using Deep Learning for Multiclass Skin Lesion Classification. Comput. Intell. Neurosci. 2021, 2021, 9619079. [Google Scholar] [CrossRef] [PubMed]
  165. Xu, H.; Lu, C.; Berendt, R.; Jha, N.; Mandal, M. Automated analysis and classification of melanocytic tumor on skin whole slide images. Comput. Med. Imaging Graph. 2018, 66, 124–134. [Google Scholar] [CrossRef] [PubMed]
  166. Ianni, J.D.; Soans, R.E.; Sankarapandian, S.; Chamarthi, R.V.; Ayyagari, D.; Olsen, T.G.; Bonham, M.J.; Stavish, C.C.; Motaparthi, K.; Cockerell, C.J.; et al. Tailored for Real-World: A Whole Slide Image Classification System Validated on Uncurated Multi-Site Data Emulating the Prospective Pathology Workload. Sci. Rep. 2020, 10, 3217. [Google Scholar] [CrossRef]
  167. Khan, S.; Khan, M.A.; Noor, A.; Fareed, K. SASAN: Ground truth for the effective segmentation and classification of skin cancer using biopsy images. Diagnosis 2024, 11, 283–294. [Google Scholar] [CrossRef]
  168. Ganster, H.; Pinz, P.; Rohrer, R.; Wildling, E.; Binder, M.; Kittler, H. Automated melanoma recognition. IEEE Trans. Med. Imaging 2001, 20, 233–239. [Google Scholar] [CrossRef]
  169. Odeh, S.M.; de Toro, F.; Rojas, I.; Saéz-Lara, M.J. Evaluating Fluorescence Illumination Techniques for Skin Lesion Diagnosis. Appl. Artif. Intell. 2012, 26, 696–713. [Google Scholar] [CrossRef]
  170. Odeh, S.M.; Baareh, A.K.M. A comparison of classification methods as diagnostic system: A case study on skin lesions. Comput. Methods Programs Biomed. 2016, 137, 311–319. [Google Scholar] [CrossRef] [PubMed]
  171. Gerger, A.; Wiltgen, M.; Langsenlehner, U.; Richtig, E.; Horn, M.; Weger, W.; Ahlgrimm-Siess, V.; Hofmann-Wellenhof, R.; Samonigg, H.; Smolle, J. Diagnostic image analysis of malignant melanoma in in vivo confocal laser-scanning microscopy: A preliminary study. Skin Res. Technol. 2008, 14, 359–363. [Google Scholar] [CrossRef] [PubMed]
  172. Lorber, A.; Wiltgen, M.; Hofmann-Wellenhof, R.; Koller, S.; Weger, W.; Ahlgrimm-Siess, V.; Smolle, J.; Gerger, A. Correlation of image analysis features and visual morphology in melanocytic skin tumours using in vivo confocal laser scanning microscopy. Skin Res. Technol. 2009, 15, 237–241. [Google Scholar] [CrossRef]
  173. Koller, S.; Wiltgen, M.; Ahlgrimm-Siess, V.; Weger, W.; Hofmann-Wellenhof, R.; Richtig, E.; Smolle, J.; Gerger, A. In vivo reflectance confocal microscopy: Automated diagnostic image analysis of melanocytic skin tumours. J. Eur. Acad. Dermatol. Venereol. 2011, 25, 554–558. [Google Scholar] [CrossRef]
  174. Yuan, X.; Yang, Z.; Zouridakis, G.; Mullani, N. SVM-based Texture Classification and Application to Early Melanoma Detection. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 4775–4778. [Google Scholar] [CrossRef]
  175. Masood, A.; Al-Jumaily, A. Semi-advised learning model for skin cancer diagnosis based on histopathalogical images. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 631–634. [Google Scholar] [CrossRef]
  176. Ruiz, D.; Berenguer, V.; Soriano, A.; Sánchez, B. A decision support system for the diagnosis of melanoma: A comparative approach. Expert Syst. Appl. 2011, 38, 15217–15223. [Google Scholar] [CrossRef]
  177. Noroozi, N.; Zakerolhosseini, A. Differential diagnosis of squamous cell carcinoma in situ using skin histopathological images. Comput. Biol. Med. 2016, 70, 23–39. [Google Scholar] [CrossRef] [PubMed]
  178. Noroozi, N.; Zakerolhosseini, A. Computer assisted diagnosis of basal cell carcinoma using Z-transform features. J. Vis. Commun. Image Represent. 2016, 40, 128–148. [Google Scholar] [CrossRef]
  179. Höhn, J.; Krieghoff-Henning, E.; Jutzi, T.B.; von Kalle, C.; Utikal, J.S.; Meier, F.; Gellrich, F.F.; Hobelsberger, S.; Hauschild, A.; Schlager, J.G.; et al. Combining CNN-based histologic whole slide image analysis and patient data to improve skin cancer classification. Eur. J. Cancer 2021, 149, 94–101. [Google Scholar] [CrossRef]
  180. Chen, M.; Feng, X.; Fox, M.C.; Reichenberg, J.S.; Lopes, F.C.P.S.; Sebastian, K.R.; Markey, M.K.; Tunnell, J.W. Deep learning on reflectance confocal microscopy improves Raman spectral diagnosis of basal cell carcinoma. J. Biomed. Opt. 2022, 27, 065004. [Google Scholar] [CrossRef]
  181. La Salvia, M.; Torti, E.; Leon, R.; Fabelo, H.; Ortega, S.; Balea-Fernandez, F.; Martinez-Vega, B.; Castaño, I.; Almeida, P.; Carretero, G.; et al. Neural Networks-Based On-Site Dermatologic Diagnosis through Hyperspectral Epidermal Images. Sensors 2022, 22, 7139. [Google Scholar] [CrossRef]
  182. Liu, L.; Qi, M.; Li, Y.; Liu, Y.; Liu, X.; Zhang, Z.; Qu, J. Staging of Skin Cancer Based on Hyperspectral Microscopic Imaging and Machine Learning. Biosensors 2022, 12, 790. [Google Scholar] [CrossRef]
  183. Truong, B.C.Q.; Tuan, H.D.; Wallace, V.P.; Fitzgerald, A.J.; Nguyen, H.T. The Potential of the Double Debye Parameters to Discriminate between Basal Cell Carcinoma and Normal Skin. IEEE Trans. Terahertz Sci. Technol. 2015, 5, 990–998. [Google Scholar] [CrossRef]
  184. Mohr, P.; Birgersson, U.; Berking, C.; Henderson, C.; Trefzer, U.; Kemeny, L.; Sunderkötter, C.; Dirschka, T.; Motley, R.; Frohm-Nilsson, M.; et al. Electrical impedance spectroscopy as a potential adjunct diagnostic tool for cutaneous melanoma. Skin Res. Technol. 2013, 19, 75–83. [Google Scholar] [CrossRef] [PubMed]
  185. Kupriyanov, V.; Blondel, W.; Daul, C.; Amouroux, M.; Kistenev, Y. Implementation of data fusion to increase the efficiency of classification of precancerous skin states using in vivo bimodal spectroscopic technique. J. Biophotonics 2023, 16, e202300035. [Google Scholar] [CrossRef] [PubMed]
  186. Zhao, J.; Lui, H.; Kalia, S.; Lee, T.K.; Zeng, H. Improving skin cancer detection by Raman spectroscopy using convolutional neural networks and data augmentation. Front. Oncol. 2024, 14, 1320220. [Google Scholar] [CrossRef]
  187. Zhang, W.; Patterson, N.H.; Verbeeck, N.; Moore, J.L.; Ly, A.; Caprioli, R.M.; De Moor, B.; Norris, J.L.; Claesen, M. Multimodal MALDI imaging mass spectrometry for improved diagnosis of melanoma. PLoS ONE 2024, 19, e0304709. [Google Scholar] [CrossRef] [PubMed]
  188. Chen, X.; Shen, J.; Liu, C.; Shi, X.; Feng, W.; Sun, H.; Zhang, W.; Zhang, S.; Jiao, Y.; Chen, J.; et al. Applications of Data Characteristic AI-Assisted Raman Spectroscopy in Pathological Classification. Anal. Chem. 2024, 96, 6158–6169. [Google Scholar] [CrossRef]
  189. Petracchi, B.; Torti, E.; Marenzi, E.; Leporati, F. Acceleration of Hyperspectral Skin Cancer Image Classification through Parallel Machine-Learning Methods. Sensors 2024, 24, 1399. [Google Scholar] [CrossRef]
  190. Li, L.; Zhang, Q.; Ding, Y.; Jiang, H.; Thiers, B.H.; Wang, J.Z. Automatic diagnosis of melanoma using machine learning methods on a spectroscopic system. BMC Med. Imaging 2014, 14, 36. [Google Scholar] [CrossRef]
  191. Maciel, V.H.; Correr, W.R.; Kurachi, C.; Bagnato, V.S.; da Silva Souza, C. Fluorescence spectroscopy as a tool to in vivo discrimination of distinctive skin disorders. Photodiagnosis Photodyn. Ther. 2017, 19, 45–50. [Google Scholar] [CrossRef]
  192. Tomatis, S.; Bono, A.; Bartoli, C.; Carrara, M.; Lualdi, M.; Tragni, G.; Marchesini, R. Automated melanoma detection: Multispectral imaging and neural network approach for classification. Med. Phys. 2003, 30, 212–221. [Google Scholar] [CrossRef]
  193. Tomatis, S.; Carrara, M.; Bono, A.; Bartoli, C.; Lualdi, M.; Tragni, G.; Colombo, A.; Marchesini, R. Automated melanoma detection with a novel multispectral imaging system: Results of a prospective study. Phys. Med. Biol. 2005, 50, 1675–1687. [Google Scholar] [CrossRef] [PubMed]
  194. Åberg, P.; Birgersson, U.; Elsner, P.; Mohr, P.; Ollmar, S. Electrical impedance spectroscopy and the diagnostic accuracy for malignant melanoma. Exp. Dermatol. 2011, 20, 648–652. [Google Scholar] [CrossRef] [PubMed]
  195. Giotis, I.; Molders, N.; Land, S.; Biehl, M.; Jonkman, M.F.; Petkov, N. MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Syst. Appl. 2015, 42, 6578–6585. [Google Scholar] [CrossRef]
  196. Eslava, J.; Druzgalski, C. Differential Feature Space in Mean Shift Clustering for Automated Melanoma Assessment. In Proceedings of the World Congress on Medical Physics and Biomedical Engineering, Toronto, Canada, 7–12 June 2015; Jaffray, D.A., Ed.; Springer International Publishing: Cham, Switzerland, 2015; Volume 51, pp. 1401–1404, ISBN 978-3-319-19386-1. [Google Scholar]
  197. Karami, N.; Esteki, A. Automated Diagnosis of Melanoma Based on Nonlinear Complexity Features. In Proceedings of the 5th Kuala Lumpur International Conference on Biomedical Engineering 2011, Kuala Lumpur, Malaysia, 20–23 June 2011; Osman, N.A.A., Abas, W.A.B.W., Wahab, A.K.A., Ting, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 270–274. [Google Scholar]
  198. Tabatabaie, K.; Esteki, A. Independent Component Analysis as an Effective Tool for Automated Diagnosis of Melanoma. In Proceedings of the 2008 Cairo International Biomedical Engineering Conference, Cairo, Egypt, 18–20 December 2008; pp. 1–4. [Google Scholar] [CrossRef]
  199. Gautam, D.; Ahmed, M.; Meena, Y.K.; Ul Haq, A. Machine learning–based diagnosis of melanoma using macro images. Int. J. Numer. Method. Biomed. Eng. 2018, 34, e2953. [Google Scholar] [CrossRef] [PubMed]
  200. Przystalski, K. Decision Support System for Skin Cancer Diagnosis. In Proceedings of the Ninth International Symposium on Operations Research and Its Applications (ISORA'10), Chengdu, China, 19–23 August 2010; pp. 406–413. [Google Scholar]
  201. Amelard, R.; Glaister, J.; Wong, A.; Clausi, D.A. High-Level Intuitive Features (HLIFs) for Intuitive Skin Lesion Description. IEEE Trans. Biomed. Eng. 2015, 62, 820–831. [Google Scholar] [CrossRef]
  202. Liu, Z.; Sun, J.; Smith, M.; Smith, L.; Warr, R. Incorporating clinical metadata with digital image features for automated identification of cutaneous melanoma. Br. J. Dermatol. 2013, 169, 1034–1040. [Google Scholar] [CrossRef]
  203. Sanchez, I.; Agaian, S. Computer aided diagnosis of lesions extracted from large skin surfaces. In Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Seoul, Republic of Korea, 14–17 October 2012; pp. 2879–2884. [Google Scholar] [CrossRef]
  204. Oliveira, R.B.; Marranghello, N.; Pereira, A.S.; Tavares, J.M.R.S. A computational approach for detecting pigmented skin lesions in macroscopic images. Expert Syst. Appl. 2016, 61, 53–63. [Google Scholar] [CrossRef]
  205. Jafari, M.H.; Samavi, S.; Karimi, N.; Soroushmehr, S.M.R.; Ward, K.; Najarian, K. Automatic detection of melanoma using broad extraction of features from digital images. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1357–1360. [Google Scholar] [CrossRef]
  206. Spyridonos, P.; Gaitanis, G.; Likas, A.; Bassukas, I.D. Automatic discrimination of actinic keratoses from clinical photographs. Comput. Biol. Med. 2017, 88, 50–59. [Google Scholar] [CrossRef]
  207. Abbes, W.; Sellami, D. High-Level features for automatic skin lesions neural network based classification. In Proceedings of the 2016 International Image Processing, Applications and Systems (IPAS), Hammamet, Tunisia, 5–7 November 2016; pp. 1–7. [Google Scholar]
  208. Jafari, M.H.; Samavi, S.; Soroushmehr, S.M.R.; Mohaghegh, H.; Karimi, N.; Najarian, K. Set of descriptors for skin cancer diagnosis using non-dermoscopic color images. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2638–2642. [Google Scholar] [CrossRef]
  209. Cavalcanti, P.G.; Scharcanski, J. Automated prescreening of pigmented skin lesions using standard cameras. Comput. Med. Imaging Graph. 2011, 35, 481–491. [Google Scholar] [CrossRef] [PubMed]
  210. Choudhury, D.; Naug, A.; Ghosh, S. Texture and color feature based WLS framework aided skin cancer classification using MSVM and ELM. In Proceedings of the 2015 Annual IEEE India Conference (INDICON), New Delhi, India, 17–20 December 2015; pp. 1–6. [Google Scholar] [CrossRef]
  211. Takruri, M.; Rashad, M.W.; Attia, H. Multi-classifier decision fusion for enhancing melanoma recognition accuracy. In Proceedings of the 2016 5th International Conference on Electronic Devices, Systems and Applications (ICEDSA), Ras Al Khaimah, United Arab Emirates, 6–8 December 2016; pp. 1–5. [Google Scholar] [CrossRef]
  212. Magalhaes, C.; Vardasca, R.; Rebelo, M.; Valenca-Filipe, R.; Ribeiro, M.; Mendes, J. Distinguishing melanocytic nevi from melanomas using static and dynamic infrared thermal imaging. J. Eur. Acad. Dermatol. Venereol. 2019, 33, 1700–1705. [Google Scholar] [CrossRef] [PubMed]
  213. Magalhaes, C.; Tavares, J.M.R.S.; Mendes, J.; Vardasca, R. Comparison of machine learning strategies for infrared thermography of skin cancer. Biomed. Signal Process. Control 2021, 69, 102872. [Google Scholar] [CrossRef]
  214. Kia, S.; Setayeshi, S.; Shamsaei, M.; Kia, M. Computer-aided diagnosis (CAD) of the skin disease based on an intelligent classification of sonogram using neural network. Neural Comput. Appl. 2013, 22, 1049–1062. [Google Scholar] [CrossRef]
  215. Ding, Y.; John, N.W.; Smith, L.; Sun, J.; Smith, M. Combination of 3D skin surface texture features and 2D ABCD features for improved melanoma diagnosis. Med. Biol. Eng. Comput. 2015, 53, 961–974. [Google Scholar] [CrossRef]
  216. Faita, F.; Oranges, T.; Di Lascio, N.; Ciompi, F.; Vitali, S.; Aringhieri, G.; Janowska, A.; Romanelli, M.; Dini, V. Ultra-high-frequency ultrasound and machine learning approaches for the differential diagnosis of melanocytic lesions. Exp. Dermatol. 2022, 31, 94–98. [Google Scholar] [CrossRef]
Figure 1. PRISMA 2020 flow diagram summarizing the review process (based on [13]).
Figure 2. Percentage of records that used SVM, ANN, DL approaches, and other ML techniques with dermoscopy images.
Figure 3. Percentage of records that used SVM, ANN, DL approaches, and other ML techniques with microscopy images.
Figure 4. Percentage of records that used SVM, ANN, DL approaches, and other ML techniques with spectroscopy images.
Figure 5. Percentage of records that used SVM, ANN, DL approaches, and other ML techniques with macroscopy images.
Figure 6. Percentage of records that used SVM, ANN, DL approaches, and other ML techniques with other imaging modalities.
Table 1. Reference of records included in the systematic review, grouped by imaging modality.
Imaging Modality | Reference of Records
Dermoscopy | [14–164]
Microscopy | [165–182]
Spectroscopy | [183–194]
Macroscopy | [195–211]
Other imaging modalities | [212–216]
Table 2. Summary of the discussed main findings and their respective references.
Main Findings | Example of Records
CNNs are a current trend | [100,101,102,107,108,109,110,111,112,120]
Ensembles are sometimes preferred for better accuracy (ACC) | [39,40,41,42,43,44,45,175,176,194,195]
Different learners can be tested to achieve the best performance | [23,39,151,156,158,160,161,170,176,190,200]
Freely available databases are of great importance for comparing works | [25,26,27,30,31,32,34,36,37,51,53,54,55,81,89,111,112,120,121,122,125,150,159,162,192,193]
Some authors still prefer licensed software | [22,25,26,28,38,54,111,125,128,165,183,191,196,200,208,215]
Optimizing the feature-extraction stage is key | [23,39,45,168,174,195,201,204,209,211]
Some studies lack reports of performance metrics | [27,39,40,46,100,112,174,184,200,210]
A good balance between sensitivity (SN) and specificity (SP) is necessary | [50,55,121,159,194,212,214]
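Because Table 2 uses accuracy (ACC), sensitivity (SN), and specificity (SP) as the yardsticks of the reviewed classifiers, the following minimal Python sketch (illustrative only, with hypothetical counts, not drawn from any reviewed study) shows how the three metrics derive from a binary confusion matrix:

```python
def binary_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """ACC, SN, and SP for a binary (malignant vs. benign) classifier,
    given true positives, false negatives, false positives, true negatives."""
    total = tp + fn + fp + tn
    return {
        "ACC": (tp + tn) / total,  # overall fraction of correct decisions
        "SN": tp / (tp + fn),      # sensitivity/recall: malignant cases caught
        "SP": tn / (tn + fp),      # specificity: benign cases correctly cleared
    }

# Hypothetical counts: 90 melanomas detected, 10 missed,
# 20 false alarms, 80 correct negatives.
m = binary_metrics(tp=90, fn=10, fp=20, tn=80)
print(m)  # ACC = 0.85, SN = 0.90, SP = 0.80
```

A classifier tuned for ACC alone can quietly sacrifice SN on imbalanced skin-lesion datasets; reporting all three metrics exposes that trade-off.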
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
