Review

An Extensive Investigation into the Use of Machine Learning Tools and Deep Neural Networks for the Recognition of Skin Cancer: Challenges, Future Directions, and a Comprehensive Review

by
Syed Ibrar Hussain
1,2 and
Elena Toscano
1,*
1
Department of Mathematics and Computer Science, University of Palermo, Via Archirafi 34, 90123 Palermo, Italy
2
Department of Mathematics, University of Houston, Houston, TX 77204, USA
*
Author to whom correspondence should be addressed.
Symmetry 2024, 16(3), 366; https://doi.org/10.3390/sym16030366
Submission received: 18 January 2024 / Revised: 11 March 2024 / Accepted: 14 March 2024 / Published: 18 March 2024
(This article belongs to the Special Issue Feature Papers in Mathematics Section)

Abstract:
Skin cancer poses a serious health risk and can be treated effectively only when detected early. Early identification is critical because skin cancer has a high fatality rate and gradually spreads to other areas of the body. The rapid growth of automated diagnosis frameworks has led to the combination of diverse machine learning, deep learning, and computer vision algorithms for detecting clinical samples and atypical skin lesion specimens. This article discusses automated methods for recognizing skin cancer that use deep learning techniques: convolutional neural networks and, more generally, artificial neural networks. The recognition of symmetries is a key point in dealing with skin cancer image datasets, and hence in designing an appropriate neural network architecture, as it can improve performance and free up network capacity. The current study emphasizes the need for an automated method of identifying skin lesions that reduces the time and effort required for diagnosis, as well as the novelty of applying deep learning algorithms to skin lesion detection. The analysis concludes with research directions for the future, which will help to better address the difficulties encountered in human skin cancer recognition. By highlighting the drawbacks and advantages of prior techniques, the authors hope to establish a standard for future analysis in the domain of human skin lesion diagnostics.

1. Introduction

Skin cancer is an example of atypical skin cell growth. Although malignant development can affect both the epidermis and the dermis, epidermal skin cancer is the focus of this review. The skin's purpose is to shield us from several kinds of hazardous agents that harm us. It is impossible to pinpoint all the causes of skin cancer, but it is well known that some of them are linked to a weakened immune system, family history, and UV radiation exposure, for example [1,2,3]. The ability of the pigment melanin to absorb UV rays shields the skin from the damaging effects of ultraviolet radiation. A lesion is an area of skin that has been affected; depending on where it originated, it can be further classified into several categories. The presence or absence of specific dermoscopic characteristics is usually considered when comparing various lesion types. An autonomous dermoscopy, or image recognition, framework comprises three steps: pre-processing, image segmentation, and feature extraction. Since the output of the segmentation phase determines the subsequent steps, segmentation is essential; we refer to the works [4,5,6] and references therein. Segmentation can be performed in a supervised manner by considering characteristics such as skin type and texture in addition to shapes, colors, and sizes. A deeper look at a pigmented skin lesion can be obtained with dermoscopy, a non-intrusive imaging technique carried out with a device known as a dermatoscope. With dermoscopy, the structure of the epidermis can be seen, something that is not normally possible with the naked eye; see the recent articles [7,8]. Studies show that more and more practitioners are incorporating dermoscopy into their everyday work. Three kinds of dermoscopy are distinguished: nonpolarized contact (also known as unpolarized dermoscopy), polarized contact, and polarized non-contact.
Using both polarized and nonpolarized dermoscopy to acquire medical images improves the efficacy of the recognition. The recognition of skin cancer can then be aided by processing these images with artificial intelligence (AI) techniques, as shown in [9,10,11,12,13].
Although skin cancer is a deadly disease, over 95% of cases survive thanks to early identification. Melanoma, basal cell carcinoma, and squamous cell carcinoma are the three main forms of skin cancer. The deadliest type is melanoma. One of its signs is a mole that changes in shape, size, color, or border irregularity. Although only 4% of the general population is known to be affected, 75% of deaths due to skin cancer are caused by malignant melanoma. Malignant melanoma enlarges and spreads to other areas of the body. Despite the elevated peril of melanoma, there is a strong likelihood of survival with prompt detection and treatment. Hence, it is important to conduct research in the challenging field of early melanoma detection; for more details and information, the reader can follow the discussion in references [14,15,16]. Briefly, most cancer instances fall into the non-melanoma group, which includes sebaceous gland carcinoma (SGC), basal cell carcinoma (BCC), and squamous cell carcinoma (SCC). Non-melanoma cancers are less likely to spread and are easier to treat than melanoma. Treatment for skin cancer must begin with early detection. The biopsy is a common tool used by medical professionals for recognizing skin lesions: to determine whether a suspected skin lesion is cancerous, a sample is extracted for examination. AI has made it feasible to identify skin cancer early on, which lowers the incidence of cancer-related morbidity and death. We remark that AI is an active subfield of computer science, studying how different technologies can mimic intelligent human conduct [17,18].
Further, a subset of AI called machine learning (ML) uses statistical frameworks and algorithms to achieve its goals. To forecast the characteristics of fresh samples and accomplish a desired goal, these predictive algorithms and models learn iteratively from data. As a result, sophisticated programs have been developed to carry out operations that human minds would find difficult. Within the field of AI, ML allows automated systems to gain insight through experience. It is possible to employ a supervised, semi-supervised, or unsupervised approach. In supervised learning, the system learns from labeled data, where each input has a corresponding target output: a history of problems and their solutions is given to the machine, which then determines the correct answer by a process of trial and error. In unsupervised learning, the algorithm's job is to look for structures or patterns in the data without the help of labeled samples; it seeks to find innate connections or clusters in the data, such as grouping together comparable data points. Semi-supervised learning makes use of both labeled and unlabeled data to enhance model performance [19]. The feedforward neural network is the most basic kind of artificial neural network (ANN). It is made up of three different kinds of layers: input, hidden, and output. Data enter through the input layer, pass through the hidden layer, and exit at the output node. The convolutional neural network (CNN) is an adaptation of the deep feedforward ANN that is widely used for image processing; here, we refer to the works [20,21,22]. It needs to be made clear that CNNs are a particular category of ANNs.
Although CNNs are specifically designed for tasks involving visual data, ANNs are a broad class of computational algorithms inspired by the human brain. CNNs use their distinctive architecture to extract features and recognize patterns in visual data. For example, we recall that a CNN usually presents only partially connected layers; further, higher layers are derived from combinations of lower layers. Symmetry is a crucial issue in skin cancer image datasets and, hence, in the architecture of CNNs. All orientations of the same image are equivalent to each other and equally likely to appear. Processing each occurrence of the same image as a new one requires learning specific characteristics at all orientations, at a cost in time and resources. Recall that one of the features of CNNs is their invariance to translation. By further adding rotation-equivariant capabilities, one can avoid learning the orientation transformations from the datasets. This is a true advantage, as it releases network capacity, which can then be used to learn more discriminative characteristics of the datasets. Symmetric layers in the architecture of the CNN and symmetric functions in the training process show improvements in the above directions.
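To make the orientation argument concrete, the following Python sketch (illustrative only, not taken from the cited works) augments a training image, represented as a toy 2D grid, with its four 90-degree rotations, so that a classifier without rotation equivariance still sees every orientation of each sample:

```python
def rotate90(image):
    """Rotate a 2D grid (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def orientation_augment(image):
    """Return the four 90-degree rotations of an image, so that a model
    is trained on every orientation of each sample."""
    views = [image]
    for _ in range(3):
        views.append(rotate90(views[-1]))
    return views
```

A rotation-equivariant architecture makes this explicit augmentation unnecessary, which is exactly the capacity saving discussed above.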
The study’s significance stems from the pressing need for a precise and effective method of detecting skin cancer, which could result in an earlier assessment and better outcomes for patients. Human skin cancer is a major worldwide health issue that is becoming more common at a rapid pace. This work contributes to the advancement of more precise and effective skin lesion identification systems, which may eventually save human lives, by demonstrating the efficacy of existing techniques and pointing out areas for improvement. Our report is based on several aspects, such as the thorough and current evaluation of recent works, the capacity to draw connections between disparate findings from studies, and the recognition of significant research gaps that demand additional investigation. A contribution of the current study is the proposed analysis of previous studies on the usage of CNNs and ANNs for human skin lesion diagnosis, which offers a clear picture of the current state of research in this field. Through an analysis of each method's advantages and disadvantages, as well as areas in need of development, this study aids in the creation of lesion detection tools that are more precise and efficient. The application of deep learning (DL) approaches with image analysis for the diagnosis of skin cancer is a particularly interesting field of study. This report identifies flaws in the research that must be addressed and provides details about the methods with the greatest potential through a thorough review of new and existing research. In conclusion, this work offers a basis on which further research can build and aims to be an updated and useful resource for researchers involved in skin cancer identification, as well as in similar identification problems.
We offer a thorough analysis of many machine and deep learning techniques applied to the diagnosis of skin cancer. Multiple ML and DL techniques are included and explained, along with brief descriptions of each algorithm used in skin cancer detection. This study presents a complete survey of recent advancements in this field. Some of the unique contributions of this study are listed below:
  • A detailed and comprehensive survey covering almost all current ML and DL algorithms, with a brief review of each, their strengths, drawbacks, and applications in skin cancer detection;
  • A specific tabular overview of research on DL and ML methods for detecting and diagnosing skin cancer is presented. Important contributions, as well as their limitations, are included in the tabulated overview;
  • The article also outlines several current open research issues and potential future research paths for advancements in the diagnosis of skin cancer;
  • This study thoroughly explains the supervised and unsupervised learning algorithms involved in cancer detection.
The review article comprises six sections in total. Section 1 and Section 2 present the introduction of the work and the research methodology, respectively. Section 3 presents the ANN and its various algorithms. Section 4 covers the CNN and its models used in the detection of skin cancer. Finally, Section 5 and Section 6 present the challenges, future scope, results and discussions, and concluding remarks, respectively.

2. Research Methodology and ML Algorithms for the Diagnosis of Skin Cancer

This section aims to provide an up-to-date summary of the most recent findings regarding the use of advanced technologies and machine learning for skin cancer diagnosis. After a filtering procedure based on their applicability, the chosen studies were incorporated into the review. The authors acknowledge that a careful examination of recent studies is necessary to comprehend the subject matter more thoroughly. The study adopts a methodical approach by offering a comprehensive analysis of the body of literature to pinpoint important discoveries and patterns. We now briefly cover the various machine learning models of supervised and unsupervised type, together with hybrid learning models. The order of presentation of the paragraphs follows the scheme in Figure 1, starting from the supervised side, then the unsupervised side, with both sides reaching ensemble learning.

2.1. Random Forest

Random forests extend decision trees (see Section 2.9 for more information). They are ensemble learning techniques frequently applied to classification problems. As demonstrated in [23], random forests can also be used to categorize skin lesions and detect skin cancer. Random forests allow the sampling distribution to be evaluated. The suggested approach begins by initializing the training dataset. After that, the initial training set is independently bootstrapped to produce numerous training subsets. Decision points are then added by computing the Gini index associated with each sub-training set. Once combined, the individual decision values yield a model that classifies by casting votes over the test samples. Using the random forest algorithm, the Mueller matrix components can be characterized to further classify skin cancer; for example, see [24,25]. The random forest algorithm builds the basis for categorization and classification tasks using different sub-decision trees. Each decision tree is given its own logic, which forms the basis of the binary question structure applied throughout the system. The random forest reduces variance and produces better results than a single decision tree; this helps avoid the overfitting that decision trees would otherwise exhibit. In other studies related to the categorization of skin cancer, dermoscopic images are classified into seven sub-types. Random forests have been used to implement this; we mention the works [26,27]. In this investigation, the process for creating a random forest is somewhat different: the random forest is assembled by setting up a regression tree after a dataset has been prepared for training. Various sub-classifications such as melanocytic nevi, melanoma, benign keratosis, vascular, basal cell, and dermatofibroma types were used to train the random forest algorithm.
Similarly, random forest decision tree algorithms have been applied to categorize skin cancer, as in [28,29]. To assess the consistency of the results obtained by skin cancer classification, this approach enables future research to be more expansive by reusing its datasets and extending them to a range of geographical locations; see also [30,31].
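The bootstrap-and-vote procedure described above can be sketched in a few lines of Python. This is a deliberately minimal toy (depth-one trees on hypothetical numeric data), not the pipeline of the cited works:

```python
import random
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_stump(X, y):
    """Find the (feature, threshold) split minimizing weighted Gini impurity."""
    best = (None, None, float("inf"))
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best[2]:
                best = (f, t, score)
    return best[0], best[1]

def fit_forest(X, y, n_trees=15, seed=0):
    """Bootstrap-sample the training set and fit one stump per subset."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        f, t = best_stump(Xb, yb)
        left_labels = [yb[i] for i in range(len(Xb)) if Xb[i][f] <= t]
        right_labels = [yb[i] for i in range(len(Xb)) if Xb[i][f] > t]
        left = Counter(left_labels).most_common(1)[0][0]
        right = Counter(right_labels).most_common(1)[0][0] if right_labels else left
        forest.append((f, t, left, right))
    return forest

def forest_predict(forest, x):
    """Majority vote across all stumps."""
    votes = [l if x[f] <= t else r for f, t, l, r in forest]
    return Counter(votes).most_common(1)[0][0]
```

In a real system each member would be a full tree over dermoscopic features rather than a stump, but the bagging and voting structure is the same.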

2.2. K Nearest Neighbors (KNN)

The KNN algorithm is one of the most popular ML techniques. It is a supervised learning algorithm that classifies or predicts data by grouping individual data points based on proximity. By placing unlabeled observations in the same class as the most similar labeled instances, it can be used to categorize unlabeled observations. An item is placed in the class shared by the majority of its K nearest neighbors. In KNN, the classification of a new feature vector is determined by looking at the categories of its neighbors. The user chooses the parameter K: the algorithm stores all available cases and classifies new cases by their similarity to existing feature examples. As a result, selecting the value of K is crucial and needs careful consideration; see the findings in [20,32]. The use of KNNs to identify the abnormal formation of skin lesions has broadened their range of applications; see [33]. To give statistical data on a skin lesion without first requiring needless skin biopsies, KNNs are combined with the Firefly algorithm (a swarm intelligence algorithm for optimization problems). The accuracy of the KNN classifier in [34], with the number of neighbors set to 15, was 66.8%. The recall and precision for positive predictions were 46% and 71%, respectively. For negative predictions, the precision score hovers around 65%, but the recall value nearly doubles. Since they indicate a precision of more than 96%, the results presented in [35] offer an alternative viewpoint on the updated KNN classifiers. According to [36], fuzzy KNN classifiers exhibit a 93.33% accuracy rate, 88.89% sensitivity, and 100% specificity.
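The neighbor-vote rule above fits in a few lines of Python; this toy sketch (Euclidean distance, hypothetical feature vectors) illustrates the role of K rather than any specific cited classifier:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points,
    using Euclidean distance."""
    dists = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )
    votes = [label for _, label in dists[:k]]
    return Counter(votes).most_common(1)[0][0]
```

Changing `k` directly changes how local or global the decision is, which is why its selection needs the careful consideration noted above.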

2.3. Support Vector Machine (SVM)

SVMs categorize data by determining the decision boundary that optimally divides distinct classes. An SVM seeks the hyperplane that maximizes the gap between support vectors so that classification can be performed effectively even in intricate, non-linear situations. Support vector machines have attracted a great deal of interest in the machine learning community. SVMs are nonparametric classifiers. Using the points on the edge of each class, they attempt to determine the best separating hyperplane between the categories. The margin is the distance between the two classes; a larger margin typically leads to more precise categorization. The data values on the outermost boundary are called support vectors. SVMs are used to solve classification and regression problems; here, we refer to [37,38,39].
The process comprises six stages: the picture is acquired, pre-processed, and segmented; features are extracted; the image is classified; and the outcome is viewed. Three features were extracted in the experiment: color, shape, and texture. SVMs were additionally employed to recognize and identify infections or carcinoma in the initial phases, before they worsen [38], extending the utility of the model. Unwanted artifacts such as noise and hair are removed before the SVM classifier is applied, with median filtering as the first step, and the Bendlet Transform (BT) is employed for feature extraction. BT is far more accurate at classifying singularities in images than representation systems like curvelets, wavelets, and contourlets; see [40]. The SVM classifier models reported in [41] had an average accuracy of roughly 98% and an average sensitivity and specificity of 95%. All three of the SVM model's parameters exceeded 90% in [42].
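The margin idea can be sketched with a minimal linear SVM trained by sub-gradient descent on the regularized hinge loss. This toy Python version (hypothetical 2D data, fixed learning rate) only illustrates the principle and is not the classifier of the cited studies:

```python
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Sub-gradient descent on the regularized hinge loss.
    Labels y must be -1 or +1."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # point inside the margin: push it out
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # correctly classified: only shrink w
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def svm_predict(w, b, x):
    """Sign of the decision function gives the predicted class."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

Kernelized SVMs extend this linear sketch to the non-linear decision boundaries mentioned above.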

2.4. Naïve Bayes Classifier (NBC)

Probabilistic classifiers that rely on Bayes' theorem are known as NBCs. In the area of skin cancer, NBCs have been employed to precisely categorize dermatological and clinical images. The model uses significant pieces of data to form a strong judgment and helps doctors diagnose and precisely detect the disease; as a result, it has achieved an accuracy of 70.15%. The ability to identify and categorize skin conditions is one way that NBCs expand their uses. A posterior probability distribution for each classifier output is obtained; see [43]. Because iterative processing eliminates the requirement to conduct several training sessions, the method may need fewer processing resources. As demonstrated in [44], the Bayesian method has been applied to forecast the characteristics of a data point probabilistically and with high precision. In this instance, the final classification combines the data points that were previously known to be used in the Bayesian evaluation. Models assisting in the detection of melanoma invasion of human skin have also been enhanced by the application of the Bayesian sequential framework. Three model parameters were estimated: the diffusivity of melanoma cells, the rate of proliferation of melanoma cells, and a constant that governs the rate of melanoma cell degradation in skin tissue. Therefore, essential measurements from the submitted images must be incorporated into the extraction technique, as demonstrated by the Bayesian structure in [45]; this is primarily feasible in scenarios where complex quantified medical observations, like skin lesion retrieval from academic images, are difficult.
According to [46], naïve Bayes classification algorithms achieve 70.15% accuracy and 73.33% specificity, while the classifiers' sensitivity and precision remain above 70%. The accuracy follows the same trend in naïve Bayes classification algorithms from additional research, like [47], where the reported diagnostic accuracy is 72.7%. The recurrent areas of enhancement center on experimenting with various color models and employing various kinds of dermal cancer datasets for training. The works [45,46,47] clarify the urgent need for additional pre-processing before NBCs are trained to recognize skin lesions.
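A Gaussian naïve Bayes classifier of the kind discussed above can be sketched directly from Bayes' theorem: estimate a prior and per-feature mean/variance for each class, then pick the class with the highest posterior. The Python below is an illustrative toy on hypothetical data, not the cited models:

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Estimate per-class priors and per-feature (mean, variance) pairs."""
    groups = defaultdict(list)
    for row, label in zip(X, y):
        groups[label].append(row)
    model = {}
    for label, rows in groups.items():
        prior = len(rows) / len(X)
        stats = []
        for f in range(len(rows[0])):
            vals = [r[f] for r in rows]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-9  # smoothed
            stats.append((mu, var))
        model[label] = (prior, stats)
    return model

def nb_predict(model, x):
    """Pick the class maximizing log prior + sum of log Gaussian likelihoods."""
    best_label, best_score = None, -math.inf
    for label, (prior, stats) in model.items():
        score = math.log(prior)
        for xf, (mu, var) in zip(x, stats):
            score += -0.5 * math.log(2 * math.pi * var) - (xf - mu) ** 2 / (2 * var)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

The "naïve" assumption is the per-feature independence implicit in summing the log likelihoods.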

2.5. Linear Regression (LR)

Linear regression analysis predicts the value of one variable based on the value of another. The dependent variable is the one you want to forecast; the independent variable is the one you use to forecast it. In mathematical investigations, where it is possible to evaluate the anticipated consequences and simulate them against many input variables, linear regression is typically employed. The existence of a linear connection between the independent and dependent variables is demonstrated by this data evaluation and modeling technique. There are two varieties of linear regression: simple and multiple. A single independent variable is utilized in simple linear regression to forecast the outcome of a numeric dependent variable. Multiple linear regression, on the other hand, uses several independent variables to forecast the value of a numerical dependent variable. For more details and information, we refer the readers to the works [48,49,50,51].
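Simple linear regression has a closed-form least-squares solution, sketched below in Python on hypothetical data points:

```python
def fit_simple_lr(x, y):
    """Ordinary least squares for the model y = slope * x + intercept."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (
        sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
        / sum((xi - mean_x) ** 2 for xi in x)
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept
```

Multiple linear regression generalizes this to a vector of coefficients, one per independent variable.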

2.6. K-Means Clustering (KMC)

KMC is an unsupervised clustering technique. Research on early skin melanoma segmentation uses fuzzy logic in conjunction with the standard k-means clustering algorithm. To identify the affected areas, pre-processed clinical images are subjected to fuzzy KMC. This facilitates the procedure later used to identify melanoma disease. Skin lesion segmentation is one of the many applications of k-means clustering [52]. The approach groups objects so that there is as little variance as possible within each group; the classifier can thereby produce well-segmented images. Every pixel in the image is assigned to an arbitrarily initialized class center. As new data points are added, the centers are recalculated. The process is repeated until every data point has been allocated to a cluster. In contrast to hard classifiers like k-means, in which every data point can belong to only one cluster, fuzzy c-means allows a point to be a member of several clusters, with a likelihood associated with each membership. When the fuzzy c-means technique is used instead of the more traditional k-means clustering algorithm, the results are noticeably better. For each data point, fuzzy c-means provides an approximate likelihood that depends on the distance between the point and the cluster center. In [53], fuzzy c-means, motivated by a differential evolution ANN, was used in place of the k-means model to recognize skin cancer. According to the simulated results, the suggested strategy performed better than conventional methods. When trained on data from deep learning techniques, the k-means approach may also be used as an intermediary layer to generate outputs. The authors of [54] presented an algorithm that divided the input images into segments according to intensity variations using k-means.
After that, additional processing was applied to the clusters to facilitate the identification of melanoma cancer. To find skin lesions, one can also use the conventional k-means algorithm. It can be used with the red, green, and blue color channels, a local binary pattern, and a grey-level co-occurrence matrix to improve the quality of the results; see [33]. Lesion orientation, color features, and image contrast are examples of external factors that must be successfully extracted for k-means clustering to function, as can be seen in the recent survey [55]. This makes the diagnosis pipeline based on k-means clustering more coherent: before the external features are used as input by the clustering algorithms, the pipeline must extract them precisely. K-means clustering models typically yield high detection accuracy. An example of an algorithm that builds on fuzzy logic is found in [53], which yields an accuracy of more than 95%. Other k-means clustering algorithms, like those in [33,54], also report a 90% detection accuracy.
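The assign-then-recompute loop described above is Lloyd's algorithm. The Python sketch below (hypothetical 2D points standing in for pixel features) illustrates hard k-means; fuzzy c-means would replace the hard assignment with distance-based membership weights:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        new_centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centers == centers:  # converged
            break
        centers = new_centers
    return centers, clusters
```

In lesion segmentation the "points" would be per-pixel feature vectors (e.g., color channels), and each resulting cluster forms a candidate region.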

2.7. Ensemble Learning (EL)

An ML model called ensemble learning integrates the predictions of more than one model. The constituent models are also called ensemble members. These models can be tailored to entirely different tasks or trained on the same dataset. To generate a forecast for the problem statement, the members of the ensemble are grouped. Melanoma can be classified as benign or malignant using ensemble classifiers. By training each ensemble member separately on balanced subspaces, the number of redundant predictors is decreased. A neural network fuser is used to combine the remaining classifiers. Compared to other specialized individual classifier models, the presented ensemble classifier model yields statistically superior results. To help physicians detect skin lesions early, EL has been applied to the multi-class categorization of skin cancer. The ensemble model combined several deep neural network (DNN) models, including ResNet, SeResNet, DenseNet, and Xception. ResNet makes it simpler to train deeper networks by introducing skip connections, often known as shortcuts, which enable the network to learn residual mappings. SeResNet adds squeeze-and-excitation blocks to the ResNet architecture. DenseNet connects layers densely, which promotes feature reuse and helps gradients flow across the network; this architecture is renowned for its effective parameter management and potent image classification capabilities. Compared to conventional convolutional layers, the Xception design seeks to reduce processing costs while capturing complicated patterns efficiently. When combined, the ensemble model outperformed each of them individually; here, we refer to the recent contributions in [56,57,58].
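At its simplest, combining ensemble members reduces to voting. The Python sketch below shows hard majority voting over arbitrary member classifiers (here hypothetical threshold rules on a single score); the DNN ensembles cited above instead average or fuse class probabilities, but the grouping principle is the same:

```python
from collections import Counter

def ensemble_predict(members, x):
    """Hard-voting ensemble: each member is a callable x -> label;
    the majority label wins."""
    votes = [m(x) for m in members]
    return Counter(votes).most_common(1)[0][0]
```

Because member errors are partly independent, the majority vote is usually more reliable than any single member.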
To complete the material in this section, we give some additional background used in the bibliography, as follows.

2.8. Long Short-Term Memory (LSTM)

LSTM is a kind of recurrent neural network (RNN). Standard RNNs cannot store long-term memory; LSTMs were introduced to overcome this limitation. LSTMs can recognize long-term connections between the time steps of data; they are mostly utilized for the learning, processing, and classification of sequential data. Long-term memory in an LSTM is achieved via a memory line (the cell state) regulated by gates. Four neural network layers and storage components called cells make up the LSTM, which is arranged like a chain. All the information is stored in the cells, and the gates handle memory modification. There are three different kinds of gates: the input, forget, and output gates. The input gate adds details to the cell state, the forget gate discards information that is no longer needed, and the output gate extracts valuable data from the present cell and presents it as an output, as noted in the articles [59,60,61,62].
Pretrained models like MobileNet V2, as demonstrated in [63], perform better when LSTM components are added: training accuracy is 93.79%, while validation accuracy is 90.62%, an enhancement over the state-of-the-art model. With a sensitivity of 53% and a specificity of 80%, the research done in [64] showed that LSTM outperforms most machine learning models.
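The gate mechanics described above can be made explicit with a single LSTM time step. The Python sketch below uses scalar states and a hypothetical weight layout (one `(w_x, w_h, bias)` triple per gate) purely to show how the gates combine; real implementations operate on vectors and matrices:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step for scalar input/state.
    W maps each gate name to a (w_x, w_h, bias) triple."""
    gate = lambda name, act: act(W[name][0] * x + W[name][1] * h_prev + W[name][2])
    i = gate("input", sigmoid)        # how much new information to write
    f = gate("forget", sigmoid)       # how much old cell state to keep
    o = gate("output", sigmoid)       # how much cell state to expose
    g = gate("candidate", math.tanh)  # candidate values to write
    c = f * c_prev + i * g            # updated cell state (the memory line)
    h = o * math.tanh(c)              # hidden state / output
    return h, c
```

With the forget gate saturated open the old cell state passes through unchanged, which is exactly the long-term memory behavior plain RNNs lack.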

2.9. Decision Tree (DT)

Decision tree models are among the most widely used methods in several fields, including pattern recognition, machine learning, and computer vision. Decision trees are a type of sequential model that efficiently chains several fundamental tests; each test compares a numerical feature against a threshold value. Decision trees are composed of root nodes, branches, and leaf nodes. Every internal node tests a characteristic, each branch represents a decision, and each leaf node displays a result. Consequently, we can define a DT as a tree in which the leaf represents the outcome, the link (sometimes referred to as a branch) represents a rule, and the node represents a feature. The primary goal is to create a tree with this structure for all the data and obtain a single result at each leaf; see also [65,66].
The authors of [67] use deep convolutional neural networks to show how well this architecture works for locating regions and categorizing skin cancer. DTs and various models, like SVMs and KNN, are utilized to classify most of the features. In [68], decision trees are additionally employed to achieve simplicity in the categorization of breast cancer. In comparison to its predecessors, the basic DT models offer users highly accurate clinical recognition and diagnostic accuracy, as demonstrated by the error investigation of the proposed approach. According to [69], the decision tree simulation has 42% specificity and 91% sensitivity. The models reported in [70] also yield sensitivity, specificity, and accuracy higher than 90%. The model put forth, where all three parameters exceed 94%, exhibits a similar trend. On the other hand, models like those in [71] report 100% specificity with a somewhat lower sensitivity of 77%. The nature of the datasets has a considerable impact on the predictions made by decision tree models. A typical drawback affecting the precision and accuracy reported in [70,71] is that the training and testing datasets had the same variable distribution, which removes the possibility of evaluating the model on entirely distinct populations.
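The feature-threshold tests described above can be grown greedily by minimizing Gini impurity at each split. The Python below is a compact illustrative tree on hypothetical data, not the clinical models cited:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def build_tree(X, y, depth=0, max_depth=3):
    """Recursively split on the (feature, threshold) pair minimizing
    weighted Gini impurity; leaves store the majority class."""
    majority = Counter(y).most_common(1)[0][0]
    if depth == max_depth or gini(y) == 0.0:
        return majority  # leaf node
    best = None
    for f in range(len(X[0])):
        for t in set(row[f] for row in X):
            li = [i for i, row in enumerate(X) if row[f] <= t]
            ri = [i for i, row in enumerate(X) if row[f] > t]
            if not li or not ri:
                continue
            score = (len(li) * gini([y[i] for i in li]) +
                     len(ri) * gini([y[i] for i in ri])) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t, li, ri)
    if best is None:
        return majority
    _, f, t, li, ri = best
    left = build_tree([X[i] for i in li], [y[i] for i in li], depth + 1, max_depth)
    right = build_tree([X[i] for i in ri], [y[i] for i in ri], depth + 1, max_depth)
    return (f, t, left, right)  # internal node

def tree_predict(node, x):
    """Walk internal (feature, threshold, left, right) nodes down to a leaf."""
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if x[f] <= t else right
    return node
```

Limiting `max_depth` is the simplest guard against the overfitting tendency of decision trees noted in the random forest section.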

2.10. Auto-Regressive Integrated Moving Average (ARIMA)

This model is used in economics and statistics to quantify events that occur over time. The model is applied to interpret historical data or to forecast future values in a time series. Numerous sectors employ auto-regressive integrated moving average models for a variety of purposes. Typically applied to demand forecasting, the model is often denoted "ARIMA(p, d, q)", where p is the number of autoregressive terms, q the number of moving-average terms, and d the degree of differencing. We selected the articles [72,73,74] for more information.
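The roles of d (differencing) and p (autoregression) can be illustrated with a stripped-down ARIMA(1, 1, 0): difference the series once, fit an AR(1) model by least squares, and integrate the forecast back. This Python sketch on a hypothetical series omits the moving-average part entirely:

```python
def difference(series, d=1):
    """Apply d-th order differencing (the 'I' in ARIMA)."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def fit_ar1(series):
    """Least-squares fit of x_t = c + phi * x_{t-1} (an AR(1) model)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    return c, phi

def forecast_next(series, d=1):
    """Forecast the next value: model the differenced series with AR(1),
    then integrate the prediction back."""
    diffed = difference(series, d)
    c, phi = fit_ar1(diffed)
    next_diff = c + phi * diffed[-1]
    return series[-1] + next_diff
```

A full ARIMA(p, d, q) fit additionally estimates q moving-average terms over past forecast errors, usually by maximum likelihood.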

3. Artificial Neural Network in Skin Lesion Detection

Technology-based skin cancer recognition has demonstrated that it is feasible to overcome the drawbacks of conventional approaches, creating a novel field of study. This section summarizes several relevant studies to strengthen the reader's comprehension of the subject. ANNs are a central ML tool in this area; the methodology is grounded in the structure and functions of the brain. To determine the accuracy of computerized methods, researchers examined the data using the approach designed in [75]. A more precise method for identifying melanoma skin cancer was discussed in [76]: to create synthetic melanoma images, a nonlinear segmentation inclusion layer was used, and dataset augmentation was employed to turn dermoscopy images from the accessible PH2 dataset [77] into a new melanoma dataset. These images served as training data for the SqueezeNet deep learning model. An ANN is a statistical, nonlinear prediction technique whose design was inspired by the biological makeup of the human brain. An ANN consists of an input layer, intermediate layers, and an output layer. The input layer is responsible for passing data to the intermediate layers, which are referred to as hidden layers; a classic ANN has multiple hidden layers. The output layer receives the data from the last intermediate layer. Backpropagation is employed to learn the weights at each layer and capture the intricate relationship between inputs and outputs. Artificial neural networks classify the extracted features to identify skin cancer, categorizing images as either melanoma or non-melanoma; see [77,78]. The total number of input images influences the appropriate number of hidden layers in an ANN, and the input layer connects the hidden layers with the input data.
A supervised or unsupervised training algorithm may be used, depending on whether the dataset is labeled or unlabeled. Gathering photos of both cancerous and non-cancerous skin lesions is the first phase, image collection. Image pre-processing, the second phase, entails scaling and grayscale-to-RGB conversion to enhance certain images. The goal of image segmentation is to change the image's representation; it is usually employed to locate boundaries and objects in pictures. Feature extraction then identifies an image's characteristic textures; these features are finally passed to the classifier. We mention the articles [79,80,81].
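The scaling and grayscale-to-RGB steps mentioned above can be sketched with plain NumPy; a simple nearest-neighbour resize stands in for whatever interpolation a real pipeline would use, and the toy image is illustrative:

```python
import numpy as np

def gray_to_rgb(img):
    """Replicate a single-channel (H, W) image into three channels (H, W, 3)."""
    return np.repeat(img[:, :, np.newaxis], 3, axis=2)

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) image to (out_h, out_w, C)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]

gray = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy 4x4 grayscale image
rgb = gray_to_rgb(gray)                              # shape (4, 4, 3)
scaled = resize_nearest(rgb, 8, 8)                   # shape (8, 8, 3)
print(rgb.shape, scaled.shape)
```

A real pre-processing stage would typically also normalize pixel values and apply lesion-specific cleanup (e.g., hair removal) before segmentation.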
In 2020, Kumar et al. [53] published a report that aimed to improve the accuracy of recognizing skin cancer on electronic devices such as mobile phones. The research proposes an algorithm for early identification of three forms of lesion. Images of lesions labeled as cancerous or non-cancerous are provided as input, and fuzzy c-means clustering is employed in the segmentation step to group similar image regions. PH2 and HAM10000 [3] are the datasets used in this investigation. Compared to other conventional approaches, the proposed differential evolution with artificial neural network (DE-ANN) method is more precise and efficient, yielding 97.4% total accuracy [53]. Nawaz et al. have also effectively accomplished this in their work [82].
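The fuzzy c-means clustering step used for segmentation in [53] can be sketched as follows. Unlike hard k-means, each point receives a membership degree in every cluster; the toy 1-D data and parameter choices below are illustrative only:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and memberships U (n x c)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                 # standard FCM update:
        U /= U.sum(axis=1, keepdims=True)              # closer -> higher membership
    return centers, U

# Two obvious clumps of 1-D "pixel intensities".
X = np.array([[0.0], [0.1], [0.2], [9.9], [10.0], [10.1]])
centers, U = fuzzy_c_means(X)
print(sorted(centers.ravel()))   # centers settle near the two clumps
```

In a segmentation pipeline, X would hold pixel intensities or color features, and the membership matrix U would group comparable image regions.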
An approach consisting of three main phases was proposed in [83]: binary thresholding, feature extraction, and training and testing NNs using the features that were obtained. The technique uses images to accurately identify and categorize skin cancer, achieving a 97.84% success rate, which exceeded expectations; ninety-two images make up the study's dataset. In the future, this technique could be utilized to identify cancers of the brain, lung, breast, and other organs. Based on easily accessible personal health information, an ANN was created to aid in the early diagnosis of skin cancer. Even without family history or UV exposure data, the recognition remained highly sensitive and specific. Thirteen different characteristics were considered, ranging from age to gender, from heart disease to smoking status, and from body mass index (BMI) to diabetes status; further characteristics included skin color, ethnicity, and level of physical activity. Although not all 13 characteristics were associated with non-melanoma, they were nonetheless employed because they were readily available and because the ANN's non-linearity allows them to have a greater impact on accuracy than in other conventional techniques.
The authors in [84] created a Medical Vision Transformer (MVT)-based classification model for skin cancer as the second tier of their system, considering the MVT's impressive performance in the processing of medical imagery; applied to the HAM10000 dataset, it achieved efficient results. The MVT divides the input picture into image patches and feeds the patches to the transformer in a word-embedding-like sequence. A methodology for early detection of skin cancer by artificial intelligence and digital image processing was presented by Vijayakumar et al. in 2019 [85]. Direct skin contact is not necessary for this procedure. Dermoscopic images are pre-processed before use, for border refinement and noise reduction, and hairs in the photos are removed using the DullRazor algorithm. The lesions must then be separated from the surrounding skin and background, followed by feature extraction. The extracted characteristics are fed into the classifier and categorized as either malignant or non-cancerous. A hybrid genetic algorithm is used in this technique, which the authors report outperforms experimental biopsies [85]. Following the Vision Transformer's (ViT) impressive performance, which attracted the attention of researchers, several studies in 2022 addressed skin lesion categorization and segmentation using the ViT algorithm (see [86,87,88]). The work [86] integrated a contrastive learning strategy with an innovative approach, incorporating the image features block from the original ViT model. The researchers in [87] carried out tests with an enhanced position encoding technique to address the bottlenecks of the original ViT. Both obtained findings equivalent to the prior top performance in skin cancer classification. The field of skin lesion segmentation has also seen attempts at using ViT [88].
The primary transformer's multi-headed attention was enhanced with a spatial pyramid pooling module, which led to superior segmentation performance compared to default CNN algorithms, along with increased computational efficiency.
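The patch-splitting step that feeds a Vision Transformer can be sketched in a few lines of NumPy: each non-overlapping P x P patch is flattened into a vector, giving the word-embedding-like sequence described above. The image size and patch size here are arbitrary choices for illustration:

```python
import numpy as np

def patchify(img, patch):
    """Split an (H, W, C) image into a sequence of flattened P x P patches.

    Returns an array of shape (num_patches, patch * patch * C), analogous to
    the token sequence a Vision Transformer embeds and attends over.
    """
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    x = img.reshape(h // patch, patch, w // patch, patch, c)
    x = x.transpose(0, 2, 1, 3, 4)              # group by patch grid position
    return x.reshape(-1, patch * patch * c)

img = np.arange(8 * 8 * 3).reshape(8, 8, 3)     # toy 8x8 RGB "image"
tokens = patchify(img, 4)
print(tokens.shape)                             # -> (4, 48): four 4x4x3 patches
```

In a full ViT, each flattened patch would then be linearly projected and combined with a position encoding before entering the attention layers.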
The work presented by Hasan et al. [89] aims to develop a model that uses ANNs to determine whether a person has melanoma, using rough set approaches to identify the prime feature set. The ANN accomplishes this by evaluating and analyzing several factors and signals. The rough set approach selects a subset of characteristics in the first phase and serves as a feature reduction step. The next stage is classification with the ANN. The model's outcome demonstrates that ANNs perform well for three different lesion classes: atypical nevus, common nevus, and melanoma.
An artificial neural network (ANN) with numerous layers of hidden units is called a Deep Belief Network (DBN). It incorporates features from neural networks and probabilistic graphical models. DBNs are usually composed of multiple layers of stochastic generative neural networks called Restricted Boltzmann Machines (RBMs) [90]. Based on the input from the layer below, each RBM layer learns to represent features at a higher level. The skin cancer identification technique discussed here makes use of DBNs, a comparatively recent development in deep learning. A DBN is an energy-based generative probabilistic model built by stacking numerous RBMs, and it runs in two stages: training and testing. Several RBM layers are employed in the course of training; these layers consist of hidden-layer neurons located above the data input layer. While intra-layer connections are absent, neuronal communication between layers is unrestricted. Natural language processing, speech recognition, image recognition, and other fields have all found uses for DBNs. They are well known for their capacity to identify complicated trends in highly dimensional data and have made substantial contributions to deep learning research. To achieve greater efficacy, a modified version of the recently established thermal exchange optimization (dTEO) technique has been used by Wang [91] to carry out the optimization process in a DBN, achieving highly efficient results; significant performance metrics were derived by comparing the segmented outcomes with the ground truth.
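A single RBM layer of the kind stacked in a DBN can be sketched with one step of contrastive divergence (CD-1). The layer sizes, learning rate, and toy binary data below are illustrative, and biases are omitted for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_cd1_step(W, v0, rng, lr=0.1):
    """One CD-1 update for a bias-free RBM with weight matrix W (n_v x n_h)."""
    # Up pass: hidden activation probabilities given the visible data.
    h0 = sigmoid(v0 @ W)
    h_sample = (rng.random(h0.shape) < h0).astype(float)   # stochastic units
    # Down pass: reconstruct the visibles, then re-infer the hiddens.
    v1 = sigmoid(h_sample @ W.T)
    h1 = sigmoid(v1 @ W)
    # Contrastive-divergence gradient: data correlations minus model correlations.
    return W + lr * (v0.T @ h0 - v1.T @ h1) / len(v0)

rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=(16, 6)).astype(float)  # 16 binary "feature vectors"
W = 0.01 * rng.standard_normal((6, 4))              # 6 visible, 4 hidden units
for _ in range(50):
    W = rbm_cd1_step(W, v, rng)
print(W.shape)
```

In a DBN, the hidden activations of one trained RBM become the visible input of the next, so higher layers represent progressively more abstract features.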
Researchers have also classified skin cancer by combining artificial and human intelligence. An ANN and 112 German dermatology specialists classified 300 biopsy-confirmed skin cancer lesions into five categories. Using a boosting approach, the two distinct sets of diagnoses were combined into a single classifier. According to the findings, humans and machines together were correct 82.95% of the time across multiple areas of study. Technology based on deep learning can distinguish between malignant and benign tumors; see [92]. In a standard laboratory setting, the technique was examined with the HAM10000 and ISIC 2018–2020 datasets. The InSiNet framework performs better than the other techniques, with accuracy rates of 95.69%, 92.79%, and 91.64% on the ISIC 2018, 2019, and 2020 datasets, respectively. Researchers have also developed a technique that employs fuzzy k-means clustering and a region-based CNN to detect melanoma in its early stages; see [93]. The study's findings demonstrate that the suggested approach performs well in diagnosing skin cancer, and increasing the number of images analyzed and implementing new ANN models are expected to improve its outcomes further. Table 1 presents the freely and publicly available datasets for skin cancer detection; we refer to [94].

4. CNN in the Detection of Skin Cancer

For digital imaging applications, convolutional neural networks have an edge over fully connected feedforward networks because of weight sharing and the sparse connectivity over the images' pixels. Additionally, a CNN may be adjusted through various learning strategies, including regularization techniques, learning algorithms, and backpropagation. Convolution, nonlinear pooling, and fully connected layers make up a CNN's hidden portion: multiple convolutional layers precede the fully connected layers. The pooling layers reduce the spatial size of the representation produced by the convolution layers, and the pooling output is transmitted into the fully connected layers that follow; see [95,96]. A crucial component of a CNN is the convolutional layer, which holds multiple 2D weight matrices serving various applications, such as image segmentation. Figure 2 shows different examples of melanoma skin cancer types, as presented in the publicly available ISIC dataset.
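The convolution layer described above slides a small weight matrix over the image, reusing the same weights at every position. A minimal "valid" 2-D convolution (strictly, cross-correlation, as in most deep learning frameworks) looks like this:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D cross-correlation of a (H, W) image with a (kh, kw) kernel."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The same kernel weights are reused at every position:
            # this is the weight sharing that gives CNNs their edge.
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.ones((3, 3))
kernel = np.ones((2, 2))
print(conv2d_valid(img, kernel))   # every 2x2 window sums to 4.0
```

Stacking such layers with nonlinearities and pooling yields the hidden portion of the CNN architecture described above.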
In 2020, Zhang et al. [97] presented a research study that elaborates on how the performance of convolutional neural network-oriented skin lesion image recognition is affected when detailed image features and patient data are combined. The study investigates the efficacy of the patient data types employed, the encoding and merging of non-image data with image features, and the impact of these factors on CNN performance. The study's data came from ScienceDirect, PubMed, and Google Scholar; search terms such as skin cancer categorization, CNN, lesion, deep learning, and melanoma were used to filter peer-reviewed papers (see [97]).
In 2020, Fu'adah et al. [98] presented a study in which they used convolutional neural networks to create a system that can distinguish between benign lesions and skin cancer, including nevus lesions, carcinoma, and melanoma. With arbitrarily chosen optimizer settings, the CNN yielded a 97.49% accuracy in this study, which used augmented data from the ISIC dataset. The CNN featuring the Adam optimization algorithm yields the best classification results, with a 99% accuracy rate. This performance suggests that medical professionals can use the model as a tool for diagnosing skin cancer.
In 2019, Hasan et al. [89] presented a study on automated skin lesion recognition in which CNNs were employed to categorize cancer pictures as benign or malignant. Feature extraction approaches were utilized to extract the characteristics of skin cells affected by cancer, and CNNs were then employed to classify the extracted features. Using a publicly available dataset, this method yields a test precision of 89.5% and a training accuracy of 93.7%. Based on its testing and evaluation, the method can be regarded as a standard for the detection of skin lesions.
Using the openly accessible HAM10000 dataset, which covers seven forms of skin cancer, as the training set for the suggested model, Waweru (2020) [99] applied a DCNN to recognize skin cancer; the suggested methodology yielded an adequate precision of 78.0%. Using the unbalanced HAM10000 [3] dataset, the melanoma detection model created by Cakmak and Tenekeci [100] with a BASNet mobile neural network attained an accuracy of 89.20% in 2021; furthermore, 97.90% accuracy in the identification of skin cancer was attained on the same dataset. A prediction paradigm for the classification of human skin diseases was presented by Fujisawa et al. in [101]. VGGNet, LeNet, AlexNet, ResNet, ZFNet, and GoogleNet, recent CNN models with integrated skin lesion feature extractors, are a few examples of models utilized in the clinical setting. In that investigation of a 14-class grouping, correct results were achieved by 42% of dermatology trainees, 60% of board-certified dermatologists, and 75% of CNN classifiers; for the coarser (malignant or benign) categorization, the CNN classifier reaches 92%, board-certified dermatologists 85%, and dermatology trainees 74%.
Yu et al. [102] built a CNN with over 50 layers in 2016 to categorize malignant melanoma; the maximum classification accuracy for this task was 85.5%. Haenssle et al. [103] demonstrated 86.6% specificity and sensitivity in 2018. An ECOC SVM combined with CNNs was used to create a multiclass classifier; the stated accuracy for that survey is 95.1% [104]. Using a hybrid technique, Ragab et al. [105] reported a precise and accurate diagnosis of the melanoma type of skin cancer. A novel system for classifying lesions was implemented, with filters addressing bubbles, gel, hair, and specular reflection; hairs were identified and removed during pre-processing. The skin lesion is separated from the surrounding tissue by an adaptive sigmoid procedure, which assesses the intensity of localized lesions. The error rate was decreased, and the accuracy was enhanced.
In a novel ensembled CNN structure, several CNN algorithms were integrated by Qureshi and Roos [106] in 2022; some were pre-trained, while others were trained only on the provided data, with supplemental information about the image inputs combined by a meta-learner. The suggested method enhances the algorithm's capacity to handle sparse and unbalanced input. They used a dataset of 33,126 images obtained from 2056 patients to illustrate the advantages of the model, and they assessed its performance using the F1-measure (see next section), the area under the ROC curve, and the PR curve, contrasting it with seven other standard approaches, two of which were recent CNN-based methods. Figure 3 explains the various steps of the CNN architecture used to classify and determine skin cancer types. It includes the pre-processing steps of data augmentation and data segmentation: convolutional neural networks require data segmentation in order to allow distinct training and testing phases, avoid overfitting, and precisely assess the model's performance on unseen data. The CNN model is then trained; if the desired accuracy is achieved, the results are saved; otherwise, the hyperparameters and weights are re-adjusted and the process repeated until the best possible results are achieved. Figure 3 thus summarizes the overall structure of the CNN algorithm.

Validation Metrics in ML

A number of metrics will be employed in this study to evaluate models developed using machine learning [106]. The accuracy rate is determined by counting the number of correct classifications a classifier makes across the complete test set. Accuracy is calculated using the equation below:
Acc = (TP + TN) / (TP + TN + FP + FN),
where TP means true positive, TN means true negative, and FP and FN stand for false positive and false negative, respectively.
Precision is defined as the ratio of true positives to the total number of instances predicted as positive; the denominator contains the true positives together with the negatives that were incorrectly recorded as positive (FP). To compute precision, use the following equation:
Prec = TP / (TP + FP).
Recall, sometimes referred to as the true positive rate, measures a model's ability to correctly identify the true positives. Recall is computed as follows:
Rec = TP / (TP + FN).
Specificity measures how many of the actual negative (benign) cases are correctly identified out of all negative cases. It is given as:
Spec = TN / N, where N = TN + FP is the total number of actual negative cases.
The F1 score is the harmonic mean of recall and precision. Because it is based on both the precision and the recall of the evaluation, this measurement indicates how accurate a test is. To compute F1, we use the equation:
F1 = (2 · Prec · Rec) / (Prec + Rec).
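The five metrics above reduce to a few lines of Python; the confusion-matrix counts in the example are made up for illustration:

```python
def metrics(tp, tn, fp, fn):
    """Compute the validation metrics defined above from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)                 # a.k.a. sensitivity / true positive rate
    spec = tn / (tn + fp)                # N = TN + FP actual negative cases
    f1 = 2 * prec * rec / (prec + rec)   # harmonic mean of precision and recall
    return acc, prec, rec, spec, f1

# Illustrative counts: 8 true positives, 8 true negatives, 2 of each error.
print(metrics(tp=8, tn=8, fp=2, fn=2))   # all five metrics come out to 0.8
```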

5. Challenges and Future Scope

While neural networks (NNs), including ANNs and CNNs, are valuable tools for early skin cancer detection, there are several drawbacks to employing these methods. The need for intensive training is one of the biggest obstacles in NN-based human skin lesion recognition: precise training is necessary for the system to perform accurate analysis and interpretation, which takes a long time and requires robust, powerful hardware. For CNNs and ANNs to acquire the key characteristics of human skin lesions and enhance skin cancer detection, strong hardware and graphics processing power are essential, and the lack of such processing power is a significant obstacle. Variation in lesion size is another common challenge: lesions only 1–2 mm in size are typically harder to find and more prone to errors in the early stages of diagnosis. In many cases, it can be very difficult to distinguish melanoma from non-melanoma lesions, or a birthmark from melanoma; the slight inter-class differences make the skin lesion images exceedingly challenging to categorize and analyze. The current datasets for the diagnosis of skin cancer are also highly imbalanced: there may be thousands of photos of common types of skin cancer but very few of the uncommon ones, which makes it exceedingly challenging to draw deductions from the images' visual features.
The lack of age-based image segmentation in typical datasets presents another difficulty. Some skin malignancies, like SCC, BCC, and Merkel cell carcinoma, usually affect persons over the age of 65 to 70, yet the current datasets contain images of skin lesions on younger people. Therefore, neural networks need images of individuals older than 50 to accurately diagnose older adults (Table 1). Neural networks also often have trouble identifying people with darker skin tones, because most of the images in the current standard datasets are of fair-skinned individuals from various regions of the world; neural networks must, therefore, be trained to take skin tone into account. Most current research on skin cancer detection focuses on determining whether a given skin image is malignant or not. Nevertheless, such results are insufficient to respond to patient inquiries about whether a particular skin cancer symptom manifests in any region of the body: the present research has mostly focused on the specific issue of single-image classification. To address patients' concerns, future research on skin lesion recognition may concentrate on incorporating complete body pictures, which would also expedite the image-acquisition process. Auto-organization is a recently developed concept in unsupervised learning that aims to identify features and investigate relationships between the images in a dataset. With auto-organization techniques, experts can obtain a higher level of feature visualization from CNNs. Research on this model is currently underway, and further advances could greatly boost image-processing effectiveness.
  • Non-public databases and images collected from the World Wide Web are employed for research when publicly accessible data are not available. Because the dataset is then unavailable, replicating the results is more challenging.
  • Additionally, most studies have found that lesion scale matters: lesions smaller than 6 mm are very hard to diagnose as melanoma, which considerably reduces diagnostic efficacy.
  • Most methods concentrate on fundamental deep-learning techniques, whereas fusion methods are reported to be more accurate; despite this, fusion methods are not as frequently documented in the literature.
  • Deep learning techniques are typically trained on 70% of the images and tested on the remaining 30%, and results indicate that a high training ratio is required to achieve satisfactory results. Deep learning techniques perform effectively when the ideal balance is achieved; developing hybrid techniques that perform well with lower training ratios remains a difficult task.
  • An annual melanoma diagnosis competition has been organized by the International Skin Imaging Collaboration (ISIC) since 2016, yet one of the limitations of ISIC is the availability of only light-skinned data; moreover, for the images to be featured in the databases, they must have dark hair.
  • For a more accurate diagnosis of skin cancer and to extract the features of an image, an artificial neural network needs a lot of processing capacity and a strong GPU. Because of these heavy processing requirements, it is challenging to develop deep learning algorithms for skin cancer detection.
  • Another significant issue is the cost of training neural networks for skin cancer diagnosis: before the system can effectively analyze and interpret the characteristics of picture data, it must go through a rigorous training process that demands patience and exceptionally powerful hardware.
Future research could focus on several avenues to enhance the identification of skin cancer from dermoscopic pictures by leveraging a Deep Siamese CNN with domain adaptation improved with the Honey Badger (HB) algorithm. These include increasing the diversity of the dataset, incorporating additional modalities like clinical data, improving the interpretability of the model's decision-making, investigating transfer learning and domain-specific adaptation approaches, putting virtual and active learning approaches into practice, enhancing the algorithm for real-time implementation on mobile devices, carrying out validation studies, and assessing the system's effect on patient outcomes. Transfer learning is the process of applying knowledge gained from one task or dataset to another: a model pre-trained on a sizable dataset for a related task is adjusted to the unique properties of the target task or dataset. Transfer learning can enhance model performance by moving learned representations, variables, or characteristics from the source environment to the target area, thereby lowering the quantity of labeled data required in the target domain. Few-shot learning tackles the problem of learning from few labeled samples; it is especially helpful in situations where obtaining large volumes of labeled data is impracticable or expensive, because the model is trained using only a few samples per class. For solving real-world ML problems, both few-shot learning and transfer learning are effective approaches, particularly in fields where labeled data are scarce or prohibitively expensive [106]. The objective of these future paths is to optimize the model's efficacy, applicability, and clinical usefulness in precisely identifying skin cancer.
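The transfer-learning recipe described above (reuse a frozen feature extractor, train only a small head on the scarce target data) can be sketched as follows. A fixed random projection stands in for a pretrained backbone, and the tiny labeled dataset is synthetic; both are illustrative assumptions, not part of any cited method:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pretrained backbone: a frozen feature extractor whose
# weights are NOT updated on the target task.
W_backbone = rng.standard_normal((4, 8))
def extract_features(x):
    return np.tanh(x @ W_backbone)

# Tiny labeled target dataset (few-shot flavour): two separable classes.
x_pos = rng.standard_normal((10, 4)) + 2.0
x_neg = rng.standard_normal((10, 4)) - 2.0
X = np.vstack([x_pos, x_neg])
y = np.array([1.0] * 10 + [0.0] * 10)

# Train only a lightweight logistic-regression head on top of the features.
feats = extract_features(X)
w = np.zeros(8)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))
    w += 0.1 * feats.T @ (y - p) / len(y)   # gradient ascent on log-likelihood

pred = (1.0 / (1.0 + np.exp(-(feats @ w))) > 0.5).astype(float)
print((pred == y).mean())                   # training accuracy of the head
```

Because only the small head is trained, far less labeled target data is needed than for training a full network from scratch, which is the point of the transfer-learning and few-shot approaches discussed above.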

6. Results, Discussion, and Conclusions

This review has examined the application of deep learning methods, particularly ANNs and CNNs, for the early diagnosis of skin cancer. The performance and efficacy of these approaches in skin cancer detection have been verified through an in-depth analysis of numerous research studies. Deep learning methods require intricate, multimodal preprocessing, including cropping and resizing images as well as pixel-level standardization. Table 2 presents results in terms of accuracy, sensitivity, and specificity for different machine learning models, together with the limitations of each model and its novel contribution to the skin cancer detection literature. Figure 4 charts the accuracies obtained in classifying skin cancer by different researchers in recently published articles using different machine learning algorithms. Various CNN-based algorithms for skin cancer detection are summarized in Figure 5, Figure 6, and Table 3: Table 3 compares accuracy, specificity, and sensitivity results obtained by various researchers with multiple CNN models and datasets, achieving quite remarkable results, while Figure 5 and Figure 6 present the CNN algorithms and the respective datasets in chart and sunburst form, covering the Table 3 results graphically. Table 4 reports the most recent contributions to skin cancer detection, obtained with multiple types of datasets and various ML and DL techniques, together with the accuracy, specificity, and sensitivity reported in the 2023–2024 literature. For preprocessing and segmentation, numerous research studies have used specified features. The findings demonstrate that CNNs, and ANNs in general, can effectively identify skin cancer using multiple datasets and hybrid algorithms, suggesting that these technologies can increase the precision of human skin cancer recognition. A discussion of what has been learned from previous works follows.
Table 2. An overview of the use of various algorithms in current skin cancer detection research.

Ref. | Algorithm | Limitations and Novel Contributions | Results
[3] | SVM | CNN models have been implemented in this work, but SVM outperformed all the CNN algorithms, showing the best results in classifying the types of skin cancer. | ACC: 99%, PREC: 0.99, RECALL: 0.99, F1: 0.99
 | ResNet 50 | Implemented CNN models for the detection and classification of skin cancer using the HAM10000 dataset, with Adam as the optimizer. | ACC: 83%, PREC: 0.81, RECALL: 0.83, F1: 0.78
 | MobileNet | (same setup as ResNet 50) | ACC: 72%, PREC: 0.86, RECALL: 0.72, F1: 0.77
[102] | SVM + CNN | DL techniques for lesion categorization and segmentation. Skin color variances can cause it to operate less well than necessary, as previously indicated; transfer learning is encouraged because the sample size is small. | ACC: 92%
[106] | ANN | Pre-processing and the smooth bootstrap technique are employed before data augmentation; features are extracted from a pre-processed image. | ACC: 85.93%, SPEC: 85.89%, SENS: 88.78%
[107] | DL + K-Means Clustering | An automated approach that uses preprocessing to reduce noise and improve visual information, segmenting skin melanoma at an early stage using faster RCNN and FKM clustering based on deep learning. The technique helps dermatologists identify the potentially fatal illness early on, tested with clinical images. | ACC: 95.40%, SPEC: 97.10%, SENS: 90.00%
[108] | RCNN | Utilizing RCNN enhances segmentation efficacy by computing deep features. The given method is not scalable and is complex, which results in overhead costs. | ACC: 94.78%, SPEC: 94.18%, SENS: 97.61%
[109] | GRU/IOPA | Images of skin lesions are pre-processed, the lesion is segmented, and features are extracted on HAM10000, detecting skin cancer with an enhanced orca predation algorithm (IOPA) and gated recurrent unit (GRU) networks. | ACC: 99%, SPEC: 97%, SENS: 95%
[110] | Deep Learning model | Lesion classification and segmentation were carried out with 2000 photos from the ISIC dataset using a multiscale FCRN deep learning network. | ACC: 75.11%, SPEC: 84.44%, SENS: 90.88%
[111] | ResFCN | A new automated technique for segmenting skin lesions that integrates complementary segmentation results through a step-by-step, probability-based technique after identifying the distinct visual characteristics of each category (melanoma versus non-melanoma) with a deep, group-specific learning approach. Because the process is non-scalable and complex, it incurs additional costs. | ACC: 94.29%, SPEC: 93.05%, SENS: 93.77%
[112] | SVM + ANN | The use of SVM and several ANN structures for accuracy, performance, and image categorization of human skin lesions is discussed. When multiple algorithms are compared, SVM (e.g., with a Gaussian kernel) performs better than the others. | ACC: 96.78%, SPEC: 89.29%, SENS: 95.44%
ACC: Accuracy, PREC: Precision, SPEC: Specificity, SENS: Sensitivity.
Figure 4. An illustration of results obtained by machine learning models to diagnose skin cancer. (A) SVM [42], (B) NBC [44], (C) DT [69], (D) KMC [82], (E) KNN [113], (F) EL [114].
Symmetry 16 00366 g004
Figure 5. An illustration of results obtained by CNN models to diagnose skin cancer. (A) Novel Regularizer + CNN [115], (B) InceptionNet V3 + ResNet Ensemble [116], (C) CNN Optimized [117], (D) Fuzzy c-means + Deep RCNN [107], (E) DCNN [118], (F) CNN + LDA [117], (G) DNN + Transfer Learning [119], (H) DenseNet + ICNR [120], (I) VGG16 + GoogleNet [121], (J) VGG16 + AlexNet [122].
Symmetry 16 00366 g005
Table 3. An overview of the use of CNN algorithms in skin cancer detection research.
Ref. | Datasets | Algorithm | Specificity (%) | Accuracy (%) | Sensitivity (%)
[107] | ISBI 2016 | Fuzzy c-means + Deep RCNN | 95.10 | 94.31 | 94.04
[115] | ISIC | Novel Regularizer + CNN | 94.26 | 97.50 | 93.59
[116] | ISBI-2018 | InceptionNet V3 + ResNet Ensemble | 86.31 | 88.97 | 79.58
[117] | Dermis, DermQuest | CNN Optimized | 99.37 | 92.95 | 93.87
[117] | ISBI-2017 | CNN + LDA | 52.67 | 85.15 | 97.38
[118] | PH2 | DCNN | 92.78 | 94.91 | 93.92
[119] | MED Node | DNN + Transfer Learning | 97.20 | 97.37 | 97.52
[120] | ISBI-2017 | DenseNet + ICNR | 93 | 93.43 | NA
[121] | ISBI-2016 | VGG16 + GoogleNet | 70.03 | 88.92 | 93.75
[122] | PH2 | VGG16 + AlexNet | 99.77 | 97.51 | 96.90
Figure 6. A representation of CNN models and the respective datasets used in the above tabular form.
Symmetry 16 00366 g006
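The CNN architectures compared above differ mainly in depth, connectivity, and training strategy, but all rest on the same convolution–activation–pooling building block. A minimal NumPy sketch of one such stage (purely illustrative; none of the cited architectures is reproduced here, and the "lesion patch" is random data):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image with one kernel."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that do not fit."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

# A toy 8x8 "lesion patch" run through one conv/ReLU/pool stage.
rng = np.random.default_rng(0)
patch = rng.random((8, 8))
edge_kernel = np.array([[1., 0., -1.],
                        [2., 0., -2.],
                        [1., 0., -1.]])  # Sobel-like vertical-edge detector
features = max_pool(relu(conv2d(patch, edge_kernel)))
print(features.shape)  # (3, 3)
```

Stacking many such stages, with kernels learned from data rather than fixed, is what lets the deep models in Table 3 extract lesion borders and textures automatically.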
Table 4. An overview of the use of DL and ML algorithms in skin cancer detection.
Ref. | Datasets | Algorithm | Specificity (%) | Accuracy (%) | Sensitivity (%)
[28] | ISIC 2019 | ResNet 50 + Sand Cat Swarm Optimization | 93.47 | 92.03 | 92.56
[38] | ISIC 2018 | Ensemble Learning of ML and DL | 92.3 | 93 | 94
[123] | HAM10000 | Random Forest DNN | 97.59 | 96.80 | 66.11
[124] | ISIC 2020 | Contextual Image Feature Fusion (CIFF-Net) | 96.8 | 98.3 | 40.1
[125] | ISIC 2020 | Teacher–Student | 96.21 | 95.23 | 1.03
[126] | ISIC 2018 | Lightweight U Architecture (LeaNet) | 96.2 | 93.5 | 89.9
[127] | ISIC 2017 | ResU-Net | 94.49 | 92.38 | 87.31
[128] | HAM10000 | Deep Convolutional Ensemble Net (DCEN) | 84.79 | 99.53 | 98.58
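Several of the entries above ([38], [128]) combine multiple classifiers by ensemble learning. The core idea, soft voting, is simple: average the class probabilities predicted by the individual models and take the majority class, so that uncorrelated errors tend to cancel. A self-contained sketch with hypothetical probability outputs (the model names and numbers are invented for illustration):

```python
import numpy as np

# Hypothetical per-model class probabilities (rows: 4 lesions; cols: benign, malignant)
# from three independently trained classifiers, e.g. an SVM, a CNN, and a random forest.
p_svm = np.array([[0.9, 0.1], [0.4, 0.6], [0.7, 0.3], [0.2, 0.8]])
p_cnn = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4], [0.1, 0.9]])
p_rf  = np.array([[0.7, 0.3], [0.5, 0.5], [0.8, 0.2], [0.3, 0.7]])

def soft_vote(*probs):
    """Average the predicted probabilities and take the argmax class."""
    return np.mean(probs, axis=0).argmax(axis=1)

labels = soft_vote(p_svm, p_cnn, p_rf)
print(labels)  # [0 1 0 1] -> benign, malignant, benign, malignant
```

Weighted variants give more reliable models a larger say; the cited studies tune such weights (or train a meta-learner) on validation data.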
The authors observed that CNN generally outperformed ANN and the other algorithms because it recognizes visual information with greater precision than the other neural networks. Besides stressing the importance of a computerized approach to human skin cancer diagnosis that minimizes the time and effort required, the review underscores the novelty of applying deep learning methodologies to skin cancer recognition. One potential application of this study is the development of more effective and precise skin cancer detection systems, which could lead to earlier detection and improved treatment. By weighing the drawbacks and advantages of earlier studies, this survey aims to serve as a standard for future research on skin cancer diagnosis.
In summary, several prerequisites, dependency issues, and challenges must be resolved before artificial intelligence (AI)-based medical treatments can be scaled up. A dearth of medical data for various skin types, legal and ethical difficulties, and the opacity of AI systems all contribute to unexpected bias in the predictions provided by a model. Moreover, even though AI is gaining popularity in dermatology, there remains much room for improvement in the specificity, sensitivity, and precision of skin lesion identification. Dermatologists also need to be among the first to embrace AI, not as a threat to their profession but as a supplementary tool to support their diagnoses. Although the scant available data suggest an even split between practitioners eager to adopt AI-assisted treatment and those averse to it, there is a clear opportunity to improve the autonomous detection of skin cancer. Scientists and medical professionals can use the thorough analysis and assessment of recent research presented here as a valuable resource for developing and deploying more successful skin lesion diagnostics.
This article presents a thorough analysis of the scientific literature on techniques for skin cancer detection.

Author Contributions

Conceptualization, S.I.H. and E.T.; methodology, S.I.H.; formal analysis, E.T.; investigation, S.I.H.; writing—original draft preparation, S.I.H.; writing—review and editing, E.T.; visualization, S.I.H.; supervision, E.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data can be made available by authors upon request.

Acknowledgments

The authors thank the Reviewers and Editor for the constructive comments that helped in improving the quality of the manuscript. The second author is supported by the research fund of University of Palermo: “FFR 2024 Elena Toscano”. The second author is a member of the Gruppo Nazionale Calcolo Scientifico-Istituto Nazionale di Alta Matematica (GNCS-INdAM).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shah, A.; Shah, M.; Pandya, A.; Sushra, R.; Sushra, R.; Mehta, M.; Patel, K.; Patel, K. A Comprehensive Study on Skin Cancer Detection using Artificial Neural Network (ANN) and Convolutional Neural Network (CNN). Clin. eHealth 2023, 6, 76–84. [Google Scholar] [CrossRef]
  2. Narmatha, P.; Gupta, S.; Lakshmi, T.V.; Manikavelan, D. Skin cancer detection from dermoscopic images using Deep Siamese domain adaptation convolutional Neural Network optimized with Honey Badger Algorithm. Biomed. Signal Process. Control 2023, 86, 105264. [Google Scholar] [CrossRef]
  3. Mampitiya, L.I.; Rathnayake, N.; De Silva, S. Efficient and low-cost skin cancer detection system implementation with a comparative study between traditional and CNN-based models. J. Comput. Cogn. Eng. 2023, 2, 226–235. [Google Scholar] [CrossRef]
  4. Murugan, A.; Nair, S.A.H.; Preethi, A.A.P.; Kumar, K.S. Diagnosis of skin cancer using machine learning techniques. Microprocess. Microsyst. 2021, 81, 103727. [Google Scholar] [CrossRef]
  5. Tabrizchi, H.; Parvizpour, S.; Razmara, J. An improved VGG model for skin cancer detection. Neural Process. Lett. 2023, 55, 3715–3732. [Google Scholar] [CrossRef]
  6. Ali, M.S.; Miah, M.S.; Haque, J.; Rahman, M.M.; Islam, M.K. An enhanced technique of skin cancer classification using deep convolutional neural network with transfer learning models. Mach. Learn. Appl. 2021, 5, 100036. [Google Scholar] [CrossRef]
  7. Ahmad, I.; Ilyas, H.; Hussain, S.I.; Raja, M.A.Z. Evolutionary Techniques for the Solution of Bio-Heat Equation Arising in Human Dermal Region Model. Arab. J. Sci. Eng. 2023, 49, 3109–3134. [Google Scholar] [CrossRef]
  8. Tahir, M.; Naeem, A.; Malik, H.; Tanveer, J.; Naqvi, R.A.; Lee, S.W. DSCC_Net: Multi-Classification Deep Learning Models for Diagnosing of Skin Cancer Using Dermoscopic Images. Cancers 2023, 15, 2179. [Google Scholar] [CrossRef]
  9. Arshed, M.A.; Mumtaz, S.; Ibrahim, M.; Ahmed, S.; Tahir, M.; Shafi, M. Multi-Class Skin Cancer Classification Using Vision Transformer Networks and Convolutional Neural Network-Based Pre-Trained Models. Information 2023, 14, 415. [Google Scholar] [CrossRef]
  10. Veeramani, N.; Jayaraman, P.; Krishankumar, R.; Ravichandran, K.S.; Gandomi, A.H. DDCNN-F: Double decker convolutional neural network’F’feature fusion as a medical image classification framework. Sci. Rep. 2024, 14, 676. [Google Scholar] [CrossRef]
  11. Shetty, B.; Fernandes, R.; Rodrigues, A.P.; Chengoden, R.; Bhattacharya, S.; Lakshmanna, K. Skin lesion classification of dermoscopic images using machine learning and convolutional neural network. Sci. Rep. 2022, 12, 18134. [Google Scholar] [CrossRef] [PubMed]
  12. Giansanti, D. Advancing Dermatological Care: A Comprehensive Narrative Review of Tele-Dermatology and mHealth for Bridging Gaps and Expanding Opportunities beyond the COVID-19 Pandemic. Healthcare 2023, 11, 1911. [Google Scholar] [CrossRef] [PubMed]
  13. Lai, W.; Kuang, M.; Wang, X.; Ghafariasl, P.; Hosein Sabzalian, M.; Lee, S. Skin cancer diagnosis (SCD) using Artificial Neural Network (ANN) and Improved Gray Wolf Optimization (IGWO). Nat. Sci. Rep. 2022, 13, 19377. [Google Scholar] [CrossRef] [PubMed]
  14. Nassir, J.; Alasabi, M.; Qaisar, S.M.; Khan, M. Epileptic Seizure Detection Using the EEG Signal Empirical Mode Decomposition and Machine Learning. In Proceedings of the 2023 International Conference on Smart Computing and Application (ICSCA), Hail, Saudi Arabia, 5–6 February 2023; pp. 1–6. [Google Scholar]
  15. Khan, S.I.; Qaisar, S.M.; López, A.; Nisar, H.; Ferrero, F. EEG Signal based Schizophrenia Recognition by using VMD Rose Spiral Curve Butterfly Optimization and Machine Learning. In Proceedings of the 2023 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Kuala Lumpur, Malaysia, 22–25 May 2023; pp. 1–6. [Google Scholar]
  16. Pietkiewicz, P.; Giedziun, P.; Idziak, J.; Todorovska, V.; Lewandowicz, M.; Lallas, A. Diagnostic accuracy of hyperpigmented microcircles in dermatoscopy of non-facial non-acral melanomas: A Pilot Retrospective Study using a Public Image Database. Dermatology 2023, 239, 976–987. [Google Scholar] [CrossRef] [PubMed]
  17. Khristoforova, Y.; Bratchenko, I.; Bratchenko, L.; Moryatov, A.; Kozlov, S.; Kaganov, O.; Zakharov, V. Combination of Optical Biopsy with Patient Data for Improvement of Skin Tumor Identification. Diagnostics 2022, 12, 2503. [Google Scholar] [CrossRef]
  18. Greenwood, J.D.; Merry, S.P.; Boswell, C.L. Skin biopsy techniques. Prim. Care Clin. Off. Pract. 2022, 49, 2503. [Google Scholar] [CrossRef]
  19. Acar, D.D.; Witkowski, W.; Wejda, M.; Wei, R.; Desmet, T.; Schepens, B.; De Cae, S.; Sedeyn, K.; Eeckhaut, H.; Fijalkowska, D.; et al. Integrating artificial intelligence-based epitope prediction in a SARS-CoV-2 antibody discovery pipeline: Caution is warranted. eBioMedicine 2024, 100, 104960. [Google Scholar] [CrossRef]
  20. Bhatt, H.; Shah, V.; Shah, K.; Shah, R.; Shah, M. State-of-the-art machine learning techniques for melanoma skin cancer detection and classification: A comprehensive review. Intell. Med. 2023, 3, 180–190. [Google Scholar] [CrossRef]
  21. Jones, O.T.; Matin, R.N.; van der Schaar, M.; Bhayankaram, K.P.; Ranmuthu, C.K.I.; Islam, M.S.; Behiyat, D.; Boscott, R.; Calanzani, N.; Emery, J.; et al. Artificial intelligence and machine learning algorithms for early detection of skin cancer in community and primary care settings: A systematic review. Lancet Digit. Health 2022, 4, e466–e476. [Google Scholar] [CrossRef]
  22. Imran, A.; Nasir, A.; Bilal, M.; Sun, G.; Alzahrani, A.; Almuhaimeed, A. Skin cancer detection using combined decision of deep learners. IEEE Access 2022, 10, 118198–118212. [Google Scholar] [CrossRef]
  23. Murugan, A.; Nair, S.A.H.; Kumar, K.S. Detection of skin cancer using SVM, random forest and kNN classifiers. J. Med. Syst. 2019, 43, 269. [Google Scholar] [CrossRef] [PubMed]
  24. Kalpana, B.; Reshmy, A.K.; Pandi, S.S.; Dhanasekaran, S. OESV-KRF: Optimal ensemble support vector kernel random forest based early detection and classification of skin diseases. Biomed. Signal Process. Control 2023, 85, 104779. [Google Scholar] [CrossRef]
  25. Juan, C.K.; Su, Y.H.; Wu, C.Y.; Yang, C.S.; Hsu, C.H.; Hung, C.L.; Chen, Y.J. Deep convolutional neural network with fusion strategy for skin cancer recognition: Model development and validation. Sci. Rep. 2023, 13, 17087. [Google Scholar] [CrossRef] [PubMed]
  26. Boadh, R.; Yadav, A.; Kumar, A.; Rajoria, Y.K. Diagnosis of Skin Cancer by Using Fuzzy-Ann Expert System with Unification of Improved Gini Index Random Forest-Based Feature. J. Pharm. Negat. Results 2023, 14, 1445–1451. [Google Scholar]
  27. Alshawi, S.A.; Musawi, G.F.K.A. Skin cancer image detection and classification by CNN based ensemble learning. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 710–717. [Google Scholar] [CrossRef]
  28. Akilandasowmya, G.; Nirmaladevi, G.; Suganthi, S.U.; Aishwariya, A. Skin cancer diagnosis: Leveraging deep hidden features and ensemble classifiers for early detection and classification. Biomed. Signal Process. Control. 2023, 88, 105306. [Google Scholar] [CrossRef]
  29. Zhu, A.Q.; Wang, Q.; Shi, Y.L.; Ren, W.W.; Cao, X.; Ren, T.T.; Wang, J.; Zhang, Y.Q.; Sun, Y.K.; Chen, X.W.; et al. A deep learning fusion network trained with clinical and high-frequency ultrasound images in the multi-classification of skin diseases in comparison with dermatologists: A prospective and multicenter study. eClinicalMedicine 2024, 67, 102391. [Google Scholar] [CrossRef]
  30. Kumar, T.K.; Himanshu, I.N. Artificial Intelligence Based Real-Time Skin Cancer Detection. In Proceedings of the 2023 15th International Conference on Computer and Automation Engineering (ICCAE), Sydney, Australia, 3–5 March 2023; pp. 215–219. [Google Scholar]
  31. Balaji, P.; Hung, B.T.; Chakrabarti, P.; Chakrabarti, T.; Elngar, A.A.; Aluvalu, R. A novel artificial intelligence-based predictive analytics technique to detect skin cancer. PeerJ Comput. Sci. 2023, 9, e1387. [Google Scholar] [CrossRef]
  32. Singh, S.K.; Abolghasemi, V.; Anisi, M.H. Fuzzy Logic with Deep Learning for Detection of Skin Cancer. Appl. Sci. 2023, 13, 8927. [Google Scholar] [CrossRef]
  33. Melarkode, N.; Srinivasan, K.; Qaisar, S.M.; Plawiak, P. AI-Powered Diagnosis of Skin Cancer: A Contemporary Review, Open Challenges and Future Research Directions. Cancers 2023, 15, 1183. [Google Scholar] [CrossRef]
  34. Nagaraj, P.; Saijagadeeshkumar, V.; Kumar, G.P.; Yerriswamyreddy, K.; Krishna, K.J. Skin Cancer Detection and Control Techniques Using Hybrid Deep Learning Techniques. In Proceedings of the 2023 3rd International Conference on Pervasive Computing and Social Networking (ICPCSN), Salem, India, 19–20 June 2023; pp. 442–446. [Google Scholar]
  35. Alhasani, A.T.; Alkattan, H.; Subhi, A.A.; El-Kenawy, E.S.M.; Eid, M.M. A comparative analysis of methods for detecting and diagnosing breast cancer based on data mining. Methods 2023, 7, 8–17. [Google Scholar]
  36. Chadaga, K.; Prabhu, S.; Sampathila, N.; Nireshwalya, S.; Katta, S.S.; Tan, R.S.; Acharya, U.R. Application of artificial intelligence techniques for monkeypox: A systematic review. Diagnostics 2023, 13, 824. [Google Scholar] [CrossRef] [PubMed]
  37. Keerthana, D.; Venugopal, V.; Nath, M.K.; Mishra, M. Hybrid convolutional neural networks with SVM classifier for classification of skin cancer. Biomed. Eng. Adv. 2023, 5, 100069. [Google Scholar] [CrossRef]
  38. Tembhurne, J.V.; Hebbar, N.; Patil, H.Y.; Diwan, T. Skin cancer detection using ensemble of machine learning and deep learning techniques. Multimed. Tools Appl. 2023, 82, 27501–27524. [Google Scholar] [CrossRef]
  39. Ul Huda, N.; Amin, R.; Gillani, S.I.; Hussain, M.; Ahmed, A.; Aldabbas, H. Skin Cancer Malignancy Classification and Segmentation Using Machine Learning Algorithms. JOM 2023, 75, 3121–3135. [Google Scholar] [CrossRef]
  40. Ganesh Babu, T.R. An efficient skin cancer diagnostic system using Bendlet Transform and support vector machine. An. Acad. Bras. Ciênc. 2020, 92, e20190554. [Google Scholar]
  41. Melbin, K.; Raj, Y.J.V. Integration of modified ABCD features and support vector machine for skin lesion types classification. Multimed. Tools Appl. 2021, 80, 8909–8929. [Google Scholar] [CrossRef]
  42. Alsaeed, A.A.D. On the development of a skin cancer computer aided diagnosis system using support vector machine. Biosci. Biotechnol. Res. Commun. 2019, 12, 297–308. [Google Scholar]
  43. Alwan, O.F. Skin cancer images classification using naïve bayes. Emergent J. Educ. Discov. Lifelong Learn. 2022, 3, 19–29. [Google Scholar]
  44. Balaji, V.R.; Suganthi, S.T.; Rajadevi, R.; Kumar, V.K.; Balaji, B.S.; Pandiyan, S. Skin disease detection and segmentation using dynamic graph cut algorithm and classification through Naive Bayes classifier. Measurement 2020, 163, 107922. [Google Scholar] [CrossRef]
  45. Mobiny, A.; Singh, A.; Van Nguyen, H. Risk-Aware Machine Learning Classifier for Skin Lesion Diagnosis. J. Clin. Med. 2019, 8, 1241. [Google Scholar] [CrossRef]
  46. Sutradhar, R.; Barbera, L. Comparing an artificial neural network to logistic regression for predicting ED visit risk among patients with cancer: A population-based cohort study. J. Pain Symptom Manag. 2020, 60, 1–9. [Google Scholar] [CrossRef] [PubMed]
  47. Browning, A.P.; Haridas, P.; Simpson, M.J. A Bayesian sequential learning framework to parameterise continuum models of melanoma invasion into human skin. Bull. Math. Biol. 2019, 81, 676–698. [Google Scholar] [CrossRef] [PubMed]
  48. Xu, Z.; Sheykhahmad, F.R.; Ghadimi, N.; Razmjooy, N. Computer-aided diagnosis of skin cancer based on soft computing techniques. Open Med. 2020, 15, 860–871. [Google Scholar] [CrossRef] [PubMed]
  49. Razmjooy, N.; Ashourian, M.; Karimifard, M.; Estrela, V.V.; Loschi, H.J.; Do Nascimento, D.; França, R.P.; Vishnevski, M. Computer-aided diagnosis of skin cancer: A review. Curr. Med. Imaging 2020, 16, 781–793. [Google Scholar] [CrossRef]
  50. Alquran, H.; Qasmieh, I.A.; Alqudah, A.M.; Alhammouri, S.; Alawneh, E.; Abughazaleh, A.; Hasayen, F. The melanoma skin cancer detection and classification using support vector machine. In Proceedings of the 2017 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT), Amman, Jordan, 11–13 October 2017; pp. 1–5. [Google Scholar]
  51. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Skin cancer classification using deep learning and transfer learning. In Proceedings of the 2018 9th Cairo International Biomedical Engineering Conference (CIBEC), Cairo, Egypt, 20–22 December 2018; pp. 90–93. [Google Scholar]
  52. Monika, M.K.; Vignesh, N.A.; Kumari, C.U.; Kumar, M.N.V.S.S.; Lydia, E.L. Skin cancer detection and classification using machine learning. Mater. Today Proc. 2020, 33, 4266–4270. [Google Scholar] [CrossRef]
  53. Kumar, M.; Alshehri, M.; AlGhamdi, R.; Sharma, P.; Deep, V. A DE-ANN inspired skin cancer detection approach using fuzzy c-means clustering. Mob. Netw. Appl. 2020, 25, 1319–1329. [Google Scholar] [CrossRef]
  54. Tang, G.; Xie, Y.; Li, K.; Liang, R.; Zhao, L. Multimodal emotion recognition from facial expression and speech based on feature fusion. Multimed. Tools Appl. 2023, 82, 16359–16373. [Google Scholar] [CrossRef]
  55. Zafar, M.; Sharif, M.I.; Sharif, M.I.; Kadry, S.; Bukhari, S.A.C.; Rauf, H.T. Skin lesion analysis and cancer detection based on machine/deep learning techniques: A comprehensive survey. Life 2023, 13, 146. [Google Scholar] [CrossRef]
  56. Chilamkurthy, S.; Ghosh, R.; Tanamala, S.; Biviji, M.; Campeau, N.G.; Venugopal, V.K.; Mahajan, V.; Rao, P.; Warier, P. Deep learning algorithms for detection of critical findings in head CT scans: A retrospective study. Lancet 2018, 392, 2388–2396. [Google Scholar] [CrossRef]
  57. Manna, A.; Kundu, R.; Kaplun, D.; Sinitca, A.; Sarkar, R. A fuzzy rank-based ensemble of CNN models for classification of cervical cytology. Sci. Rep. 2021, 11, 14538. [Google Scholar] [CrossRef] [PubMed]
  58. Liu, Y.; Zhang, L.; Hao, Z.; Yang, Z.; Wang, S.; Zhou, X.; Chang, Q. An xception model based on residual attention mechanism for the classification of benign and malignant gastric ulcers. Sci. Rep. 2022, 12, 15365. [Google Scholar] [CrossRef] [PubMed]
  59. Bozkurt, A.; Gale, T.; Kose, K.; Alessi-Fox, C.; Brooks, D.H.; Rajadhyaksha, M.; Dy, J. Delineation of skin strata in reflectance confocal microscopy images with recurrent convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 25–33. [Google Scholar]
  60. Chen, W.; Feng, J.; Lu, J.; Zhou, J. Endo3D: Online workflow analysis for endoscopic surgeries based on 3D CNN and LSTM. In OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis: First International Workshop, OR 2.0 2018, 5th International Workshop, CARE 2018, 7th International Workshop, CLIP 2018, Third International Workshop, ISIC 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 16 September and 20 September 2018; Proceedings 5; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 97–107. [Google Scholar]
  61. Attia, M.; Hossny, M.; Nahavandi, S.; Yazdabadi, A. Skin melanoma segmentation using recurrent and convolutional neural networks. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, 18–21 April 2017; pp. 292–296. [Google Scholar]
  62. Alom, M.Z. Improved Deep Convolutional Neural Networks (DCNN) Approaches for Computer Vision and Bio-Medical Imaging. Ph.D. Thesis, University of Dayton, Dayton, OH, USA, 2018. [Google Scholar]
  63. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Brief. Bioinform. 2018, 19, 1236–1246. [Google Scholar] [CrossRef] [PubMed]
  64. Elashiri, M.A.; Rajesh, A.; Pandey, S.N.; Shukla, S.K.; Urooj, S. Ensemble of weighted deep concatenated features for the skin disease classification model using modified long short term memory. Biomed. Signal Process. Control 2022, 76, 103729. [Google Scholar] [CrossRef]
  65. Victor, A.; Ghalib, M.R. Automatic Detection and Classification of Skin Cancer. Int. J. Intell. Eng. Syst. 2017, 10, 444–451. [Google Scholar] [CrossRef]
  66. Pham, T.C.; Tran, G.S.; Nghiem, T.P.; Doucet, A.; Luong, C.M.; Hoang, V.D. A comparative study for classification of skin cancer. In Proceedings of the 2019 International Conference on System Science and Engineering (ICSSE), Dong Hoi, Vietnam, 20–21 July 2019; pp. 267–272. [Google Scholar]
  67. Saba, T.; Khan, M.A.; Rehman, A.; Marie-Sainte, S.L. Region extraction and classification of skin cancer: A heterogeneous framework of deep CNN features fusion and reduction. J. Med. Syst. 2019, 43, 289. [Google Scholar] [CrossRef] [PubMed]
  68. Ghiasi, M.M.; Zendehboudi, S. Application of decision tree-based ensemble learning in the classification of breast cancer. Comput. Biol. Med. 2021, 128, 104089. [Google Scholar] [CrossRef] [PubMed]
  69. Tanaka, T.; Voigt, M.D. Decision tree analysis to stratify risk of de novo non-melanoma skin cancer following liver transplantation. J. Cancer Res. Clin. Oncol. 2018, 144, 607–615. [Google Scholar] [CrossRef]
  70. Sun, J.; Huang, Y. Computer aided intelligent medical system and nursing of breast surgery infection. Microprocess. Microsyst. 2021, 81, 103769. [Google Scholar] [CrossRef]
  71. Quinn, P.L.; Oliver, J.B.; Mahmoud, O.M.; Chokshi, R.J. Cost-effectiveness of sentinel lymph node biopsy for head and neck cutaneous squamous cell carcinoma. J. Surg. Res. 2019, 241, 15–23. [Google Scholar] [CrossRef]
  72. Chin, C.K.; Binti Awang Mat, D.A.; Saleh, A.Y. Skin Cancer Classification using Convolutional Neural Network with Autoregressive Integrated Moving Average. In Proceedings of the 2021 4th International Conference on Robot Systems and Applications, Chengdu, China, 9–11 April 2021; pp. 18–22. [Google Scholar]
  73. Kumar, N.; Kumari, P.; Ranjan, P.; Vaish, A. ARIMA model based breast cancer detection and classification through image processing. In Proceedings of the 2014 Students Conference on Engineering and Systems, Allahabad, India, 29 May 2014; pp. 1–5. [Google Scholar]
  74. Verma, N.; Kaur, H. Traffic Analysis and Prediction System by the Use of Modified Arima Model. Int. J. Adv. Res. Comput. Sci. 2017, 8, 58–63. [Google Scholar] [CrossRef]
  75. Goyal, M.; Knackstedt, T.; Yan, S.; Hassanpour, S. Artificial intelligence-based image classification methods for diagnosis of skin cancer: Challenges and opportunities. Comput. Biol. Med. 2020, 127, 104065. [Google Scholar] [CrossRef] [PubMed]
  76. Li, H.; Pan, Y.; Zhao, J.; Zhang, L. Skin disease diagnosis with deep learning: A review. Neurocomputing 2021, 464, 364–393. [Google Scholar] [CrossRef]
  77. Alphonse, A.S.; Starvin, M.S. A novel and efficient approach for the classification of skin melanoma. J. Ambient Intell. Humaniz. Comput. 2021, 12, 10435–10459. [Google Scholar] [CrossRef]
  78. Kabari, L.G.; Bakpo, F.S. Diagnosing skin diseases using an artificial neural network. In Proceedings of the 2009 2nd International Conference on Adaptive Science & Technology (ICAST), Accra, Ghana, 14–16 December 2009; pp. 187–191. [Google Scholar]
  79. Chakraborty, S.; Mali, K.; Chatterjee, S.; Banerjee, S.; Mazumdar, K.G.; Debnath, M.; Basu, P.; Bose, S.; Roy, K. Detection of skin disease using metaheuristic supported artificial neural networks. In Proceedings of the 2017 8th Annual Industrial Automation and Electromechanical Engineering Conference (IEMECON), Bangkok, Thailand, 16–18 August 2017; pp. 224–229. [Google Scholar]
  80. Hameed, N.; Shabut, A.M.; Hossain, M.A. Multi-class skin diseases classification using deep convolutional neural network and support vector machine. In Proceedings of the 2018 12th International Conference on Software, Knowledge, Information Management & Applications (SKIMA), Phnom Penh, Cambodia, 3–5 December 2018; pp. 1–7. [Google Scholar]
  81. Maduranga, M.W.P.; Nandasena, D. Mobile-based skin disease diagnosis system using convolutional neural networks (CNN). Int. J. Image Graph. Signal Process. 2022, 3, 47–57. [Google Scholar] [CrossRef]
  82. Nawaz, M.; Mehmood, Z.; Nazir, T.; Naqvi, R.A.; Rehman, A.; Iqbal, M.; Saba, T. Skin cancer detection from dermoscopic images using deep learning and fuzzy k-means clustering. Microsc. Res. Tech. 2022, 85, 339–351. [Google Scholar] [CrossRef] [PubMed]
  83. Alani, S.; Zakaria, Z.; Ahmad, A. Miniaturized UWB elliptical patch antenna for skin cancer diagnosis imaging. Int. J. Electr. Comput. Eng. 2020, 10, 1422–1429. [Google Scholar] [CrossRef]
  84. Aladhadh, S.; Alsanea, M.; Aloraini, M.; Khan, T.; Habib, S.; Islam, M. An effective skin cancer classification mechanism via medical vision transformer. Sensors 2022, 22, 4008. [Google Scholar] [CrossRef] [PubMed]
  85. Vijayakumar, G.; Manghat, S.; Vijayakumar, R.; Simon, L.; Scaria, L.M.; Vijayakumar, A.; Sreehari, G.K.; Kutty, V.R.; Rachana, A.; Jaleel, A. Incidence of type 2 diabetes mellitus and prediabetes in Kerala, India: Results from a 10-year prospective cohort. BMC Public Health 2019, 19, 140. [Google Scholar] [CrossRef]
  86. Zhang, D.; Li, A.; Wu, W.; Yu, L.; Kang, X.; Huo, X. CR-Conformer: A fusion network for clinical skin lesion classification. Med. Biol. Eng. Comput. 2024, 62, 85–94. [Google Scholar] [CrossRef]
  87. Hao, J.; Tan, C.; Yang, Q.; Cheng, J.; Ji, G. Leveraging Data Correlations for Skin Lesion Classification. In Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision (PRCV), Shenzhen, China, 14–17 October 2023; pp. 61–72. [Google Scholar]
  88. Li, Z.; Zhao, C.; Han, Z.; Hong, C. TUNet and domain adaptation based learning for joint optic disc and cup segmentation. Comput. Biol. Med. 2023, 163, 107209. [Google Scholar] [CrossRef] [PubMed]
  89. Hasan, M.; Barman, S.D.; Islam, S.; Reza, A.W. Skin cancer detection using convolutional neural network. In Proceedings of the 2019 5th International Conference on Computing and Artificial Intelligence, Bali, Indonesia, 17–20 April 2019; pp. 254–258. [Google Scholar]
  90. Yang, Y. Medical multimedia big data analysis modeling based on DBN algorithm. IEEE Access 2020, 8, 16350–16361. [Google Scholar] [CrossRef]
  91. Wang, S.; Hamian, M. Skin cancer detection based on extreme learning machine and a developed version of thermal exchange optimization. Comput. Intell. Neurosci. 2021, 2021, 9528664. [Google Scholar] [CrossRef] [PubMed]
  92. Naeem, A.; Farooq, M.S.; Khelifi, A.; Abid, A. Malignant melanoma classification using deep learning: Datasets, performance measurements, challenges and opportunities. IEEE Access 2020, 8, 110575–110597. [Google Scholar] [CrossRef]
  93. Haggenmüller, S.; Maron, R.C.; Hekler, A.; Utikal, J.S.; Barata, C.; Barnhill, R.L.; Beltraminelli, H.; Berking, C.; Betz-Stablein, B.; Blum, A.; et al. Skin cancer classification via convolutional neural networks: Systematic review of studies involving human experts. Eur. J. Cancer 2021, 156, 202–216. [Google Scholar] [CrossRef] [PubMed]
  94. Mazhar, T.; Haq, I.; Ditta, A.; Mohsan, S.A.H.; Rehman, F.; Zafar, I.; Gansau, J.A.; Goh, L.P.W. The role of machine learning and deep learning approaches for the detection of skin cancer. Healthcare 2023, 11, 415. [Google Scholar] [CrossRef] [PubMed]
  95. Chaturvedi, S.S.; Tembhurne, J.V.; Diwan, T. A multi-class skin Cancer classification using deep convolutional neural networks. Multimed. Tools Appl. 2020, 79, 28477–28498. [Google Scholar] [CrossRef]
  96. Subramanian, R.R.; Achuth, D.; Kumar, P.S.; Kumar Reddy, K.N.; Amara, S.; Chowdary, A.S. Skin cancer classification using Convolutional neural networks. In Proceedings of the 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 28–29 January 2021; pp. 13–19. [Google Scholar]
  97. Zhang, N.; Cai, Y.X.; Wang, Y.Y.; Tian, Y.T.; Wang, X.L.; Badami, B. Skin cancer diagnosis based on optimized convolutional neural network. Artif. Intell. Med. 2020, 102, 101756. [Google Scholar] [CrossRef]
  98. Fu’adah, Y.N.; Pratiwi, N.C.; Pramudito, M.A.; Ibrahim, N. Convolutional neural network (CNN) for automatic skin cancer classification system. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Chennai, India, 16–17 September 2020; Volume 982, p. 012005. [Google Scholar]
  99. Waweru, N. Business ethics disclosure and corporate governance in Sub-Saharan Africa (SSA). Int. J. Account. Inf. Manag. 2020, 28, 363–387. [Google Scholar] [CrossRef]
  100. Çakmak, M.; Tenekecı, M.E. Melanoma detection from dermoscopy images using Nasnet Mobile with Transfer Learning. In Proceedings of the 2021 29th Signal Processing and Communications Applications Conference (SIU), Istanbul, Turkey, 9–11 June 2021; pp. 1–4. [Google Scholar]
Figure 1. Subdivision of supervised and unsupervised ML algorithms.
Figure 2. Melanoma skin cancer images from ISIC dataset.
Figure 3. Flowchart for skin cancer detection by CNN.
Table 1. List of openly available datasets for the detection of skin cancer.

| Year      | Name      | Testing | Training |
|-----------|-----------|---------|----------|
| 2016      | ISBI      | 900     | 273      |
| 2013      | PH2       | 200     | 40       |
| 2017      | ISBI      | 2000    | 374      |
| 2000      | Dermis    | 397     | 146      |
| 2018      | ISBI      | 10,000  | 1113     |
| 2021      | MED NODE  | 170     | 100      |
| 2019      | ISBI      | 25,333  | 4522     |
| 2003      | Dermofit  | 1300    | 76       |
| 2016–2020 | ISIC      | 23,906  | 21,659   |
| 2016      | TUPAC     | 32      | 1500     |
| 2018      | HAM10000  |         | 10,015   |
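As an illustration only (each dataset above ships with its own official partition, which should be preferred for comparability), the following sketch shows how a held-out test split of the sizes listed in Table 1 could be produced; the file names and the `split_dataset` helper are hypothetical:

```python
import random

def split_dataset(image_ids, n_test, seed=42):
    """Randomly hold out n_test samples for testing; the rest are used for training."""
    rng = random.Random(seed)  # fixed seed for a reproducible partition
    ids = list(image_ids)
    rng.shuffle(ids)
    return ids[n_test:], ids[:n_test]  # (training, testing)

# Example mirroring the ISBI 2016 row of Table 1: 273 training, 900 testing images
all_ids = [f"img_{i:05d}.jpg" for i in range(273 + 900)]
train_ids, test_ids = split_dataset(all_ids, n_test=900)
print(len(train_ids), len(test_ids))  # -> 273 900
```

Fixing the random seed keeps the partition reproducible across runs, which matters when comparing classifiers trained on the same dataset.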
