Article

SkinNet-INIO: Multiclass Skin Lesion Localization and Classification Using Fusion-Assisted Deep Neural Networks and Improved Nature-Inspired Optimization Algorithm

1 Department of CS, HITEC University, Taxila 47080, Pakistan
2 Department of Computer Science and Mathematics, Lebanese American University, Beirut 13-5053, Lebanon
3 Department of Computer Science, HITEC University, Taxila 47080, Pakistan
4 Center of Excellence Forest 4.0, Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
5 College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
6 Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia
7 Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7034 Trondheim, Norway
* Authors to whom correspondence should be addressed.
Diagnostics 2023, 13(18), 2869; https://doi.org/10.3390/diagnostics13182869
Submission received: 11 August 2023 / Revised: 30 August 2023 / Accepted: 1 September 2023 / Published: 6 September 2023

Abstract

Background: Using artificial intelligence (AI) within a deep learning-based automated computer-aided diagnosis (CAD) system has improved performance for skin lesion classification. Although deep convolutional neural networks (DCNNs) have significantly improved many image classification tasks, it is still difficult to classify skin lesions accurately because of a lack of training data, inter-class similarity, intra-class variation, and the inability to concentrate on semantically significant lesion parts. Innovations: To address these issues, we propose an automated deep learning and best-feature-selection framework for multiclass skin lesion classification in dermoscopy images. The proposed framework first performs a preprocessing step for contrast enhancement using a new technique based on dark channel haze reduction and top–bottom filtering. In the next step, three pre-trained deep learning models are fine-tuned and trained using the transfer learning concept. In the fine-tuning process, a few layers are added and removed to reduce the number of parameters, and the hyperparameters are selected using a genetic algorithm (GA) instead of manual assignment. The purpose of hyperparameter selection with the GA is to improve the learning performance. After that, a deeper layer is selected for each network and deep features are extracted. The extracted deep features are fused using a novel serial correlation-based approach. This technique reduces the feature vector length compared with a plain serial-based approach, but a little redundant information remains. To address this issue, we propose an improved antlion optimization algorithm for best feature selection. The selected features are finally classified using machine learning algorithms. Main Results: The experimental process was conducted on two publicly available datasets, ISIC2018 and ISIC2019, on which we obtained accuracies of 96.1% and 99.9%, respectively. A comparison with state-of-the-art techniques shows that the proposed framework improves accuracy. Conclusions: The proposed framework successfully enhances the contrast of the cancer region. Moreover, the automated selection of hyperparameters improves the learning process of the proposed framework. The proposed fusion and improved selection process maintain the best accuracy and shorten the computational time.

1. Introduction

Skin cancer is one of the most prevalent cancers. For example, more than 5 million new cases are recorded annually in the United States, and it is anticipated that one in five persons may experience this illness at some point during their life [1]. It is a common malignancy that poses a major threat to human health and whose prevalence is rising annually around the globe [2]. Basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and malignant melanoma are the most common skin malignancies, and the five-year survival rates for BCC and SCC are above 95% [3]. Melanoma, a type of skin cancer, develops in the skin cells and is primarily situated on outer parts of the body that are mainly exposed to ultraviolet rays from sunshine [4]. The World Health Organization estimates that 2–3 million new instances of skin cancer are diagnosed worldwide each year [5]. More than two people in the U.S. die of skin cancer every hour. In the U.S., the estimated number of new melanoma cases will decrease by 5–6%, and the percentage of deaths expected in 2023 is 4.4%. The estimated number of newly diagnosed melanoma cases in the U.S. during 2023 is 186,680. Moreover, the number of deaths in 2023 will be 7990, an increase of 27% over last year’s figure [6]. This cancer typically affects people younger than 40, mainly women. It is hard to cure if it has spread to other parts of the human body [7,8]. However, melanoma diagnosed at an early stage can be treated quickly and has a good recovery rate [6]. Several techniques have been implemented to assist with diagnosis.
Conventional skin cancer diagnosis entails a thorough process that includes a physical examination, evaluation of the medical history, dermatoscopy, imaging examinations, and a pathology report. Several approaches have been introduced in the literature to differentiate malignant and benign skin lesions, such as the ABCD rule [9], the seven-point checklist, and the three-point checklist [10]. The ABCD rule of dermatoscopy characterizes the geometrical and organizational properties of the lesion. The three-point and seven-point approaches identify melanoma and BCCs based on three and seven characteristics, respectively [9]. All these steps end with patient treatment [11]. In addition, time, cost, location, and the availability of healthcare providers are other factors that can delay the diagnosis. Therefore, it is important to diagnose skin cancer early, which can help decrease the mortality rate and increase the survival percentage. Hence, an automated computer-aided diagnosis (CAD) system is required to diagnose skin cancer accurately and efficiently from dermatoscopy images [12]. The dermatoscope is a non-invasive diagnostic tool for skin diseases, but its use depends on the expertise of the doctor. Therefore, employing artificial intelligence (AI) in CAD systems has shown success in addressing the above problems [13]. An AI-based CAD system can be used at home or elsewhere to recognize skin cancer from dermatoscopy images [14].
Early CAD systems for skin cancer were based on traditional features [15] such as texture, shape, and color; however, as the number of training images has increased, these techniques fail to provide good results [16]. In addition, CAD systems based on handcrafted features face several challenges, such as similarity in lesion shape, color, and texture, as shown in Figure 1 [17]. With the advancement of deep learning, AI-based computerized techniques have shown much greater success in medical imaging (detection and recognition) [18,19].
In medical imaging, convolutional neural networks (CNNs) show improved recognition performance [15]. Employing a deep CNN backbone, a deeper layer is selected for deep feature extraction [21]. Much research incorporating deep learning methods has been conducted in this domain in the last couple of years [22]. Despite this, many challenges still exist, including low-contrast infected lesions, variations in lesion shape, similarities in the colors of different skin lesion classes, imbalanced skin classes, and a few more. Based on these challenges, there is room to improve lesion detection and multiclass classification accuracy. Hence, in this article, the following challenges are addressed: (i) imbalanced skin classes increase the probability of the classes with more images, which impacts the prediction performance of the other classes; (ii) low-contrast skin lesions impact the lesion localization accuracy; (iii) variations in lesion shape and texture may lead to segmentation of an incorrect region, from which irrelevant features are later extracted (incorrect region features, healthy region features, and extra features that are not required for classification). In addition, multiclass skin lesions have a high similarity in shape, color, and appearance; therefore, it is also difficult to recognize the true class correctly.
Major Contributions: Our major contributions are as follows:
  • Proposal of a hybrid contrast enhancement technique using the fusion of top–bottom filtering and haze reduction technique.
  • Fine-tuning of three pre-trained CNN architectures and training using transfer learning. For the training of deep learning models, a genetic algorithm is employed for the selection of hyperparameters instead of manual selection.
  • Proposal of a serial-controlled positive correlation approach for the fusion of trained neural network features.
  • Development of an improved antlion optimization algorithm for feature selection.
The manuscript is organized as follows: Section 2 describes the related work on skin lesion approaches. Section 3 describes the proposed methodology, followed by Section 4, which elaborates on and discusses the experimental setup, results, and comparisons with existing methods. Finally, the conclusion is given in Section 5.

2. Related Work

It has been extensively investigated how to automatically diagnose skin cancer [23,24]. Deep learning algorithms show significant success in the area of medical imaging, especially for the identification of skin cancer [25]. The main components of traditional automated skin cancer diagnosis approaches are developing handcrafted features and using machine learning classifiers for classification [26]. A CAD system consists of a few important steps, such as preprocessing of original dermoscopy images, lesion detection using segmentation techniques, handcrafted feature extraction, feature selection, and classification using machine learning classifiers. Recently, CNNs that can learn hierarchical features have had considerable success with medical image processing, especially for skin cancer recognition [27].
Kassem et al. [28] reviewed deep learning techniques for the classification of skin cancer. They discussed extensively the importance of deep learning for better skin lesion classification, the complexity of deep learning techniques, and the current state of development. Hauser et al. [29] presented an explainable AI framework for skin lesion diagnosis. Zhang et al. [30] presented an attention mechanism CNN model for skin lesion recognition. Each attention block jointly used residual learning to improve representation learning. The experiments were conducted on the ISIC2017 dataset and showed improved recognition accuracy. Anand et al. [31] presented a fusion of U-NET and CNN architectures for skin lesion detection and classification. They used the U-NET architecture to detect lesions from the input dermoscopy images, while a CNN architecture was employed for classification. The HAM10000 dataset was employed to validate the proposed framework, which obtained an accuracy above 97%. Fayadh et al. [32] introduced a wavelet transform and CNN-based architecture to diagnose skin lesions. Unwanted information was removed by employing the concepts of wavelets and max pooling. Then, a residual neural network was proposed and features were extracted by employing the concept of transfer learning. The extracted features were classified using an ELM classifier, achieving improved accuracy on the ISIC2017 and HAM10000 datasets.
Simon et al. [33] provided an interpretable deep-learning framework for skin lesion segmentation and classification. The main strength of this work was categorizing the tissues into 12 dermatological classes. After that, they trained a deep CNN on these characteristics for the final classification. They tested the introduced framework on dermatoscopy images and compared it with clinical accuracy. During the comparison phase, the clinical method achieved an accuracy of 93.6%, whereas the computerized method attained 97.9%. This shows that computerized methods perform better than clinical techniques. Javeria et al. [34] introduced an integrated model of preprocessing, segmentation, feature extraction, and deep feature fusion. Firstly, they resized the images and converted RGB into a luminance channel; then, they used the Otsu algorithm and a biorthogonal 2-D wavelet transform to segment the affected part of the skin. After that, pre-trained AlexNet and VGG16 were used to extract the deep features. Then, the optimal feature set was obtained using PCA for further classification. Al-Masni et al. [35] devised an integrated diagnostic paradigm encompassing skin lesion segmentation and classification. Inception-v3, ResNet-50, Inception-ResNet-v2, and DenseNet-201 were deployed in the DL FRCN framework using dermatoscopic images to segment regions of interest, followed by classification over the segmentation results. The proposed integrated DL model works acceptably on different types of skin lesions. The model was evaluated on a balanced, segmented, and augmented dataset, including the International Skin Imaging Collaboration (ISIC) and its variants in 2016, 2017, and 2018. The overall weighted prediction accuracies for the Inception-v3, ResNet-50, Inception-ResNet-v2, and DenseNet-201 classifiers are 77.04%, 79.95%, 81.79%, and 81.27% for the two ISIC2016 classes; 81.29%, 81.57%, 81.34%, and 73.44% for the three ISIC2017 classes; and 88.05%, 89.28%, 87.74%, and 88.70% for the four ISIC2018 classes. Pacheco et al. [36] used the thirteen best deep-learning networks. Finally, they concluded that the SENet convolutional neural network with Adam optimization was the best-performing architecture among all the evaluated networks. The proposed model obtained 91% performance on the ISIC2019 dataset. Farooq et al. [37] introduced a model incorporating MobileNet and InceptionNet that enhanced the classification performance to up to 86%. For these models, Kaggle’s updated skin cancer dataset was utilized to check their performance. Esteva et al. [38] conducted a pioneering CNN-based research work to detect and classify skin lesion datasets. Lui et al. [39] defined a deep learning model with DenseNet and ResNet using the MFL module. The proposed work achieved an effective accuracy of 87% on the ISIC2017 database for skin lesion classification. Pedro et al. [40] proposed a Feedforward Neural Network (FNN) classification model and a Linear SVM on the Dermofit dataset. Their setup produced an accuracy level of 90% on the selected dataset. Milton et al. [41] presented a comprehensive study of multiple deep-learning techniques for skin cancer. They conducted the experiments on the publicly available ISIC2018 dataset, fed to multiple neural networks, including Inception-ResNet-V2, PNASNet-5, SENet-154, and Inception-V4. The PNASNet-5 model was the best performer, at a 76% accuracy level.
Khatib et al. [42] presented a ResNet-101 architecture for skin lesion classification. They fine-tuned the architecture by employing transfer learning (TL) to differentiate the various forms of skin lesions and achieved an accuracy level of 90% on the well-known PH2 database. Alizadeh et al. [43] deployed the VGG19 model using kernel principal component analysis (KPCA) and attained 85.2% accuracy on the ISIC2016 dataset. Almaraz et al. [44] used the ABCD rule-based technique after extracting handcrafted color, shape, and texture features. These features were then given to a MobileNetV2 neural network for melanoma categorization. The proposed technique achieved 92.4% accuracy on the HAM10000 dataset. Reis et al. [45] employed a DL approach for skin lesion identification and segmentation. The suggested technique was investigated on three widely accessible datasets, ISIC2018, ISIC2019, and ISIC2020, where prediction accuracy reached 90.1, 90.2, and 91.3%, respectively. Khan et al. [8] presented an architecture based on improved moth flame optimization (IMFO) combined with DL classification for skin lesion classification. Furthermore, they extended the model to minimize the time taken to diagnose skin cancer. The IMFO architecture was tested on the PH2, ISBI 2016, 2017, and 2018 datasets and obtained accuracy levels of 98.70%, 95.38%, 95.79%, and 92.69%, respectively. The architecture was also tested on the HAM10000 dataset, where it reached a precision level of 90.67%, which represents an improvement. Khan et al. [46] presented another intelligent system based on deep neural networks for complex skin cancer categories. The authors suggested a two-stream DNN information fusion framework for classifying multiclass skin cancer. Firstly, a fusion-based contrast enhancement technique was suggested, in which the enhanced images were fed to the pre-trained DenseNet201 architecture. These features were refined utilizing the skewness-controlled moth+flame optimization approach. After that, second-stream deep features were captured and down-sampled using a fine-tuned MobileNetV2 pre-trained model and a proposed feature selection structure. The proposed technique was tested on three unbalanced datasets, HAM10000, ISBI2018, and ISIC2019, and produced accuracy levels of 96.5%, 98%, and 89%, respectively. The discussed methods focused on detection and classification using deep learning and machine learning classifiers. They did not focus on the fusion of features from different sources. Also, they ignored the process of best feature selection, which can help reduce the computational time. To address these important challenges, a new AI-based fully automated framework is proposed for skin lesion classification.

3. Proposed Methodology

The proposed methodology is illustrated in Figure 2. Figure 2 reflects that, firstly, the dataset is preprocessed, and then the enhanced dataset is fed to a fine-tuned DL model, trained using transfer learning, to extract deep features. Secondly, the extracted features are passed through the feature fusion process. Finally, an updated antlion optimization approach is employed to obtain an optimized feature vector.

3.1. Datasets Description

This paper uses two variants of ISIC datasets, including 2018 and 2019, for the experimental process.
ISIC2018: This dataset was generated in 2018 by ISIC. It is a collection of 10,014 training images and 55,834 testing images. Dermoscopy technology was employed for capturing the RGB images. This dataset has seven classes: Akiec, Bcc, Bkl, Df, Mel, Nv, and Vasc. Table 1 summarizes and highlights the overall class distribution within the dataset.
ISIC2019: This dataset was generated in 2019 by ISIC. It is a collection of 20,685 training images and 47,514 testing images. Dermatoscopy technology was employed for capturing the RGB images. This dataset has seven classes: AK, BCC, BKL, DF, MEL, NV, and VASC. Table 2 summarizes and highlights the overall class distribution within the dataset.

3.2. Novelty 1: Lesion Enhancement

In this work, a hybrid technique is employed for contrast enhancement. In the first step, a haze reduction technique is employed, where the input image is refined, followed by applying top–bottom filtering to improve local and global contrast [47]. The step-wise haze reduction process is given below.
Step 1: The haze image model is given below:
$$I(x) = J(x)T(x) + L\big(1 - T(x)\big)$$
where $I$, $J$, $L$, and $T$ represent the observed intensity, scene radiance, atmospheric light, and transmission map, respectively. The scene radiance $J$ is recovered using the algorithm in [48] from the estimated atmospheric light and the transmission map as follows:
$$J(x) = \frac{I(x) - A}{\max\big(t(x), t_0\big)} + A$$
Step 2: Consider $\lambda(x, y)$ an input image of dimension $N \times M \times K$, where $N = M = 256$ and $K = 3$, and let $\tilde{\lambda}_{nz}(x, y)$ denote the haze-reduced image of the same dimensions. Top-hat and bottom-hat filtering with a structuring element $s$ are then applied and computed using the following mathematical formulations:
$$\lambda_{Top}(a, b) = \lambda(a, b) - (\lambda \circ s)(a, b)$$
$$\lambda_{Bot}(a, b) = (\lambda \bullet s)(a, b) - \lambda(a, b)$$
$$\tilde{\lambda}(a, b) = \lambda(a, b) + \lambda_{Top}(a, b) - \lambda_{Bot}(a, b)$$
$$T = \max\big(\tilde{\lambda}(a, b)\big)$$
$$F = \begin{cases} \tilde{\lambda}(a, b) & \text{for } \tilde{\lambda}(a, b) \geq T \\ \lambda_{los}(a, b) & \text{for } \tilde{\lambda}(a, b) < T \end{cases}$$
where $\circ$ denotes the morphological opening and $\bullet$ the morphological closing operation.
The visual output of this process is illustrated in Figure 3.
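To make the two stages concrete, the following is a minimal Python/OpenCV sketch of the enhancement pipeline; the function names, the patch and structuring-element sizes, and the dark-channel constants (ω, t0) are our own assumptions, since the paper does not list them.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Minimum over the colour channels followed by a local minimum filter (erosion).
    dc = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(dc, kernel)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    # Haze model: I(x) = J(x)T(x) + L(1 - T(x));  J(x) = (I(x) - A) / max(t(x), t0) + A
    img = img.astype(np.float64) / 255.0
    dc = dark_channel(img, patch)
    # Atmospheric light A: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(int(dc.size * 0.001), 1)
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)
    # Transmission map estimated from the dark channel of the normalised image.
    t = 1.0 - omega * dark_channel(img / A, patch)
    J = (img - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J, 0, 1)

def top_bottom_enhance(img, size=9):
    # Local contrast boost: add the top-hat and subtract the bottom-hat response.
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    top = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, se)
    bot = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, se)
    return np.clip(img + top - bot, 0, 1)

# Usage: enhanced = top_bottom_enhance(dehaze(cv2.imread("lesion.jpg")))
```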

3.3. Data Augmentation

This is a process in which the data points are artificially increased from the existing data for better training, identification, and classification in the later stages. The advantage of data augmentation is that it improves model learning by providing a large amount of data. Also, the cost of operations related to data collection is reduced. The detail given in Table 2 and Table 3 shows that the total number of original images is 20,685. Before data augmentation, the contrast of the original images is improved using the proposed contrast-enhancement technique. After applying augmentation, the selected datasets were updated as shown in Table 3 and Table 4. A few sample augmented images are illustrated in Figure 4.
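As an illustration of this step, the short sketch below builds a typical augmentation pipeline with torchvision; the specific operations and parameter values are assumptions, since the paper does not enumerate its exact augmentation operations.

```python
from torchvision import transforms

# A hypothetical augmentation pipeline; flips, small rotations, and mild colour
# jitter are common choices for dermoscopy images.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=30),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])
# Applying `augment` repeatedly to each contrast-enhanced image yields the
# enlarged, more balanced training sets reported in Tables 3 and 4.
```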

3.4. Modified Models

In this work, different DL models were fine-tuned to obtain high-performance accuracy. These are explained in detail:
Fine-Tuned DarkNet19: The model is fine-tuned by eliminating its final four layers: the average-pooling, fully connected, softmax, and classification layers. The original model is shown in Figure 5. This is necessary because the model is pre-trained on the 1000 classes of the ImageNet dataset. Hence, during the fine-tuning process, four new layers are added: an average-pooling 2D layer, a fully connected layer, a softmax layer, and a classification layer. The DarkNet19 model is then trained through transfer learning. In the training process, several hyperparameters are adjusted, i.e., the learning rate is 0.001, the mini-batch size is 20, the momentum is 0.07, the optimizer is stochastic gradient descent, and the maximum number of epochs is 100. Finally, the trained model extracts features from the global average pooling (GAP) layer.
Fine-Tuned ResNet18: The ResNet18 DL model consists of 18 layers. The architecture is a combination of convolutional, pooling, fully connected, softmax, and classification layers. This model uses a pooling layer named ‘pool5’ for feature extraction. The pre-trained version of the network was trained on more than a million images from the ImageNet dataset. The architecture of ResNet18 is depicted in Figure 6. The last four layers, i.e., the average-pooling, fully connected, softmax, and classification layers, are deleted during the fine-tuning phase. The original fully connected layer was trained on the ImageNet dataset with 1000 object classes.
Furthermore, four new layers are added in the fine-tuning process: an average-pooling 2D layer, a fully connected layer, a softmax layer, and a classification layer. The ResNet18 model is trained using TL. Numerous hyperparameters are initialized and adjusted during training, such as a learning rate of 0.001, a mini-batch size of 20, a momentum of 0.07, the stochastic gradient descent optimizer, and a maximum number of epochs of 100. Finally, the trained model extracts features from the pool5 layer.
Fine-Tuned InceptionV3: The InceptionV3 DL model consists of 48 layers. The architecture contains a combination of convolutional, pooling, fully connected, softmax, and classification layers. A pooling layer named ‘avg-pool’ was used for feature extraction. This model was previously trained on over one million photos from the ImageNet dataset and is mostly used for image recognition, with a 78.1% accuracy rate. The architecture of the fine-tuned InceptionV3 is depicted in Figure 7. The last four layers, i.e., the average-pool, fully connected, softmax, and classification layers, are deleted during the fine-tuning phase. The original fully connected layer was trained on the ImageNet dataset with 1000 object classes. Next, in the fine-tuning procedure, four new layers are added: an average-pooling 2D layer, a fully connected layer, a softmax layer, and a classification layer, followed by TL to train the InceptionV3 model. Numerous hyperparameters are initialized and adjusted during the training process, such as a learning rate of 0.001, a mini-batch size of 20, a momentum of 0.07, the stochastic gradient descent optimizer, and a maximum number of epochs of 100. Finally, the trained model is used to extract features from the avg-pool layer.
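The layer-replacement idea described for all three networks can be illustrated with a short PyTorch sketch; the paper’s models were fine-tuned in MATLAB, so the function below (`build_finetuned_resnet18`) and its defaults are illustrative assumptions rather than the authors’ implementation.

```python
import torch.nn as nn
from torchvision import models

def build_finetuned_resnet18(num_classes=7):
    # Load ImageNet weights, then replace the original 1000-class head with a
    # new classification head for the skin-lesion classes (7 for ISIC2018).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.avgpool = nn.AdaptiveAvgPool2d(1)                   # global average pooling
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new fully connected layer
    return model

# Features for the later fusion stage would be taken from the global average
# pooling layer, i.e. the 512-dimensional activations feeding `model.fc`.
```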
Transfer Learning: In this section, TL [50] is discussed for this work. The domain, denoted by $F = \{Z, R(Z)\}$, is made up of two parts, i.e., a feature space $Z$ and a marginal probability distribution $R(Z)$, where $Z = \{z \mid z_i \in Z,\ i = 1, \ldots, M\}$ and $M$ is the number of occurrences in the dataset.
After that, the task is defined: given a particular domain $F$, the task is represented as $T = \{W, f(\cdot)\}$, comprising two factors, a label space $W$ and a mapping function $f(\cdot)$, where $W = \{w \mid w_i \in W,\ i = 1, \ldots, M\}$ and $M$ is the label set for the relevant instances in $F$. The mapping function $f(\cdot)$, generally written as $f(z) = R(w \mid z)$, is a non-linear implicit function that bridges the gap between the anticipated judgment derived from the proposed datasets and the input instance. The label spaces between tasks also allow for the specification of different goals; different fault classes and categories might be conceived of as distinct tasks.
Transfer learning, supplied with a source domain $F_s = \{Z_s, R_s(Z_s)\}$ with source task $T_s = \{W_s, f_s(\cdot)\}$ and a target domain $F_T = \{Z_T, R_T(Z_T)\}$ with target task $T_T = \{W_T, f_T(\cdot)\}$, looks for a better mapping function $f_T(\cdot)$ for the target task $T_T$ utilizing transferable knowledge from the source domain $F_s$ and task $T_s$. Unlike traditional ML and DL, where the domains and tasks of the source and target situations are identical, i.e., $F_s = F_T$ and $T_s = T_T$, TL solves challenges where the source and target domains and/or tasks diverge, i.e., $F_s \neq F_T$ and/or $T_s \neq T_T$.
Deep TL may be defined as follows based on the above concept: given a transfer learning challenge defined by $[F_S, F_T, T_S, T_T]$, deep TL aims to learn the mapping function $f_{ST}(\cdot): Z_T \rightarrow W_T$ by leveraging a sophisticated DL model, i.e., a DNN.
Proposed Work Process: The process of transfer learning for feature extraction in this work is depicted in Figure 8. All three selected fine-tuned models are trained on the skin datasets using the concept of transfer learning. Deep features are extracted from the global average pooling layer of each model, yielding feature vectors of different dimensions. During the training of the deep models, the hyperparameters, such as the learning rate, momentum, L2 regularization factor, and mini-batch size, are selected through a GA rather than manually (a small sketch is given below); the resultant values are given in the section above. The extracted features are further fused using a novel fusion technique (presented in Section 3.5).
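The following is a minimal sketch of how a GA can search over the four hyperparameters named above; the search space values, population size, and number of generations are assumptions, and `fitness_fn` stands in for a (costly) training-plus-validation run of one of the fine-tuned models that returns validation accuracy.

```python
import random

# Hypothetical search space for the four hyperparameters tuned via the GA.
SPACE = {
    "lr":       [1e-4, 5e-4, 1e-3, 5e-3, 1e-2],
    "momentum": [0.05, 0.07, 0.5, 0.9],
    "l2":       [1e-5, 1e-4, 1e-3],
    "batch":    [16, 20, 32, 64],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene comes from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def ga_search(fitness_fn, pop_size=8, generations=5):
    # fitness_fn(params) trains the network briefly with the given hyperparameters
    # and returns validation accuracy (expensive; stubbed out here).
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness_fn, reverse=True)
        parents = scored[: pop_size // 2]                       # selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness_fn)
```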

3.5. Novelty: Features Fusion and Optimization

Deep extracted features are fused using a serial correlation-based approach in this work. The main purpose of this approach is to first serially fuse all the features and then find the correlation based on pairs. A total of four steps are performed for the fusion:
  • Serially fuse all vectors, as shown in Figure 3.
  • Obtain a combined vector of dimension N × K.
  • Find the correlation of each row feature vector and consider the most highly correlated features.
  • Check the fitness of each row using the Fine-KNN classifier.
In the end, the positively correlated and weakly correlated features are again serially fused into separate vectors. Both vectors are analyzed in terms of the fitness function, and the one with better accuracy is selected. This complete process is defined in Algorithm 1 (a brief code sketch follows the algorithm):
Algorithm 1. Input: Original feature vectors
$\phi_1 \leftarrow$ fine-tuned DarkNet19 features
$\phi_2 \leftarrow$ fine-tuned ResNet18 features
$\phi_3 \leftarrow$ fine-tuned InceptionV3 features
Step 1: Fuse all vectors in a serial-based fashion:
$$\phi_4 = \big(\phi_1\ \ \phi_2\ \ \phi_3\big), \quad \dim(\phi_4) = N \times k_1 + N \times k_2 + N \times k_3$$
Step 2: Make sets of $\phi_4$ using a 2 × 2 window size.
Step 3: Find the correlation of each set using the following equation:
$$r = \frac{n\sum \phi_i \phi_j - \sum \phi_i \sum \phi_j}{\sqrt{\big(n\sum \phi_i^2 - (\sum \phi_i)^2\big)\big(n\sum \phi_j^2 - (\sum \phi_j)^2\big)}}$$
Step 4: Place positively correlated features in a feature vector $\phi_5^k$ and weakly correlated features in $\phi_6^k$.
Step 5: Fuse $\phi_5^k$ and $\phi_6^k$ separately into two new feature vectors and find the fitness of each.
Step 6: Based on the fitness, carry forward the feature set with the highest accuracy.
Output: Positive correlation vector (the higher-accuracy set in this work) $\phi_5^k$
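A compact NumPy sketch of Algorithm 1 is given below; the interpretation of the 2 × 2 window as pairing consecutive feature columns, and the zero correlation threshold, are assumptions on our part.

```python
import numpy as np

def fuse_and_split(f1, f2, f3, threshold=0.0):
    # Step 1: serial fusion -> N x (k1 + k2 + k3)
    fused = np.hstack([f1, f2, f3])
    keep_pos, keep_weak = [], []
    # Steps 2-3: Pearson correlation of consecutive column pairs (one reading of
    # the 2 x 2 window) decides into which bucket each pair falls.
    for j in range(0, fused.shape[1] - 1, 2):
        r = np.corrcoef(fused[:, j], fused[:, j + 1])[0, 1]
        (keep_pos if r > threshold else keep_weak).extend([j, j + 1])
    # Steps 4-6: the two candidate vectors; a fitness check (e.g. Fine-KNN
    # accuracy) decides which one is carried forward.
    return fused[:, keep_pos], fused[:, keep_weak]

# Usage: phi5, phi6 = fuse_and_split(feat_darknet, feat_resnet, feat_inception)
```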
The fused feature vector is further refined using an improved nature-inspired algorithm, Antlion Optimization with Mean Deviation (ALO-MD). Mirjalili [36] developed a nature-inspired optimization approach called antlion optimization (ALO), constructed around the inherent hunting mechanism of antlions.
Motivation: Antlions (doodlebugs) are classified as Myrmeleontidae and Neuroptera [51]. They often hunt as larvae, while the adult stage is used for reproduction. As they dive deep into the sand, antlion larvae move in a circular motion and spew sand from their large lips. After excavating the trap, larvae sleep under the cone’s bottom, waiting for bugs, particularly ants, to be entrapped in it. The antlion attempts to seize any prey it discovers in the snare.
On the other hand, insects try to avoid captivity and are occasionally not immediately captured. Antlions expertly pour sand towards the hole’s edge, enabling the prey to sink to the bottom. A victim trapped in the mouth is eaten underneath. Antlions fling the victim’s remains outside the hole after devouring the victim and prepare the hole for their subsequent hunt. A further interesting aspect of antlion conduct is the relationship involving trap size and two variables: hunger level and moon shape.
Artificial Antlion
Using the prior depiction of antlions, Mirjalili devised the following criterion throughout optimization:
  • Ants, as prey, wander across the search space utilizing various random walks.
  • Antlion traps influence random walks.
  • Antlions dig holes in accordance with their fitness: the greater the fitness, the larger the hole.
  • Antlions are more likely to capture ants if their holes are wider.
  • An antlion with the highest fitness level in each cycle can catch any ant.
  • The random walk’s span is adaptively reduced to simulate ants sliding toward antlions.
Input: a search space, a fitness function, the number of ants, the number of antlions, and the maximum number of iterations (T).
Output: the fitness of the elite antlion.
1. Create a random population of n ant positions and n antlion positions.
2. Determine the fitness of each ant and antlion.
3. Find the elite, i.e., the fittest antlion.
4. Set t = 0.
5. while (t ≤ T)
  for each ant i, do
    • Choose an antlion using a roulette wheel (building the trap).
    • Bring the ant nearer to the antlion; consider Equations (2) and (3).
    • For this ant i, build and normalize a random walk; see Equations (5) and (6) for modeling trapping, Equation (7) for the random walk, and Equation (9) for walk normalization.
  end
6. Evaluate each ant’s fitness.
7. If an antlion becomes fitter (catches prey), replace it with its corresponding ant.
8. If an antlion becomes fitter than the elite, update the elite.
9. end while
Method 1: Antlion Optimization Algorithm (ALO)
  • If an ant becomes fitter than an antlion, the antlion grabs it and drags it beneath the sand.
  • After each hunt, an antlion repositions itself near the most recently caught prey and digs a hole to maximize its chances of catching new prey.
Under the conditions above, an antlion optimizer can be built as shown in Method 1 above.
Building trap: The hunting skill of antlions is modeled using a roulette wheel. Ants are believed to be restricted to a single antlion. The ALO algorithm must select antlions throughout optimization depending on their fitness using a roulette wheel operator. This technique increases the likelihood of stronger antlions catching ants.
Catching prey and re-building the hole:
In the final step of the hunt, the antlion consumes the ant. It is assumed that when an ant becomes fitter than its corresponding antlion, it has been pulled under the sand and caught. An antlion must then update its position to the latest position of the caught ant in order to maximize its potential for finding new victims. In this sense, Equation (1) is proposed:
$$Antlion_j^t = Ant_i^t \quad \text{if } f\big(Ant_i^t\big) > f\big(Antlion_j^t\big) \quad (1)$$
where $t$ indicates the current iteration, $Antlion_j^t$ represents the position of the selected $j$th antlion at the $t$th iteration, and $Ant_i^t$ represents the position of the $i$th ant at the $t$th iteration.
Antlion optimizer, according to the algorithm, performs the following stages on each particular ant:
Sliding ants towards Antlion:
Sand is thrown from the hole’s center when an antlion finds an ant inside the trap. The imprisoned ant’s attempt to escape is impacted by this action. The radius of the ants’ random walk hyper-sphere is reduced adaptively to represent this behavior numerically; see Equations (8)–(10).
$$a^s = \frac{a^s}{I} \quad (8)$$
where $a^s$ is the vector containing the minimum of all variables at the $s$th iteration and $I$ is a ratio.
$$b^s = \frac{b^s}{I} \quad (9)$$
where $b^s$ is the vector containing the maximum of all variables at the $s$th iteration and $I$ is a ratio defined as:
$$I = 10^{u}\,\frac{s}{S} \quad (10)$$
where $s$ is the current iteration, $S$ is the maximum number of iterations, and $u$ is a constant determined by the current iteration ($u = 2$ for $s > 0.1S$, $u = 3$ for $s > 0.5S$, $u = 4$ for $s > 0.75S$, $u = 5$ for $s > 0.9S$, and $u = 6$ for $s > 0.95S$). Essentially, the constant $u$ controls the precision of exploitation.
Trapping in Antlion’s holes
The sliding ant is captured by simulating its movement towards the targeted antlion’s hole; in other words, the location of the selected antlion determines how far the ant can travel. Shifting the range of the ant’s random walk towards the antlion’s location is depicted using Equations (11) and (12):
$$a^s_i = a^s + Antlion^s_j \quad (11)$$
$$b^s_i = b^s + Antlion^s_j \quad (12)$$
where $a^s$ is the vector of minimum values of all variables at the $s$th iteration; $b^s$ is the vector of maximum values of all variables at the $s$th iteration; $a^s_i$ is the minimum of all variables for the $i$th ant; $b^s_i$ is the maximum of all variables for the $i$th ant; and $Antlion^s_j$ is the location of the chosen $j$th antlion at the $s$th iteration.
Random walks of ants:
Equation (13) underpins all random walks:
$$y^s = \big[0,\ \mathrm{cumsum}\big(2p(s_1)-1\big),\ \mathrm{cumsum}\big(2p(s_2)-1\big),\ \ldots,\ \mathrm{cumsum}\big(2p(s_S)-1\big)\big] \quad (13)$$
where cumsum calculates the cumulative sum; $S$ is the maximum number of iterations (an iteration here refers to a random walk step); and $p(s)$ is the stochastic function of Equation (14):
$$p(s) = \begin{cases} 1 & \text{if } rand > 0.5 \\ 0 & \text{if } rand \leq 0.5 \end{cases} \quad (14)$$
where $s$ is the random walk step and $rand$ is a random number drawn from a uniform distribution in the range $[0, 1]$. The random walks are normalized to the current boundaries as per Equation (15):
$$V^s_i = \frac{\big(v^s_i - x_i\big)\times\big(b^s_i - a^s_i\big)}{\big(b_i - x_i\big)} + a^s_i \quad (15)$$
where $x_i$ is the minimum of the random walk in the $i$th variable; $b_i$ is the maximum of the random walk in the $i$th variable; $a^s_i$ is the minimum of the $i$th variable at the $s$th iteration; and $b^s_i$ is the maximum of the $i$th variable at the $s$th iteration.
Elitism
The best solution(s) should be maintained throughout iterations by employing elitism. The chosen antlion and the elite antlion lead the ant’s random walk in this scenario; therefore, moving a given ant takes the form of the average of both random walks; see Equation (16).
$$Ant^s_i = \frac{P^s_A + P^s_F}{2} \quad (16)$$
where $P^s_A$ is the random walk around the antlion selected by the roulette wheel at the $s$th iteration, and $P^s_F$ is the random walk around the elite antlion at the $s$th iteration.
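To make the mechanics concrete, the following is a simplified, continuous-valued Python sketch of standard ALO (roulette-wheel selection, shrinking boundaries, averaged random walks, and elitism). It does not include the mean-deviation modification or the binary feature-selection wrapper of ALO-MD, and the shrinking schedule (u fixed at 2) and the replacement rule are simplifications of our own.

```python
import numpy as np

def random_walk(steps, lo, hi):
    # Equations (13)-(15): cumulative sum of +/-1 steps, min-max normalised
    # into the boundary interval [lo, hi] allowed at the current iteration.
    walk = np.concatenate(([0.0], np.cumsum(2.0 * (np.random.rand(steps) > 0.5) - 1.0)))
    wmin, wmax = walk.min(), walk.max()
    if wmax == wmin:
        return np.full_like(walk, lo)
    return (walk - wmin) * (hi - lo) / (wmax - wmin) + lo

def alo(fitness, dim, n_agents=20, iters=50, lb=-1.0, ub=1.0):
    ants = np.random.uniform(lb, ub, (n_agents, dim))
    antlions = np.random.uniform(lb, ub, (n_agents, dim))
    al_fit = np.array([fitness(a) for a in antlions])
    elite = antlions[al_fit.argmax()].copy()
    for t in range(1, iters + 1):
        # Adaptively shrinking boundaries (Equations (8)-(10)); u fixed at 2 here.
        I = 1.0 + (10 ** 2) * t / iters
        lo, hi = lb / I, ub / I
        probs = al_fit - al_fit.min() + 1e-12
        probs = probs / probs.sum()
        for i in range(n_agents):
            j = np.random.choice(n_agents, p=probs)   # roulette-wheel trap building
            walk_a = np.array([random_walk(iters, lo + antlions[j, d], hi + antlions[j, d])[t]
                               for d in range(dim)])  # walk around the selected antlion
            walk_e = np.array([random_walk(iters, lo + elite[d], hi + elite[d])[t]
                               for d in range(dim)])  # walk around the elite antlion
            ants[i] = (walk_a + walk_e) / 2.0         # elitism, Equation (16)
        ant_fit = np.array([fitness(a) for a in ants])
        for i in range(n_agents):                     # catching prey (Equation (1)),
            worst = al_fit.argmin()                   # simplified replacement rule
            if ant_fit[i] > al_fit[worst]:
                antlions[worst], al_fit[worst] = ants[i].copy(), ant_fit[i]
        if al_fit.max() > fitness(elite):
            elite = antlions[al_fit.argmax()].copy()
    return elite

# Usage: best = alo(lambda x: -np.sum(x ** 2), dim=5)   # maximise a toy fitness
```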

4. Results and Discussion

This section analyzes and presents the findings based on various performance indicators, covering the experimental design, data collection, recall values, quantitative data, graphical representations, and tables, with an emphasis on comparing the efficiency of the different classifiers.

4.1. Experimental Setup

The calculations were performed using 10-fold cross-validation on the dataset. The learning rate is set to 0.05, the mini-batch size is restricted to 32, and 100 iterations are used for CNN architecture learning. The best configuration is validated based on performance measurements such as accuracy, time taken, sensitivity rate, precision rate, number of observations, FNR, Fowlkes–Mallows index, and F1-score. Several classifiers are utilized to validate the suggested approach, aiming at the greatest accuracy and minimum time consumed. MATLAB 2022a was employed to execute the simulation studies on a desktop PC with a Core i7 processor, 16 GB of memory, and an 8 GB graphics card.
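As an orientation for readers working outside MATLAB, the snippet below shows how a comparable evaluation protocol could be set up with scikit-learn; the classifier settings are rough approximations of the MATLAB Classification Learner models (e.g., a degree-2 polynomial SVM for “Quadratic SVM”), and the feature matrix here is a random placeholder standing in for the selected deep features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# X: selected deep features (N x k), y: lesion labels -- placeholders here.
X, y = np.random.rand(200, 128), np.random.randint(0, 7, 200)

classifiers = {
    "Quadratic SVM": SVC(kernel="poly", degree=2),
    "Fine KNN": KNeighborsClassifier(n_neighbors=1),
    "Weighted KNN": KNeighborsClassifier(n_neighbors=10, weights="distance"),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```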

4.2. Results and Analysis

ISIC2018 Dataset Results: Table 5 contains the classification outcomes for the ISIC2018 dataset using the DarkNet19 deep model. The fine-tuned model was trained using the enhanced dataset, which was also used to extract features from the second-to-last feature layer. Several classifiers were used, but Quadratic SVM outperformed them with an accuracy of 86.3%, a recall rate of 87.27%, a precision rate of 87.2%, an F1 score of 87.24%, and an AUC value of 0.98. The computational time of each classifier was also calculated, as shown in Table 5. The Fine Tree classifier’s least recorded time is 108.46 s, while the Medium Neural Network’s greatest recorded time is 2978.1 s.
The classification outcomes of the ISIC2018 dataset for the ResNet18 deep model are shown in Table 5 (second section). Numerous classifiers were used for the classification process, but Quadratic SVM performed better, achieving an accuracy of 88.3%, a recall rate of 89.39%, a precision rate of 89.13%, an F1 score of 89.26%, and an AUC value of 0.98. Moreover, the computational time was also computed for each classifier, as shown in Table 5. Compared with experiment 1 (Table 5), it is observed that the maximum accuracy for this experiment is 88.3%, whereas for the first experiment, the maximum obtained accuracy was 86.3%. Hence, it can be summarized that the fine-tuned ResNet18 model gives better accuracy. The least noted time is 52.616 s for the Fine Tree classifier, whereas the maximum observed time is 1112.6 s for the Medium Neural Network.
The classification outcomes for the ISIC2018 dataset for the InceptionV3 deep model are shown in Table 5 (third section). Although several classifiers were used, Quadratic SVM outperformed them all with an accuracy of 90.9%, a recall rate of 92.63%, a precision rate of 92.03%, an F1 score of 92.32%, and an AUC value of 0.99. Each classifier’s computing time was also calculated, as shown in Table 5. The maximum accuracy for this experiment is 90.9%, compared to 86.3% for the first experiment and 88.3% for the second experiment. Hence, it can be summarized that the fine-tuned InceptionV3 model provides better accuracy. The minimum noted time is 129.68 s for the Fine Tree classifier, whereas the maximum observed time is 4601.7 s for the Medium Neural Network.
The classification outcomes of the proposed fusion technique on the enhanced ISIC2018 skin dataset are given in Table 6. Many classifiers were used; however, Quadratic SVM outperformed them all with an accuracy of 96.1%, a recall rate of 96.93%, a precision rate of 96.33%, an F1 score of 96.62%, and an AUC value of 0.98. Each classifier’s computational time is also calculated, as shown in Table 6. Compared with previous experiments (Table 5), it is observed that the maximum accuracy for this experiment is 96.1%, whereas for the first experiment, the maximum obtained accuracy was 86.3%, for the second experiment, the maximum accuracy was 88.3%, and for the third experiment the maximum accuracy was 90.9%. Hence, the fusion process increases accuracy more than individual deep model components. The minimum noted time is 290.756 s for the Fine Tree classifier, whereas the maximum observed time is 8693 s for the medium neural network. The confusion matrix of this experiment is also shown in Figure 9.
Table 7 shows the results of the proposed feature selection technique on the enhanced ISIC2018 dataset. The Quadratic SVM classifier outperformed the others with an accuracy of 96.0%, a recall rate of 96.86%, a precision rate of 96.3%, an F1 score of 96.56%, and an AUC value of 0.99. Each classifier’s computational time was also calculated, as shown in Table 7. Moreover, the confusion matrix is illustrated in Figure 10, which shows the correct prediction rate for each class. The highest accuracy for this experiment is 96.0%, compared with experiments 1, 2, and 3 (Table 5): the maximum accuracy was 86.3% for the first experiment, 88.3% for the second experiment, 90.9% for the third experiment, and 96.1% for the fourth experiment.
In conclusion, comparing with Table 5 shows that the optimization step preserves accuracy while decreasing computing time: the accuracy changed only a little, but the computational time changed significantly compared to the previous experiment. Hence, overall, the proposed framework and the optimization process show improvement. The least noted time is 130.94 s for the Fine Tree classifier, whereas the maximum observed time is 2525.7 s for Medium KNN.
ISIC2019 Dataset Results: The classification outcomes of the ISIC2019 dataset using the DarkNet19 deep model are shown in Table 8. The fine-tuned model was trained using the augmented dataset, which was also used to extract features from the second-to-last feature layer. Weighted KNN outperformed the other classifiers used for classification, achieving an accuracy of 99.7%, a recall rate of 99.73%, a precision rate of 99.71%, an F1 score of 99.72%, and an AUC value of 1.00. Each classifier’s computing time was also calculated, as shown in Table 8. The Fine Tree classifier’s minimum noted time is 245.51 s, whereas the Bi-Layered Neural Network’s highest recorded time is 2123.6 s.
The classification outcomes of the ISIC2019 dataset for the ResNet18 deep model are shown in Table 8 (second section). Several classifiers were used; however, Weighted KNN outperformed them all with an accuracy of 99.5%, a recall rate of 99.53%, a precision rate of 99.59%, an F1 score of 99.56%, and an AUC value of 1.00. Each classifier’s processing time was also calculated. This experiment obtained a maximum accuracy of 99.5%, compared to the previous experiment. The classification outcomes of the ISIC2019 dataset for the InceptionV3 deep model are shown in Table 8 (third section). Many classifiers were used; however, Weighted KNN outperformed them all with an accuracy of 99.7%, a recall rate of 99.66%, a precision rate of 99.69%, an F1 score of 99.36%, and an AUC value of 1.00. Overall, this experiment’s performance is better than that of the previous experiments.
The classification outcomes of the proposed fusion technique on the enhanced ISIC2019 skin dataset are given in Table 9. Many classifiers were used; however, Medium KNN outperformed them all with an accuracy of 99.9%, a recall rate of 99.86%, a precision rate of 99.88%, an F1 score of 99.88%, and an AUC value of 1.00. Each classifier’s processing time was also calculated, as shown in Table 9. Moreover, Figure 11 shows the Medium KNN confusion matrix to verify the correct prediction rate. Compared with the previous three experiments, it is observed that the accuracy of the proposed fusion process is significantly improved. After the fusion process, we employed the proposed feature selection technique.
Several classifiers were used; however, Weighted KNN outperformed them all with an accuracy of 99.9%, a recall rate of 99.89%, a precision rate of 99.89%, an F1 score of 99.88%, and an AUC value of 1.00. Each classifier’s computing time was also calculated, as shown in Table 10. Moreover, Figure 12 shows the Weighted KNN confusion matrix, from which the correct prediction rate of each cancer class can be verified. Compared with Experiment 1, Experiment 2, Experiment 3, and Experiment 4 (Table 8 and Table 9), it is noted that the maximum accuracy for this experiment is 99.9%, whereas the maximum accuracy was 99.7% for the first experiment, 99.5% for the second experiment, 99.7% for the third experiment, and 99.9% for the fourth experiment. Overall, it is noted that the accuracy of the fusion process is maintained, while the computational time is significantly reduced by the feature selection technique.
Finally, a comparison is conducted regarding the time taken for the intermediate steps on the selected datasets. Table 11 presents the computational time-based comparison on the ISIC2018 dataset. This table shows that the time noted for the ResNet18 model is less than that for DarkNet19 and InceptionV3, except for the Bagged Tree classifier. However, after the fusion process, the time jumped and almost doubled, which is a drawback of this framework. This drawback was resolved through the proposed optimization approach, which maintains accuracy and reduces the computational time significantly compared to the fusion process. For DarkNet19, the minimum time is 108.46 s for the Fine Tree classifier, and the maximum time is 2978.7 s for the Medium Neural Network. For ResNet18, the minimum time is 52.616 s for the Fine Tree classifier, and the maximum time is 1112.6 s for the Medium Neural Network. For InceptionV3, the minimum time is 129.68 s for the Fine Tree classifier, and the maximum time is 4601.7 s for the Medium Neural Network. For the fusion, the minimum time is 290.756 s for the Fine Tree classifier, and the maximum is 8693 s for the Medium Neural Network. For the optimization, the minimum time is 130.94 s for the Fine Tree classifier, and the maximum time is 2525.7 s for the Medium KNN.
Table 12 presents the computational time-based comparison on the ISIC2019 dataset. This table shows that the time noted for the ResNet18 model is less than that for DarkNet19 and InceptionV3, except for the Medium KNN, Weighted KNN, Medium Neural Network, and Bi-Layered Neural Network classifiers. However, after the fusion process, the time jumped and almost doubled, which is a drawback of this framework. This drawback was resolved through the proposed optimization approach, which maintains accuracy and reduces the computational time significantly compared to the fusion process. For DarkNet19, the minimum time is 245.51 s for the Fine Tree classifier, and the maximum time is 8373 s for the Bagged Tree. For ResNet18, the minimum time is 102.59 s for the Fine Tree classifier, and the maximum time is 7982.8 s for the Bagged Tree. For InceptionV3, the minimum time is 460.25 s for the Fine Tree classifier, and the maximum time is 5605.7 s for the Medium Neural Network. For the fusion, the minimum time is 460.25 s for the Fine Tree classifier, and the maximum is 13,082.3 s for the Bi-Layered Neural Network. For the optimization, the minimum time is 33.252 s for the Fine Tree classifier, and the maximum is 1018.1 s for the Bagged Tree. Finally, the proposed framework’s accuracy is compared with several recent studies, as presented in Table 13. Based on this table, it is observed that the accuracy of the proposed framework is significantly improved. In addition, a few publicly available AI-based dermatoscopy techniques are compared with the proposed method. In [52], an AUC value of 0.970 on ISIC2019 and 0.932 on ISIC2018 was obtained using the ADAE technique, whereas our method obtained 0.99. In [53], an accuracy of 96.10% was obtained, whereas the proposed method obtained 99.8%.
Table 14 presents a summary of all the best results based on additional performance measures such as the Fowlkes–Mallows index, MCC, and Kappa. Overall, the proposed method shows improved accuracy.

5. Conclusions

Today, the death of patients due to the late or incorrect diagnosis of cancer remains a serious issue. Early diagnosis of cancer cases using a CAD system can help reduce the death rate. When an appropriate CAD system is employed, it can complement the work of dermatologists in classifying skin lesions (benign or melanoma). This work proposes a deep learning- and optimization-based end-to-end framework for multiclass skin lesion classification. Initially, a contrast enhancement technique was proposed based on dark channel haze reduction and top–bottom filtering, which improved image quality and the strength of the deep features. The hyperparameters of the fine-tuned models were initialized using a genetic algorithm instead of manual initialization. After that, deep features were extracted and fused using a serial correlation-based approach. The fusion process improved the accuracy, but the computational time increased. A selection technique based on improved antlion optimization was developed to make the framework more time-efficient. The best features are selected using this approach and classified using machine learning classifiers. The experimental process was conducted on two publicly available datasets, ISIC2018 and ISIC2019, and obtained improved accuracies of 96.1% and 99.9%, respectively.

5.1. Limitations

- A detailed analysis is required for the max pooling operation of sizes 2 × 2, 3 × 3, and 4 × 4 in the weights preprocessing process.
- The augmentation process improved the accuracy but, on the other hand, significantly increased the redundant features.
- KNN classifiers drop the classification accuracy; this needs proper analysis.
- The fusion process improved the accuracy, but the computational time also increased due to the enlarged number of predictors.

5.2. Future Directions

A residual block-based attention network will be designed in the future, and more layers will be added based on the GradCAM approach. This will allow max-pooling layer weights to be analyzed to help improve the proposed model. In addition, the experimental process will be conducted on the ISIC2020 dataset.

Author Contributions

Conceptualization, M.H., M.A.K., R.D. and A.A.; Methodology, M.H., M.A.K., R.D., M.M. and A.M.; Software, M.H., M.A.K., M.M., M.A. and A.M.; Validation, R.D., A.A. and M.A.; Formal analysis, M.H., R.D., A.A. and M.A.; Investigation, M.A.K., M.M. and A.M.; Resources, M.A.; Data curation, A.A.; Writing—original draft, M.H. and M.A.K.; Writing—review & editing, R.D., M.M., M.A. and A.M.; Visualization, M.M.; Supervision, M.A.K.; Project administration, R.D., A.A., M.M., M.A. and A.M.; Funding acquisition, A.A. and A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through large group Research Project under grant number RGP2/249/44.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used in this work are publicly available.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through large group Research Project under grant number RGP2/249/44.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Haggenmüller, S.; Maron, R.C.; Hekler, A.; Utikal, J.S.; Barata, C.; Barnhill, R.L.; Beltraminelli, H.; Berking, C.; Betz-Stablein, B.; Blum, A.; et al. Skin cancer classification via convolutional neural networks: Systematic review of studies involving human experts. Eur. J. Cancer 2021, 156, 202–216. [Google Scholar] [CrossRef] [PubMed]
  2. Wang, P.; Yang, W.; Shen, S.; Wu, C.; Wen, L.; Cheng, Q.; Zhang, B.; Wang, X. Differential Diagnosis and Precision Therapy of Two Typical Malignant Cutaneous Tumors Leveraging Their Tumor Microenvironment: A Photomedicine Strategy. ACS Nano 2019, 13, 11168–11180. [Google Scholar] [CrossRef] [PubMed]
  3. Curti, B.D.; Faries, M.B. Recent Advances in the Treatment of Melanoma. N. Engl. J. Med. 2021, 384, 2229–2240. [Google Scholar] [CrossRef] [PubMed]
  4. Nikolouzakis, T.K.; Falzone, L.; Lasithiotakis, K.; Krüger-Krasagakis, S.; Kalogeraki, A.; Sifaki, M.; Spandidos, D.A.; Chrysos, E.; Tsatsakis, A.; Tsiaoussis, J. Current and future trends in molecular biomarkers for diagnostic, prognostic, and predictive purposes in non-melanoma skin cancer. J. Clin. Med. 2020, 9, 2868. [Google Scholar] [CrossRef]
  5. Rahimi, A.; Esmaeili, Y.; Dana, N.; Dabiri, A.; Rahimmanesh, I.; Jandaghian, S.; Vaseghi, G.; Shariati, L.; Zarrabi, A.; Javanmard, S.H.; et al. A comprehensive review on novel targeted therapy methods and nanotechnology-based gene delivery systems in melanoma. Eur. J. Pharm. Sci. 2023, 187, 106476. [Google Scholar] [CrossRef]
  6. Cancer Statistics. 2023. Available online: https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2023/2023-cancer-facts-and-figures.pdf (accessed on 12 January 2023).
  7. Gururaj, H.L.; Manju, N.; Nagarjun, A.; Aradhya, V.N.M.; Flammini, F. DeepSkin: A Deep Learning Approach for Skin Cancer Classification. IEEE Access 2023, 11, 50205–50214. [Google Scholar] [CrossRef]
  8. Khan, M.A.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R.J.D. Skin lesion segmentation and multiclass classification using deep learning features and improved moth flame optimization. Diagnostics 2021, 11, 811. [Google Scholar] [CrossRef]
  9. Kasmi, R.; Mokrani, K. Classification of malignant melanoma and benign skin lesions: Implementation of automatic ABCD rule. IET Image Process. 2016, 10, 448–455. [Google Scholar] [CrossRef]
  10. Unlu, E.; Akay, B.N.; Erdem, C. Comparison of dermatoscopic diagnostic algorithms based on calculation: The ABCD rule of dermatoscopy, the seven-point checklist, the three-point checklist and the CASH algorithm in dermatoscopic evaluation of melanocytic lesions. J. Dermatol. 2014, 41, 598–603. [Google Scholar] [CrossRef]
  11. Zhu, B.; Chen, S.; Wang, H.; Yin, C.; Han, C.; Peng, C.; Liu, Z.; Wan, L.; Zhang, X.; Zhang, J.; et al. The protective role of DOT1L in UV-induced melanomagenesis. Nat. Commun. 2018, 9, 259. [Google Scholar] [CrossRef]
  12. Xin, C.; Liu, Z.; Zhao, K.; Miao, L.; Ma, Y.; Zhu, X.; Zhou, Q.; Wang, S.; Li, L.; Yang, F.; et al. An improved transformer network for skin cancer classification. Comput. Biol. Med. 2022, 149, 105939. [Google Scholar] [CrossRef] [PubMed]
  13. Keerthana, D.; Venugopal, V.; Nath, M.K.; Mishra, M. Hybrid convolutional neural networks with SVM classifier for classification of skin cancer. Biomed. Eng. Adv. 2023, 5, 100069. [Google Scholar] [CrossRef]
  14. Sethanan, K.; Pitakaso, R.; Srichok, T.; Khonjun, S.; Thannipat, P.; Wanram, S.; Boonmee, C.; Gonwirat, S.; Enkvetchakul, P.; Kaewta, C.; et al. Double AMIS-Ensemble Deep Learning for Skin Cancer Classification Expert Systems with Applications. Expert Syst. Appl. 2023, 234, 121047. [Google Scholar] [CrossRef]
  15. Wu, Q.-E.; Yu, Y.; Zhang, X. A Skin Cancer Classification Method Based on Discrete Wavelet Down-Sampling Feature Reconstruction. Electronics 2023, 12, 2103. [Google Scholar] [CrossRef]
  16. Shah, A.; Shah, M.; Pandya, A.; Sushra, R.; Sushra, R.; Mehta, M.; Patel, K.; Patel, K. A Comprehensive Study on Skin Cancer Detection using Artificial Neural Network (ANN) and Convolutional Neural Network (CNN). Clin. eHealth 2023, 6, 76–84. [Google Scholar] [CrossRef]
  17. Satheesha, T.Y.; Satyanarayana, D.; Prasad, M.N.G.; Dhruve, K.D. Melanoma Is Skin Deep: A 3D Reconstruction Technique for Computerized Dermoscopic Skin Lesion Classification. IEEE J. Transl. Eng. Health Med. 2017, 5, 1–17. [Google Scholar] [CrossRef] [PubMed]
  18. Alenezi, F.; Armghan, A.; Polat, K. A Novel Multi-Task Learning Network Based on Melanoma Segmentation and Classification with Skin Lesion Images. Diagnostics 2023, 13, 262. [Google Scholar] [CrossRef]
  19. Akilandasowmya, G.; Nirmaladevi, G.; Suganthi, S.; Aishwariya, A. Skin cancer diagnosis: Leveraging deep hidden features and ensemble classifiers for early detection and classification. Biomed. Signal Process. Control 2023, 105306. [Google Scholar] [CrossRef]
  20. Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161. [Google Scholar] [CrossRef]
  21. Jaisakthi, S.M.; Mirunalini, P.; Aravindan, C.; Rajagopal Appavu, R. Classification of skin cancer from dermoscopic images using deep neural network architectures. Multimed. Tools Appl. 2023, 82, 15763–15778. [Google Scholar]
  22. Gilani, S.Q.; Syed, T.; Umair, M.; Marques, O. Skin Cancer Classification Using Deep Spiking Neural Network. J. Digit. Imaging 2023, 36, 1137–1147. [Google Scholar] [CrossRef] [PubMed]
  23. Hasan, M.K.; Ahamad, M.A.; Yap, C.H.; Yang, G. A survey, review, and future trends of skin lesion segmentation and classification. Comput. Biol. Med. 2023, 155, 106624. [Google Scholar] [CrossRef] [PubMed]
  24. Tembhurne, J.V.; Hebbar, N.; Patil, H.Y.; Diwan, T. Skin cancer detection using ensemble of machine learning and deep learning techniques. Multimedia Tools Appl. 2023, 82, 27501–27524. [Google Scholar] [CrossRef]
  25. Sharma, A.K.; Tiwari, S.; Aggarwal, G.; Goenka, N.; Kumar, A.; Chakrabarti, P.; Chakrabarti, T.; Gono, R.; Leonowicz, Z.; Jasinski, M. Dermatologist-Level Classification of Skin Cancer Using Cascaded Ensembling of Convolutional Neural Network and Handcrafted Features Based Deep Neural Network. IEEE Access 2022, 10, 17920–17932. [Google Scholar] [CrossRef]
  26. Mridha, K.; Uddin, M.; Shin, J.; Khadka, S.; Mridha, M.F. An Interpretable Skin Cancer Classification Using Optimized Convolutional Neural Network for a Smart Healthcare System. IEEE Access 2023, 11, 41003–41018. [Google Scholar] [CrossRef]
  27. Riaz, L.; Qadir, H.M.; Ali, G.; Ali, M.; Raza, M.A.; Jurcut, A.D.; Ali, J. A Comprehensive Joint Learning System to Detect Skin Cancer. IEEE Access 2023, 11, 79434–79444. [Google Scholar] [CrossRef]
  28. Kassem, M.A.; Hosny, K.M.; Damaševičius, R.; Eltoukhy, M.M. Machine learning and deep learning methods for skin lesion classification and diagnosis: A systematic review. Diagnostics 2021, 11, 1390. [Google Scholar] [CrossRef]
  29. Hauser, K.; Kurz, A.; Haggenmüller, S.; Maron, R.C.; von Kalle, C.; Utikal, J.S.; Meier, F.; Hobelsberger, S.; Gellrich, F.F.; Sergon, M.; et al. Explainable artificial intelligence in skin cancer recognition: A systematic review. Eur. J. Cancer 2022, 167, 54–69. [Google Scholar] [CrossRef]
  30. Zhang, J.; Xie, Y.; Xia, Y.; Shen, C. Attention Residual Learning for Skin Lesion Classification. IEEE Trans. Med. Imaging 2019, 38, 2092–2103. [Google Scholar] [CrossRef]
  31. Anand, V.; Gupta, S.; Koundal, D.; Singh, K. Fusion of U-Net and CNN model for segmentation and classification of skin lesion from dermoscopy images. Expert Syst. Appl. 2023, 213, 119230. [Google Scholar] [CrossRef]
  32. Alenezi, F.; Armghan, A.; Polat, K. Wavelet transform based deep residual neural network and ReLU based Extreme Learning Machine for skin lesion classification. Expert Syst. Appl. 2023, 213, 119064. [Google Scholar] [CrossRef]
  33. Kampylafka, E.; Simon, D.; D’oliveira, I.; Linz, C.; Lerchen, V.; Englbrecht, M.; Rech, J.; Kleyer, A.; Sticherling, M.; Schett, G.; et al. Disease interception with interleukin-17 inhibition in high-risk psoriasis patients with subclinical joint inflammation—Data from the prospective IVEPSA study. Thromb. Haemost. 2019, 21, 1–9. [Google Scholar] [CrossRef] [PubMed]
  34. Amin, J.; Sharif, A.; Gul, N.; Anjum, M.A.; Nisar, M.W.; Azam, F.; Bukhari, S.A.C. Integrated design of deep features fusion for localization and classification of skin cancer. Pattern Recognit. Lett. 2019, 131, 63–70. [Google Scholar] [CrossRef]
  35. Al-Masni, M.A.; Al-Antari, M.A.; Park, H.M.; Park, N.H.; Kim, T.-S. A deep learning model integrating FrCN and residual convolutional networks for skin lesion segmentation and classification. In Proceedings of the 2019 IEEE Eurasia Conference on Biomedical Engineering, Healthcare and Sustainability (ECBIOS), Okinawa, Japan, 31 May–3 June 2019; pp. 95–98. [Google Scholar]
  36. Pacheco, A.G.; Ali, A.R.; Trappenberg, T. Skin cancer detection based on deep learning and entropy to detect outlier samples. arXiv 2019, arXiv:1909.04525. [Google Scholar]
  37. Naeem, A.; Farooq, M.S.; Khelifi, A.; Abid, A. Malignant melanoma classification using deep learning: Datasets, performance measurements, challenges and opportunities. IEEE Access 2020, 8, 110575–110597. [Google Scholar] [CrossRef]
  38. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
  39. Liu, L.; Mou, L.; Zhu, X.X.; Mandal, M. Automatic skin lesion classification based on mid-level feature learning. Comput. Med. Imaging Graph. 2020, 84, 101765. [Google Scholar] [CrossRef]
  40. Ghahfarrokhi, S.S.; Khodadadi, H.; Ghadiri, H.; Fattahi, F. Malignant melanoma diagnosis applying a machine learning method based on the combination of nonlinear and texture features. Biomed. Signal Process. Control 2023, 80, 104300. [Google Scholar] [CrossRef]
  41. Chaturvedi, S.S.; Tembhurne, J.V.; Diwan, T. A multi-class skin Cancer classification using deep convolutional neural networks. Multimedia Tools Appl. 2020, 79, 28477–28498. [Google Scholar] [CrossRef]
  42. El-Khatib, H.; Popescu, D.; Ichim, L. Deep Learning–Based Methods for Automatic Diagnosis of Skin Lesions. Sensors 2020, 20, 1753. [Google Scholar] [CrossRef]
  43. Ghosh, P.; Azam, S.; Quadir, R.; Karim, A.; Shamrat, F.M.J.M.; Bhowmik, S.K.; Jonkman, M.; Hasib, K.M.; Ahmed, K. SkinNet-16: A deep learning approach to identify benign and malignant skin lesions. Front. Oncol. 2022, 12, 931141. [Google Scholar] [CrossRef] [PubMed]
  44. Almaraz-Damian, J.-A.; Ponomaryov, V.; Sadovnychiy, S.; Castillejos-Fernandez, H. Melanoma and Nevus Skin Lesion Classification Using Handcraft and Deep Learning Feature Fusion via Mutual Information Measures. Entropy 2020, 22, 484. [Google Scholar] [CrossRef] [PubMed]
  45. Reis, H.C.; Turk, V.; Khoshelham, K.; Kaya, S. InSiNet: A deep convolutional approach to skin cancer detection and segmentation. Med. Biol. Eng. Comput. 2022, 60, 643–662. [Google Scholar] [CrossRef] [PubMed]
  46. Khan, M.A.; Sharif, M.; Akram, T.; Kadry, S.; Hsu, C.-H. A two-stream deep neural network-based intelligent system for complex skin cancer types classification. Int. J. Intell. Syst. 2021, 37, 10621–10649. [Google Scholar] [CrossRef]
  47. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar]
  48. Park, D.; Park, H.; Han, D.K.; Ko, H. Single image dehazing with image entropy and information fidelity. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 4037–4041. [Google Scholar]
  49. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27 June–1 July 2016; pp. 770–778. [Google Scholar]
  50. Ahsan, M.; Uddin, M.R.; Ali, S.; Islam, K.; Farjana, M.; Sakib, A.N.; Al Momin, K.; Luna, S.A. Deep transfer learning approaches for Monkeypox disease diagnosis. Expert Syst. Appl. 2023, 216, 119483. [Google Scholar] [CrossRef]
  51. Zawbaa, H.M.; Emary, E.; Parv, B. Feature selection based on antlion optimization algorithm. In Proceedings of the 2015 Third World Conference on Complex Systems (WCCS), Marrakech, Morocco, 23–25 November 2015; pp. 1–7. [Google Scholar]
  52. Marchetti, M.A.; Cowen, E.A.; Kurtansky, N.R.; Weber, J.; Dauscher, M.; DeFazio, J.; Deng, L.; Dusza, S.W.; Haliasos, H.; Halpern, A.C.; et al. Prospective validation of dermoscopy-based open-source artificial intelligence for melanoma diagnosis (PROVE-AI study). NPJ Digit. Med. 2023, 6, 127. [Google Scholar] [CrossRef]
  53. Olayah, F.; Senan, E.M.; Ahmed, I.A.; Awaji, B. AI Techniques of Dermoscopy Image Analysis for the Early Detection of Skin Lesions Based on Combined CNN Features. Diagnostics 2023, 13, 1314. [Google Scholar] [CrossRef]
  54. Nawaz, M.; Nazir, T.; Khan, M.A.; Alhaisoni, M.; Kim, J.Y.; Nam, Y. MSeg-Net: A Melanoma Mole Segmentation Network Using CornerNet and Fuzzy-Means Clustering. Comput. Math. Methods Med. 2022, 2022, 7502504. [Google Scholar] [CrossRef]
  55. Alsaade, F.W.; Aldhyani, T.H.H.; Al-Adhaileh, M.H. Developing a Recognition System for Diagnosing Melanoma Skin Lesions Using Artificial Intelligence Algorithms. Comput. Math. Methods Med. 2021, 2021, 9998379. [Google Scholar] [CrossRef]
  56. Babu, G.N.K.; Peter, V.J. Skin cancer detection using support vector machine with histogram of oriented gradients features. ICTACT J. Soft Comput. 2021, 11, 2301–2305. [Google Scholar]
  57. Alizadeh, S.M.; Mahloojifar, A. Automatic skin cancer detection in dermoscopy images by combining convolutional neural networks and texture features. Int. J. Imaging Syst. Technol. 2020, 31, 695–707. [Google Scholar] [CrossRef]
  58. Ichim, L.; Popescu, D. Melanoma Detection Using an Objective System Based on Multiple Connected Neural Networks. IEEE Access 2020, 8, 179189–179202. [Google Scholar] [CrossRef]
  59. Monika, M.K.; Vignesh, N.A.; Kumari, C.U.; Kumar, M.; Lydia, E.L. Skin cancer detection and classification using machine learning. Mater. Today Proc. 2020, 33, 4266–4270. [Google Scholar] [CrossRef]
Figure 1. Categories of skin lesions: (a) melanoma; (b) actinic keratosis; (c) basal cell carcinoma; (d) Merkel cell carcinoma; (e) melanocytic nevus/mole; (f) squamous cell carcinoma [20].
Figure 2. Illustration of the proposed methodology for skin lesion classification.
Figure 3. Visual results of hybrid contrast enhancement. (A) Original images; (B) enhanced images after applying the proposed approach.
Figure 4. Augmentation process using Flip LR and Flip UD operations.
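For readers who want a concrete picture of the flip-based augmentation shown in Figure 4, the following is a minimal sketch assuming NumPy image arrays; the function name augment_with_flips is hypothetical and this is not the authors' original pipeline, which combines flips with additional replication of minority classes (see Tables 3 and 4).

```python
import numpy as np

def augment_with_flips(image):
    """Return the original image plus its Flip LR and Flip UD variants.

    `image` is an H x W x C array (e.g., a dermoscopy image). This is only an
    illustrative sketch of the augmentation depicted in Figure 4.
    """
    flip_lr = np.fliplr(image)  # mirror horizontally (Flip LR)
    flip_ud = np.flipud(image)  # mirror vertically (Flip UD)
    return [image, flip_lr, flip_ud]

# Example: a dummy 256 x 256 RGB image yields three training samples.
dummy = np.zeros((256, 256, 3), dtype=np.uint8)
augmented = augment_with_flips(dummy)
print(len(augmented))  # 3
```

Each call returns the original plus two flipped copies; the exact recipe used to reach the per-class counts in Tables 3 and 4 is not reproduced here.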
Figure 5. Architecture of DarkNet19 model.
Figure 6. Architecture of ResNet18 model [49].
Figure 7. Architecture of the InceptionV3 model [49]. The * sign is employed for this kind of figure.
Figure 8. Process of transfer learning for skin cancer feature extraction.
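Figure 8 summarizes transfer learning for deep feature extraction. The sketch below illustrates the general idea with an ImageNet-pre-trained ResNet18 from torchvision (assuming torchvision ≥ 0.13); it is an assumption-based illustration, not the authors' implementation, which fine-tunes the networks and extracts features from a selected deeper layer.

```python
import torch
from torchvision import models

# Load a pre-trained ResNet18 and drop its final classification layer so the
# network returns the 512-dimensional global-average-pooled feature vector.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

# One dummy 224 x 224 RGB tensor stands in for a preprocessed dermoscopy image.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = backbone(x)
print(features.shape)  # torch.Size([1, 512])
```

Feature vectors extracted this way can then be fed to classical classifiers (SVM, KNN, ensembles), which is the general pattern the tables below evaluate.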
Figure 9. Confusion matrix of Quadratic SVM for augmented ISIC2018 dataset.
Figure 10. Confusion matrix of Quadratic SVM for augmented ISIC2018 dataset.
Figure 11. Confusion matrix of Quadratic SVM for augmented ISIC2019 dataset.
Figure 12. Confusion matrix of Quadratic SVM for augmented ISIC2019 dataset using proposed feature selection technique.
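Figures 9–12 report confusion matrices for the best-performing classifiers. For readers reproducing such matrices and the per-class sensitivities listed in the tables below, a minimal sketch with scikit-learn is shown here; the labels are made up for illustration and are not the study's predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy ground-truth and predicted labels for a 3-class problem (illustration only).
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2, 1])

cm = confusion_matrix(y_true, y_pred)  # rows: true class, columns: predicted class
per_class_sensitivity = cm.diagonal() / cm.sum(axis=1)  # TP / (TP + FN) per class
print(cm)
print(per_class_sensitivity)
```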
Table 1. ISIC2018 Skin dataset description.
Class | No. of Images
Akiec (Actinic keratosis) | 326
Bcc (Basal Cell Carcinoma) | 514
Bkl (Benign keratosis) | 1099
Df (Dermatofibroma) | 115
Mel (Melanoma) | 1113
Nv (Nevus) | 6705
Vasc (Vascular) | 142
Table 2. ISIC2019 Skin dataset description.
Class | No. of Images
AK (Actinic keratosis) | 3469
BCC (Basal Cell Carcinoma) | 3232
BKL (Benign keratosis) | 3200
DF (Dermatofibroma) | 3232
MEL (Melanoma) | 3072
NV (Nevus) | 2112
SCC (Squamous cell carcinoma) | 3200
VASC (Vascular) | 2240
VASC (Vascular)2240
Table 3. Updated ISIC2018 Skin dataset images after data augmentation.
Class | Before Augmentation | After Augmentation
Akiec (Actinic keratosis) | 326 | 7821
Bcc (Basal Cell Carcinoma) | 514 | 7201
Bkl (Benign keratosis) | 1099 | 6593
Df (Dermatofibroma) | 115 | 7360
Mel (Melanoma) | 1113 | 6678
Nv (Nevus) | 6705 | 15,637
Vasc (Vascular) | 142 | 4544
Table 4. Updated ISIC2019 Skin dataset after data augmentation.
Class | Before Augmentation | After Augmentation
AK (Actinic keratosis) | 3469 | 6938
BCC (Basal Cell Carcinoma) | 3232 | 6464
BKL (Benign keratosis) | 3200 | 6400
DF (Dermatofibroma) | 3232 | 6464
MEL (Melanoma) | 3072 | 6144
NV (Nevus) | 2112 | 4224
SCC (Squamous cell carcinoma) | 3200 | 6400
VASC (Vascular) | 2240 | 4480
Table 5. Classification results of the fine-tuned DarkNet19, ResNet18, and InceptionV3 models on the augmented ISIC2018 skin dataset.
Classification results of the fine-tuned DarkNet19 model on the augmented ISIC2018 skin dataset

Classifier | Sensitivity (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Fowlkes–Mallows Index | Accuracy (%) | Time (s)
Fine Tree | 56.53 | 58.8 | 57.64 | 0.84 | 57.65 | 59.1 | 108.46
Quadratic SVM | 87.27 | 87.2 | 87.24 | 0.98 | 87.23 | 86.3 | 2740.8
Medium KNN | 83.84 | 81.79 | 82.81 | 0.98 | 82.81 | 82.5 | 1923.7
Weighted KNN | 86.91 | 85.41 | 86.154 | 0.96 | 86.16 | 85 | 2657.1
Bagged Tree | 80.84 | 82.2 | 81.52 | 0.96 | 81.52 | 81.0 | 604.7
Narrow Neural Network | 81.21 | 80.24 | 79.74 | 0.94 | 80.72 | 79.5 | 2701.3
Medium Neural Network | 84.3 | 83.89 | 84.1 | 0.95 | 84.09 | 82.8 | 2978.7
Bi-Layered Neural Network | 81.66 | 80.71 | 81.18 | 0.94 | 81.18 | 79.9 | 2809.8

Classification results of the fine-tuned ResNet18 model on the augmented ISIC2018 skin dataset

Classifier | Sensitivity (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Fowlkes–Mallows Index | Accuracy (%) | Time (s)
Fine Tree | 61.91 | 62.69 | 62.3 | 0.87 | 62.30 | 62.9 | 52.616
Quadratic SVM | 89.39 | 89.13 | 89.26 | 0.98 | 89.26 | 88.3 | 868.55
Medium KNN | 87.17 | 86.01 | 86.58 | 0.98 | 86.59 | 85.4 | 429.95
Weighted KNN | 89.3 | 88.39 | 88.84 | 0.97 | 88.84 | 87.4 | 428.64
Bagged Tree | 83.76 | 84.37 | 84.06 | 0.97 | 84.06 | 83.5 | 145.55
Narrow Neural Network | 85.46 | 84.56 | 85 | 0.96 | 85.01 | 83.8 | 985.98
Medium Neural Network | 85.84 | 85.63 | 85.74 | 0.98 | 85.73 | 84.5 | 1112.6
Bi-Layered Neural Network | 85.3 | 84.51 | 84.9 | 0.96 | 84.90 | 83.6 | 1065

Classification results of the fine-tuned InceptionV3 model on the augmented ISIC2018 skin dataset

Classifier | Sensitivity (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Fowlkes–Mallows Index | Accuracy (%) | Time (s)
Fine Tree | 89.614 | 89.04 | 89.32 | 0.98 | 89.33 | 87.9 | 129.68
Quadratic SVM | 92.63 | 92.03 | 92.32 | 0.99 | 92.33 | 90.9 | 2425.5
Medium KNN | 92.14 | 90.99 | 91.56 | 0.99 | 91.56 | 90.0 | 1780.6
Weighted KNN | 91.21 | 90.54 | 90.88 | 0.97 | 90.87 | 89.2 | 1872.2
Bagged Tree | 90.53 | 90.29 | 90.4 | 0.98 | 90.41 | 89.1 | 314.74
Narrow Neural Network | 90.89 | 90.57 | 90.72 | 0.99 | 90.73 | 89.3 | 3566.7
Medium Neural Network | 89.99 | 89.99 | 89.98 | 0.99 | 89.99 | 88.6 | 4601.7
Bi-Layered Neural Network | 91.34 | 90.74 | 91.04 | 0.99 | 91.04 | 89.5 | 3565.7
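For reference, the column metrics reported in Tables 5–10 follow their standard definitions in terms of true positives (TP), false positives (FP), and false negatives (FN):

```latex
% Standard definitions of the column metrics in Tables 5-10.
\begin{align}
\text{Sensitivity} &= \frac{TP}{TP + FN}, &
\text{Precision}   &= \frac{TP}{TP + FP}, \\
F_1 &= \frac{2\,\text{Precision}\cdot\text{Sensitivity}}{\text{Precision} + \text{Sensitivity}}, &
\text{FM} &= \sqrt{\text{Precision}\cdot\text{Sensitivity}}.
\end{align}
```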
Table 6. Classification results of fusion on augmented ISIC2018 skin dataset.
Classifier | Sensitivity (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Fowlkes–Mallows Index | Accuracy (%) | Time (s)
Fine Tree | 91.03 | 90.41 | 90.72 | 0.98 | 90.72 | 89.8 | 290.756
Quadratic SVM | 96.93 | 96.33 | 96.62 | 0.98 | 96.63 | 96.1 | 6034.85
Medium KNN | 95.69 | 94.44 | 95.06 | 0.99 | 95.06 | 93.8 | 4134.25
Weighted KNN | 96.24 | 94.94 | 95.58 | 0.99 | 95.59 | 94.3 | 4957.94
Bagged Tree | 94.24 | 93.93 | 94.08 | 0.99 | 94.08 | 93.5 | 1064.99
Narrow Neural Network | 95.2 | 95.07 | 95.14 | 0.98 | 95.13 | 94.6 | 7253.98
Medium Neural Network | 95.5 | 95.31 | 95.4 | 0.99 | 95.40 | 94.8 | 8693
Bi-Layered Neural Network | 95.04 | 94.9 | 94.96 | 0.99 | 94.97 | 94.4 | 7440.5
Table 7. Classification results of proposed optimization algorithm on augmented ISIC2018 skin dataset.
Classifier | Sensitivity (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Fowlkes–Mallows Index | Accuracy (%) | Time (s)
Fine Tree | 90.67 | 90.21 | 90.44 | 0.98 | 90.44 | 89.7 | 130.94
Quadratic SVM | 96.92 | 96.35 | 96.64 | 0.99 | 96.63 | 96.1 | 1464.4
Medium KNN | 95.79 | 94.47 | 95.12 | 0.99 | 95.13 | 93.9 | 2525.7
Weighted KNN | 96.33 | 94.99 | 95.66 | 0.99 | 95.66 | 94.4 | 2120
Bagged Tree | 93.91 | 93.66 | 93.78 | 0.99 | 93.78 | 93.2 | 268.03
Narrow Neural Network | 94.57 | 94.6 | 94.58 | 0.98 | 94.58 | 94.0 | 963.51
Medium Neural Network | 95.16 | 94.97 | 95.06 | 0.99 | 95.06 | 94.5 | 430.84
Bi-Layered Neural Network | 94.41 | 94.33 | 94.36 | 0.98 | 94.37 | 93.8 | 1562.8
Table 8. Classification results of the fine-tuned deep models on the augmented ISIC2019 skin dataset.
Classification results of the fine-tuned DarkNet19 model on the augmented ISIC2019 skin dataset

Classifier | Sensitivity (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Accuracy (%) | Fowlkes–Mallows Index | Time (s)
Fine Tree | 80.95 | 81.06 | 81.0 | 0.96 | 80.5 | 81.00 | 245.51
Quadratic SVM | 99.34 | 99.36 | 99.34 | 1.00 | 99.3 | 99.35 | 547.26
Medium KNN | 99.19 | 99.24 | 99.22 | 1.00 | 99.2 | 99.21 | 1951.8
Weighted KNN | 99.73 | 99.71 | 99.72 | 1.00 | 99.7 | 99.72 | 1916.6
Bagged Tree | 99.11 | 99.2 | 99.16 | 1.00 | 99.2 | 99.15 | 8373
Narrow Neural Network | 99.59 | 99.6 | 99.58 | 1.00 | 99.6 | 99.59 | 1793.8
Medium Neural Network | 99.61 | 99.61 | 99.6 | 1.00 | 99.6 | 99.61 | 3073
Bi-Layered Neural Network | 99.5 | 99.54 | 99.52 | 1.00 | 99.5 | 99.52 | 2123.6

Classification results of the fine-tuned ResNet18 model on the augmented ISIC2019 skin dataset

Classifier | Sensitivity (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Accuracy (%) | Fowlkes–Mallows Index | Time (s)
Fine Tree | 63.78 | 66.54 | 65.14 | 0.88 | 63.9 | 65.15 | 102.59
Quadratic SVM | 98.53 | 98.6 | 98.56 | 1.00 | 98.5 | 98.56 | 466.99
Medium KNN | 98.51 | 98.81 | 98.66 | 1.00 | 98.7 | 98.66 | 1675.7
Weighted KNN | 99.53 | 99.59 | 99.56 | 1.00 | 99.5 | 99.56 | 1612.6
Bagged Tree | 96.89 | 96.89 | 96.88 | 1.00 | 97.2 | 96.89 | 7982.8
Narrow Neural Network | 98.26 | 98.35 | 98.3 | 0.99 | 98.3 | 98.30 | 2934.5
Medium Neural Network | 98.34 | 98.45 | 98.4 | 0.99 | 98.4 | 98.39 | 4256.6
Bi-Layered Neural Network | 98.29 | 98.4 | 98.34 | 0.99 | 98.4 | 98.34 | 6263.8

Classification results of the fine-tuned InceptionV3 model on the augmented ISIC2019 skin dataset

Classifier | Sensitivity (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Accuracy (%) | Fowlkes–Mallows Index | Time (s)
Fine Tree | 96.2 | 96.26 | 96.22 | 0.99 | 96.3 | 96.23 | 112.15
Quadratic SVM | 99.19 | 99.24 | 99.22 | 1.00 | 99.2 | 99.21 | 595.26
Medium KNN | 99.05 | 88.61 | 93.54 | 1.00 | 99.0 | 93.68 | 1348.2
Weighted KNN | 99.66 | 99.69 | 99.68 | 1.00 | 99.7 | 99.67 | 1361.8
Bagged Tree | 99.35 | 99.39 | 99.36 | 1.00 | 99.4 | 99.37 | 4002.9
Narrow Neural Network | 98.49 | 99.51 | 98.98 | 0.98 | 99.5 | 99.00 | 4175.2
Medium Neural Network | 99.49 | 99.5 | 99.5 | 0.98 | 99.5 | 99.49 | 5605.7
Bi-Layered Neural Network | 99.33 | 99.34 | 99.36 | 1.00 | 99.3 | 99.33 | 4694.9
Table 9. Classification results of the proposed fusion on augmented ISIC2019 skin dataset.
Classifier | Sensitivity (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Accuracy (%) | Fowlkes–Mallows Index | Time (s)
Fine Tree | 95.99 | 96.05 | 96.02 | 0.99 | 96.1 | 96.02 | 460.25
Quadratic SVM | 99.54 | 99.54 | 99.56 | 1.00 | 99.6 | 99.54 | 1609.51
Medium KNN | 99.86 | 99.88 | 99.88 | 1.00 | 99.9 | 99.87 | 4875.7
Weighted KNN | 99.93 | 99.94 | 99.94 | 1.00 | 99.9 | 99.93 | 4891
Bagged Tree | 99.38 | 99.44 | 99.42 | 1.00 | 99.4 | 99.41 | 8903.5
Narrow Neural Network | 99.76 | 99.78 | 99.76 | 1.00 | 99.8 | 99.77 | 8903.5
Medium Neural Network | 99.83 | 99.84 | 99.84 | 1.00 | 99.8 | 99.83 | 12,935.3
Bi-Layered Neural Network | 99.79 | 99.79 | 99.78 | 1.00 | 99.8 | 99.79 | 13,082.3
Table 10. Classification results of proposed optimization algorithm on augmented ISIC2019 skin dataset.
Classifier | Sensitivity (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Accuracy (%) | Fowlkes–Mallows Index | Time (s)
Fine Tree | 98.29 | 94.8 | 96.52 | 0.99 | 94.6 | 96.53 | 33.252
Quadratic SVM | 99.6 | 99.6 | 99.6 | 1.00 | 99.6 | 99.60 | 142.73
Medium KNN | 99.8 | 99.84 | 99.82 | 1.00 | 99.8 | 99.82 | 279.97
Weighted KNN | 99.89 | 99.89 | 99.88 | 1.00 | 99.9 | 99.89 | 283.46
Bagged Tree | 98.78 | 98.85 | 98.82 | 1.00 | 98.8 | 98.81 | 1018.1
Narrow Neural Network | 99.61 | 99.65 | 99.64 | 1.00 | 99.6 | 99.63 | 61.565
Medium Neural Network | 99.73 | 99.71 | 99.72 | 1.00 | 99.7 | 99.72 | 62.441
Bi-Layered Neural Network | 99.59 | 99.59 | 99.58 | 1.00 | 99.6 | 99.59 | 83.467
Table 11. Computational time-based comparison for ISIC2018 skin dataset.
Classifier | DarkNet19 (s) | ResNet18 (s) | InceptionV3 (s) | Fusion (s) | Optimization (s)
Fine Tree | 108.46 | 52.616 | 129.68 | 290.756 | 130.94
Quadratic SVM | 2740.8 | 868.55 | 2425.5 | 6034.85 | 1464.4
Medium KNN | 1923.7 | 429.95 | 1780.6 | 4134.25 | 2525.7
Weighted KNN | 2657.1 | 428.64 | 1872.2 | 4957.94 | 2120
Bagged Tree | 604.7 | 145.55 | 314.74 | 1064.99 | 268.03
Narrow Neural Network | 2701.3 | 985.98 | 3566.7 | 7253.98 | 963.51
Medium Neural Network | 2978.7 | 1112.6 | 4601.7 | 8693 | 430.84
Bi-Layered Neural Network | 2809.8 | 1065 | 3565.7 | 7440.5 | 1562.8
Table 12. Computational time-based comparison for the ISIC2019 skin dataset.
Classifier | DarkNet19 (s) | ResNet18 (s) | InceptionV3 (s) | Fusion (s) | Optimization (s)
Fine Tree | 245.51 | 102.59 | 112.15 | 460.25 | 33.252
Quadratic SVM | 547.26 | 466.99 | 595.26 | 1609.51 | 142.73
Medium KNN | 1951.8 | 1675.7 | 1348.2 | 4875.7 | 279.97
Weighted KNN | 1916.6 | 1612.6 | 1361.8 | 4891 | 283.46
Bagged Tree | 8373 | 7982.8 | 4002.9 | 8903.5 | 1018.1
Narrow Neural Network | 1793.8 | 2934.5 | 4175.2 | 8903.5 | 61.565
Medium Neural Network | 3073 | 4256.6 | 5605.7 | 12,935.3 | 62.441
Bi-Layered Neural Network | 2123.6 | 6263.8 | 4694.9 | 13,082.3 | 83.467
Table 13. Comparison of the proposed framework with recent computerized AI techniques.
Authors/Reference | Method | Dataset | Accuracy (%) | Time (s)
Nawaz, Marriam [54] | A deep learning CornerNet and Fuzzy-Means Clustering Algorithm | | 99.63 |
Alsaade [55] | Deep Learning and Traditional Machine Learning based AI system | | 98.35 |
Babu [56] | Support vector machine and HOG features-based AI system | | 76 |
Alizadeh [57] | Combining CNN and Traditional Features of AI System | | 97.5 |
Ichim [58] | Multiple Connected Neural Network Architecture | | 97.5 |
El-Khatib [42] | Simple Deep Learning Method | | 93 |
Monika [59] | Machine learning-based system | | 96.25 |
Our Proposed System | | ISIC2018 | 96.1 | 1464.4
Our Proposed System | | ISIC2019 | 99.9 | 283.46
Table 14. Proposed classification results for ISIC2018 and ISIC2019 datasets based on all performance measures including MCC, Kappa, and Fowlkes–Mallows index.
Proposed classification results for the ISIC2018 dataset (Quadratic SVM)

Model | Sensitivity (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Fowlkes–Mallows Index | Accuracy (%) | MCC | Kappa
Fine-tuned DarkNet19 | 87.27 | 87.2 | 87.24 | 0.98 | 87.23 | 86.3 | 84.93 | 44.33
Fine-tuned ResNet18 | 89.39 | 89.13 | 89.26 | 0.98 | 89.26 | 88.3 | 87.16 | 51.92
Fine-tuned InceptionV3 | 92.63 | 92.03 | 92.32 | 0.99 | 92.33 | 90.9 | 90.44 | 61.92
Proposed fusion | 96.93 | 96.33 | 96.62 | 0.98 | 96.63 | 96.1 | 95.93 | 84.09
Proposed optimization algorithm | 96.92 | 96.35 | 96.64 | 0.99 | 96.63 | 96.1 | 95.94 | 84.10

Proposed classification results for the ISIC2019 dataset (Weighted KNN)

Model | Recall (%) | Precision Rate (%) | F1 Score (%) | Area Under Curve | Accuracy (%) | Fowlkes–Mallows Index | MCC | Kappa
Fine-tuned DarkNet19 | 99.73 | 99.71 | 99.72 | 1.00 | 99.7 | 99.73 | 99.68 | 98.63
Fine-tuned ResNet18 | 99.53 | 99.59 | 99.56 | 1.00 | 99.5 | 99.53 | 99.45 | 97.78
Fine-tuned InceptionV3 | 99.66 | 99.69 | 99.68 | 1.00 | 99.7 | 99.66 | 99.62 | 98.37
Proposed fusion | 99.93 | 99.94 | 99.94 | 1.00 | 99.9 | 99.93 | 99.91 | 99.63
Proposed optimization algorithm | 99.89 | 99.89 | 99.88 | 1.00 | 99.9 | 99.89 | 99.90 | 99.60
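The additional measures in Table 14 are the Matthews correlation coefficient (MCC) and Cohen's kappa; their standard forms are given below (binary MCC shown for illustration, since the multiclass variant is computed analogously from the full confusion matrix):

```latex
% MCC and Cohen's kappa as reported in Table 14 (standard forms).
\begin{align}
\text{MCC} &= \frac{TP \cdot TN - FP \cdot FN}
                  {\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}, \\
\kappa &= \frac{p_o - p_e}{1 - p_e},
\end{align}
```

where $p_o$ is the observed agreement (the overall accuracy) and $p_e$ is the agreement expected by chance from the marginal class frequencies.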
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
