Article

A Novel Framework for Classification of Different Alzheimer’s Disease Stages Using CNN Model

by Gowhar Mohi ud din dar 1, Avinash Bhagat 1, Syed Immamul Ansarullah 2, Mohamed Tahar Ben Othman 3,*, Yasir Hamid 4, Hend Khalid Alkahtani 5, Inam Ullah 6,* and Habib Hamam 7,8,9,10

1 School of Computer Applications, Lovely Professional University, Phagwara 144411, India
2 Kwintech-R Labs, Jammu & Kashmir, Srinagar 193501, India
3 Department of Computer Science, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
4 Abu Dhabi Polytechnic, Abu Dhabi 111499, United Arab Emirates
5 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
6 BK21 Chungbuk Information Technology Education and Research Center, Chungbuk National University, Cheongju 28644, Republic of Korea
7 Faculty of Engineering, Université de Moncton, Moncton, NB E1A 3E9, Canada
8 Spectrum of Knowledge Production & Skills Development, Sfax 3027, Tunisia
9 International Institute of Technology and Management, Commune d’Akanda, Libreville BP 1989, Gabon
10 School of Electrical Engineering, Department of Electrical and Electronic Engineering Science, University of Johannesburg, Johannesburg 2006, South Africa
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(2), 469; https://doi.org/10.3390/electronics12020469
Submission received: 4 November 2022 / Revised: 2 January 2023 / Accepted: 3 January 2023 / Published: 16 January 2023
(This article belongs to the Special Issue Advances in Fuzzy and Intelligent Systems)

Abstract

Background: Alzheimer’s, the predominant form of dementia, is a neurodegenerative brain disorder with no known cure. With the lack of innovative findings to diagnose and treat Alzheimer’s, the number of middle-aged people with dementia is estimated to rise to nearly 13 million by the end of 2050. The estimated cost of Alzheimer’s and other related ailments was USD 321 billion in 2022 and can rise above USD 1 trillion by the end of 2050. Therefore, the early prediction of such diseases using computer-aided systems is a topic of considerable interest and substantial study among scholars. The major objective is to develop a comprehensive framework for the earliest onset and categorization of different phases of Alzheimer’s. Methods: Experimental work of this novel approach is performed by implementing convolutional neural networks (CNNs) on MRI image datasets. Five classes of Alzheimer’s disease subjects are multi-classified. We used transfer learning to reap the benefits of pre-trained health data classification models such as MobileNet. Results: For the evaluation and comparison of the proposed model, various performance metrics are used. The test results reveal that the CNN architecture has appropriately simple structures that mitigate computational burden, memory usage, and overfitting, while offering maintainable training time. The MobileNet pre-trained model has been fine-tuned and has achieved 96.6 percent accuracy for multi-class AD stage classification. Other models, such as VGG16 and ResNet50, were applied to the same dataset while conducting this research, and it is revealed that this model yields better results than the others. Conclusion: The study develops a novel framework for the identification of different AD stages. The main advantage of this novel approach is the creation of lightweight neural networks.
The MobileNet model is mostly used for mobile applications and was rarely used for medical image analysis; hence, we implemented this model for disease detection, and it yielded better results than existing models.

1. Introduction

Alzheimer’s is a condition of the brain’s central part that causes gradual memory loss, cognitive impairment, and emotional distress. Around 46.8 million individuals worldwide have dementia, with Alzheimer’s disease accounting for 60–70% of cases and costing more than USD 818 billion globally [1]. Given that growing old is the foremost basis of dementia, the number of persons affected is expected to rise to 151 million by 2050 [2]. Extensive neuronal impairments are caused by the formation of extracellular amyloid plaques and the intracellular accretion of neurofibrillary tangles composed of hyperphosphorylated tau [3]. Because Alzheimer’s disease is presently incurable [4,5,6,7], disease-modifying medications are being explored that address not only amyloid beta (Aβ) build-up and the tau pathway [8,9] but also neuroinflammation [10,11,12] and nutritional pathways [13,14]. Recent medications’ inability to cure could be partly attributable to late-stage delivery and insensitive techniques of identifying cognitive changes, underscoring the clinical necessity for rigorous diagnostics [15]. The National Institute of Neurological and Communicative Disorders and Stroke (NINCDS) and the Alzheimer’s Disease and Related Disorders Association (ADRDA) currently set the standard for classifying a person’s mental well-being. They discovered that the biochemical alterations that occur in Alzheimer’s disease last for decades. Differentiating between disease stages will demand the establishment of innovative biomarkers:
  • Early-symptomatic ailment;
  • Prevenient ailments, i.e., those with apparent early dementia (AD);
  • AD or common variants of AD [16].
Cognitive assessments and imaging examinations, such as neuroimaging, rule out alternative causes of memory impairment, such as tumors [17]. When integrated with these approaches and professional clinical expertise, medical criteria have an 80 percent good prognosis and a 60 percent diagnostic accuracy for clinical diagnosis. Recent visualization advances, such as magnetic resonance imaging (MRI) [18], PET scans [19,20], and single-photon emission computed tomography (SPECT) [21], have empowered the tracking of neurodegeneration, malformations in neuronal development, and edema.
The number of AD patients is expected to rise substantially, necessitating the use of a computer-aided diagnosis (CAD) system for early and precise AD diagnosis [22]. Furthermore, mild cognitive impairment (MCI) is an interim phase between sound perception and dementia. As per a previous study [23], MCI participants advance to clinical AD at a rate of 10–15 percent yearly. In recent years, research in detecting MCI patients who will develop clinical dementia has received much attention. Identifying the conversion from one stage to another is vital to distinguishing the different stages of Alzheimer’s disease.
The primary emphasis of this study is on identifying different stages of AD based on an image dataset. Deep learning techniques are frequently employed for time series classification, image identification, and multidimensional data processing [24]. These methods are widely applied to neuroimaging data to identify Alzheimer’s disease (AD) [25]. Potential genetic indicators of Alzheimer’s disease have also been explored using these methods [26].
In order to diagnose Alzheimer’s, Zhang et al. [27] combined neuroimaging data with clinical and neuropsychological evaluations using a multimodal deep learning model. The idea put forth by Spasov et al. [28] emphasizes the significance of deep learning designs in patients at risk of AD in stopping the progression of mild cognitive impairment. Deep learning-based models also use extensive genomic and DNA methylation data to anticipate AD, mitigating the symptoms of usual neurodegeneration (falls, memory loss, etc.).
Our work progresses these approaches (neuroimaging and clinical analysis) by applying a technique that uses an ensemble of CNN models to identify the different stages of AD. Different CNN models are applied to the same dataset and their results compared, and it is found that the MobileNet model is efficient for medical image analysis. The rest of the work is presented in the following sections. Section 2 discusses previous findings for diagnosing Alzheimer’s disease. Our CNN-based method for determining the stage of Alzheimer’s disease based on MobileNet and ImageNet weights is described in Section 3. The experimental findings of our model are shown in Section 4. The research’s findings are discussed in Section 5, and Section 6 concludes the paper.

2. Related Work

AD detection is extensively investigated and encompasses many problems and complexities [29]. Payan et al. [30] used a sparse autoencoder and 3D convolutional neural networks. They developed an algorithm that analyzes a brain MRI scan to determine a person’s disease status. The primary innovation was 3D convolutions, which outperformed 2D convolutions in terms of results. The autoencoder was used to train the convolutional layer, but it was not fine-tuned. Efficiency is expected to improve with fine-tuning [24].
Sarraf et al. [31] classified AD from the NC brain using a widely used CNN architecture, LeNet-5 (binary classification). The work presented in [30] was extended by Hosseini et al. [32]. A deeply supervised adaptive 3D-CNN (DSA-3D-CNN) classifier was used to predict AD. Three-layered autoencoder (3D-CAE) architectures were pre-trained without any skull-stripping pre-processing on a CAD-Dementia dataset. Performance was analyzed using ten-fold cross-validation.
Gupta et al. [33] devised a sparse autoencoder model for the categorization of Alzheimer’s disease (AD), mild cognitive impairment (MCI), and healthy controls (HC). Payan et al. [34] used sparse autoencoders and a CNN architecture to diagnose Alzheimer’s. They also devised a two-dimensional CNN model that performed similarly. Brosch et al. [34] employed a deep belief network model with manifold learning to diagnose Alzheimer’s disease in MRI images, etc. [35,36,37,38].
Liu and Shen [39] used unsupervised and supervised technology to develop a deep-learning model that categorized AD and MCI patients. Korolev et al. [40] showed that a corresponding finding might be accomplished. When the control neural network and basic 3D CNN designs were deployed to three-dimensional MRIs, the outcomes revealed that the deepness and complexity of the two networks were remarkably similar. They did not perform as well as they originally anticipated.
Sarraf and Tofighi [41] employed functional MRI data and the deep LeNet model for AD diagnosis. Suk et al. [42,43,44,45] used multiple complex SVM kernels for classification in an autoencoder network-based model for AD diagnosis. They used a multi-kernel classifier to classify features extracted from magnetic resonance imaging, MCI-converter structural MRI, and PET data.
Wang et al. [46] developed a novel CNN approach that is based on a multimodal MRI analysis approach that involves diffusion tensor images or functional brain imaging data. Patients with Alzheimer’s ailment, dementia, and other related conditions were classified utilizing the framework. Despite the excellent classification accuracy, it is anticipated that employing 3D convolution rather than 2D convolution would enhance efficiency. A 3D multi-scale CNN (3DMSCNN) model was devised by Ge et al. [47]. The 3DMSCNN was a novel architecture for AD diagnosis. They also devised a multi-scale feature augmentation technique as well as a feature fusion.
Song et al. [48] postulated a Graph Convolutional Neural Network (GCNN) classifier based on graph techniques. By using structural connectivity graphs as a multi-class model, training and architecture evaluation divide the AD spectrum into four categories. Xu et al. [49] proposed a medical image segmentation method based on multi-dimensional statistical features. The main purpose of that paper is to integrate CNNs and transformers to detect and diagnose brain tumors, and it achieved efficient results. Based on such techniques, this research can be further enhanced by integrating both data modalities and models to detect and diagnose brain disorders. We are also currently working on the detection and diagnosis of brain disorders using different data modalities.

3. Problem Description and Solution Strategy

As highlighted in Section 2, numerous paradigms encompassing AD prognosis and clinical image assessment have recently been presented in the literature. However, most do not use transfer learning algorithms, multi-class clinical object detection, or an Alzheimer’s disease monitoring cloud service to assess the distinct phases of AD and provide remote guidance. These issues have received insufficient attention in the literature. Thus, following the other cutting-edge techniques discussed in Section 2, the novelties of this research can be organized as follows:
A novel framework is devised to identify the various phases of Alzheimer’s disease and classify medical images. The suggested method relies on CNN architectures for structural MRI images of the brain. Transfer learning is utilized to leverage the efficiency of already trained architectures, such as VGG19, ResNet50, and DenseNet121.
Unbalanced datasets and small dataset size are the most problematic aspects of medical image analysis. Resampling techniques are employed to balance the datasets, while data augmentation methods are utilized to enhance the dataset size and overcome over-fitting issues. According to the performance indicators, the experimental results predict a positive outcome.

4. Methods and Materials

The early diagnosis of Alzheimer’s ailment is critical for precluding and managing its progression. The data used in this research are taken from ADNI (Alzheimer’s Disease Neuroimaging Initiative). The Alzheimer’s Disease Neuroimaging Initiative (ADNI) is a longitudinal study designed to develop clinical, imaging, genetic, and biochemical biomarkers for the early detection and tracking of Alzheimer’s disease (AD). This initiative started in 2004 and is supported by multiple companies. Complete 1-year data were taken from ADNI to design this novel research. The main purpose of this research is to design a novel approach for the initial identification and tracking of Alzheimer’s stages. The proposed framework workflow, data preparation algorithms, and medical image classification techniques are thoroughly explained below.

5. The Proposed Framework

The proposed approach consists of the four steps described below:
Step 1. Data Attainment
This approach uses the ADNI dataset in the T2w MRI format. It offers medical images in JPEG format in axial, coronal, and sagittal views. The dataset comprises data from 300 subjects divided into five classes: Alzheimer’s disease (AD), mild cognitive impairment (MCI), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and cognitively normal (NC). A patient with LMCI symptoms is more likely to progress to AD than an EMCI subject because severe neuron damage has occurred at this stage. The total number of images available is 1101: the AD class comprises 145 images, EMCI comprises 204, LMCI comprises 61, MCI has 198, and NC has 493. We re-sized all the images to 224 × 224 pixels, and three channels (RGB) were used. A batch size of 32 images was transferred at each iteration during training to reduce the computational load (height = 224, width = 224, channels = 3, batch size = 32).
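As a quick arithmetic check on the figures above (this is an illustrative sketch, not code from the paper), the per-class counts and batch settings imply the following number of batches per epoch:

```python
import math

# Per-class image counts reported for the ADNI subset (before resampling)
class_counts = {"AD": 145, "EMCI": 204, "LMCI": 61, "MCI": 198, "NC": 493}

IMG_HEIGHT, IMG_WIDTH, CHANNELS = 224, 224, 3  # resized input shape
BATCH_SIZE = 32

total_images = sum(class_counts.values())
batches_per_epoch = math.ceil(total_images / BATCH_SIZE)

print(total_images)       # 1101
print(batches_per_epoch)  # 35
```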
Step 2. Pre-processing
The data are unbalanced, and training on an unbalanced dataset leads to underfitting or overfitting issues; in the end, the model would not be able to classify the images correctly. The solution is to balance the data, for which we use an upsampling technique in which the labels with a smaller number of images are upsampled. After resampling, each class contains 580 MRI images, and thus the entire dataset size is 2900. The data are refined, standardized, scaled, denoised, and formatted appropriately. In the future, other techniques, such as downsampling, will be used. Figure 1 illustrates the resampling technique for MRI images.
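The upsampling step can be sketched as follows; the file names and random seed are illustrative placeholders, not from the paper:

```python
import random

random.seed(0)
class_counts = {"AD": 145, "EMCI": 204, "LMCI": 61, "MCI": 198, "NC": 493}
TARGET = 580  # per-class size after resampling, as reported above

balanced = {}
for label, n in class_counts.items():
    files = [f"{label}_{i}.jpg" for i in range(n)]  # placeholder file names
    # draw minority-class images with replacement until the class reaches TARGET
    files += random.choices(files, k=TARGET - n)
    balanced[label] = files

dataset_size = sum(len(v) for v in balanced.values())
print(dataset_size)  # 2900
```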
Step 3. Data Augmentation
The primary objective of employing data augmentation methods is to (1) expand the data size and (2) solve the issue of overfitting. Data augmentation approaches are used in the following way.
The input images are pre-processed by using the pre-processing function of the pre-trained model, horizontal flipping of the images, rotation of the images by 5 degrees, and width and height shifts of the images.
We used the Keras ImageDataGenerator API to apply the data augmentation. We can observe that some images are rotated by 5 degrees and some are flipped, as shown in Figure 2.
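A minimal NumPy stand-in for two of these augmentations (horizontal flip and width shift) is sketched below; the actual pipeline uses Keras's ImageDataGenerator, and the 5-degree rotation is omitted here for simplicity:

```python
import numpy as np

rng = np.random.default_rng(42)
img = rng.random((224, 224, 3))  # stand-in for one pre-processed MRI slice

# Horizontal flip (Keras: horizontal_flip=True)
flipped = img[:, ::-1, :]

# Width shift by a few pixels, zero-filling the vacated columns
# (a simplified stand-in for Keras width_shift_range)
shift = 5
shifted = np.roll(img, shift, axis=1)
shifted[:, :shift, :] = 0.0

print(flipped.shape, shifted.shape)  # (224, 224, 3) (224, 224, 3)
```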
As a result, the dataset expands to 2900 images, partitioned into 580 images in each class. After that, the balanced dataset of 2900 MRI scans is reconfigured and randomly fragmented into training, validation, and test groups, with an 80:10:10 split ratio for each class. The division of data testing, training, and validation groups for 5-way classification is summarized in Table 1 (CN vs. MCI vs. EMCI vs. LMCI vs. AD).
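The 80:10:10 per-class split can be reproduced schematically (indices stand in for image files; the seed is arbitrary):

```python
import random

random.seed(7)
PER_CLASS = 580  # balanced class size after augmentation
indices = list(range(PER_CLASS))
random.shuffle(indices)

n_train = int(0.8 * PER_CLASS)  # 464 images per class
n_val = int(0.1 * PER_CLASS)    # 58 images per class
train = indices[:n_train]
val = indices[n_train:n_train + n_val]
test = indices[n_train + n_val:]

print(len(train), len(val), len(test))  # 464 58 58
```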
Step 4. Pre-processing Techniques
  • Data normalization: Data normalization is beneficial for removing different redundancies from the datasets, such as varied contrasts and varied subject poses, to simplify subtle difference detection. It rescales the attributes with a mean value of 0 and a standard deviation of 1. Different types of normalization techniques, such as Z normalization, called standardization; min–max normalization; and unit vector normalization, are applied to the dataset. We applied unit vector normalization to our dataset.
  • Unit vector normalization: It shrinks/stretches a vector and scales it to unit length. We applied it to the whole dataset, and the transformed data are viewed as a cluster of vectors with distinct trajectories on the d-dimensional unit sphere. The general formula for unit vector normalization is Û = U/|U|, where Û is the normalized vector, U is a non-zero vector, and |U| is the length of U.
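The formula can be checked in one line with NumPy (the vector values are illustrative):

```python
import numpy as np

u = np.array([3.0, 4.0])       # any non-zero vector
u_hat = u / np.linalg.norm(u)  # U_hat = U / |U|

print(u_hat)                   # [0.6 0.8]
print(np.linalg.norm(u_hat))   # 1.0 -> unit length
```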

6. Proposed Classification Methods and Techniques

The three critical components of machine learning algorithms are feature extraction, feature reduction, and classification. All three steps are performed manually or separately when implementing traditional machine learning algorithms. The beauty of deep learning algorithms such as CNNs is that there is no need for manual feature extraction; these three stages are performed jointly within CNN architectures. CNN architectures have higher classification performance than traditional models. The three layers of CNN architectures are the convolution layer, the pooling layer, and the fully connected layer [54]. Extraction of features is the responsibility of the convolution layer, dimension reduction of the pooling layer, and classification of the fully connected layers. Conversion of two-dimensional matrices into one-dimensional vectors is also performed by the fully connected layer [55].

6.1. Convolution Layer

It acts as the base of the CNN architecture. It comprises a set of filters, also called kernels, which are learned through the training process. The filter dimensions are smaller than those of the real image. The filters convolve with images and create activation maps, from which the convolution layer extracts the features. For a three-dimensional image with dimensions H × W × C, H denotes the height, W the width, and C the total count of channels. We apply a 3D filter of size FH × FW × FC, where FC is the number of filter channels, FW denotes the filter width, and FH denotes the filter height. Hence, the output activation map size must be AH × AW, where AH stands for the activation height and AW for the activation width. The following equations are used to calculate the activation height and width values.
AH = 1 + (H − FH + 2P)/S          (1)
AW = 1 + (W − FW + 2P)/S          (2)
P signifies padding, S represents stride, and there are n filters, so the activation map dimensions must turn out to be AH × AW × n. Figure 3 illustrates the complete convolution.
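The activation map formulas can be checked with a small helper; the 224 × 224 input with a 3 × 3 filter, padding 1, and stride 2 (MobileNet's first convolution) is used as an illustrative example:

```python
def conv_output_size(dim, f, p, s):
    """One spatial dimension of the activation map: 1 + (dim - f + 2p) / s."""
    return 1 + (dim - f + 2 * p) // s

# 224x224 input, 3x3 filter, padding 1, stride 2
a_h = conv_output_size(224, 3, 1, 2)
a_w = conv_output_size(224, 3, 1, 2)
print(a_h, a_w)  # 112 112
```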

6.2. Pooling Layer

The pooling layer’s primary purpose is to lower the size of the feature maps. Therefore, there are fewer parameters to learn and fewer computations to be made by the network. The different pooling layers are max pooling, average pooling, and global pooling. By applying a non-linear conversion to the given inputs, the activation function addresses non-linearity in the network. Our proposed multi-classifier uses the SoftMax activation function in the output layer. The main function of the SoftMax function is to calculate relative probabilities. The general equation of the SoftMax function is given below in Equation (3).
Softmax(zi) = exp(zi) / Σj exp(zj)          (3)
In this case, z stands for the values of the output layer neurons, with the exponent being a non-linear function. These values are then normalized and transformed into probabilities by dividing them by the sum of the exponent values. For all hidden layers, we applied the ReLU activation function, the most familiar activation function in CNNs. There are different variants of ReLU activation functions, such as parametric ReLU, leaky ReLU, exponential linear (ELU, SELU), and concatenated ReLU (CReLU). We applied leaky ReLU since it has some benefits over the other variants: it fixes the problem of the “dying ReLU” because it has no zero-slope parts, and it speeds up the training process because it is more balanced and therefore learns faster. However, it should be kept in mind that leaky ReLU is not superior to simple ReLU and should be considered as an alternative. The general equations for the ReLU and leaky ReLU activation functions are given below in Equation (4) and Equation (5), respectively.
F(x) = max(0, x)          (4)
f(z) = z if z > 0; αz if z ≤ 0          (5)
When z is less than 0, leaky ReLU allows a small, non-zero, constant gradient α; generally, α = 0.01. In our study for medical image analysis, we combine MobileNet with ImageNet weights along with transfer learning for image classification. These architectures can easily handle two- and three-dimensional brain neuroimages built on 2D and 3D convolutions. The general flow of our novel framework is shown in Table 2.
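The SoftMax and leaky ReLU activations can be sketched numerically; the five logits below are hypothetical values for illustration, not model outputs:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

logits = np.array([2.0, 1.0, 0.1, -1.0, 0.5])  # hypothetical 5-class output
probs = softmax(logits)

print(probs.sum())                   # 1.0 -> a valid probability distribution
print(leaky_relu(np.array([-2.0])))  # [-0.02] -> small non-zero slope below 0
```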
We applied the MobileNet model with ImageNet weights to categorize the distinct phases of Alzheimer’s. It uses depthwise separable convolutions. The main advantage of this model is that it reduces the number of parameters compared to other networks and generates lightweight deep neural networks [58,59]. It is a class of CNN and gives us the optimum initial point for training a classifier that is exceptionally small and fast. MobileNets are built on depthwise separable convolution layers; each consists of a depthwise convolution and a pointwise convolution [60,61,62]. There are almost 4.2 million parameters in a MobileNet architecture. The size of the input image is 224 × 224 × 3. The convolution kernel shape is 3 × 3 × 3 × 32, with an average pool size of 7 × 7 × 1024. Dropout layers are succeeded by a flattened layer and fully connected layers. The final fully connected layer with SoftMax as the activation function is implemented to manage the five classes of Alzheimer’s, while ReLU is the activation function for the hidden layers.
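The parameter saving from depthwise separable convolution can be verified with simple counts (bias terms omitted; the 3 × 3, 32-to-64-channel layer is an illustrative choice, not a specific layer from the paper):

```python
def standard_conv_params(k, c_in, c_out):
    # one k x k filter over all input channels, per output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # depthwise: one k x k filter per input channel;
    # pointwise: a 1 x 1 convolution mixing channels
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 32, 64)        # 18432
dws = depthwise_separable_params(3, 32, 64)  # 288 + 2048 = 2336
print(std, dws, round(std / dws, 1))
```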
The general architecture of the MobileNet model we applied is shown below in Figure 4. The total number of trainable parameters is 25,958,917, and the number of non-trainable parameters is 3,231,936. We used RMSProp as our optimizer with a learning rate of 0.00001, categorical cross-entropy as the loss for multi-class classification, and accuracy as the metric, recording loss and accuracy values during training and validation. The evaluation of the novel approach against other developed approaches is revealed in the tables below.

7. Experimental Findings and Model Evaluation

The novel model considers various scenarios. We examined the empirical results in terms of many performance benchmarks, including the confusion matrix, accuracy, loss, F1 score, precision, recall, ROC, sensitivity, and AUC. Table 3 below provides a summary of the novel model.

Model Evaluation

For the multi-classification, we used the MobileNet architecture, a version of CNN networks. The efficacy of the proposed model is compared with prevailing models; as depicted in Table 3, the proposed model shows better accuracy than existing models. Our model achieves an accuracy of 96.22%, as shown in Figure 5, while Juan Ruiz et al. [49] achieved 66.67%, Spasov et al. [55] achieved 88%, and Sahumbaiev et al. [56] achieved 89.47%. The training and validation accuracy and loss of the suggested approach are shown in Figure 6.
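Metrics such as accuracy can be computed directly from a confusion matrix; the labels below are fabricated for illustration only, not the paper's predictions:

```python
import numpy as np

# Hypothetical predictions over the 5 classes (0=CN, 1=MCI, 2=EMCI, 3=LMCI, 4=AD)
y_true = np.array([0, 0, 1, 2, 3, 4, 4, 2, 1, 0])
y_pred = np.array([0, 0, 1, 2, 3, 4, 3, 2, 1, 0])

n_classes = 5
cm = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1  # rows: true class, columns: predicted class

accuracy = np.trace(cm) / cm.sum()  # correct predictions on the diagonal
print(accuracy)  # 0.9
```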
The number of patients diagnosed with each type of AD stage (NC/MCI/AD/LMCI/EMCI) is shown in the confusion matrix. The normalized confusion matrix for the suggested framework is shown in Figure 6.
The comparative analysis of the proposed method with existing approaches is shown graphically in the figure below. Our approach shows better results than existing ones. Figure 7 shows the assessment of the devised architecture against other approaches.

8. Conclusions

This study presents a system for medical image categorization and Alzheimer’s disease recognition. Deep-learning CNN architectures support the proposed approach. Alzheimer’s disease has five stages. We employ the MobileNet model with ImageNet weights. The beauty of the MobileNet architecture is that it uses depthwise separable convolutions, which reduces the number of parameters compared to other models with regular convolutions and results in lightweight neural networks. The other important characteristic of the MobileNet architecture is that, instead of having a single 3 × 3 convolution layer as in traditional networks, it adds batch normalization and ReLU and divides the convolution into 1 × 1 pointwise convolutions and 3 × 3 depthwise convolutions. MobileNet models are essential for detection, embedding, segmentation, and classification. ImageNet is a standard for image classification; it provides a standard measure of how efficient a model is for classification. Different performance metrics are implemented for the assessment of the model. Our model achieves an accuracy of 96.22%. In the future, we plan to implement other pre-trained models for classification purposes and to check whether a patient can convert from one AD stage to another. The dataset’s size will also be increased to improve accuracy, and different approaches, such as downsampling, will be used. The authors are currently working on multi-modal data fusion techniques to detect and diagnose AD.

Author Contributions

Conceptualization A.B. and G.M.u.d.d.; methodology, A.B.; software, G.M.u.d.d.; validation, S.I.A., Y.H. and I.U.; formal analysis, Y.H., I.U. and M.T.B.O.; investigation, Y.H. and H.K.A.; resources, I.U.; data curation, G.M.u.d.d. and H.K.A.; writing—original draft preparation, G.M.u.d.d.; writing—review and editing, A.B. and I.U.; visualization, I.U. and H.H.; supervision, A.B.; project administration, M.T.B.O. and H.H.; funding acquisition, M.T.B.O. and H.H. All authors have read and agreed to the published version of the manuscript.

Funding

The researchers would like to thank the Deanship of Scientific Research, Qassim University for funding the publication of this project.

Data Availability Statement

Not applicable.

Acknowledgments

The researchers would like to thank the Deanship of Scientific Research, Qassim University for funding the publication of this project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Prince, M.J.; Comas-Herrera, A.; Knapp, M.; Guerchet, M.M.; Karagiannidou, M. World Alzheimer Report 2016—Improving Healthcare for People Living with Dementia: Coverage, Quality and Costs Now and in the Future; Alzheimer’s Disease International (ADI): London, UK, 2016. [Google Scholar]
  2. Prince, M.; Wimo, A.; Guerchet, M.; Ali, G.; Wu, Y.; Prina, M. World Alzheimer Report 2015; Alzheimer’s Disease International(ADI): London, UK, 2015; pp. 1–92. Available online: https://www.alz.co.uk/research/WorldAlzheimerReport2015.pdf (accessed on 14 March 2022).
  3. Armstrong, R.A. The molecular biology of senile plaques and neurofibrillary tangles in Alzheimer’s disease. Folia Neuropathol. 2009, 47, 289–299. [Google Scholar] [PubMed]
  4. Bin Tufail, A.; Ullah, K.; Khan, R.A.; Shakir, M.; Khan, M.A.; Ullah, I.; Ma, Y.-K.; Ali, S. On Improved 3D-CNN-Based Binary and Multiclass Classification of Alzheimer’s Disease Using Neuroimaging Modalities and Data Augmentation Methods. J. Healthc. Eng. 2022, 2022, 1302170. [Google Scholar] [CrossRef] [PubMed]
  5. Ahmad, S.; Ullah, T.; Ahmad, I.; Al-Sharabi, A.; Ullah, K.; Khan, R.A.; Rasheed, S.; Ullah, I.; Uddin, N.; Ali, S. A Novel Hybrid Deep Learning Model for Metastatic Cancer Detection. Comput. Intell. Neurosci. 2022, 2022, 8141530. [Google Scholar] [CrossRef] [PubMed]
  6. Bin Tufail, A.; Ullah, I.; Khan, W.U.; Asif, M.; Ahmad, I.; Ma, Y.-K.; Khan, R.; Kalimullah; Ali, S. Diagnosis of Diabetic Retinopathy through Retinal Fundus Images and 3D Convolutional Neural Networks with Limited Number of Samples. Wirel. Commun. Mob. Comput. 2021, 2021, 6013448. [Google Scholar] [CrossRef]
  7. Ahmad, I.; Ullah, I.; Khan, W.U.; Rehman, A.U.; Adrees, M.S.; Saleem, M.Q.; Cheikhrouhou, O.; Hamam, H.; Shafiq, M. Efficient algorithms for E-healthcare to solve multiobject fuse detection problem. J. Healthc. Eng. 2021, 2021, 9500304. [Google Scholar] [CrossRef]
  8. Porter, J.H.; Prus, A.J. The Discriminative Stimulus Properties of Drugs Used to Treat Depression and Anxiety. Brain Imag. Behav. Neurosci. 2012, 5, 289–320. [Google Scholar]
  9. Noble, W.; Europe PMC Funders Group. Advances in tau-based drug discovery. Expert. Opin. Drug. Discov. 2011, 6, 797–810. [Google Scholar] [CrossRef] [PubMed]
  10. Ferrera, P.; Arias, C. Differential effects of COX inhibitors against b -amyloid-induced neurotoxicity in human neuroblastoma cells. Neurochem. Int. 2005, 47, 589–596. [Google Scholar] [CrossRef]
  11. Gasparini, L.; Ongoing, E.; Wenk, G. Non-steroidal anti-inflammatory drugs (NSAIDs) in Alzheimer’s disease: Old and new mechanisms of action. J. Neurochem. 2004, 91, 521–536. [Google Scholar] [CrossRef]
  12. Reitz, C. Alzheimer’s disease and the amyloid cascade hypothesis: A critical review. Int. J. Alzheimer’s Dis. 2012, 2012, 369808. [Google Scholar] [CrossRef] [Green Version]
  13. Gustafson, D.R.; Morris, M.C.; Scarmeas, N.; Shah, R.C.; Sijben, J.; Yaffe, K.; Zhu, X. New Perspectives on Alzheimer’s Disease and Nutrition. J. Alzheimer’s Dis. 2015, 46, 1111–1127. [Google Scholar] [CrossRef] [PubMed]
  14. Shah, R.C. Medical foods for Alzheimer’s disease. Drugs Aging 2011, 28, 421–428. [Google Scholar] [CrossRef] [PubMed]
  15. Cummings, J. Alzheimer’s disease diagnostic criteria: Practical applications. Alzheimer’s Res. Ther. 2012, 4, 35. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Dubois, B.; Feldman, H.; Jacova, C.; Dekosky, S.; Barberger-Gateau, P.; Cummings, J.; Delacourte, A.; Galasko, D.; Gauthier, S.; Jicha, G.; et al. Research criteria for the diagnosis of Alzheimer’s disease: Revising the NINCDS-ADRDA criteria. Lancet Neurol. 2007, 6, 734–746. [Google Scholar] [CrossRef]
  17. Viola, K.L.; Sbarboro, J.; Sureka, R.; De, M.; Bicca, M.A.; Wang, J.; Vasavada, S.; Satpathy, S.; Wu, S.; Joshi, H.; et al. Towards non-invasive diagnostic imaging of early-stage Alzheimer’s disease. Nat. Nanotechnol. 2015, 10, 91–98. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Grundman, M.; Petersen, R.; Ferris, S.; Thomas, R.; Aisen, P.; Bennett, D.; Foster, N.; Galasko, D.; Doody, R.; Kaye, J.; et al. Mild Cognitive Impairment Can Be Distinguished from Alzheimer Disease and Normal Aging for Clinical Trials. Arch. Neurol. 2004, 61, 59–66. [Google Scholar] [CrossRef] [PubMed] [Green Version]
19. Ito, H.; Shimada, H.; Shinotoh, H.; Takano, H.; Sasaki, T.; Nogami, T.; Suzuki, M.; Nagashima, T.; Takahata, K.; Seki, C.; et al. Quantitative analysis of amyloid deposition in Alzheimer’s disease using PET and the radiotracer 11C-AZD2184. J. Nucl. Med. 2014, 55, 932–938. [Google Scholar] [CrossRef] [Green Version]
  20. Bron, E.E.; Smits, M.; van der Flier, W.M.; Vrenken, H.; Barkhof, F.; Scheltens, P.; Papma, J.M.; Steketee, R.M.; Orellana, C.M.; Meijboom, R.; et al. Standardized evaluation of algorithms for computer-aided diagnosis of dementia based on structural MRI: The CADDementia challenge. NeuroImage 2015, 111, 562–579. [Google Scholar] [CrossRef] [Green Version]
  21. Janousova, E.; Vounou, M.; Wolz, R.; Gray, K.R.; Rueckert, D.; Montana, G.; the Alzheimer’s Disease Neuroimaging Initiative. Biomarker discovery for sparse classification of brain images in Alzheimer’s disease. Ann. BMVA 2012, 2012, 1–11. [Google Scholar]
22. Payan, A.; Montana, G. Predicting Alzheimer’s disease: A neuroimaging study with 3D convolutional neural networks. In Proceedings of the ICPRAM 2015—4th International Conference on Pattern Recognition Applications and Methods, Lisbon, Portugal, 10–12 January 2015; Volume 2, pp. 355–362. [Google Scholar]
  23. Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef] [Green Version]
  24. Ebrahimighahnavieh, A.; Luo, S.; Chiong, R. Deep learning to detect Alzheimer’s disease from neuroimaging: A systematic literature review. Comput. Methods Programs Biomed. 2020, 187, 105242. [Google Scholar] [CrossRef] [PubMed]
25. Pan, D.; Huang, Y.; Zeng, A.; Jia, L.; Song, X. Early Diagnosis of Alzheimer’s Disease Based on Deep Learning and GWAS. In Human Brain and Artificial Intelligence; Zeng, A., Pan, D., Hao, T., Zhang, D., Shi, Y., Song, X., Eds.; Springer: Singapore, 2019; pp. 52–68. [Google Scholar]
  26. Zhang, F.; Li, Z.; Zhang, B.; Du, H.; Wang, B.; Zhang, X. Multimodal deep learning model for auxiliary diagnosis of Alzheimer’s disease. Neurocomputing 2019, 361, 185–195. Available online: http://www.sciencedirect.com/science/article/pii/S0169260719310946 (accessed on 6 January 2023). [CrossRef]
  27. Spasov, S.; Passamonti, L.; Duggento, A.; Liò, P.; Toschi, N. A parameter-efficient deep learning approach to predict conversion from mild cognitive impairment to Alzheimer’s disease. Neuroimage 2019, 189, 276–287. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Park, C.; Ha, J.; Park, S. Prediction of Alzheimer’s disease based on the deep neural network by integrating gene expression and DNA methylation dataset. Expert Syst. Appl. 2020, 140, 112873. [Google Scholar] [CrossRef]
29. Sarraf, S.; Tofighi, G. Classification of Alzheimer’s disease structural MRI data by deep learning convolutional neural networks. arXiv 2016, arXiv:1607.06583. [Google Scholar]
30. Hosseini-Asl, E.; Keynton, R.; El-Baz, A. Alzheimer’s Disease Diagnostics by Adaptation of 3D Convolutional Network. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; Volume 502. [Google Scholar]
  31. Gupta, A.; Ayhan, M.; Maida, A. Natural Image Bases to Represent Neuroimaging Data. In Proceedings of the 30th International Conference on International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 987–994. [Google Scholar]
32. Brosch, T.; Tam, R. Manifold learning of brain MRIs by deep learning. In Lecture Notes in Computer Science, Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Nagoya, Japan, 22–26 September 2013; Springer Nature Switzerland AG: Cham, Switzerland, 2013; pp. 633–640. [Google Scholar]
  33. Tufail, A.B.; Ma, Y.-K.; Zhang, Q.-N.; Khan, A.; Zhao, L.; Yang, Q.; Adeel, M.; Khan, R.; Ullah, I. 3D convolutional neural networks-based multiclass classification of Alzheimer’s and Parkinson’s diseases using PET and SPECT neuroimaging modalities. Brain Inform. 2021, 8, 23. [Google Scholar] [CrossRef]
  34. Bilal, A.; Shafiq, M.; Fang, F.; Waqar, M.; Ullah, I.; Ghadi, Y.Y.; Long, H.; Zeng, R. IGWO-IVNet3: DL-Based Automatic Diagnosis of Lung Nodules Using an Improved Gray Wolf Optimization and InceptionNet-V3. Sensors 2022, 22, 9603. [Google Scholar] [CrossRef]
  35. Mazhar, T.; Nasir, Q.; Haq, I.; Kamal, M.M.; Ullah, I.; Kim, T.; Mohamed, H.G.; Alwadai, N. A Novel Expert System for the Diagnosis and Treatment of Heart Disease. Electronics 2022, 11, 3989. [Google Scholar]
  36. Tufail, A.B.; Ullah, I.; Rehman, A.U.; Khan, R.A.; Khan, M.A.; Ma, Y.K.; Khokhar, N.H.; Sadiq, M.T.; Khan, R.; Shafiq, M.; et al. On Disharmony in Batch Normalization and Dropout Methods for Early Categorization of Alzheimer’s Disease. Sustainability 2022, 14, 14695. [Google Scholar] [CrossRef]
  37. Liu, F.; Shen, C. Learning deep convolutional features for MRI based Alzheimer’s disease classification. arXiv 2014, arXiv:1404.3366. [Google Scholar]
38. Korolev, S.; Safiullin, A.; Belyaev, M.; Dodonova, Y. Residual and plain convolutional neural networks for 3D brain MRI classification. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; pp. 835–838. [Google Scholar]
39. Sarraf, S.; Tofighi, G. Classification of Alzheimer’s disease using fMRI data and deep learning convolutional neural networks. arXiv 2016, arXiv:1603.08631. [Google Scholar]
40. Suk, H.I.; Lee, S.W.; Shen, D.; the Alzheimer’s Disease Neuroimaging Initiative. Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis. Neuroimage 2014, 101, 569–582. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Suk, H.I.; Shen, D. Deep learning-based feature representation for ad/MCI classification. Med Image Comput. Comput. Assist. Interv. 2013, 16, 583–590. [Google Scholar] [PubMed] [Green Version]
  42. Suk, H.I.; Lee, S.-W.; Shen, D.; the Alzheimer’s Disease Neuroimaging Initiative. Latent feature representation with stacked auto-encoder for AD/MCI diagnosis. Brain Struct Funct. 2015, 220, 841–859. [Google Scholar] [CrossRef]
43. Suk, H.I.; Shen, D.; the Alzheimer’s Disease Neuroimaging Initiative. Deep Learning in the Diagnosis of Brain Disorders. In Recent Progress in Brain and Cognitive Engineering; Springer Nature Switzerland AG: Cham, Switzerland, 2015; pp. 203–213. [Google Scholar]
  44. Wang, Y.; Yang, Y.; Guo, X.; Ye, C.; Gao, N.; Fang, Y.; Ma, H.T. A novel multimodal MRI analysis for Alzheimer’s disease based on convolutional neural network. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 754–757. [Google Scholar]
  45. Song, T.-A.; Chowdhury, S.R.; Yang, F.; Jacobs, H.; El Fakhri, G.; Li, Q.; Johnson, K.; Dutta, J. Graph convolutional neural networks for Alzheimer’s disease. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 414–417. [Google Scholar]
46. Jain, R.; Jain, N.; Aggarwal, A.; Hemanth, D.J. Convolutional neural network-based Alzheimer’s disease classification from magnetic resonance brain images. Cogn. Syst. Res. 2019, 57, 147–159. [Google Scholar] [CrossRef]
  47. Spasov, S.E.; Passamonti, L.; Duggento, A.; Lio, P.; Toschi, N. A Multi-modal Convolutional Neural Network Framework for the Prediction of Alzheimer’s Disease. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 1271–1274. [Google Scholar] [CrossRef]
  48. Bin Tufail, A.; Anwar, N.; Ben Othman, M.T.; Ullah, I.; Khan, R.A.; Ma, Y.-K.; Adhikari, D.; Rehman, A.U.; Shafiq, M.; Hamam, H. Early-Stage Alzheimer’s Disease Categorization Using PET Neuroimaging Modality and Convolutional Neural Networks in the 2D and 3D Domains. Sensors 2022, 22, 4609. [Google Scholar] [CrossRef]
  49. Haq, I.; Mazhar, T.; Malik, M.A.; Kamal, M.M.; Ullah, I.; Kim, T.; Hamdi, M.; Hamam, H. Lung Nodules Localization and Report Analysis from Computerized Tomography (CT) Scan Using a Novel Machine Learning Approach. Appl. Sci. 2022, 12, 12614. [Google Scholar] [CrossRef]
  50. Bin Tufail, A.; Ullah, I.; Khan, R.; Ali, L.; Yousaf, A.; Rehman, A.U.; Alhakami, W.; Hamam, H.; Cheikhrouhou, O.; Ma, Y.-K. Recognition of Ziziphus lotus through Aerial Imaging and Deep Transfer Learning Approach. Mob. Inf. Syst. 2021, 2021, 4310321. [Google Scholar] [CrossRef]
  51. Khan, R.; Yang, Q.; Ullah, I.; Rehman, A.U.; Bin Tufail, A.; Noor, A.; Rehman, A.; Cengiz, K. 3D convolutional neural networks based automatic modulation classification in the presence of channel noise. IET Commun. 2021, 16, 497–509. [Google Scholar] [CrossRef]
  52. Bin Tufail, A.; Ma, Y.-K.; Kaabar, M.K.A.; Martínez, F.; Junejo, A.R.; Ullah, I.; Khan, R. Deep Learning in Cancer Diagnosis and Prognosis Prediction: A Minireview on Challenges, Recent Trends, and Future Directions. Comput. Math. Methods Med. 2021, 2021, 9025470. [Google Scholar] [CrossRef]
  53. Sahumbaiev, I.; Popov, A.; Ramirez, J.; Gorriz, J.M.; Ortiz, A. 3D-CNN HadNet classification of MRI for Alzheimer’s Disease diagnosis. In Proceedings of the 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings (NSS/MIC), Sydney, NSW, Australia, 10–17 November 2018; pp. 3–6. [Google Scholar]
  54. Ansarullah, S.I.; Saif, S.M.; Andrabi, S.A.B.; Kumhar, S.H.; Kirmani, M.M.; Kumar, P. An Intelligent and Reliable Hyperparameter Optimization Machine Learning Model for Early Heart Disease Assessment Using Imperative Risk Attributes. J. Healthc. Eng. 2022, 2022, 9882288. [Google Scholar] [CrossRef] [PubMed]
  55. Ansarullah, S.I.; Saif, S.M.; Kumar, P.; Kirmani, M.M. Significance of Visible Non-Invasive Risk Attributes for the Initial Prediction of Heart Disease Using Different Machine Learning Techniques. Comput. Intell. Neurosci. 2022, 2022, 9580896. [Google Scholar] [CrossRef] [PubMed]
  56. Ansarullah, S.I.; Kumar, P. A systematic literature review on cardiovascular disorder identification using knowledge mining and machine learning method. Int. J. Recent Technol. Eng. 2019, 7, 1009–1015. [Google Scholar]
  57. Saif, S.M.; Ansarullah, S.I.; Ben Othman, M.T.; Alshmrany, S.; Shafiq, M.; Hamam, H. Impact of ICT in Modernizing the Global Education Industry to Yield Better Academic Outreach. Sustainability 2022, 14, 6884. [Google Scholar] [CrossRef]
  58. Li, Y.; Wang, Z.; Yin, L.; Zhu, Z.; Qi, G.; Liu, Y. X-Net: A dual encoding–Decoding method in medical image segmentation. Vis. Comput. 2021, 1–11. [Google Scholar] [CrossRef]
  59. Xu, Y.; He, X.; Xu, G.; Qi, G.; Yu, K.; Yin, L.; Yang, P.; Yin, Y.; Chen, H. A medical image segmentation method based on multi-dimensional statistical features. Front. Neurosci. 2022, 16, 1009581. [Google Scholar] [CrossRef]
  60. Sharma, A.; Singh, P.; Dar, G. Artificial Intelligence and Machine Learning for Healthcare Solutions. In Data Analytics in Bioinformatics: A Machine Learning Perspective; Scrivener Publishing LLC: Beverly, MA, USA, 2021; pp. 281–291. [Google Scholar]
  61. Mohiuddin, G.; Sharma, A.; Singh, P. Deep Learning Models for Detection and Diagnosis of Alzheimer’s Disease. In Machine Learning and Data Analytics for Predicting, Managing, and Monitoring Disease; IGI Global: Hershey, PA, USA, 2021; pp. 140–149. [Google Scholar]
  62. Zhu, Z.; He, X.; Qi, G.; Li, Y.; Cong, B.; Liu, Y. Brain tumor segmentation based on the fusion of deep semantics and edge information in multimodal MRI. Inf. Fusion 2023, 91, 376–387. [Google Scholar] [CrossRef]
Figure 1. Example of a resampling technique applied to an MRI image [50,51].
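The resampling idea behind Figure 1 can be illustrated with a minimal nearest-neighbour sketch in plain Python. This is a toy example on a small 2D array, not the preprocessing code used in the paper; the function name and data are illustrative.

```python
def resample_nearest(img, new_h, new_w):
    """Nearest-neighbour resampling of a 2D image (list of lists)."""
    old_h, old_w = len(img), len(img[0])
    return [
        [img[r * old_h // new_h][c * old_w // new_w] for c in range(new_w)]
        for r in range(new_h)
    ]

# Downsample a 4x4 slice to 2x2: each output pixel takes the
# top-left value of the corresponding 2x2 source block.
slice_4x4 = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]
print(resample_nearest(slice_4x4, 2, 2))  # [[1, 3], [9, 11]]
```

Real MRI pipelines typically use spline or trilinear interpolation (e.g., via SimpleITK or nibabel) rather than nearest-neighbour, but the index mapping is the same.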
Figure 2. Different augmentation techniques, such as rotation, flipping, and shifting. In our approach, some images are rotated by 5 degrees, some are flipped, etc. [52].
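Two of the augmentations named in Figure 2, flipping and shifting, can be sketched in plain Python on a toy 2D array. This is an illustrative sketch, not the paper's augmentation code; small-angle rotations such as the 5-degree rotation mentioned above additionally require interpolation (e.g., via Pillow's `Image.rotate`).

```python
def hflip(img):
    """Horizontal flip: reverse each row."""
    return [row[::-1] for row in img]

def shift_right(img, k, fill=0):
    """Shift pixels k columns to the right, padding the left with `fill`."""
    return [[fill] * k + row[:-k] for row in img]

img = [[1, 2, 3],
       [4, 5, 6]]
print(hflip(img))           # [[3, 2, 1], [6, 5, 4]]
print(shift_right(img, 1))  # [[0, 1, 2], [0, 4, 5]]
```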
Figure 3. The complete convolution process, in which different filters are applied to the images before the final output is obtained [56].
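The filtering step shown in Figure 3 can be reduced to a minimal 2D convolution (cross-correlation, as used in CNN frameworks) with "valid" padding and stride 1. This toy sketch uses an illustrative edge-style kernel, not the learned filters of the model.

```python
def conv2d_valid(img, kernel):
    """2D cross-correlation with 'valid' padding and stride 1."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(img) - kh + 1
    out_w = len(img[0]) - kw + 1
    return [
        [
            sum(img[r + i][c + j] * kernel[i][j]
                for i in range(kh) for j in range(kw))
            for c in range(out_w)
        ]
        for r in range(out_h)
    ]

# A 3x3 vertical-edge kernel applied to a 4x4 map yields a 2x2 output.
feature_map = [
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]
print(conv2d_valid(feature_map, kernel))  # [[0, 3], [0, 2]]
```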
Figure 4. Basic design of the proposed model [59].
Figure 5. Accuracy and loss values during training over 100 epochs.
Figure 6. Normalized confusion matrix generated by the proposed model.
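The normalization used for a confusion matrix such as the one in Figure 6 divides each row by the number of true samples in that class, so each row sums to 1. A minimal sketch, using illustrative two-class counts rather than the paper's results:

```python
def normalize_rows(cm):
    """Row-normalize a confusion matrix so each true-class row sums to 1."""
    return [[v / sum(row) if sum(row) else 0.0 for v in row] for row in cm]

# Toy 2-class counts (illustrative only).
cm = [[45, 5],
      [10, 40]]
print(normalize_rows(cm))  # [[0.9, 0.1], [0.2, 0.8]]
```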
Figure 7. Assessment of the devised architecture against existing frameworks. Our model shows better performance than the existing architectures [61,62].
Table 1. Images input to the model from the different AD classes. Two thousand images (400 from each class) are used for training, 450 for validation (90 from each class), and 450 for testing (90 from each class) [53].

Class    Training Data    Validation Data    Testing Data    Total
CN       400              90                 90              580
MCI      400              90                 90              580
EMCI     400              90                 90              580
LMCI     400              90                 90              580
AD       400              90                 90              580
TOTAL    2000             450                450             2900
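The split counts in Table 1 can be sanity-checked with a few lines of Python (variable names are illustrative):

```python
classes = ["CN", "MCI", "EMCI", "LMCI", "AD"]
split = {"train": 400, "val": 90, "test": 90}  # images per class

per_class_total = sum(split.values())             # 580 per class
totals = {k: v * len(classes) for k, v in split.items()}
grand_total = per_class_total * len(classes)      # whole dataset

print(totals)       # {'train': 2000, 'val': 450, 'test': 450}
print(grand_total)  # 2900
```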
Table 2. General architecture of the MobileNet model with total trainable and non-trainable parameters [57].

Layer                                        Output Shape           Param #
input1 (InputLayer)                          (None, 224, 224, 3)    0
convu1 (Conv2D)                              (None, 112, 112, 32)   864
convolution1 (BatchNormalization)            (None, 112, 112, 32)   128
convolution1 (ReLU)                          (None, 112, 112, 32)   0
convolution1_dw (DepthwiseConv2D)            (None, 112, 112, 32)   288
convolution1_bn (BatchNormalization)         (None, 112, 112, 32)   128
convolution1_dw_relu (ReLU)                  (None, 112, 112, 32)   0
convolution1_pw (Conv2D)                     (None, 112, 112, 64)   2048
convolution1_pw_1_bn (BatchNormalization)    (None, 112, 112, 64)   256
convolution1_pw_relu (ReLU)                  (None, 112, 112, 64)   0
convolution2_pad (ZeroPadding2D)             (None, 113, 113, 64)   0
convolution2_dw (DepthwiseConv2D)            (None, 56, 56, 64)     576
convolution2_dw_bn (BatchNormalization)      (None, 56, 56, 64)     256
convolution2_dw_relu (ReLU)                  (None, 56, 56, 64)     0
convolution2_pw (Conv2D)                     (None, 56, 56, 128)    8192
convolution2_pw_bn (BatchNormalization)      (None, 56, 56, 128)    512
convolution2_pw_relu (ReLU)                  (None, 56, 56, 128)    0
convolution3_dw (DepthwiseConv2D)            (None, 56, 56, 128)    1152
flatten (Flatten)                            (None, 50176)          0
dense (Dense)                                (None, 512)            25,690,624
batch_normalization_1 (BatchNormalization)   (None, 512)            2,048
dense1 (Dense)                               (None, 512)            262,656
dropout (Dropout)                            (None, 512)            0
dense2 (Dense)                               (None, 5)              2,565

Total params: 29,190,853
Trainable params: 25,958,917
Non-trainable params: 3,231,936
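The parameter counts of the dense head in Table 2 follow the usual formulas: a Dense layer has in × out weights plus one bias per output unit, and a BatchNormalization layer has 4 parameters per feature (gamma and beta are trainable; the moving mean and variance are not). A quick check, reproducing the table's values:

```python
def dense_params(n_in, n_out):
    """Weights plus biases of a fully connected layer."""
    return n_in * n_out + n_out

def batchnorm_params(n_features):
    """gamma, beta (trainable) + moving mean, moving variance."""
    return 4 * n_features

print(dense_params(50176, 512))  # 25690624 -> dense
print(batchnorm_params(512))     # 2048     -> batch_normalization_1
print(dense_params(512, 512))    # 262656   -> dense1
print(dense_params(512, 5))      # 2565     -> dense2
```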
Table 3. Evaluation of the performance metrics of the devised model [60].

Metrics           Precision    Recall    F1-Score    Support
Final AD JPEG     0.98         0.98      0.98        90
Final CN JPEG     0.95         0.90      0.93        90
Final EMCI JPEG   0.96         0.96      0.96        90
Final LMCI JPEG   1.00         1.00      1.00        90
Final MCI JPEG    0.93         0.98      0.95        90
Accuracy                                 0.96        450
Macro Avg.        0.96         0.96      0.96        450
Weighted Avg.     0.96         0.96      0.96        450
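The per-class metrics in Table 3 are the standard precision, recall, and F1-score. A minimal sketch of how they are computed for one class label, on toy labels rather than the paper's predictions:

```python
def prf(y_true, y_pred, label):
    """Precision, recall and F1 for one class label."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy labels (illustrative only): one AD case is misclassified as CN.
y_true = ["AD", "AD", "CN", "CN", "MCI", "MCI"]
y_pred = ["AD", "CN", "CN", "CN", "MCI", "MCI"]
print(prf(y_true, y_pred, "AD"))  # precision 1.0, recall 0.5, F1 = 2/3
```

Since every class in Table 3 has the same support (90), the macro and weighted averages coincide.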
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Mohi ud din dar, G.; Bhagat, A.; Ansarullah, S.I.; Othman, M.T.B.; Hamid, Y.; Alkahtani, H.K.; Ullah, I.; Hamam, H. A Novel Framework for Classification of Different Alzheimer’s Disease Stages Using CNN Model. Electronics 2023, 12, 469. https://doi.org/10.3390/electronics12020469