Systematic Review

Alzheimer’s Disease Detection Using Deep Learning on Neuroimaging: A Systematic Review

by Mohammed G. Alsubaie 1,2,*, Suhuai Luo 1 and Kamran Shaukat 1,3,4,*
1 School of Information and Physical Sciences, The University of Newcastle, Newcastle, NSW 2308, Australia
2 Department of Computer Science, Khurma University College, Taif University, Taif 21944, Saudi Arabia
3 Centre for Artificial Intelligence Research and Optimisation, Design and Creative Technology Vertical, Torrens University, Ultimo, Sydney, NSW 2007, Australia
4 Department of Data Science, University of the Punjab, Lahore 54890, Pakistan
* Authors to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2024, 6(1), 464-505; https://doi.org/10.3390/make6010024
Submission received: 29 December 2023 / Revised: 2 February 2024 / Accepted: 6 February 2024 / Published: 21 February 2024
(This article belongs to the Section Learning)

Abstract
Alzheimer’s disease (AD) is a pressing global issue, demanding effective diagnostic approaches. This systematic review surveys the recent literature (2018 onwards) to illuminate the current landscape of AD detection via deep learning. Focusing on neuroimaging, this study explores single- and multi-modality investigations, delving into biomarkers, features, and preprocessing techniques. Various deep models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative models, are evaluated for their AD detection performance. Challenges such as limited datasets and training procedures persist. Emphasis is placed on the need to differentiate AD from similar brain patterns, necessitating discriminative feature representations. This review highlights deep learning’s potential and limitations in AD detection, underscoring dataset importance. Future directions involve benchmark platform development for streamlined comparisons. In conclusion, while deep learning holds promise for accurate AD detection, refining models and methods is crucial to tackle challenges and enhance diagnostic precision.

1. Introduction

Advancements in medical sciences and healthcare have led to improved health indicators and increased life expectancy, contributing to a global population projected to reach around 11.2 billion by 2100 [1]. With a substantial rise in the elderly population, projections suggest that by 2050, approximately 21% of the population will be over 60, resulting in a significant elderly demographic of two billion [2]. As the elderly population grows, age-related diseases, including Alzheimer’s disease (AD), have become more prevalent. AD, the most common form of dementia, is a progressive and incurable neurodegenerative disorder characterized by memory loss, cognitive decline, and difficulties in daily activities [3]. While the exact cause of AD remains unknown, genetic factors are believed to play a significant role [4]. Pathologically, the spread of neurofibrillary tangles and amyloid plaques in the brain disrupts neuronal communication and leads to the death of nerve cells, resulting in a smaller cerebral cortex and enlarged brain ventricles [5].
AD is an irreversible and progressive neurodegenerative disorder that gradually impairs memory, communication, and daily activities like speech and mobility [6]. It is the most prevalent form of dementia, accounting for approximately 60–80% of all dementia cases [7]. Mild cognitive impairment (MCI) represents an early stage of AD, characterized by mild cognitive changes that are noticeable to the affected individual and their loved ones while still allowing for the performance of daily tasks. However, not all individuals with MCI will progress to AD. Approximately 15–20% of individuals aged 65 or older have MCI, and within five years, around 30–40% of those with MCI will develop AD [8]. The conversion period from MCI to AD can vary from 6 to 36 months but typically lasts around 18 months. MCI patients are then classified as MCI converters (MCIc) or non-converters (MCInc) based on whether they transition to AD within 18 months. Additionally, there are other less commonly mentioned subtypes of MCI, such as early or late MCI [9].
The primary risk factors for AD include family history and the presence of specific genes in an individual’s genome [10]. The detection of AD relies on a comprehensive evaluation that includes clinical examinations and interviews with the patient and their family members. However, a definitive detection can only be confirmed through autopsy, which limits its clinical applicability. Autopsy-confirmed cases of AD have been utilized in some studies to establish reliable detection. In the absence of definitive diagnostic data, additional criteria are required to confirm the presence of AD and enable its detection in living patients. The National Institute of Neurological and Communicative Disorders and Stroke (NINCDS) and the Alzheimer’s Disease and Related Disorders Association (ADRDA) established clinical diagnostic criteria for AD in 1984 [11], which were revised in 2007 to include memory impairment and additional features such as abnormal neuroimaging (MRI and PET) or abnormal cerebrospinal fluid biomarkers [12]. The National Institute on Aging-Alzheimer’s Association (NIA-AA) has also revised the diagnostic criteria, incorporating brain amyloid, neuronal damage, and degeneration measures. Regular updates to the criteria, approximately every 3–4 years, are suggested to incorporate new knowledge about the pathophysiology and progression of the disease [13].
Commonly used assessment tools for AD include the Mini-Mental State Examination (MMSE) [14,15,16] and the Clinical Dementia Rating (CDR) [17,18,19]. However, it is important to note that utilizing these tests as definitive benchmarks for AD may not provide complete accuracy. The accuracy of clinical detection compared to postmortem detection ranges between 70% and 90% [20,21,22,23]. Despite its limitations, clinical detection remains the best available reference standard, although the accessibility of recognized biomarkers is often limited.
As of 2010, dementia affected 35.6 million people over the age of 60 globally, with projections indicating a doubling every 20 years to reach 115 million by 2050 [24]. In Australia, dementia has become the second leading cause of death, leading to significant economic implications due to the rising nursing care costs for AD patients [25,26]. Despite various treatment strategies being explored, their success has been limited, underscoring the importance of early and accurate detection for appropriate interventions [27].
To address the need for unbiased clinical decision making and the ability to differentiate AD and its stages from normal controls (NCs), a multi-class classification system is necessary [28,29,30]. While predicting conversion from mild cognitive impairment (MCI) to AD is more valuable than solely classifying AD patients from normal controls, research often focuses on distinguishing AD from normal controls, providing insights into the early signs of AD [31,32,33,34,35,36,37,38]. The key challenge lies in accurately determining MCI and predicting disease progression [34,39]. Although computer-aided systems cannot replace medical expertise, they can offer supplementary information to enhance the accuracy of clinical decisions. Furthermore, studies have also considered other stages of the disease, including early or late MCI [40,41].
Detecting AD using artificial intelligence presents several challenges for researchers. Firstly, the quality of medical image acquisition is often limited, and errors arise during preprocessing and brain segmentation [42]. The quality of medical images can be compromised by noise, artefacts, and technical limitations [43], which can affect the accuracy of AD detection algorithms. Additionally, errors in preprocessing and segmentation techniques further hinder the reliable analysis of these images.
Another challenge lies in the unavailability of comprehensive datasets encompassing a wide range of subjects and biomarkers. Building robust AD detection models requires access to diverse datasets that cover different stages of the disease and include various biomarkers [44]. However, obtaining such comprehensive datasets with a large number of subjects can be difficult, limiting the ability to train and evaluate AI models effectively.
In AD detection, there is low between-class variance across the different stages of the disease. Distinguishing between these stages can be challenging due to limited variation in imaging characteristics. For example, certain signs associated with AD, such as brain shrinkage, can also be observed in the brains of normal, healthy older individuals. This similarity can lead to ambiguity in classification and make it harder to differentiate between AD and normal aging [28].
The ambiguity of boundaries between AD/MCI (mild cognitive impairment) and MCI/NC (normal control) based on AD diagnostic criteria is another obstacle [45]. The diagnostic criteria for AD and its transitional stage, MCI, can be subjective and open to interpretation. Determining the boundaries between AD/MCI and MCI/NC can be challenging, as there may be overlap and inconsistency in classification based on these criteria.
Moreover, the lack of expert knowledge, particularly in identifying regions of interest (ROIs) in the brain, poses a challenge [46,47]. Accurate identification of specific brain regions relevant to AD requires expertise, but expertise in identifying these ROIs can be limited. This limitation hampers the development of precise AI algorithms for AD detection.
Lastly, medical images used in AD detection are more complex compared to natural images [48,49]. Magnetic resonance imaging (MRI) and positron emission tomography (PET) scans often exhibit intricate structures, subtle variations, and imaging artefacts. Analyzing and interpreting these complex medical images requires specialized algorithms and techniques tailored to the unique characteristics of these imaging modalities.
Overcoming these challenges is crucial for advancing AI-based AD detection systems, as they hold significant potential for early and accurate disease detection. Addressing issues related to image quality, dataset availability, classification ambiguity, expert knowledge, and the complexity of medical images will contribute to the development of more reliable and effective AI algorithms for AD detection.
Numerous studies have been dedicated to detecting Alzheimer’s disease (AD) using machine learning techniques. These studies have extensively covered various aspects, including different classifiers [50,51,52,53,54,55], monomodal and multimodal models [56,57,58,59,60], feature extraction algorithms [61,62,63], feature selection methods [64,65,66], validation approaches, and dataset properties [67,68]. The findings from these studies have highlighted the effectiveness of machine learning approaches in analyzing AD and have been further complemented by competitions such as CADDementia (https://caddementia.grand-challenge.org/) (accessed on 7 July 2023), TADPOLE (https://tadpole.grand-challenge.org/) (accessed on 9 July 2023), and The Alzheimer’s Disease Big Data DREAM Challenge (https://www.synapse.org/#!Synapse:syn2290704/wiki/64632) (accessed on 9 July 2023). These competitions provide a valuable platform for unbiased comparisons of algorithms and tools using standardized data, engaging participants worldwide.
However, traditional machine learning approaches have faced limitations in dealing with the intricacies of AD detection [69,70,71]. Distinguishing specific features within similar brain image patterns is crucial but challenging. In recent years, significant advancements in deep learning algorithms, fueled by the enhanced processing capabilities of graphics processing units (GPUs), have brought about a paradigm shift in performance across various domains, including object recognition [72,73,74], detection [75,76,77], tracking [78,79,80], image segmentation [81,82,83], and audio classification [84,85]. Deep learning, a subfield of artificial intelligence that emulates the human brain’s data processing and pattern recognition mechanisms, holds great promise in medical image analysis.
This paper aims to comprehensively review the current landscape of Alzheimer’s disease (AD) detection using deep learning techniques. Specifically, our goal is to explore the application of deep learning in both supervised and unsupervised modes to gain deeper insights into AD. By examining the latest findings and emerging trends, we examine Alzheimer’s disease detection using deep learning.
This paper looks at the different methodologies and approaches employed in Alzheimer’s disease detection using deep learning. By analyzing recent research, we aim to comprehensively understand the progress made in this field. We investigate the use of deep learning models to discover valuable information about Alzheimer’s disease, shedding light on the current state of knowledge.
Through an extensive literature review, we collect and synthesize the most recent results regarding detecting Alzheimer’s disease using deep learning. Our analysis encompasses a range of supervised and unsupervised deep learning techniques, exploring their effectiveness and potential for improving the accuracy of Alzheimer’s disease detection.
In addition, we examine current trends in Alzheimer’s disease detection using deep learning, identifying key areas of interest and innovation. By understanding the current landscape, we aim to provide valuable insights into the direction of research and development in this rapidly evolving field.
The rest of this paper is organized as follows: Section 2 delves into the Alzheimer’s disease detection system. Section 3 discusses the review protocol, while Section 4 explores input modalities, input types, datasets, and prediction tasks: exploring variations in AD detection. Section 5 focuses on deep learning for Alzheimer’s disease detection, followed by Section 6, which highlights trending technologies in AD Studies. Section 7 discusses the heterogeneous nature of AD. Section 8 addresses the challenges encountered in this domain, and Section 9 provides insights into future perspectives and recommendations. Finally, in Section 10, we draw our conclusion.

2. Alzheimer’s Disease Detection System

Figure 1 illustrates the AD detection system, an intricate and comprehensive framework designed to facilitate the efficient detection of AD. This system harnesses the synergistic integration of essential components, including brain scans, preprocessing techniques, data management strategies, deep learning models, and evaluation. Together, these elements establish a robust foundation for the system, ensuring its effectiveness, reliability, and precision.

2.1. Brain Scans

Brain scans play a fundamental role in the AD detection system, as they provide critical information about structural and functional changes associated with AD [86].
Various imaging techniques are used to obtain detailed images of the brain, including magnetic resonance imaging (MRI), positron emission tomography (PET), and diffusion tensor imaging (DTI) [87]. MRI uses magnetic fields and radio waves to generate high-resolution images, revealing anatomical features of the brain [88]. PET involves injecting a radioactive tracer into the body, which highlights specific areas of the brain associated with AD pathology [89]. DTI measures the diffusion of water molecules in brain tissue, which allows for the visualization of white matter pathways and assessment of the integrity of neuronal connections [90].
Brain scans provide valuable information about structural changes, neurochemical abnormalities, and functional alterations in people with AD [91]. These scans can detect the presence of amyloid plaques and neurofibrillary tangles, the characteristic pathologies of AD, and reveal patterns of brain atrophy and synaptic dysfunction [12].
The data acquired from brain scans serve as the basis for further analysis and interpretation [92]. However, it is important to note that interpreting brain scans requires expertise and knowledge of neuroimaging. Radiologists and neurologists often collaborate to ensure accurate and reliable interpretation of scans [93].
In the context of the AD detection system, brain scans serve as the primary input data, capturing the unique characteristics of each individual’s brain. These scans undergo further preprocessing and analysis to extract meaningful features and patterns that can contribute to the detection and classification of Alzheimer’s disease [94].

2.2. Preprocessing

Preprocessing plays a critical role in the AD detection system by applying essential steps to enhance the quality and reliability of data obtained from brain scans. This subsection focuses on the key preprocessing techniques used to prepare acquired imaging data before further analysis and interpretation.
One of the initial preprocessing steps is image registration, which involves aligning brain scans to a common reference space. This alignment compensates for variations in positioning and orientation, ensuring consistent analyses across different individuals and time points [95]. Commonly used techniques for image registration include affine and non-linear transformations.
Following image registration, intensity normalization techniques are applied to address variations in signal intensity between scans. These techniques aim to normalize intensity levels, facilitating more accurate and reliable comparisons among different brain regions and subjects [96]. Common normalization methods include z-score normalization and histogram matching.
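For illustration, the following minimal NumPy sketch shows z-score normalization of a single volume; the array shapes and the optional brain mask are assumptions made purely for demonstration and are not drawn from any reviewed study.

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Rescale voxel intensities to zero mean and unit variance.

    If a brain mask is supplied, the statistics are computed over brain voxels
    only, so that background voxels do not skew the mean and standard deviation.
    """
    voxels = volume[mask > 0] if mask is not None else volume
    mean, std = voxels.mean(), voxels.std()
    return (volume - mean) / (std + 1e-8)  # epsilon guards against division by zero

# Stand-in for a loaded 3D T1-weighted scan
scan = np.random.rand(160, 192, 160).astype(np.float32)
normalized = zscore_normalize(scan)
```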
Another important preprocessing step is noise reduction, which aims to minimize unwanted artefacts and noise that can interfere with subsequent analyses. Techniques such as Gaussian filtering and wavelet denoising are commonly employed to reduce noise while preserving important features in brain images [97].
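A comparable sketch of Gaussian smoothing with SciPy is shown below; the kernel width (sigma, expressed in voxels) is an arbitrary choice for demonstration rather than a recommended setting.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Smooth a synthetic noisy 3D volume with an isotropic Gaussian kernel.
noisy_volume = np.random.rand(160, 192, 160).astype(np.float32)
denoised_volume = gaussian_filter(noisy_volume, sigma=1.0)  # sigma in voxels
```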
Spatial smoothing is an additional preprocessing technique that involves applying a smoothing filter to the data. This process reduces local variations and improves the signal-to-noise ratio, facilitating the identification of relevant patterns and structures in brain scans [98]. Furthermore, motion correction is performed to address motion-related artefacts that may occur during brain scan acquisition. Motion correction algorithms can detect and correct head movements, ensuring that the data accurately represent the structural and functional characteristics of the brain [99].
It is important to note that preprocessing techniques may vary depending on the imaging modality used, such as MRI or PET. Each modality may require specific preprocessing steps tailored to its characteristics and challenges.

2.3. Data Management

Data management is a crucial component of the Alzheimer’s disease detection system, as it involves the efficient organization, storage, and handling of large quantities of imaging and clinical data. This subsection focuses on the key aspects of data management in Alzheimer’s disease research, including data acquisition, storage, integration, and quality control.
Data acquisition involves the collection of imaging data from various modalities such as MRI, PET, or CT scans, as well as clinical data, including demographic information, cognitive assessments, and medical history. Standardized protocols and validated assessment tools are used to ensure consistent data collection procedures [92].
Once data has been acquired, the storage of large-scale imaging and clinical datasets requires efficient and scalable storage solutions. Various database management systems, such as relational databases or non-relational (NoSQL) databases, can be used to organize and store data securely and provide efficient retrieval and query capabilities [100].
The integration of heterogeneous data from different sources is crucial to enable comprehensive analysis and interpretation. Data integration techniques, such as data fusion or data harmonization, aim to combine data from multiple modalities or studies into a unified format to ensure compatibility and enable holistic analysis [101].
Data quality control is an essential step in guaranteeing the reliability and validity of the data collected. It involves identifying and correcting anomalies, missing values, outliers, or artefacts that could affect the accuracy and integrity of subsequent analyses. Quality control procedures, including data cleaning and validation checks, are applied to maintain data consistency and accuracy [102].
Effective data management also involves adherence to ethical and privacy guidelines to protect participant confidentiality and ensure data security. Compliance with regulatory requirements, such as obtaining informed consent and anonymizing data, is essential to protect participants’ rights and maintain data integrity.

2.4. Deep Learning Model

Deep learning models have emerged as powerful tools in Alzheimer’s disease detection, leveraging their ability to learn complex patterns and representations from large-scale imaging datasets. This subsection explores the application of deep learning models in Alzheimer’s disease detection, highlighting their architectures, training strategies, and performance evaluation.
Convolutional neural networks (CNNs) have been widely adopted in Alzheimer’s disease research due to their effectiveness in analyzing spatial relationships within brain images. CNNs consist of multiple convolutional and pooling operations layers, followed by fully connected layers for classification [103]. These architectures enable automatic feature extraction and hierarchical learning, capturing local and global brain scan patterns.
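To make this layered structure concrete, the sketch below stacks 3D convolution, batch normalization, and pooling layers followed by a fully connected head for a three-way AD/MCI/NC decision; the channel counts and input size are illustrative assumptions, not values taken from any reviewed study.

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Minimal 3D CNN for illustration: conv/pool feature extractor + FC classifier."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # single-channel MRI volume in
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),   # global pooling keeps the head size-independent
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One dummy volume with a single channel: (batch, channel, depth, height, width)
model = Simple3DCNN()
logits = model(torch.randn(1, 1, 96, 112, 96))  # -> shape (1, 3): NC / MCI / AD scores
```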
To train deep learning models, large annotated datasets are required. The Alzheimer’s Disease Neuroimaging Initiative (ADNI) and other publicly available datasets, such as the Open Access Series of Imaging Studies (OASIS) and the Australian Imaging, Biomarkers and Lifestyle (AIBL) study, have played crucial roles in facilitating the development and evaluation of deep learning models for Alzheimer’s disease detection [104].
Training deep learning models involves optimizing their parameters using labelled data. Stochastic gradient descent (SGD) and its variants, such as Adam and RMSprop, are commonly used optimization algorithms for deep learning [105]. Additionally, regularization techniques like dropout or batch normalization are employed to prevent overfitting and improve generalization performance [106].
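A minimal training loop along these lines is sketched below, assuming the hypothetical Simple3DCNN defined above and randomly generated stand-in data; the optimizer settings are arbitrary choices for demonstration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in data: 8 random volumes with labels in {0: NC, 1: MCI, 2: AD}
volumes = torch.randn(8, 1, 96, 112, 96)
labels = torch.randint(0, 3, (8,))
loader = DataLoader(TensorDataset(volumes, labels), batch_size=2, shuffle=True)

model = Simple3DCNN()                      # from the sketch above
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)

model.train()
for epoch in range(2):                     # tiny epoch count, illustration only
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)      # dropout/batch norm in the model act as regularizers
        loss.backward()
        optimizer.step()
```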
The performance of deep learning models in Alzheimer’s disease detection is typically evaluated using metrics such as accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC). Cross-validation or independent test sets are used to assess the generalization ability of the models [107].
Moreover, transfer learning has shown promise in Alzheimer’s disease detection by leveraging pre-trained deep learning models on large-scale natural image datasets, such as ImageNet. By fine-tuning the pre-trained models on brain image data, transfer learning allows for effective knowledge transfer and improved performance, even with limited labelled training samples [108].
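The sketch below illustrates the general fine-tuning recipe, assuming a recent torchvision (0.13 or later), an ImageNet pre-trained ResNet-18, and grayscale 2D slices replicated to three channels; the choice of backbone and of which layers to freeze are assumptions for demonstration only.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet and adapt it to 3-class NC/MCI/AD slices.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained backbone; the newly created classification head stays trainable.
for param in backbone.parameters():
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

# Grayscale MRI slices are repeated across three channels to match ImageNet input.
slices = torch.randn(4, 1, 224, 224).repeat(1, 3, 1, 1)
logits = backbone(slices)  # shape (4, 3)
```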

2.5. Evaluation

Evaluation plays a crucial role in assessing the performance and effectiveness of Alzheimer’s disease detection systems. This subsection focuses on the evaluation metrics and methodologies commonly employed in the assessment of these systems, providing insights into the accuracy and reliability of the detection results.
Evaluation metrics in Alzheimer’s disease detection often include accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC). Accuracy measures the overall correctness of the system’s predictions, while sensitivity and specificity assess the system’s ability to correctly identify positive and negative cases, respectively [109]. AUC-ROC provides a comprehensive measure of the system’s discrimination ability, capturing the trade-off between true positive rate and false positive rate [110]. Cross-validation is a widely used evaluation methodology to assess the generalization performance of Alzheimer’s disease detection systems. In k-fold cross-validation, the dataset is divided into k subsets, and the system is trained and tested k times, with each subset serving as the test set once. This approach provides a more robust estimation of the system’s performance by utilizing the entire dataset for evaluation [111].
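As a worked illustration, the snippet below computes these metrics with scikit-learn on made-up labels and scores and builds stratified 5-fold splits; none of the numbers correspond to any reviewed study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Made-up ground truth (1 = AD, 0 = NC) and predicted probabilities.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
y_score = np.array([0.1, 0.3, 0.2, 0.6, 0.8, 0.7, 0.9, 0.4, 0.85, 0.05])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)          # true positive rate (recall)
specificity = tn / (tn + fp)          # true negative rate
auc = roc_auc_score(y_true, y_score)  # threshold-free discrimination measure

# 5-fold stratified cross-validation: each subset serves as the test set once.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(y_score.reshape(-1, 1), y_true)):
    print(f"fold {fold}: train={train_idx.tolist()} test={test_idx.tolist()}")
```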
Independent test sets are also utilized for evaluation, where a separate dataset, not seen during training, is used to assess the system’s performance. These test sets provide an objective assessment of the system’s ability to generalize to unseen data, reflecting its real-world performance [29].
Furthermore, performance comparison against baseline methods and existing state-of-the-art algorithms is important to demonstrate the advancement and effectiveness of Alzheimer’s disease detection systems. These comparisons help researchers identify the strengths and limitations of their proposed approaches and highlight the progress made in the field [112].

3. The Review Protocol

This review aims to systematically analyze and synthesize the recent advancements in AD detection using CNNs, RNNs, and generative modeling techniques. It focuses on papers published between 2018 and 2023, aiming to provide a comprehensive overview of the state-of-the-art methods, their performance, and potential contributions to AD detection.

3.1. Inclusion Criteria

The following inclusion criteria were applied when selecting papers for this review:
  • Papers published between 2018 and 2023.
  • Papers that specifically address AD detection using CNNs, RNNs, or generative modeling techniques.
  • Papers that report on original research, including novel methodologies, experimental studies, or significant advancements in the field.
  • Papers published in peer-reviewed journals or presented at reputable conferences.

3.2. Search Strategy

A systematic search was conducted to identify relevant papers for inclusion in this review. The search was performed in major scientific databases, such as Scopus (https://scopus.com/), PubMed (https://pubmed.ncbi.nlm.nih.gov/), IEEE Xplore (https://ieeexplore.ieee.org/Xplore/home.jsp), ACM Digital Library (https://dl.acm.org/), and Google Scholar (https://scholar.google.com/). The search terms included variations of “Alzheimer’s disease”, “AD detection”, “CNN”, “Convolutional Neural Network”, “RNN”, “Recurrent Neural Network”, “Generative Modeling”, and their combinations. The search was limited to papers published between January 2018 and December 2023.

3.3. Selection Process

The selection process consisted of two stages: screening and eligibility assessment.
  • Screening: Titles and abstracts of the retrieved papers were screened independently by two reviewers to determine their relevance to the review topic. Papers that clearly did not meet the inclusion criteria were excluded at this stage.
  • Eligibility Assessment: The full texts of the remaining papers were obtained and independently assessed by two reviewers. Any discrepancies in eligibility assessment were resolved through discussion or consultation with a third reviewer if necessary.

3.4. Data Extraction and Synthesis

Data were extracted using a standardized form to collect relevant information from the selected papers. The extracted data included authors, publication year, study objectives, dataset characteristics, CNN, RNN, or generative modeling architectures, employed evaluation metrics, and key findings. The extracted data were synthesized to provide a comprehensive summary of the methodologies, performance, and advancements in AD detection using CNNs, RNNs, and generative modeling.

3.5. Data Analysis

The synthesized data were analyzed qualitatively to identify common trends, challenges, and advancements in AD detection using the specified techniques. Key findings, strengths, and limitations of the approaches were highlighted. If feasible, a quantitative analysis, such as meta-analysis or statistical comparisons, was conducted to assess the overall performance of the methods across the included papers.

3.6. Reporting

The results of this review were reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [113] guidelines. The findings are organized and presented in a coherent manner, providing a clear overview of the state-of-the-art in Alzheimer’s disease detection using CNNs, RNNs, and generative modeling.

3.7. Limitations

This review may have certain limitations. Firstly, it relied on the availability and quality of published papers within the specified time frame. Secondly, the search strategy may not have captured all relevant papers, although efforts were made to include major databases and employ appropriate search terms. Lastly, this review focuses on AD detection using specific neural network architectures and may not cover other relevant approaches or techniques. By following this review protocol, we aimed to minimize bias and ensure a systematic and comprehensive analysis of the selected papers. We strived to address these limitations by conducting a thorough search, employing standardized screening and eligibility assessment processes, and reporting our findings transparently.

4. Input Modalities, Input Types, Datasets, and Prediction Tasks: Exploring Variations in AD Detection

In the realm of deep learning for AD detection, various modalities are employed to capture different aspects of the disease. Structural magnetic resonance imaging (sMRI) provides detailed anatomical information, aiding in brain atrophy detection through cross-sectional or longitudinal scans [114]. PET offers functional insights, with fluorodeoxyglucose-positron emission tomography (FDG-PET) detecting glucose hypometabolism and amyloid-PET identifying amyloid deposits associated with AD [114,115]. Resting-state fMRI measures functional connectivity [116], while EEG records brain electrical activity linked to AD degeneration [117,118,119,120]. Some studies employ diffusion tensor imaging (DTI) [121].
Cognitive assessments, e.g., MMSE and Alzheimer’s Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), evaluate cognitive abilities. Genetic factors, notably the APOE gene (e2, e3, e4 forms), influence AD risk, with the e4 form increasing susceptibility [122]. Combining physiological, chemical, and cognitive data, the APOE genotype, and demographics provides a comprehensive AD detection approach. Deep learning architectures primarily use 3D MRI and PET scans [122].

4.1. Input Data Selection

In deep learning for AD detection, selecting input data is crucial. This section explores methods to convert medical images into suitable formats for training deep neural networks (DNNs), including 2D slicing, patch extraction, and feature selection/construction. To handle the computational demands, researchers split 3D neuroimages into 2D slices using established architectures designed for 2D input data [48,123,124,125]. Another approach extracts patches from 3D images, employing RoI-based or data-driven methods [47,126,127]. Feature selection/extraction techniques, such as PCA, mitigate computational costs, although they have limitations for images [128,129,130,131,132,133].
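A toy NumPy sketch of the two conversions described above follows; the volume and patch sizes are arbitrary, and a real RoI- or data-driven pipeline would keep only informative patches.

```python
import numpy as np

volume = np.random.rand(160, 192, 160).astype(np.float32)  # stand-in 3D neuroimage

# (a) 2D slicing: take axial slices along the last axis so a 2D CNN can process them.
axial_slices = [volume[:, :, k] for k in range(volume.shape[2])]  # 160 slices of 160x192

# (b) Patch extraction: cut non-overlapping 32x32x32 cubes from the volume.
patch = 32
patches = [
    volume[i:i + patch, j:j + patch, k:k + patch]
    for i in range(0, volume.shape[0] - patch + 1, patch)
    for j in range(0, volume.shape[1] - patch + 1, patch)
    for k in range(0, volume.shape[2] - patch + 1, patch)
]
print(len(axial_slices), len(patches))  # 160 slices, 5 * 6 * 5 = 150 patches
```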
Despite these approaches, concerns persist about losing valuable relationships in neuroimages, and selection processes often rely on prior knowledge. Numerical features, demographic information, and cognitive measures are used alongside image-based features for AD detection [134,135]. Addressing the temporal nature of fMRI data, researchers use various approaches, including 3D whole-brain images, 2D slices along the temporal axis, and correlation matrices of pre-determined brain regions [136], to enhance fMRI-based AD detection.

4.2. Datasets

In AD detection, data availability is crucial for deep learning model development. Collaborative efforts and publicly accessible datasets have enriched research into the causes, symptoms, and early detection of AD.
One prominent example is the Alzheimer’s Disease Neuroimaging Initiative (ADNI), which combines various data types to study cognitive impairment progression [137]. The Open Access Series of Imaging Studies (OASIS) offers freely accessible neuroimaging datasets for neuroscience advancements [138]. The Minimal Interval Resonance Imaging in Alzheimer’s Disease (MIRIAD) dataset provides longitudinal T1 MRI scans for AD clinical trial optimization [139]. The Australian Imaging, Biomarker & Lifestyle Flagship Study of Aging (AIBL) dataset aids biomarker and lifestyle research related to AD onset [140].
In addition to these, universities and research centers, such as Chosun University National Dementia Research Center, Davis Alzheimer’s Disease Center, and Dong-A University Korea, maintain their own datasets [141,142]. These diverse resources empower AD researchers and enhance our understanding of the disease.

4.3. Prediction Tasks

In healthcare, diagnostic predictions encompass binary classification, multi-class classification, and regression-based predictions.
Binary classification is common in distinguishing healthy and diseased subjects, especially in diseases with limited data for each class or complex progression stages.
In AD detection, a 3-class classification separates subjects into AD, MCI, and NC categories. Within MCI, distinguishing progressive MCI (pMCI) from stable MCI (sMCI) adds complexity, resulting in a 4-class classification.
Regression-based predictions in AD research involve real-value output measures, such as cognitive tests [143], progression likelihood [144], and time-to-event prediction tasks [145], providing insights into quantifiable aspects of AD progression.
By considering these approaches, researchers gain valuable insights into different facets of AD detection and prognosis, enhancing our ability to effectively detect and monitor the disease.

5. Deep Learning for Alzheimer’s Disease Detection

Deep learning has become a prominent approach in AD detection using medical image data, incorporating modalities like PET and MRI [71]. Initially, unsupervised pre-training and network architectures like stacked autoencoders and Restricted Boltzmann machines were prevalent, but since 2013, there has been a surge in deep learning applications for AD detection, especially in neuroimaging [54,71,146].
Clinical research has identified critical AD-related changes: beta-amyloid plaque accumulation, tau protein tangles, and brain atrophy [147]. Imaging modalities like MRI, PET, and fMRI detect these changes, while CSF analysis quantifies specific proteins [148].
Brain tissue volume (grey matter, white matter, CSF) changes correlate with AD severity [149]. Some studies use GM volume, while others consider all tissue volumes for deep learning models [150,151,152,153]. Hippocampal atrophy, a risk factor for dementia progression, is also used, with various representations (whole volume, patches, slices, numerical features) employed for AD detection [28,146,154,155,156].
These advances in deep learning and multimodal imaging have improved AD detection accuracy and effectiveness, leveraging CNNs, RNNs, and generative modelling techniques. The following sections will explore specific methodologies and findings in deep learning approaches for AD detection.

5.1. Convolutional Neural Networks for AD Detection

Convolutional neural networks (CNNs) have gained prominence in medical imaging for AD detection, as they excel at learning hierarchical features from raw image data, enabling accurate predictions [157]. CNNs’ promising results in AD detection have garnered attention from researchers and clinicians.
This systematic review focuses on CNNs in AD detection, providing a comprehensive overview of the literature and summarizing key findings. Examining various approaches, input types/modalities, techniques, evaluation metrics, and additional insights, we aim to highlight CNNs’ potential as a diagnostic tool for AD [157].

5.1.1. Neuroimaging and CNNs for AD Analysis

CNNs have been employed for AD diagnosis, classification, prediction, and image generation. Neuroimaging data, including T1-weighted MRI scans and PET images, serve as foundational inputs [158]. CNNs adeptly extract features from these images, enabling precise AD classification and prediction.

5.1.2. CNN-3D Architecture for AD Classification

CNN-3D architectures, designed to harness the 3D nature of neuroimaging data, excel in capturing spatial relationships and fine details. Across various datasets, CNN-3D models exhibit robust performance in AD classification, capitalizing on their ability to extract discriminative features from 3D brain images.

5.1.3. GANs for Data Augmentation and Enhancement

Generative adversarial networks (GANs) generate synthetic brain images resembling real ones, alleviating limited labelled data availability. By enhancing downstream tasks like AD classification through realistic image synthesis, GANs contribute to improved model generalization and performance.

5.1.4. Transfer Learning and Multimodal Fusion

Transfer learning fine-tunes pre-trained CNN models from general image datasets to AD tasks, compensating for limited AD-specific data. Multimodal approaches, merging data from diverse imaging modalities, enhance classification by capturing complementary AD pathology facets.

5.1.5. Temporal Convolutional Networks (TCNs)

TCNs are another type of neural network architecture that can capture temporal dependencies in sequential data. Unlike LSTMs, TCNs utilize 1D convolutional layers with dilated convolutions to extract features from the temporal sequences. TCNs have been applied to various AD-related tasks, including disease classification, progression prediction, and anomaly detection. They offer computational efficiency and can capture both short-term and long-term temporal dependencies in the data.
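The following PyTorch sketch shows the core idea of dilated 1D convolutions whose receptive field doubles at each level; the layer widths, the non-causal "same" padding, and the global-pooling head are simplifying assumptions rather than a reference TCN implementation.

```python
import torch
import torch.nn as nn

class TinyTCN(nn.Module):
    """Stack of dilated 1D convolutions over a feature time series (illustrative only)."""

    def __init__(self, in_features: int, num_classes: int = 2):
        super().__init__()
        layers = []
        channels = in_features
        for dilation in (1, 2, 4, 8):   # receptive field grows with each level
            layers += [
                nn.Conv1d(channels, 32, kernel_size=3, dilation=dilation,
                          padding=dilation),   # keeps the sequence length unchanged
                nn.ReLU(inplace=True),
            ]
            channels = 32
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, features, time), e.g. longitudinal cognitive scores or ROI signals
        h = self.tcn(x)
        return self.head(h.mean(dim=-1))   # global average over the temporal axis

model = TinyTCN(in_features=6)
scores = model(torch.randn(4, 6, 24))  # 4 subjects, 6 features, 24 time points -> (4, 2)
```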

5.1.6. Dataset Quality and Interpretable Models

Dataset size and diversity critically impact a CNN’s performance. Standardized, broad AD datasets are pivotal for robust models. Addressing interpretability, efforts should focus on unveiling learned features and decision-making processes to enhance trustworthiness and clinical applicability.

5.1.7. Overview of Convolutional Neural Network (CNN) Studies for AD Detection

Table 1 offers an overview of major studies that have utilized CNNs for AD detection from 2018 to 2023. The table includes relevant information such as the study name, date, input type/modalities, technique, evaluation metric, and additional notes. The aim of this table is to provide a concise reference for researchers and practitioners interested in CNN-based approaches for AD detection.
Convolutional neural networks (CNNs) have demonstrated remarkable achievements in tasks such as organ segmentation [193,194,195] and disease detection [196,197,198] within the field of medical imaging. By leveraging neuroimaging data, these models can uncover hidden representations, establish connections between different components of an image, and identify patterns related to diseases [199]. They have been successfully applied to diverse medical imaging modalities, encompassing structural MRI [200], functional MRI [201,202] (fMRI), PET [203,204], and diffusion tensor imaging (DTI) [205,206,207]. Consequently, researchers have begun exploring the potential of deep learning models in detecting AD using medical images [61,208,209,210]. In recent years, CNNs in Alzheimer’s disease (AD) research have garnered substantial interest. This discussion provides a condensed overview of the diverse applications of CNNs in AD analysis, offering insights for future exploration.

5.1.8. Performance Comparison

When assessing the performance of different CNN-based approaches in Alzheimer’s disease (AD) research, their remarkable potential becomes evident. Traditional CNN architectures, such as LeNet-5 and AlexNet, offer robust feature extraction capabilities, enabling them to capture intricate AD-related patterns. However, it is essential to note that LeNet-5’s relatively shallow architecture may limit its ability to discern complex features, while the high parameter count in AlexNet can lead to overfitting concerns.
Transfer learning, a strategy where pre-trained models like VGG16 are fine-tuned for AD detection, has emerged as a highly effective approach. By leveraging the insights gained from extensive image datasets, transfer learning significantly enhances AD detection accuracy.
The introduction of 3D CNNs has further expanded the capabilities of CNN-based methods, particularly in the analysis of volumetric data, such as MRI scans. These models excel at learning nuanced features, a critical advantage given the temporal progression of AD.
In terms of performance evaluation, CNN-based methods are typically assessed using various metrics, including accuracy, sensitivity, specificity, precision, and the F1-score. While these metrics effectively gauge performance, interpretability remains a challenge. Nevertheless, ongoing efforts, such as attention mechanisms and visualization tools, aim to address this issue.
Despite their promise, CNNs face limitations, primarily related to data availability. To ensure the generalization of CNN-based AD detection models across diverse populations, acquiring and curating large, representative datasets remains a priority for future research. In summary, CNN-based methodologies have demonstrated their mettle in AD research, showcasing strengths across traditional and 3D architectures, transfer learning, and ongoing interpretability enhancements. To realize their full potential for real-world clinical applications, addressing data limitations and improving generalization are critical objectives.

5.1.9. Meaningful Insights

The application of convolutional neural networks (CNNs) in Alzheimer’s disease (AD) detection has unveiled several meaningful insights. CNNs, particularly 3D architectures, have showcased their prowess in deciphering complex patterns within volumetric neuroimaging data.
One remarkable insight is the ability of CNNs to extract hierarchical features from brain images. Traditional CNN architectures, like LeNet-5 and AlexNet, excel in capturing intricate structural information but may struggle with deeper, more abstract features. In contrast, transfer learning, where pre-trained models are fine-tuned for AD detection, has proven highly effective. This approach capitalizes on the wealth of knowledge acquired from diverse image datasets, offering a robust foundation for AD-related feature extraction. The introduction of 3D CNNs has further illuminated the importance of spatial context in AD diagnosis. These models excel in capturing nuanced patterns across multiple image slices, aligning with the progressive nature of AD.
Performance metrics, including accuracy, sensitivity, specificity, and precision, have substantiated CNN’s effectiveness. These metrics provide quantitative evidence of CNNs’ diagnostic capabilities. Additionally, ongoing efforts in developing attention mechanisms and visualization tools aim to enhance model interpretability.
However, the ultimate insight gleaned from CNN-based AD detection is the need for substantial data. Generalizability across diverse populations demands large, representative datasets. This challenge underscores the importance of data acquisition and curation efforts.
In conclusion, CNNs have illuminated the path towards more accurate, data-driven AD detection. Leveraging hierarchical feature extraction, embracing 3D architectures, and ensuring interpretability are pivotal in harnessing CNNs’ potential for earlier and more reliable AD diagnosis.

5.2. Recurrent Neural Networks (RNN) for AD Detection

Recurrent neural networks (RNNs) have attracted considerable attention in medical imaging for AD detection. These deep learning models are well-suited for capturing temporal dependencies and sequential patterns in data, making them particularly useful for analyzing time series or sequential data in AD detection tasks. In recent years, RNNs have shown promising results in capturing complex relationships within longitudinal neuroimaging data and aiding in the early diagnosis of AD.
In this section, we present a systematic review focused on the application of recurrent neural networks for AD detection from 2018 to 2023. Our objective is to provide a comprehensive overview of the existing literature and summarize the key findings regarding the use of RNNs in this domain. By examining the different studies’ approaches, input types/modalities, techniques, evaluation metrics, and additional notes, we aim to highlight the potential of recurrent neural networks as a powerful tool for AD detection.

5.2.1. Long Short-Term Memory (LSTM) Networks

LSTM is a type of RNN that can effectively capture long-term dependencies in sequential data. Several studies have employed LSTM networks for AD diagnosis and prediction. These models typically take sequential data, such as time-series measurements from brain imaging or cognitive assessments, as input, and learn temporal patterns to classify or predict AD progression. LSTM-based models have demonstrated promising results in accurately diagnosing AD and predicting cognitive decline.
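A minimal PyTorch sketch of such an LSTM classifier over per-visit feature vectors is given below; the feature dimensionality, sequence length, and network size are hypothetical.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Classify a subject from a sequence of per-visit feature vectors (illustrative)."""

    def __init__(self, input_size: int, hidden_size: int = 64, num_classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers=2,
                            batch_first=True, dropout=0.3)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, visits, features), e.g. cognitive scores or ROI volumes per visit
        output, (h_n, c_n) = self.lstm(x)
        return self.head(output[:, -1, :])   # use the hidden state at the last visit

model = LSTMClassifier(input_size=10)
logits = model(torch.randn(8, 5, 10))  # 8 subjects, 5 visits, 10 features -> (8, 3)
```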

5.2.2. Encoder–Decoder Architectures

Encoder–decoder architectures, often combined with attention mechanisms, have been used in AD research to address tasks such as predicting disease progression or generating informative features. These models encode input sequences into latent representations and decode them to generate predictions or reconstructed sequences. Encoder–decoder architectures with attention mechanisms allow for the network to focus on relevant temporal information, improving prediction accuracy and interpretability.
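The sketch below illustrates this pattern with a GRU encoder, multi-head attention over the encoded visit history, and a GRU decoder that predicts a hypothetical cognitive score for a few future visits; the architecture and all dimensions are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

class AttnEncoderDecoder(nn.Module):
    """Encode a visit history, attend over it, and decode a future trajectory (illustrative)."""

    def __init__(self, input_size: int, hidden_size: int = 64, horizon: int = 3):
        super().__init__()
        self.encoder = nn.GRU(input_size, hidden_size, batch_first=True)
        self.attention = nn.MultiheadAttention(hidden_size, num_heads=4, batch_first=True)
        self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)   # e.g. one predicted score per future visit
        self.horizon = horizon

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, past_visits, features)
        enc_out, h = self.encoder(x)                            # enc_out: (batch, T, hidden)
        query = h[-1].unsqueeze(1).repeat(1, self.horizon, 1)   # one query per future visit
        context, _ = self.attention(query, enc_out, enc_out)    # attend over the history
        dec_out, _ = self.decoder(context, h)
        return self.out(dec_out).squeeze(-1)                    # (batch, horizon) predictions

model = AttnEncoderDecoder(input_size=10)
future = model(torch.randn(8, 6, 10))  # 8 subjects, 6 past visits -> 3 predicted values each
```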

5.2.3. Hybrid Models

Some studies have combined RNNs with other deep learning architectures, such as convolutional neural networks (CNNs) or generative adversarial networks (GANs), to leverage their respective strengths. These hybrid models aim to capture both spatial and temporal information from brain imaging data, leading to improved performance in AD diagnosis, progression prediction, or generating synthetic data for augmentation.

5.2.4. Overview of Recurrent Neural Network (RNN) Studies for AD Detection

Table 2 summarizes major studies that have utilized RNNs for AD detection. It provides relevant information such as the study name, date, input type/modalities, technique, evaluation metric, and additional notes. This table serves as a quick reference for researchers and practitioners interested in RNN-based approaches for AD detection.
Recurrent neural networks (RNNs) have emerged as a popular deep learning technique for analyzing temporal data, making them well-suited for Alzheimer’s disease research. This discussion section will highlight the various methods that have utilized RNNs in AD research, provide an overview of their approaches, compare their performance, and present meaningful insights for further discussion.

5.2.5. Performance Comparison

Comparing the performance of different RNN-based methods in AD research can be challenging due to variations in datasets, evaluation metrics, and experimental setups. However, several studies have reported high accuracy, sensitivity, and specificity in AD diagnosis and prediction tasks using RNNs. For example, LSTM-based models have achieved accuracies ranging from 80% to over 90% in AD classification. TCNs have demonstrated competitive performance in predicting cognitive decline, with high AUC scores. Encoder–decoder architectures with attention mechanisms have shown improvements in disease progression prediction compared to traditional LSTM models. Hybrid models combining RNNs with other architectures have reported enhanced performance by leveraging spatial and temporal information.

5.2.6. Meaningful Insights

RNNs, such as LSTMs, are well-suited for capturing long-term dependencies in sequential data. In the context of AD research, this capability allows for identifying subtle temporal patterns and predicting disease progression. By analyzing longitudinal data, RNNs can potentially detect early signs of cognitive decline and facilitate early intervention strategies.
RNNs can also be effectively used for data augmentation in AD research. Synthetic sequences can be generated using generative models, such as variational autoencoders (VAEs) or GANs, to increase the diversity and size of the training dataset. This augmented data can enhance the robustness and generalizability of RNN models, leading to improved diagnostic accuracy and generalization to unseen data.
In addition, RNNs offer interpretability and explainability in AD research. By analyzing the temporal patterns learned by the models, researchers can gain insights into the underlying disease progression mechanisms. This information can aid in understanding the neurobiological processes associated with AD and provide valuable clues for potential therapeutic interventions.
Moreover, RNNs can handle multimodal data sources, such as combining brain imaging (e.g., MRI, PET scans) with clinical assessments or genetic information. Integrating multiple modalities can provide a more comprehensive understanding of AD, capturing both structural and functional changes in the brain along with clinical markers. RNN-based models enable the fusion of diverse data sources to improve the accuracy and reliability of AD diagnosis and prognosis.
RNNs trained on large-scale datasets can learn robust representations that generalize well to unseen data. Pre-training RNN models on large cohorts or external datasets and fine-tuning them on specific AD datasets can facilitate knowledge transfer and enhance the performance of AD classification and prediction tasks. Transfer learning approaches enable the utilization of existing knowledge and leverage the expertise gained from related tasks or domains.
While RNNs have shown promise in AD research, there are still challenges to address. One major challenge is the limited availability of large-scale, longitudinal AD datasets. Acquiring and curating diverse datasets with longitudinal follow-up is crucial for training RNN models effectively. Additionally, incorporating uncertainty estimation and quantifying model confidence in predictions can further enhance the reliability and clinical applicability of RNN-based methods.
Furthermore, exploring the combination of RNNs with other advanced techniques, such as attention mechanisms, graph neural networks, or reinforcement learning, holds promise for improving AD diagnosis, understanding disease progression, and guiding personalized treatment strategies. Integrating multimodal data sources, such as imaging, genetics, and omics data, can provide a more comprehensive view of AD pathophysiology.
In conclusion, RNN-based approaches have emerged as powerful tools for AD research, enabling accurate diagnosis, prediction of disease progression, and data augmentation. Various RNN architectures, such as LSTMs, TCNs, and encoder–decoder models, have been applied to different AD tasks with notable success. These models showcase the ability to capture long-term temporal dependencies, enhance interpretability, and integrate multimodal data sources. Nonetheless, further advancements are needed to address challenges related to data availability, uncertainty estimation, and the integration of cutting-edge techniques. By continuing to explore and refine RNN-based methods, we can pave the way for improved understanding, early diagnosis, and personalized treatment of Alzheimer’s disease.

5.3. Generative Modeling for AD Detection

Generative modelling techniques have gained attention in medical imaging for Alzheimer’s disease (AD) detection. These models are capable of generating new samples that follow the distribution of the training data, enabling them to capture the underlying patterns and variations in AD-related imaging data. By leveraging generative models, researchers aim to enhance early detection, improve classification accuracy, and gain insights into the underlying mechanisms of AD.
In this section, we present a systematic review focused on the application of generative modelling techniques for AD detection from 2018 to 2023. Our objective is to provide an overview of the existing literature and summarize the key findings regarding the use of generative models in this domain. By examining the different studies’ approaches, input types/modalities, techniques, evaluation metrics, and additional notes, we aim to highlight the potential of generative modelling as a valuable tool for AD detection.

5.3.1. GANs for Image Generation

One prominent application of generative modelling in Alzheimer’s disease is the generation of synthetic brain images for diagnostic and research purposes. GANs have been used to generate realistic brain images that mimic the characteristics of Alzheimer’s disease, such as the presence of amyloid beta plaques and neurofibrillary tangles. These synthetic images can be valuable for augmenting datasets, addressing data scarcity issues, and improving classification performance.
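A schematic DCGAN-style generator/discriminator pair for small 2D slices is sketched below; the 64 x 64 image size, latent dimension, and layer widths are illustrative assumptions, and published studies typically operate at higher resolution or directly in 3D.

```python
import torch
import torch.nn as nn

latent_dim = 100  # size of the random noise vector fed to the generator

# Generator: noise -> 1x64x64 synthetic brain slice
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 128, kernel_size=4, stride=1, padding=0),  # 4x4
    nn.BatchNorm2d(128), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),          # 8x8
    nn.BatchNorm2d(64), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),           # 16x16
    nn.BatchNorm2d(32), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),           # 32x32
    nn.BatchNorm2d(16), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),            # 64x64
    nn.Tanh(),
)

# Discriminator: slice -> probability that it is a real (not synthetic) image
discriminator = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 32x32
    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 16x16
    nn.Conv2d(64, 1, kernel_size=16), nn.Sigmoid(),                            # 1x1
    nn.Flatten(),
)

noise = torch.randn(4, latent_dim, 1, 1)
fake_slices = generator(noise)            # (4, 1, 64, 64) synthetic slices
realness = discriminator(fake_slices)     # (4, 1) scores in [0, 1]
```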

5.3.2. Conditional GANs for Disease Progression Modeling

Conditional GANs (cGANs) have been employed to model the progression of Alzheimer’s disease over time. By conditioning the generator on longitudinal data, cGANs can generate synthetic brain images that capture disease progression stages, ranging from normal to mild cognitive impairment (MCI) and finally to Alzheimer’s disease. This enables the generation of realistic images representing the transition from healthy to pathological brain states.

5.3.3. Variational Autoencoders (VAEs) for Feature Extraction

In addition to GANs, variational autoencoders (VAEs) have been utilized to extract informative features from brain images for Alzheimer’s disease classification. VAEs can learn a compressed representation of the input images, known as latent space, which captures relevant features associated with the disease. By sampling from the latent space, new images can be generated, and the extracted features can be used for classification tasks.
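The following toy PyTorch VAE over flattened slices illustrates how the latent code can double as a feature vector for downstream classification; all sizes are arbitrary and the reconstruction/KL weighting is left untuned.

```python
import torch
import torch.nn as nn

class SliceVAE(nn.Module):
    """Toy VAE over flattened 2D slices; the latent code doubles as a feature vector."""

    def __init__(self, input_dim: int = 64 * 64, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.decoder(z)
        # Reconstruction loss plus KL divergence toward a standard normal prior
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        loss = nn.functional.mse_loss(recon, x, reduction="sum") + kl
        return recon, z, loss

vae = SliceVAE()
x = torch.rand(4, 64 * 64)           # four flattened (hypothetical) brain slices
recon, latent_features, loss = vae(x)
# latent_features (4, 32) could be fed to a downstream AD vs. NC classifier.
```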

5.3.4. Hybrid Approaches

Some studies have explored hybrid approaches that combine different generative models to leverage their respective advantages. For example, combining GANs and VAEs can harness the generative power of GANs while benefiting from the probabilistic nature and interpretability of VAEs. These hybrid models aim to generate high-quality images while preserving the meaningful representations learned by VAEs.

5.3.5. Overview of Generative Modeling Studies (GAN) for AD Detection

Table 3 provides a summary of major studies that have utilized generative modelling techniques for AD detection. It includes relevant information such as the study name, date, input type/modalities, technique, evaluation metric, and additional notes. This table serves as a quick reference for researchers and practitioners interested in exploring generative modelling approaches for AD detection.
Generative modelling, particularly through approaches like generative adversarial networks (GANs), has emerged as a promising technique in the field of Alzheimer’s disease research. This discussion section will provide an overview of the various methods used in generative modelling for Alzheimer’s disease, compare their strengths and limitations, and highlight meaningful insights for further exploration and discussion.

5.3.6. Comparative Analysis

When comparing the different generative modelling methods in Alzheimer’s disease research, several factors should be considered:
  • Image Quality: The primary goal of generative modelling is to generate high-quality brain images that closely resemble real data. GANs have demonstrated remarkable success in producing visually realistic images, while VAEs tend to produce slightly blurred images due to the nature of their probabilistic decoding process.
  • Feature Extraction: While GANs excel in image generation, VAEs are more suitable for feature extraction and latent space representation. VAEs can capture meaningful features that reflect disease progression and provide interpretability, making them valuable for understanding the underlying mechanisms of Alzheimer’s disease.
  • Data Scarcity: Alzheimer’s disease datasets are often limited in size, posing challenges for training deep learning models. Generative modelling techniques, especially GANs, can help address data scarcity by generating synthetic samples that augment the training data and improve model generalization.
  • Interpretability: VAEs offer an advantage in terms of interpretability because they learn a structured latent space that captures meaningful variations in the data. This can aid in understanding disease patterns and identifying potential biomarkers.

5.3.7. Meaningful Insights

Generative modelling in Alzheimer’s disease research holds great promise for advancing diagnosis, disease progression modelling, and understanding the underlying mechanisms of the disease. By generating realistic brain images and capturing disease-related features, these techniques can complement traditional diagnostic methods and provide new avenues for personalized treatment and intervention strategies.
One meaningful insight from the application of generative modelling is the potential to address data scarcity issues. Alzheimer’s disease datasets are often limited in size and subject to variability in imaging protocols and data acquisition. By using generative models like GANs and VAEs, researchers can generate synthetic data that closely resemble real brain images. This augmentation of the dataset not only increases the sample size but also captures a wider range of disease characteristics and progression patterns. Consequently, it enhances the robustness and generalizability of machine learning models trained on these augmented datasets.
Moreover, generative modelling techniques provide a unique opportunity to simulate disease progression and explore hypothetical scenarios. By conditioning the generative models on various disease stages, researchers can generate synthetic brain images that represent different pathological states, from early stages of mild cognitive impairment to advanced Alzheimer’s disease. This capability allows for the investigation of disease progression dynamics, identification of critical biomarkers, and evaluation of potential intervention strategies.
Furthermore, the combination of generative models with other deep learning techniques, such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs), can further enhance the performance of Alzheimer’s disease classification and prediction tasks. These hybrid models can leverage the strengths of different architectures and generate more accurate and interpretable results. For example, combining GANs for image generation with CNNs for feature extraction and classification can lead to improved diagnostic accuracy and a better understanding of the underlying disease mechanisms.
However, despite the promising results and potential benefits, there are several challenges and considerations that need to be addressed in future research. Firstly, the interpretability of generative models remains a topic of investigation. While GANs and VAEs can generate realistic images or extract informative features, understanding the specific disease-related factors they capture is still an ongoing challenge. Developing methods to interpret and validate the generated features or images can further enhance their clinical relevance and utility.
Secondly, the generalizability of the generated synthetic data and models across different populations, imaging modalities, and data acquisition protocols needs to be carefully evaluated. It is crucial to ensure that the generated samples accurately represent the true population distribution and do not introduce biases or artifacts that may limit their applicability in real-world scenarios.
Lastly, the ethical implications of using generative models in Alzheimer’s disease research should be considered. The generation of synthetic brain images raises concerns about privacy, informed consent, and the potential impact on patients’ emotional well-being. Guidelines and protocols should be established to address these ethical considerations and ensure the responsible and ethical use of generative modelling techniques.
In conclusion, generative modelling techniques, such as GANs and VAEs, offer promising avenues for advancing Alzheimer’s disease research. The ability to generate realistic brain images, model disease progression, and extract meaningful features provides valuable insights for diagnosis, prognosis, and treatment planning. By addressing data scarcity, enhancing interpretability, and combining with other deep learning approaches, generative modelling can contribute to more accurate and personalized approaches in Alzheimer’s disease management. However, further research is needed to overcome challenges related to interpretability, generalizability, and ethical considerations to fully realize the potential of generative modelling in Alzheimer’s disease research and clinical practice.

6. Trending Technologies in AD Studies

In recent years, there has been a surge of interest in applying deep learning techniques to Alzheimer’s disease (AD) detection and diagnosis. While convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative models have received significant attention in the field, there are several other emerging technologies that hold promise for advancing AD research. In this section, we explore some of these trending technologies and their potential applications in AD studies.

6.1. Graph Convolutional Networks (GCNs)

Table 4 offers an overview of studies that have utilized graph convolutional networks (GCNs) for AD detection. GCNs have gained attention for their ability to analyze graph-structured data effectively, making them particularly suitable for modelling brain connectivity networks in AD. By capturing relationships between brain regions, GCNs can provide insights into the underlying structural and functional changes associated with AD. Recent studies have shown promising results in using GCNs to classify AD patients and healthy controls based on brain network connectivity data.
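For illustration, the sketch below implements a single graph convolution layer over an assumed 90-region connectivity matrix, using the common symmetric normalization H' = ReLU(D^(-1/2)(A + I)D^(-1/2)HW); the atlas size, feature dimensions, and mean-pooling readout are illustrative choices rather than settings from a specific reviewed study.
```python
# Minimal graph convolution sketch (PyTorch) for a brain connectivity graph:
# nodes are brain regions and the adjacency matrix encodes their connectivity.
# The 90-region atlas and the feature sizes are illustrative assumptions.
import torch
import torch.nn as nn

N_REGIONS, F_IN, F_OUT = 90, 16, 8


class GCNLayer(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Linear(in_features, out_features, bias=False)

    def forward(self, adjacency, node_features):
        a_hat = adjacency + torch.eye(adjacency.size(0))   # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).rsqrt())  # D^(-1/2)
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt           # symmetric normalisation
        return torch.relu(a_norm @ self.weight(node_features))


# One subject's connectivity matrix and regional features are mapped to region
# embeddings, then mean-pooled into a vector for an AD-vs-control classifier.
adjacency = torch.rand(N_REGIONS, N_REGIONS)
adjacency = (adjacency + adjacency.T) / 2                  # symmetric toy connectivity
features = torch.rand(N_REGIONS, F_IN)                     # e.g., regional imaging measures
embeddings = GCNLayer(F_IN, F_OUT)(adjacency, features)    # (90, 8)
graph_vector = embeddings.mean(dim=0)                      # pooled graph representation
```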

6.2. Attention Mechanisms

Table 5 offers an overview of studies that have utilized attention mechanisms for AD detection. Attention mechanisms have emerged as a powerful tool in deep learning, allowing models to focus on relevant features or regions of interest. In the context of AD studies, attention mechanisms can aid in identifying critical brain regions or biomarkers that contribute significantly to disease progression. By selectively attending to informative regions, attention-based models can improve the interpretability of predictions and enhance our understanding of AD pathology.
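The sketch below shows one simple form of this idea: an attention-pooling head that learns a weight per brain region and exposes those weights for inspection; the region count, feature dimension, and two-class output are illustrative assumptions.
```python
# Sketch of attention pooling over regional features (PyTorch): the learned
# per-region weights can be inspected to see which regions drive a prediction.
# Dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class AttentionPooling(nn.Module):
    def __init__(self, feature_dim, n_classes=2):
        super().__init__()
        self.score = nn.Linear(feature_dim, 1)        # one attention score per region
        self.classifier = nn.Linear(feature_dim, n_classes)

    def forward(self, region_features):               # (B, n_regions, feature_dim)
        weights = torch.softmax(self.score(region_features), dim=1)  # (B, n_regions, 1)
        pooled = (weights * region_features).sum(dim=1)              # weighted average
        return self.classifier(pooled), weights.squeeze(-1)


# Logits for AD vs. control plus per-region attention weights for inspection.
model = AttentionPooling(feature_dim=64)
regions = torch.rand(4, 90, 64)                       # 4 subjects, 90 regions, 64-dim features
logits, attention = model(regions)                    # attention: (4, 90)
most_attended = attention.argmax(dim=1)               # most influential region per subject
```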

6.3. Transfer Learning

Table 6 offers an overview of studies that have utilized transfer learning for AD detection. Transfer learning, a technique that leverages knowledge learned from one task to improve performance on another related task, has shown promise in AD research. Models pretrained on large-scale datasets, such as ImageNet, can be fine-tuned on AD-specific data to extract discriminative features. Transfer learning enables the utilization of knowledge from diverse domains and can enhance the generalization ability of AD detection models, especially when data are scarce.
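As a minimal illustration of this workflow, the sketch below loads an ImageNet-pretrained ResNet-18 from torchvision, freezes the backbone, and replaces the final layer with a three-class head (CN, MCI, AD); freezing the backbone, the three-class setup, and replicating single-channel slices to three channels are assumptions made for the example.
```python
# Transfer learning sketch (PyTorch/torchvision): fine-tune only a new
# classification head on top of an ImageNet-pretrained ResNet-18.
import torch
import torch.nn as nn
from torchvision import models

# Downloads the ImageNet weights on first use.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

for param in model.parameters():                  # freeze the pretrained backbone
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 3)     # new head: CN / MCI / AD

# Only the new head is optimised; the backbone keeps its ImageNet weights.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Dummy forward pass: single-channel MRI slices replicated to three channels
# to match the expected ImageNet input format.
slices = torch.rand(4, 1, 224, 224).repeat(1, 3, 1, 1)
logits = model(slices)                            # (4, 3) class scores
```
Unfreezing some of the deeper layers and fine-tuning them with a small learning rate is a common variant when more AD-specific data are available.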

6.4. Autoencoders

Table 7 offers an overview of studies that have utilized autoencoders for AD detection. Autoencoders are unsupervised learning models that learn to encode and decode data and are often used for dimensionality reduction or data reconstruction. In AD studies, autoencoders have been employed for anomaly detection: a model trained to reconstruct normal brain patterns yields larger reconstruction errors on scans that deviate from those patterns, which can indicate AD pathology. By capturing the underlying structure of AD-related changes, autoencoders can contribute to early detection and monitoring of disease progression.
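A minimal sketch of this reconstruction-error strategy is shown below: the autoencoder is assumed to be trained on scans from healthy controls only, and scans whose reconstruction error is unusually high are flagged as potentially AD-related; the network sizes and the simple mean-plus-two-standard-deviations threshold are illustrative.
```python
# Autoencoder anomaly-scoring sketch (PyTorch): higher reconstruction error
# suggests a scan deviates from the "normal" patterns seen during training.
# Sizes and the threshold rule are illustrative assumptions.
import torch
import torch.nn as nn


class Autoencoder(nn.Module):
    def __init__(self, input_dim=64 * 64, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(True),
                                     nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(True),
                                     nn.Linear(256, input_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))


def anomaly_score(model, x):
    # per-scan mean squared reconstruction error; higher means more atypical
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).mean(dim=1)


model = Autoencoder()                                   # assumed trained on controls only
scans = torch.rand(8, 64 * 64)                          # dummy flattened slices
scores = anomaly_score(model, scans)                    # (8,) anomaly scores
flagged = scores > scores.mean() + 2 * scores.std()     # simple illustrative threshold
```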

7. Highlights

Recent advancements in Alzheimer’s disease (AD) research have elucidated a diverse spectrum of disease subtypes, revealing at least five distinct variants, each characterized by unique anatomical pathologies divergent from traditional markers such as Thal or Braak staging [268]. Through meticulous neuropathological and neuroimaging analyses, researchers have consistently identified three primary subtypes: typical AD, limbic-predominant AD, and hippocampal-sparing AD, with the emergence of a fourth subtype, minimal atrophy AD [269]. Additionally, a subgroup devoid of discernible atrophy has been delineated as a distinct AD subtype. These subtypes have been discerned through intricate patterns of brain atrophy and neuropathological characteristics, exhibiting heterogeneous clinical and cognitive features, with certain variants demonstrating slower disease progression than the prototypical AD presentation [270]. Understanding the intricacies of these subtypes is paramount for elucidating the heterogeneity of AD, with implications for enhancing discrimination, accurate diagnosis, and targeted therapeutic interventions [271]. Moreover, it is posited that an individual’s positioning along the typicality and severity spectra is shaped by a complex interplay of protective factors, risk factors, and diverse brain pathologies, giving rise to the delineation of four unique AD subtypes: typical AD, limbic-predominant AD, hippocampal-sparing AD, and minimal atrophy AD [272].
Alzheimer’s disease is widely recognized for its inherent heterogeneity, both in disease manifestation and in demographic factors. Importantly, pure cases of AD are rare, as individuals often present with a complex interplay of multiple pathologies.
This aspect is crucial when assessing disease progression, developing new analyses, or designing deep learning classifiers. The recognition of at least five distinct AD subtypes, each characterized by unique anatomical pathologies beyond traditional markers such as Thal or Braak staging [268], enhances our understanding of AD diversity, but it also poses challenges for diagnosis and necessitates a nuanced approach to disease characterization.
To expand on the heterogeneous nature of AD, it is worth emphasizing the implications for both clinical practice and research endeavors. Clinicians and researchers should be aware that AD cases often manifest as a composite of different subtypes, which makes it challenging to identify pure cases [269]. A thorough diagnostic approach is necessary, since the co-occurrence of multiple subtypes hampers diagnostic efforts significantly. Furthermore, numerous factors, including coexisting brain disorders as well as protective and risk factors, influence how the disease progresses in different individuals [270].
Nevertheless, the diagnosis of pure AD cases presents a formidable challenge due to the disease’s inherent heterogeneity, compounded by mixed pathologies, coexisting conditions, and overlapping clinical syndromes [273]. It is imperative to acknowledge that these subtypes are not mutually exclusive, as AD cases often manifest as a composite of different subtypes, which further complicates the accurate delineation of pure AD cases [274]. Alzheimer’s disease also presents notable diversity among individuals, evident in both its clinical expression and its underlying pathological mechanisms. Although memory impairment is a primary symptom, the impact on other cognitive functions, such as executive function, language, and visuospatial skills, varies among patients, and the pace of disease progression differs significantly from person to person. Various factors contribute to this diversity, including age at onset, genetic predisposition, the presence of other health conditions, environmental influences, and individual differences in brain resilience and compensatory mechanisms [275]. Moreover, AD frequently co-occurs with other neurodegenerative conditions, with stroke and vascular dementia emerging as the most prevalent comorbidities [276]. The diagnostic landscape is further convoluted by significant clinical variability in the age of onset and in neurological and cognitive characteristics. AD patients may present with concomitant pathologies such as TDP-43 proteinopathy, Lewy body disease, and cerebrovascular disease, further confounding diagnosis [277]. Notably, a definitive diagnosis of AD requires the presence of intracellular neurofibrillary tangles and extracellular amyloid plaques, adding further complexity to the diagnostic process [278].
In conclusion, the multifaceted nature of AD subtypes underscores the need for a nuanced and comprehensive approach to disease characterization and diagnosis. By recognizing the diverse manifestations of AD and its associated subtypes, clinicians can tailor interventions to individual patients, optimizing clinical outcomes and enhancing patient care. Moreover, continued research into AD subtypes holds promise for the development of targeted therapeutic strategies that address specific pathological mechanisms underlying different AD variants, ultimately improving outcomes for individuals affected by this devastating disease.

8. Challenges

Deep learning architectures, such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), and generative modelling, have emerged as powerful tools in Alzheimer’s disease (AD) research. These architectures have shown great potential in analyzing various types of data, including imaging, genetic, and clinical data, to advance our understanding of the disease. However, despite their successes, they also face a number of challenges that need to be addressed in order to maximize their impact and applicability in AD research.
One of the challenges faced by RNNs is the limited availability of longitudinal datasets. RNNs excel at modelling temporal dependencies and capturing sequential patterns, making them well-suited for analyzing disease progression over time. However, training robust RNN models requires large-scale longitudinal datasets covering diverse AD populations, and such data remain difficult and costly to collect. Additionally, the heterogeneity of AD data poses a challenge for RNNs. AD is a complex and multifaceted disease, and there is significant variability in data acquisition protocols and demographic factors across studies. This heterogeneity requires researchers to develop more sophisticated modelling techniques to effectively capture and generalize the patterns in AD data.
Interpretability and explainability are also important challenges for RNNs in AD research. RNNs are often regarded as black-box models, making interpreting and explaining their predictions difficult. To address this, researchers need to explore methods for extracting meaningful features, visualizing temporal patterns, and providing explanations for RNN-based predictions. This will help gain insights into the underlying neurobiological processes and enhance the clinical utility of RNN models.
CNNs, on the other hand, have demonstrated remarkable performance in analyzing medical imaging data, including brain MRI and PET scans. However, they face their own set of challenges in AD research. One such challenge is the need for large and diverse datasets to train CNN models effectively. AD data are often limited in size and can exhibit class imbalances, requiring careful data augmentation strategies and techniques to address these issues. Furthermore, CNNs struggle with generalizing across different imaging modalities and acquisition protocols. AD studies often involve multi-site collaborations and variations in imaging protocols that can introduce unwanted variability. Developing robust techniques to handle these challenges and ensure model generalization is a key area of research.
Generative modelling approaches, such as generative adversarial networks (GANs), offer exciting possibilities for data augmentation, image synthesis, and generating realistic brain images. However, there are challenges that need to be addressed in this domain as well. Training GANs for AD research requires access to large and diverse datasets, which can be difficult to obtain due to privacy concerns and data availability. Additionally, ensuring that the generated images are biologically plausible and representative of the underlying AD pathology is a critical challenge. Striking a balance between data augmentation and maintaining the integrity of AD-specific features is a topic of ongoing research.
While deep learning architectures have shown promise in AD research, they also face challenges that need to be overcome to fully harness their potential. Addressing the limitations and developing innovative solutions in data availability, heterogeneity, interpretability, generalization, and biological plausibility will contribute to advancing AD research and ultimately improve our understanding and management of the disease.

9. Future Perspectives and Recommendations

Deep learning architectures, including recurrent neural networks (RNNs), convolutional neural networks (CNNs), and generative modelling, have shown great promise in the field of Alzheimer’s disease (AD) research. These advanced techniques have provided valuable insights and improved our understanding of the disease. As we look to the future, several perspectives and recommendations can guide further advancements in deep learning for AD research.
One important perspective is the integration of multiple modalities. Deep learning models should continue to explore the combination of various data sources, such as neuroimaging, genetics, and clinical data. By leveraging the complementary information from these modalities, we can enhance the accuracy of AD diagnosis, prognosis, and treatment response prediction. Integrating multimodal data can provide a more comprehensive view of the disease and enable the development of personalized treatment strategies.
Another key perspective is the analysis of longitudinal data. AD is a progressive disease that unfolds over time, and capturing the dynamic changes is crucial for understanding its trajectory. Deep learning architectures can be further developed to effectively model and analyze longitudinal data, enabling researchers to track disease progression and identify early biomarkers of AD. Longitudinal analysis can provide valuable insights into disease mechanisms and aid in developing targeted interventions.
Furthermore, it is important to address the challenges associated with limited data availability in AD research. Deep learning techniques often require large amounts of labelled data for optimal performance; however, AD datasets are typically limited due to the difficulty and cost of data collection. To overcome this challenge, researchers can explore transfer learning techniques, in which models pre-trained on related tasks or datasets are fine-tuned for AD analysis. Additionally, data augmentation strategies can artificially increase the size and diversity of the available data, enabling more robust and generalizable models.
In terms of model interpretability, future research should focus on developing techniques to enhance the transparency and explainability of deep learning models in AD diagnosis and prediction. Interpretability in medical applications is crucial to gain the trust and acceptance of clinicians and ensure the ethical use of AI technologies. Efforts should be made to incorporate interpretable components, such as attention mechanisms or saliency maps, into deep learning architectures for AD analysis.
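As one concrete example of such a component, the sketch below computes a gradient-based saliency map, that is, the gradient of the predicted class score with respect to the input, which highlights the voxels or pixels the model relied on; the tiny CNN is only a stand-in used to keep the example self-contained.
```python
# Gradient-based saliency sketch (PyTorch): the input gradient of the winning
# class score indicates which pixels influenced the prediction most.
# The small CNN is a placeholder, not a model from the reviewed studies.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)

scan = torch.rand(1, 1, 64, 64, requires_grad=True)   # dummy MRI slice
logits = model(scan)                                  # (1, 2) class scores
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()                 # gradient of the top class score
saliency = scan.grad.abs().squeeze()                  # (64, 64) importance map
```
Overlaying such maps on the original scan gives clinicians a visual indication of which regions supported the model’s decision, complementing attention weights as an interpretability aid.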
Establishing standardized benchmarks and evaluation protocols for AD-related deep learning tasks is recommended to promote collaboration and accelerate progress in the field. This would allow for fair comparisons between different models and facilitate the reproducibility of research findings. Furthermore, the sharing of well-curated and annotated datasets can help overcome the limitations of data scarcity and encourage the development of novel algorithms and methodologies.
Deep learning architectures hold great potential for advancing our understanding of AD and improving diagnosis, prognosis, and treatment. By integrating multiple modalities, analyzing longitudinal data, addressing data limitations, enhancing interpretability, and fostering collaboration, we can pave the way for more accurate, efficient, and interpretable deep learning models in AD research. These efforts have the potential to transform clinical practice and contribute to the development of personalized and targeted interventions for individuals at risk or affected by AD.

10. Conclusions

In conclusion, this systematic literature review has provided valuable insights into the current state of Alzheimer’s disease (AD) detection using deep learning approaches. This review highlights the potential of deep models, particularly in neuroimaging, for accurate AD detection and emphasizes the importance of highly discriminative feature representations.
The analysis of various biomarkers, features, and pre-processing techniques for neuroimaging data from single-modality and multi-modality studies has demonstrated the versatility of deep learning models in capturing the complex patterns associated with AD. Specifically, deep learning architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative models have been examined for their performance in AD detection.
Despite the promising results, this review also identifies several challenges that need to be addressed. The limited availability of datasets and the need for robust training procedures pose significant hurdles in achieving optimal performance with deep learning models. These challenges highlight the importance of developing benchmark platforms and standardized evaluation protocols to facilitate comparative analysis and foster collaboration in the field.
Looking ahead, future research directions should focus on overcoming the limitations identified in this review. The development of highly discriminative feature representations that can effectively differentiate AD from similar brain patterns is crucial. Additionally, advancements in model architectures and training methodologies are necessary to enhance the performance and generalizability of deep learning models for AD detection.
The findings of this review underscore the potential of deep learning in improving the diagnostic accuracy of AD. However, it is essential to recognize that deep learning is not a standalone solution, and it should be integrated with other clinical data and diagnostic tools to achieve comprehensive and accurate AD detection.
In summary, deep learning holds significant promise for advancing AD detection. However, further advancements in models and methodologies are necessary to overcome the challenges associated with limited datasets and training procedures. By addressing these challenges and promoting collaboration and standardization, deep learning can contribute to the development of practical diagnostic methods for AD, leading to earlier detection and intervention for improved patient outcomes.

Author Contributions

M.G.A.: conceptualization, methodology, software, visualization, validation, data curation, and writing—original draft preparation; S.L. and K.S.: methodology, supervision, project administration, and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This study received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Vaupel, J.W. Biodemography of human ageing. Nature 2010, 464, 536–542. [Google Scholar] [CrossRef]
  2. Samir, K.C.; Lutz, W. The human core of the shared socioeconomic pathways: Population scenarios by age, sex and level of education for all countries to 2100. Glob. Environ. Chang. 2017, 42, 181–192. [Google Scholar] [CrossRef]
  3. Godyń, J.; Jończyk, J.; Panek, D.; Malawska, B. Therapeutic strategies for Alzheimer’s disease in clinical trials. Pharmacol. Rep. 2016, 68, 127–138. [Google Scholar] [CrossRef]
  4. Blaikie, L.; Kay, G.; Lin, P.K.T. Current and emerging therapeutic targets of Alzheimer’s disease for the design of multi-target directed ligands. MedChemComm 2019, 10, 2052–2072. [Google Scholar] [CrossRef]
  5. Gutierrez, B.A.; Limon, A. Synaptic disruption by soluble oligomers in patients with Alzheimer’s and Parkinson’s disease. Biomedicines 2022, 10, 1743. [Google Scholar] [CrossRef]
  6. Zvěřová, M. Clinical aspects of Alzheimer’s disease. Clin. Biochem. 2019, 72, 3–6. [Google Scholar] [CrossRef]
  7. Bronzuoli, M.R.; Iacomino, A.; Steardo, L.; Scuderi, C. Targeting neuroinflammation in Alzheimer’s disease. J. Inflamm. Res. 2016, 9, 199–208. [Google Scholar] [CrossRef] [PubMed]
  8. Aljunid, S.M.; Maimaiti, N.; Ahmed, Z.; Nur, A.M.; Nor, N.M.; Ismail, N.; Haron, S.A.; Shafie, A.A.; Salleh, M.; Yusuf, S.; et al. Development of clinical pathway for mild cognitive impairment and dementia to quantify cost of age-related cognitive disorders in Malaysia. Malays. J. Public Health Med. 2014, 14, 88–96. [Google Scholar]
  9. Hu, K.; Wang, Y.; Chen, K.; Hou, L.; Zhang, X. Multi-scale features extraction from baseline structure MRI for MCI patient classification and AD early diagnosis. Neurocomputing 2016, 175, 132–145. [Google Scholar] [CrossRef]
  10. Eid, A.; Mhatre, I.; Richardson, J.R. Gene-environment interactions in Alzheimer’s disease: A potential path to precision medicine. Pharmacol. Ther. 2019, 199, 173–187. [Google Scholar] [CrossRef]
  11. McKhann, G.; Drachman, D.; Folstein, M.; Katzman, R.; Price, D.; Stadlan, E.M. Clinical diagnosis of Alzheimer’s disease: Report of the NINCDS—ADRDA Work Group under the auspices of Department of Health and Human Services Task Force on Alzheimer’s Disease. Neurology 1984, 34, 939. [Google Scholar] [CrossRef] [PubMed]
  12. Dubois, B.; Feldman, H.H.; Jacova, C.; DeKosky, S.T.; Barberger-Gateau, P.; Cummings, J.L.; Delacourte, A.; Galasko, D.; Gauthier, S.; Jicha, G.A.; et al. Research criteria for the diagnosis of Alzheimer’s disease: Revising the NINCDS–ADRDA criteria. Lancet Neurol. 2007, 6, 734–746. [Google Scholar] [CrossRef] [PubMed]
  13. Jack, C.R., Jr.; Bennett, D.A.; Blennow, K.; Carrillo, M.C.; Dunn, B.; Haeberlein, S.B.; Holtzman, D.M.; Jagust, W.; Jessen, F.; Karlawish, J.; et al. NIA-AA research framework: Toward a biological definition of Alzheimer’s disease. Alzheimer’s Dement. 2018, 14, 535–562. [Google Scholar] [CrossRef]
  14. Arevalo-Rodriguez, I.; Smailagic, N.; I Figuls, M.R.; Ciapponi, A.; Sanchez-Perez, E.; Giannakou, A.; Pedraza, O.L.; Cosp, X.B.; Cullum, S. Mini-Mental State Examination (MMSE) for the detection of Alzheimer’s disease and other dementias in people with mild cognitive impairment (MCI). Cochrane Database Syst. Rev. 2015, 5, 3. [Google Scholar] [CrossRef]
  15. Wei, Y.-C.; Chen, C.-K.; Lin, C.; Chen, P.-Y.; Hsu, P.-C.; Lin, C.-P.; Shyu, Y.-C.; Huang, W.-Y. Normative Data of Mini-Mental State Examination, Montreal Cognitive Assessment, and Alzheimer’s Disease Assessment Scale-Cognitive Subscale of Community-Dwelling Older Adults in Taiwan. Dement. Geriatr. Cogn. Disord. 2022, 51, 365–376. [Google Scholar] [CrossRef]
  16. Qiao, H.; Chen, L.; Zhu, F. Ranking convolutional neural network for Alzheimer’s disease mini-mental state examination prediction at multiple time-points. Comput. Methods Programs Biomed. 2022, 213, 106503. [Google Scholar] [CrossRef]
  17. Morris, J.C. Clinical dementia rating: A reliable and valid diagnostic and staging measure for dementia of the Alzheimer type. Int. Psychogeriatr. 1997, 9, 173–176. [Google Scholar] [CrossRef] [PubMed]
  18. Cedarbaum, J.M.; Jaros, M.; Hernandez, C.; Coley, N.; Andrieu, S.; Grundman, M.; Vellas, B.; Alzheimer’s Disease Neuroimaging Initiative. Rationale for use of the Clinical Dementia Rating Sum of Boxes as a primary outcome measure for Alzheimer’s disease clinical trials. Alzheimer’s Dement. 2013, 9, S45–S55. [Google Scholar] [CrossRef]
  19. Williams, M.M.; Storandt, M.; Roe, C.M.; Morris, J.C. Progression of Alzheimer’s disease as measured by Clinical Dementia Rating Sum of Boxes scores. Alzheimer’s Dement. 2013, 9, S39–S44. [Google Scholar] [CrossRef]
  20. Audronyte, E.; Pakulaite-Kazliene, G.; Sutnikiene, V.; Kaubrys, G. Properties of odor identification testing in screening for early-stage Alzheimer’s disease. Sci. Rep. 2023, 13, 6075. [Google Scholar] [CrossRef]
  21. Rodriguez-Santiago, M.A.; Sepulveda, V.; Valentin, E.M.; Arnold, S.E.; Jiménez-Velázquez, I.Z.; Wojna, V. Diagnosing Alzheimer Disease: Which Dementia Screening Tool to Use in Elderly Puerto Ricans with Mild Cognitive Impairment and Early Alzheimer Disease? Alzheimer’s Dement. 2022, 18, e062560. [Google Scholar] [CrossRef]
  22. Tsoy, E.; VandeVrede, L.; Rojas, J.C.; Possin, K.L. Cognitive assessment in diverse populations: Implications for Alzheimer’s disease clinical trials. Alzheimer’s Dement. 2022, 18, e064114. [Google Scholar] [CrossRef]
  23. Celik, S.; Onur, O.; Yener, G.; Kessler, J.; Özbek, Y.; Meyer, P.; Frölich, L.; Teichmann, B. Cross-cultural comparison of MMSE and RUDAS in German and Turkish patients with Alzheimer’s disease. Neuropsychology 2022, 36, 195–205. [Google Scholar] [CrossRef] [PubMed]
  24. Chen, X.; Zhang, W.; Lin, Z.; Zheng, C.; Chen, S.; Zhou, H.; Liu, Z. Preliminary evidence for developing safe and efficient fecal microbiota transplantation as potential treatment for aged related cognitive impairments. Front. Cell. Infect. Microbiol. 2023, 13, 211. [Google Scholar] [CrossRef] [PubMed]
  25. Adair, T.; Temple, J.; Anstey, K.J.; Lopez, A.D. Is the rise in reported dementia mortality real? Analysis of multiple-cause-of-death data for Australia and the United States. Am. J. Epidemiol. 2022, 191, 1270–1279. [Google Scholar] [CrossRef] [PubMed]
  26. Patel, D.; Montayre, J.; Karamacoska, D.; Siette, J. Progressing dementia risk reduction initiatives for culturally and linguistically diverse older adults in Australia. Australas. J. Ageing 2022, 41, 579–584. [Google Scholar] [CrossRef]
  27. Brennan, F.; Chapman, M.; Gardiner, M.D.; Narasimhan, M.; Cohen, J. Our dementia challenge: Arise palliative care. Intern. Med. J. 2023, 53, 186–193. [Google Scholar] [CrossRef] [PubMed]
  28. Ebrahimighahnavieh, A.; Luo, S.; Chiong, R. Deep learning to detect Alzheimer’s disease from neuroimaging: A systematic literature review. Comput. Methods Programs Biomed. 2020, 187, 105242. [Google Scholar] [CrossRef]
  29. Fathi, S.; Ahmadi, M.; Dehnad, A. Early diagnosis of Alzheimer’s disease based on deep learning: A systematic review. Comput. Biol. Med. 2022, 146, 105634. [Google Scholar] [CrossRef] [PubMed]
  30. Binaco, R.; Calzaretto, N.; Epifano, J.; McGuire, S.; Umer, M.; Emrani, S.; Wasserman, V.; Libon, D.J.; Polikar, R. Machine learning analysis of digital clock drawing test performance for differential classification of mild cognitive impairment subtypes versus Alzheimer’s disease. J. Int. Neuropsychol. Soc. 2020, 26, 690–700. [Google Scholar] [CrossRef]
  31. Karikari, T.K.; Ashton, N.J.; Brinkmalm, G.; Brum, W.S.; Benedet, A.L.; Montoliu-Gaya, L.; Lantero-Rodriguez, J.; Pascoal, T.A.; Suárez-Calvet, M.; Rosa-Neto, P.; et al. Blood phospho-tau in Alzheimer disease: Analysis, interpretation, and clinical utility. Nat. Rev. Neurol. 2022, 18, 400–418. [Google Scholar] [CrossRef]
  32. Wang, M.; Song, W.M.; Ming, C.; Wang, Q.; Zhou, X.; Xu, P.; Krek, A.; Yoon, Y.; Ho, L.; Orr, M.E.; et al. Guidelines for bioinformatics of single-cell sequencing data analysis in Alzheimer’s disease: Review, recommendation, implementation and application. Mol. Neurodegener. 2022, 17, 1–52. [Google Scholar] [CrossRef]
  33. Zhang, Y.; Wu, K.-M.; Yang, L.; Dong, Q.; Yu, J.-T. Tauopathies: New perspectives and challenges. Mol. Neurodegener. 2022, 17, 28. [Google Scholar] [CrossRef]
  34. Klyucherev, T.O.; Olszewski, P.; Shalimova, A.A.; Chubarev, V.N.; Tarasov, V.V.; Attwood, M.M.; Syvänen, S.; Schiöth, H.B. Advances in the development of new biomarkers for Alzheimer’s disease. Transl. Neurodegener. 2022, 11, 1–24. [Google Scholar] [CrossRef] [PubMed]
  35. Zhao, K.; Duka, B.; Xie, H.; Oathes, D.J.; Calhoun, V.; Zhang, Y. A dynamic graph convolutional neural network framework reveals new insights into connectome dysfunctions in ADHD. NeuroImage 2022, 246, 118774. [Google Scholar] [CrossRef]
  36. Jiang, Y.; Zhou, X.; Ip, F.C.; Chan, P.; Chen, Y.; Lai, N.C.; Cheung, K.; Lo, R.M.; Tong, E.P.; Wong, B.W.; et al. Large-scale plasma proteomic profiling identifies a high-performance biomarker panel for Alzheimer’s disease screening and staging. Alzheimer’s Dement. 2022, 18, 88–102. [Google Scholar] [CrossRef] [PubMed]
  37. Gómez-Isla, T.; Frosch, M.P. Lesions without symptoms: Understanding resilience to Alzheimer disease neuropathological changes. Nat. Rev. Neurol. 2022, 18, 323–332. [Google Scholar] [CrossRef]
  38. Liu, C.; Xiang, X.; Han, S.; Lim, H.Y.; Li, L.; Zhang, X.; Ma, Z.; Yang, L.; Guo, S.; Soo, R.; et al. Blood-based liquid biopsy: Insights into early detection and clinical management of lung cancer. Cancer Lett. 2022, 524, 91–102. [Google Scholar] [CrossRef]
  39. Hernandez, M.; Ramon-Julvez, U.; Ferraz, F.; with the ADNI Consortium. Explainable AI toward understanding the performance of the top three TADPOLE Challenge methods in the forecast of Alzheimer’s disease diagnosis. PLoS ONE 2022, 17, e0264695. [Google Scholar] [CrossRef] [PubMed]
  40. Lydon, E.A.; Nguyen, L.T.; Shende, S.A.; Chiang, H.-S.; Spence, J.S.; Mudar, R.A. EEG theta and alpha oscillations in early versus late mild cognitive impairment during a semantic Go/NoGo task. Behav. Brain Res. 2022, 416, 113539. [Google Scholar] [CrossRef]
  41. Yadav, S.; Zhou Shu, K.; Zachary, Z.; Yueyang, G.; Lana, X.; ADNI Consortium. Integrated Metabolomics and Transcriptomics Analysis Identifies Molecular Subtypes within the Early and Late Mild Cognitive Impairment Stages of Alzheimer’s Disease. medRxiv 2023. [Google Scholar] [CrossRef]
  42. Jeyavathana, R.B.; Balasubramanian, R.; Pandian, A.A. A survey: Analysis on pre-processing and segmentation techniques for medical images. Int. J. Res. Sci. Innov. (IJRSI) 2016, 3, 113–120. [Google Scholar]
  43. James, A.P.; Dasarathy, B.V. Medical image fusion: A survey of the state of the art. Inf. Fusion 2014, 19, 4–19. [Google Scholar] [CrossRef]
  44. Balagurunathan, Y.; Mitchell, R.; El Naqa, I. Requirements and reliability of AI in the medical context. Phys. Medica 2021, 83, 72–78. [Google Scholar] [CrossRef] [PubMed]
  45. Nithya, V.P.; Mohanasundaram, N. An Extensive Survey of Various Deep Learning Approaches for Predicting Alzheimer’s Disease. Ann. Rom. Soc. Cell Biol. 2021, 25, 848–860. [Google Scholar]
  46. Pan, Y.; Liu, M.; Lian, C.; Xia, Y.; Shen, D. Spatially-constrained fisher representation for brain disease identification with incomplete multi-modal neuroimages. IEEE Trans. Med. Imaging 2020, 39, 2965–2975. [Google Scholar] [CrossRef]
  47. Liu, M.; Zhang, J.; Adeli, E.; Shen, D. Landmark-based deep multi-instance learning for brain disease diagnosis. Med. Image Anal. 2017, 43, 157–168. [Google Scholar] [CrossRef]
  48. Ebrahimi, A.; Luo, S.; Alzheimer’s Disease Neuroimaging Initiative. Convolutional neural networks for Alzheimer’s disease detection on MRI images. J. Med. Imaging 2021, 8, 024503. [Google Scholar] [CrossRef]
  49. Lee, J.-G.; Jun, S.; Cho, Y.-W.; Lee, H.; Kim, G.B.; Seo, J.B.; Kim, N. Deep learning in medical imaging: General overview. Korean J. Radiol. 2017, 18, 570–584. [Google Scholar] [CrossRef]
  50. Mirzaei, G.; Adeli, H. Machine learning techniques for diagnosis of Alzheimer disease, mild cognitive disorder, and other types of dementia. Biomed. Signal Process. Control 2020, 72, 103293. [Google Scholar] [CrossRef]
  51. Feng, J.; Zhang, S.-W.; Chen, L.; Zuo, C. Detection of Alzheimer’s disease using features of brain region-of-interest-based individual network constructed with the sMRI image. Comput. Med. Imaging Graph. 2022, 98, 102057. [Google Scholar] [CrossRef] [PubMed]
  52. El-Sappagh, S.; Ali, F.; Abuhmed, T.; Singh, J.; Alonso, J.M. Automatic detection of Alzheimer’s disease progression: An efficient information fusion approach with heterogeneous ensemble classifiers. Neurocomputing 2022, 512, 203–224. [Google Scholar] [CrossRef]
  53. Dubois, B.; Picard, G.; Sarazin, M. Early detection of Alzheimer’s disease: New diagnostic criteria. Dialog. Clin. Neurosci. 2022, 10, 35. [Google Scholar] [CrossRef] [PubMed]
  54. Hamdi, M.; Bourouis, S.; Rastislav, K.; Mohmed, F. Evaluation of Neuro Images for the Diagnosis of Alzheimer’s Disease Using Deep Learning Neural Network. Front. Public Health 2022, 10, 834032. [Google Scholar] [CrossRef]
  55. Kim, J.S.; Han, J.W.; Bin Bae, J.; Moon, D.G.; Shin, J.; Kong, J.E.; Lee, H.; Yang, H.W.; Lim, E.; Sunwoo, L.; et al. Deep learning-based diagnosis of Alzheimer’s disease using brain magnetic resonance images: An empirical study. Sci. Rep. 2022, 12, 18007. [Google Scholar] [CrossRef]
  56. Qiu, S.; Miller, M.I.; Joshi, P.S.; Lee, J.C.; Xue, C.; Ni, Y.; Wang, Y.; De Anda-Duran, I.; Hwang, P.H.; Cramer, J.A.; et al. Multimodal deep learning for Alzheimer’s disease dementia assessment. Nat. Commun. 2022, 13, 3404. [Google Scholar] [CrossRef] [PubMed]
  57. Goel, T.; Sharma, R.; Tanveer, M.; Suganthan, P.N.; Maji, K.; Pilli, R. Multimodal Neuroimaging based Alzheimer’s Disease Diagnosis using Evolutionary RVFL Classifier. IEEE J. Biomed. Health Inform. 2023, 1–9. [Google Scholar] [CrossRef]
  58. Tu, Y.; Lin, S.; Qiao, J.; Zhuang, Y.; Zhang, P. Alzheimer’s disease diagnosis via multimodal feature fusion. Comput. Biol. Med. 2022, 148, 105901. [Google Scholar] [CrossRef]
  59. Jia, H.; Lao, H. Deep learning and multimodal feature fusion for the aided diagnosis of Alzheimer’s disease. Neural Comput. Appl. 2022, 34, 19585–19598. [Google Scholar] [CrossRef]
  60. Kong, Z.; Zhang, M.; Zhu, W.; Yi, Y.; Wang, T.; Zhang, B. Multi-modal data Alzheimer’s disease detection based on 3D convolution. Biomed. Signal Process. Control 2022, 75, 103565. [Google Scholar] [CrossRef]
  61. AlSaeed, D.; Omar, S.F. Brain MRI analysis for Alzheimer’s disease diagnosis using CNN-based feature extraction and machine learning. Sensors 2022, 22, 2911. [Google Scholar] [CrossRef] [PubMed]
  62. Rossini, P.M.; Miraglia, F.; Vecchio, F. Early dementia diagnosis, MCI-to-dementia risk prediction, and the role of machine learning methods for feature extraction from integrated biomarkers, in particular for EEG signal analysis. Alzheimer’s Dement. 2022, 18, 2699–2706. [Google Scholar] [CrossRef]
  63. Taghavirashidizadeh, A.; Sharifi, F.; Vahabi, S.A.; Hejazi, A.; SaghabTorbati, M.; Mohammed, A.S. WTD-PSD: Presentation of novel feature extraction method based on discrete wavelet transformation and time-dependent power spectrum descriptors for diagnosis of Alzheimer’s disease. Comput. Intell. Neurosci. 2022, 2022, 9554768. [Google Scholar] [CrossRef] [PubMed]
  64. Mahendran, N.; PM, D.R.V. A deep learning framework with an embedded-based feature selection approach for the early detection of the Alzheimer’s disease. Comput. Biol. Med. 2022, 141, 105056. [Google Scholar] [CrossRef] [PubMed]
  65. Shankar, V.G.; Sisodia, D.S.; Chandrakar, P. A novel discriminant feature selection–based mutual information extraction from MR brain images for Alzheimer’s stages detection and prediction. Int. J. Imaging Syst. Technol. 2022, 32, 1172–1191. [Google Scholar] [CrossRef]
  66. Zhou, K.; Liu, Z.; He, W.; Cai, J.; Hu, L. Application of 3D Whole-Brain Texture Analysis and the Feature Selection Method Based on within-Class Scatter in the Classification and Diagnosis of Alzheimer’s Disease. Ther. Innov. Regul. Sci. 2022, 56, 561–571. [Google Scholar] [CrossRef]
  67. Amrutesh, A.; CG, G.B.; Amruthamsh, A.; KP, A.R.; Gowrishankar, S. Alzheimer’s Disease Prediction using Machine Learning and Transfer Learning Models. In Proceedings of the 2022 6th International Conference on Computation System and Information Technology for Sustainable Solutions (CSITSS), Bangalore, India, 21–23 December 2022; pp. 1–6. [Google Scholar]
  68. Weerasinghe, M.S.U. Pre-Detection of Dementia Using Machine Learning Mechanism. Master’s Thesis, University of Colombo, Colombo, Sri Lanka, 2022. [Google Scholar]
  69. Valliani, A.A.-A.; Ranti, D.; Oermann, E.K. Deep learning and neurology: A systematic review. Neurol. Ther. 2019, 8, 351–365. [Google Scholar] [CrossRef]
  70. Pang, G.; Shen, C.; Cao, L.; Van Den Hengel, A. Deep learning for anomaly detection: A review. ACM Comput. Surv. (CSUR) 2021, 54, 1–38. [Google Scholar] [CrossRef]
  71. Jo, T.; Nho, K.; Saykin, A.J. Deep learning in Alzheimer’s disease: Diagnostic classification and prognostic prediction using neuroimaging data. Front. Aging Neurosci. 2019, 11, 220. [Google Scholar] [CrossRef]
  72. Wang, N.; Wang, Y.; Er, M.J. Review on deep learning techniques for marine object recognition: Architectures and algorithms. Control Eng. Pract. 2020, 118, 104458. [Google Scholar] [CrossRef]
  73. Francies, M.L.; Ata, M.M.; Mohamed, M.A. A robust multiclass 3D object recognition based on modern YOLO deep learning algorithms. Concurr. Comput. Pract. Exp. 2022, 34, e6517. [Google Scholar] [CrossRef]
  74. Balamurugan, D.; Aravinth, S.S.; Reddy, P.C.S.; Rupani, A.; Manikandan, A. Multiview objects recognition using deep learning-based wrap-CNN with voting scheme. Neural Process. Lett. 2022, 54, 1495–1521. [Google Scholar] [CrossRef]
  75. Ghasemi, Y.; Jeong, H.; Choi, S.H.; Park, K.-B.; Lee, J.Y. Deep learning-based object detection in augmented reality: A systematic review. Comput. Ind. 2022, 139, 103661. [Google Scholar] [CrossRef]
  76. Zaidi, S.S.A.; Ansari, M.S.; Aslam, A.; Kanwal, N.; Asghar, M.; Lee, B. A survey of modern deep learning based object detection models. Digit. Signal Process. 2022, 126, 103514. [Google Scholar] [CrossRef]
  77. Kang, J.; Tariq, S.; Oh, H.; Woo, S.S. A survey of deep learning-based object detection methods and datasets for overhead imagery. IEEE Access 2022, 10, 20118–20134. [Google Scholar] [CrossRef]
  78. Chu, P.; Wang, J.; You, Q.; Ling, H.; Liu, Z. Transmot: Spatial-temporal graph transformer for multiple object tracking. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–7 January 2023; pp. 4870–4880. [Google Scholar]
  79. Tang, X.; Zhang, Z.; Qin, Y. On-road object detection and tracking based on radar and vision fusion: A review. IEEE Intell. Transp. Syst. Mag. 2022, 14, 103–128. [Google Scholar] [CrossRef]
  80. Vaquero, L.; Brea, V.M.; Mucientes, M. Tracking more than 100 arbitrary objects at 25 FPS through deep learning. Pattern Recognit. 2022, 121, 108205. [Google Scholar] [CrossRef]
  81. Krentzel, D.; Shorte, S.L.; Zimmer, C. Deep learning in image-based phenotypic drug discovery. Trends Cell Biol. 2023, 33, 538–554. [Google Scholar] [CrossRef]
  82. Zhou, S.; Canchila, C.; Song, W. Deep learning-based crack segmentation for civil infrastructure: Data types, architectures, and benchmarked performance. Autom. Constr. 2023, 146, 104678. [Google Scholar] [CrossRef]
  83. He, S.; Bao, R.; Li, J.; Grant, P.E.; Ou, Y. Accuracy of segment-anything model (sam) in medical image segmentation tasks. arXiv 2023, arXiv:2304.09324. [Google Scholar]
  84. Hourri, S.; Kharroubi, J. A deep learning approach for speaker recognition. Int. J. Speech Technol. 2020, 23, 123–131. [Google Scholar] [CrossRef]
  85. Hourri, S.; Nikolov, N.S.; Kharroubi, J. A deep learning approach to integrate convolutional neural networks in speaker recognition. Int. J. Speech Technol. 2020, 23, 615–623. [Google Scholar] [CrossRef]
  86. Waldemar, G.; Dubois, B.; Emre, M.; Georges, J.; McKeith, I.G.; Rossor, M.; Scheltens, P.; Tariska, P.; Winblad, B. Recommendations for the diagnosis and management of Alzheimer’s disease and other disorders associated with dementia: EFNS guideline. Eur. J. Neurol. 2007, 14, e1–e26. [Google Scholar] [CrossRef]
  87. Shenton, M.E.; Hamoda, H.M.; Schneiderman, J.S.; Bouix, S.; Pasternak, O.; Rathi, Y.; Vu, M.-A.; Purohit, M.P.; Helmer, K.; Koerte, I.; et al. A review of magnetic resonance imaging and diffusion tensor imaging findings in mild traumatic brain injury. Brain Imaging Behav. 2012, 6, 137–192. [Google Scholar] [CrossRef]
  88. Matthews, P.M.; Peter, J. Functional magnetic resonance imaging. J. Neurol. Neurosurg. Psychiatry 2004, 75, 6–12. [Google Scholar]
  89. Klunk, W.E.; Mathis, C.A.; Price, J.C.; DeKosky, S.T.; Lopresti, B.J.; Tsopelas, N.D.; Judith, A.S.; Robert, D.N. Amyloid imaging with PET in Alzheimer’s disease, mild cognitive impairment, and clinically unimpaired subjects. In PET in the Evaluation of Alzheimer’s Disease and Related Disorders; Springer: New York, NY, USA, 2009; pp. 119–147. [Google Scholar]
  90. Basser, P.J. Inferring microstructural features and the physiological state of tissues from diffusion-weighted images. NMR Biomed. 1995, 8, 333–344. [Google Scholar] [CrossRef] [PubMed]
  91. Braak, H.; Alafuzoff, I.; Arzberger, T.; Kretzschmar, H.; Del Tredici, K. Staging of Alzheimer disease-associated neurofibrillary pathology using paraffin sections and immunocytochemistry. Acta Neuropathol. 2006, 112, 389–404. [Google Scholar] [CrossRef] [PubMed]
  92. Mueller, S.G.; Weiner, M.W.; Thal, L.J.; Petersen, R.C.; Jack, C.; Jagust, W.; Trojanowski, J.Q.; Toga, A.W.; Beckett, L. The Alzheimer’s disease neuroimaging initiative. Neuroimaging Clin. 2005, 15, 869–877. [Google Scholar] [CrossRef] [PubMed]
  93. Jagust, W. Imaging the evolution and pathophysiology of Alzheimer disease. Nat. Rev. Neurosci. 2018, 19, 687–700. [Google Scholar] [CrossRef] [PubMed]
  94. Weiner, M.W.; Veitch, D.P.; Aisen, P.S.; Beckett, L.A.; Cairns, N.J.; Green, R.C.; Harvey, D.; Jack, C.R.; Jagust, W.; Liu, E.; et al. The Alzheimer’s Disease Neuroimaging Initiative: A review of papers published since its inception. Alzheimer’s Dement. 2013, 9, e111–e194. [Google Scholar] [CrossRef]
  95. Klein, A.; Andersson, J.; Ardekani, B.A.; Ashburner, J.; Avants, B.; Chiang, M.-C.; Christensen, G.E.; Collins, D.L.; Gee, J.; Hellier, P.; et al. Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration. NeuroImage 2009, 46, 786–802. [Google Scholar] [CrossRef]
  96. Ashburner, J.; Friston, K.J. Voxel-based morphometry—The methods. Neuroimage 2000, 11, 805–821. [Google Scholar] [CrossRef]
  97. Choy, G.; Khalilzadeh, O.; Michalski, M.; Synho, D.; Samir, A.E.; Pianykh, O.S.; Geis, J.R.; Pandharipande, P.V.; Brink, J.A.; Dreyer, K.J. Current applications and future impact of machine learning in radiology. Radiology 2018, 288, 318–328. [Google Scholar] [CrossRef] [PubMed]
  98. Smith, S.M.; Jenkinson, M.; Woolrich, M.W.; Beckmann, C.F.; Behrens, T.E.; Johansen-Berg, H.; Bannister, P.R.; De Luca, M.; Drobnjak, I.; Flitney, D.E.; et al. Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage 2004, 23, S208–S219. [Google Scholar] [CrossRef]
  99. Jenkinson, M.; Bannister, P.; Brady, M.; Smith, S. Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage 2002, 17, 825–841. [Google Scholar] [CrossRef]
  100. Hashem, I.A.T.; Yaqoob, I.; Anuar, N.B.; Mokhtar, S.; Gani, A.; Khan, S.U. The rise of “big data” on cloud computing: Review and open research issues. Inf. Syst. 2015, 47, 98–115. [Google Scholar] [CrossRef]
  101. Gorgolewski, K.J.; Auer, T.; Calhoun, V.D.; Craddock, R.C.; Das, S.; Duff, E.P.; Flandin, G.; Ghosh, S.S.; Glatard, T.; Halchenko, Y.O.; et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Sci. Data 2016, 3, 160044. [Google Scholar] [CrossRef] [PubMed]
  102. Kandel, S.; Heer, J.; Plaisant, C.; Kennedy, J.; van Ham, F.; Riche, N.H.; Weaver, C.; Lee, B.; Brodbeck, D.; Buono, P. Research directions in data wrangling: Visualizations and transformations for usable and credible data. Inf. Vis. 2011, 10, 271–288. [Google Scholar] [CrossRef]
  103. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  104. Madrid, L.; Labrador, S.C.; González-Pérez, A.; Sáez, M.E.; Alzheimer’s Disease Neuroimaging Initiative (ADNI and others). Integrated Genomic, Transcriptomic and Proteomic Analysis for Identifying Markers of Alzheimer’s Disease. Diagnostics 2021, 11, 2303. [Google Scholar] [CrossRef]
  105. Kingma, D.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; pp. 1–15. [Google Scholar] [CrossRef]
  106. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  107. Jiang, X.; Chang, L.; Zhang, Y.-D. Classification of Alzheimer’s disease via eight-layer convolutional neural network with batch normalization and dropout techniques. J. Med. Imaging Health Inform. 2020, 10, 1040–1048. [Google Scholar] [CrossRef]
  108. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  109. Powers, D.M.W. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020, arXiv:2010.16061. [Google Scholar]
  110. Fawcett, T. An introduction to ROC analysis. Pattern Recogn. Lett. 2006, 27, 861–874. [Google Scholar] [CrossRef]
  111. Kohavi, R. A study of cross-validation and bootstrap for accuracy estimation and model selection. Int. Jt. Conf. Artif. Intell. 1995, 14, 1137–1145. [Google Scholar]
  112. Welton, T.; Kent, D.A.; Auer, D.P.; Dineen, R.A. Reproducibility of graph-theoretic brain network metrics: A systematic review. Brain Connect. 2015, 5, 193–202. [Google Scholar] [CrossRef]
  113. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef] [PubMed]
  114. De Leon, M.J.; DeSanti, S.; Zinkowski, R.; Mehta, P.D.; Pratico, D.; Segal, S.; Clark, C.; Kerkman, D.; DeBernardis, J.; Li, J.; et al. MRI and CSF studies in the early diagnosis of Alzheimer’s disease. J. Intern. Med. 2004, 256, 205–223. [Google Scholar] [CrossRef]
  115. Mendelson, A.F. Validating Supervised Learning Approaches to the Prediction of Disease Status in Neuroimaging. Ph.D. Thesis, UCL (University College London), London, UK, 2017. [Google Scholar]
  116. Yao, H.; Liu, Y.; Zhou, B.; Zhang, Z.; An, N.; Wang, P.; Wang, L.; Zhang, X.; Jiang, T. Decreased functional connectivity of the amygdala in Alzheimer’s disease revealed by resting-state fMRI. Eur. J. Radiol. 2013, 82, 1531–1538. [Google Scholar] [CrossRef]
  117. Miltiadous, A.; Tzimourta, K.D.; Giannakeas, N.; Tsipouras, M.G.; Afrantou, T.; Ioannidis, P.; Tzallas, A.T. Alzheimer’s disease and frontotemporal dementia: A robust classification method of eeg signals and a comparison of validation methods. Diagnostics 2021, 11, 1437. [Google Scholar] [CrossRef]
  118. Arena, F. Brain Network Analysis and Deep Learning Models for Studying Neurological Disorders Based on EEG Signal Processing. Ph.D. Thesis, University of Messina, Messina, Italy, 2019. [Google Scholar]
  119. Ge, Q.; Lin, Z.-C.; Gao, Y.-X.; Zhang, J.-X. A Robust Discriminant framework based on functional biomarkers of EEG and its potential for diagnosis of Alzheimer’s disease. Healthcare 2020, 8, 476. [Google Scholar] [CrossRef]
  120. Barzegaran, E.; van Damme, B.; Meuli, R.; Knyazeva, M.G. Perception-related EEG is more sensitive to Alzheimer’s disease effects than resting EEG. Neurobiol. Aging 2016, 43, 129–139. [Google Scholar] [CrossRef]
  121. Lu, L.; Sedor, J.R.; Gulani, V.; Schelling, J.R.; O’brien, A.; Flask, C.A.; Dell, K.M. Use of diffusion tensor MRI to Identify early changes in diabetic nephropathy. Am. J. Nephrol. 2011, 34, 476–482. [Google Scholar] [CrossRef]
  122. Guskiewicz, K.M.; Marshall, S.W.; Bailes, J.; McCrea, M.; Cantu, R.C.; Randolph, C.; Jordan, B.D. Association between recurrent concussion and late-life cognitive impairment in retired professional football players. Neurosurgery 2005, 57, 719–726. [Google Scholar] [CrossRef] [PubMed]
  123. Ebrahimi, A.; Luo, S.; Chiong, R. Introducing transfer learning to 3D ResNet-18 for Alzheimer’s disease detection on MRI images. In Proceedings of the 2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ), Wellington, New Zealand, 25–27 November 2020; pp. 1–6. [Google Scholar]
  124. Liu, Y.; Dwivedi, G.; Boussaid, F.; Sanfilippo, F.; Yamada, M.; Bennamoun, M. Inflating 2D Convolution Weights for Efficient Generation of 3D Medical Images. Comput. Methods Programs Biomed. 2023, 240, 107685. [Google Scholar] [CrossRef] [PubMed]
  125. Jang, J.; Hwang, D. M3T: Three-dimensional Medical image classifier using Multi-plane and Multi-slice Transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 20686–20697. [Google Scholar]
  126. Poldrack, R.A. Region of interest analysis for fMRI. Soc. Cogn. Affect. Neurosci. 2007, 2, 67–70. [Google Scholar] [CrossRef] [PubMed]
  127. Liu, M.; Zhang, J.; Nie, D.; Yap, P.-T.; Shen, D. Anatomical landmark based deep feature representation for MR images in brain disease diagnosis. IEEE J. Biomed. Health Inform. 2018, 22, 1476–1485. [Google Scholar] [CrossRef] [PubMed]
  128. Acharya, U.R.; Fernandes, S.L.; WeiKoh, J.E.; Ciaccio, E.J.; Fabell, M.K.M.; Tanik, U.J.; Rajinikanth, V.; Yeong, C.H. Automated detection of Alzheimer’s disease using brain MRI images—A study with various feature extraction techniques. J. Med. Syst. 2019, 43, 302. [Google Scholar] [CrossRef]
  129. Wang, X.; Liang, X.; Jiang, Z.; Nguchu, B.A.; Zhou, Y.; Wang, Y.; Wang, H.; Li, Y.; Zhu, Y.; Wu, F.; et al. Decoding and mapping task states of the human brain via deep learning. Hum. Brain Mapp. 2020, 41, 1505–1519. [Google Scholar] [CrossRef]
  130. Dua, S.; Srinivasan, P. A non-voxel based feature extraction to detect cognitive states in fMRI. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 4431–4434. [Google Scholar]
  131. Shresta, S.; Senanayake, S.A.; Triloka, J. Advanced Cascaded Anisotropic Convolutional Neural Network Architecture Based Optimized Feature Selection Brain Tumour Segmentation and Classification. In Proceedings of the 2020 5th International Conference on Innovative Technologies in Intelligent Systems and Industrial Applications (CITISIA), Sydney, Australia, 25–27 November 2020; pp. 1–11. [Google Scholar]
  132. Mahmood, R.; Ghimire, B. Automatic detection and classification of Alzheimer’s Disease from MRI scans using principal component analysis and artificial neural networks. In Proceedings of the 2013 20th International Conference on Systems, Signals and Image Processing (IWSSIP), Bucharest, Romania, 7–9 July 2013; pp. 133–137. [Google Scholar]
  133. López, M.; Ramírez, J.; Górriz, J.M.; Álvarez, I.; Salas-Gonzalez, D.; Segovia, F.; Chaves, R.; Padilla, P.; Gómez-Río, M. Principal component analysis-based techniques and supervised classification schemes for the early detection of Alzheimer’s disease. Neurocomputing 2011, 74, 1260–1271. [Google Scholar] [CrossRef]
  134. Rathore, S.; Habes, M.; Iftikhar, M.A.; Shacklett, A.; Davatzikos, C. A review on neuroimaging-based classification studies and associated feature extraction methods for Alzheimer’s disease and its prodromal stages. NeuroImage 2017, 155, 530–548. [Google Scholar] [CrossRef] [PubMed]
  135. Cui, Y.; Liu, B.; Luo, S.; Zhen, X.; Fan, M.; Liu, T.; Zhu, W.; Park, M.; Jiang, T.; Jin, J.S.; et al. Identification of conversion from mild cognitive impairment to Alzheimer’s disease using multivariate predictors. PLoS ONE 2011, 6, e21896. [Google Scholar] [CrossRef] [PubMed]
  136. Jones, S.E.; Mahmoud, S.Y.; Phillips, M.D. A practical clinical method to quantify language lateralization in fMRI using whole-brain analysis. NeuroImage 2011, 54, 2937–2949. [Google Scholar] [CrossRef] [PubMed]
  137. Jack, C.R., Jr.; Bernstein, M.A.; Fox, N.C.; Thompson, P.; Alexander, G.; Harvey, D.; Borowski, B.; Whitwell, P.J.L.; Ward, J.; Dale, A.M.; et al. The Alzheimer’s disease neuroimaging initiative (ADNI): MRI methods. J. Magn. Reson. Imaging Off. J. Int. Soc. Magn. Reson. Med. 2008, 27, 685–691. [Google Scholar] [CrossRef] [PubMed]
  138. Marcus, D.S.; Wang, T.H.; Parker, J.; Csernansky, J.G.; Morris, J.C.; Buckner, R.L. Open Access Series of Imaging Studies (OASIS): Cross-sectional MRI data in young, middle aged, nondemented, and demented older adults. J. Cogn. Neurosci. 2007, 19, 1498–1507. [Google Scholar] [CrossRef] [PubMed]
  139. Malone, I.B.; Cash, D.M.; Ridgway, G.R.; MacManus, D.G.; Ourselin, S.; Fox, N.C.; Schott, J.M. MIRIAD–Public release of a multiple time point Alzheimer’s MR imaging dataset. NeuroImage 2013, 70, 33–36. [Google Scholar] [CrossRef]
  140. Cardoso, B.R.; Hare, D.J.; Bush, A.I.; Li, Q.-X.; Fowler, C.J.; Masters, C.L.; Martins, R.N.; Ganio, K.; Lothian, A.; Mukherjee, S.; et al. Selenium levels in serum, red blood cells, and cerebrospinal fluid of Alzheimer’s disease patients: A report from the Australian Imaging, Biomarker & Lifestyle Flagship Study of Ageing (AIBL). J. Alzheimer’s Dis. 2017, 57, 183–193. [Google Scholar]
  141. Gupta, Y.; Lee, K.H.; Choi, K.Y.; Lee, J.J.; Kim, B.C.; Kwon, G.R.; National Research Center for Dementia; Alzheimer’s Disease Neuroimaging Initiative. Early diagnosis of Alzheimer’s disease using combined features from voxel-based morphometry and cortical, subcortical, and hippocampus regions of MRI T1 brain images. PLoS ONE 2019, 14, e0222446. [Google Scholar] [CrossRef]
  142. Katako, A.; Shelton, P.; Goertzen, A.L.; Levin, D.; Bybel, B.; Aljuaid, M.; Yoon, H.J.; Kang, D.Y.; Kim, S.M.; Lee, C.S.; et al. Machine learning identified an Alzheimer’s disease-related FDG-PET pattern which is also expressed in Lewy body dementia and Parkinson’s disease dementia. Sci. Rep. 2018, 8, 13236. [Google Scholar] [CrossRef]
  143. Freitas, S.; Simões, M.R.; Alves, L.; Santana, I. Montreal cognitive assessment: Validation study for mild cognitive impairment and Alzheimer disease. Alzheimer Dis. Assoc. Disord. 2013, 27, 37–43. [Google Scholar] [CrossRef]
  144. Desikan, R.S.; Cabral, H.J.; Settecase, F.; Hess, C.P.; Dillon, W.P.; Glastonbury, C.M.; Weiner, M.W.; Schmansky, N.J.; Salat, D.H.; Fischl, B. Automated MRI measures predict progression to Alzheimer’s disease. Neurobiol. Aging 2010, 31, 1364–1374. [Google Scholar] [CrossRef] [PubMed]
  145. Sharma, R.; Anand, H.; Badr, Y.; Qiu, R.G. Time-to-event prediction using survival analysis methods for Alzheimer’s disease progression. Alzheimer’s Dement. Transl. Res. Clin. Interv. 2021, 7, e12229. [Google Scholar] [CrossRef] [PubMed]
  146. Khojaste-Sarakhsi, M.; Haghighi, S.S.; Ghomi, S.F.; Marchiori, E. Deep learning for Alzheimer’s disease diagnosis: A survey. Artif. Intell. Med. 2022, 130, 102332. [Google Scholar] [CrossRef] [PubMed]
  147. Huang, H.-C.; Jiang, Z.-F. Accumulated amyloid-β peptide and hyperphosphorylated tau protein: Relationship and links in Alzheimer’s disease. J. Alzheimer’s Dis. 2009, 16, 15–27. [Google Scholar] [CrossRef]
  148. Johnson, K.A.; Fox, N.C.; Sperling, R.A.; Klunk, W.E. Brain imaging in Alzheimer disease. Cold Spring Harb. Perspect. Med. 2012, 2, a006213. [Google Scholar] [CrossRef] [PubMed]
  149. Thambisetty, M.; An, Y.; Kinsey, A.; Koka, D.; Saleem, M.; Güntert, A.; Kraut, M.; Ferrucci, L.; Davatzikos, C.; Lovestone, S.; et al. Plasma clusterin concentration is associated with longitudinal brain atrophy in mild cognitive impairment. Neuroimage 2012, 59, 212–217. [Google Scholar] [CrossRef]
  150. Storelli, L.; Azzimonti, M.; Gueye, M.; Vizzino, C.M.; Preziosa, P.M.; Tedeschi, G.; De Stefano, N.M.; Pantano, P.M.; Filippi, M.; Rocca, M.A. A deep learning approach to predicting disease progression in multiple sclerosis using magnetic resonance imaging. Investig. Radiol. 2022, 57, 423–432. [Google Scholar] [CrossRef]
  151. Kumar, P.; Nagar, P.; Arora, C.; Gupta, A. U-segnet: Fully convolutional neural network based automated brain tissue segmentation tool. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 3503–3507. [Google Scholar] [CrossRef]
  152. Narayana, P.A.; Coronado, I.; Sujit, S.J.; Sun, X.; Wolinsky, J.S.; Gabr, R.E. Are multi-contrast magnetic resonance images necessary for segmenting multiple sclerosis brains? A large cohort study based on deep learning. Magn. Reson. Imaging 2020, 65, 8–14. [Google Scholar] [CrossRef]
  153. Narayana, P.A.; Coronado, I.; Sujit, S.J.; Wolinsky, J.S.; Lublin, F.D.; Gabr, R.E. Deep-learning-based neural tissue segmentation of MRI in multiple sclerosis: Effect of training set size. J. Magn. Reson. Imaging 2020, 51, 1487–1496. [Google Scholar] [CrossRef]
  154. Zhang, J.; Zheng, B.; Gao, A.; Feng, X.; Liang, D.; Long, X. A 3D densely connected convolution neural network with connection-wise attention mechanism for Alzheimer’s disease classification. Magn. Reson. Imaging 2021, 78, 119–126. [Google Scholar] [CrossRef]
  155. Ben Ahmed, O.; Mizotin, M.; Benois-Pineau, J.; Allard, M.; Catheline, G.; Ben Amar, C.; Alzheimer’s Disease Neuroimaging Initiative. Alzheimer’s disease diagnosis on structural MR images using circular harmonic functions descriptors on hippocampus and posterior cingulate cortex. Comput. Med. Imaging Graph. 2015, 44, 13–25. [Google Scholar] [CrossRef]
  156. Chen, L.; Qiao, H.; Zhu, F. Alzheimer’s Disease Diagnosis With Brain Structural MRI Using Multiview-Slice Attention and 3D Convolution Neural Network. Front. Aging Neurosci. 2022, 14, 871706. [Google Scholar] [CrossRef]
  157. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6999–7019. [Google Scholar] [CrossRef]
  158. Bin Bae, J.; Lee, S.; Jung, W.; Park, S.; Kim, W.; Oh, H.; Han, J.W.; Kim, G.E.; Kim, J.S.; Kim, J.H.; et al. Identification of Alzheimer’s disease using a convolutional neural network model based on T1-weighted magnetic resonance imaging. Sci. Rep. 2020, 10, 22252. [Google Scholar] [CrossRef]
159. Aderghal, K.; Benois-Pineau, J.; Afdel, K. Classification of sMRI for Alzheimer’s disease diagnosis with CNN: Single Siamese networks with 2D+ε approach and fusion on ADNI. In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, Bucharest, Romania, 6–9 June 2017; pp. 494–498. [Google Scholar]
  160. Khvostikov, A.; Aderghal, K.; Krylov, A.; Catheline, G.; Benois-Pineau, J. 3D inception-based CNN with sMRI and MD-DTI data fusion for Alzheimer’s disease diagnostics. arXiv 2018, arXiv:1809.03972. [Google Scholar]
  161. Khagi, B.; Lee, C.G.; Kwon, G.-R. Alzheimer’s disease classification from brain MRI based on transfer learning from CNN. In Proceedings of the 2018 11th Biomedical Engineering International Conference (BMEiCON), Chiang Mai, Thailand, 21–24 November 2018; pp. 1–4. [Google Scholar]
  162. Sahumbaiev, I.; Popov, A.; Ramirez, J.; Gorriz, J.M.; Ortiz, A. 3D-CNN HadNet classification of MRI for Alzheimer’s Disease diagnosis. In Proceedings of the 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings (NSS/MIC), Sydney, NSW, Australia, 10–17 November 2018; pp. 1–4. [Google Scholar]
  163. Roy, S.S.; Sikaria, R.; Susan, A. A deep learning based CNN approach on MRI for Alzheimer’s disease detection. Intell. Decis. Technol. 2019, 13, 495–505. [Google Scholar] [CrossRef]
  164. Kruthika, K.R.; Maheshappa, H.D.; Alzheimer’s Disease Neuroimaging Initiative. CBIR system using Capsule Networks and 3D CNN for Alzheimer’s disease diagnosis. Inform. Med. Unlocked 2019, 14, 59–68. [Google Scholar] [CrossRef]
  165. Huang, Y.; Xu, J.; Zhou, Y.; Tong, T.; Zhuang, X.; Alzheimer’s Disease Neuroimaging Initiative (ADNI). Diagnosis of Alzheimer’s disease via multi-modality 3D convolutional neural network. Front. Neurosci. 2019, 13, 509. [Google Scholar] [CrossRef] [PubMed]
  166. Jain, R.; Jain, N.; Aggarwal, A.; Hemanth, D.J. Convolutional neural network based Alzheimer’s disease classification from magnetic resonance brain images. Cogn. Syst. Res. 2019, 57, 147–159. [Google Scholar] [CrossRef]
  167. Folego, G.; Weiler, M.; Casseb, R.F.; Pires, R.; Rocha, A. Alzheimer’s disease detection through whole-brain 3D-CNN MRI. Front. Bioeng. Biotechnol. 2020, 8, 534592. [Google Scholar] [CrossRef] [PubMed]
  168. Dua, M.; Makhija, D.; Manasa, P.Y.; Mishra, P. A CNN–RNN–LSTM based amalgamation for Alzheimer’s disease detection. J. Med. Biol. Eng. 2020, 40, 688–706. [Google Scholar] [CrossRef]
  169. Hussain, E.; Hasan, M.; Hassan, S.Z.; Azmi, T.H.; Rahman, A.; Parvez, M.Z. Deep learning based binary classification for Alzheimer’s disease detection using brain mri images. In Proceedings of the 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA), Kristiansand, Norway, 9–13 November 2020; pp. 1115–1120. [Google Scholar]
  170. Xia, Z.; Yue, G.; Xu, Y.; Feng, C.; Yang, M.; Wang, T.; Lei, B. A novel end-to-end hybrid network for Alzheimer’s disease detection using 3D CNN and 3D CLSTM. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1–4. [Google Scholar]
  171. Wen, J.; Thibeau-Sutre, E.; Diaz-Melo, M.; Samper-González, J.; Routier, A.; Bottani, S.; Dormont, D.; Durrleman, S.; Burgos, N.; Colliot, O. Convolutional neural networks for classification of Alzheimer’s disease: Overview and reproducible evaluation. Med. Image Anal. 2020, 63, 101694. [Google Scholar] [CrossRef]
  172. Liu, S.; Yadav, C.; Fernandez-Granda, C.; Razavian, N. On the design of convolutional neural networks for automatic detection of Alzheimer’s disease. In Machine Learning for Health Workshop; PMLR: Vancouver, BC, Canada, 2020; pp. 184–201. [Google Scholar]
  173. Liu, M.; Li, F.; Yan, H.; Wang, K.; Ma, Y.; Alzheimer’s Disease Neuroimaging Initiative; Shen, L.; Xu, M. A multi-model deep convolutional neural network for automatic hippocampus segmentation and classification in Alzheimer’s disease. NeuroImage 2020, 208, 116459. [Google Scholar] [CrossRef]
  174. Pan, D.; Zeng, A.; Jia, L.; Huang, Y.; Frizzell, T.; Song, X. Early detection of Alzheimer’s Disease using magnetic resonance imaging: A novel approach combining convolutional neural networks and ensemble learning. Front. Neurosci. 2020, 14, 259. [Google Scholar] [CrossRef]
  175. Ramana, T.V. Alzheimer disease detection and classification on magnetic resonance imaging (MRI) brain images using improved expectation maximization (IEM) and convolutional neural network (CNN). Turk. J. Comput. Math. Educ. (TURCOMAT) 2021, 12, 5998–6006. [Google Scholar]
  176. Miah, A.S.M.; Rashid, M.M.; Rahman, M.R.; Hossain, M.T.; Sujon, M.S.; Nawal, N.; Hasan, M.; Shin, J. Alzheimer’s disease detection using CNN based on effective dimensionality reduction approach. In Intelligent Computing and Optimization: Proceedings of the 3rd International Conference on Intelligent Computing and Optimization 2020 (ICO 2020); Springer: Berlin/Heidelberg, Germany, 2021; pp. 801–811. [Google Scholar]
  177. AbdulAzeem, Y.; Bahgat, W.M.; Badawy, M. A CNN based framework for classification of Alzheimer’s disease. Neural Comput. Appl. 2021, 33, 10415–10428. [Google Scholar] [CrossRef]
  178. Al-Khuzaie, F.E.K.; Bayat, O.; Duru, A.D. Diagnosis of Alzheimer disease using 2D MRI slices by convolutional neural network. Appl. Bionics Biomech. 2021, 2021, 6690539. [Google Scholar] [CrossRef]
  179. Janghel, R.; Rathore, Y. Deep convolution neural network based system for early diagnosis of Alzheimer’s disease. IRBM 2021, 42, 258–267. [Google Scholar] [CrossRef]
  180. Kang, W.; Lin, L.; Zhang, B.; Shen, X.; Wu, S. Multi-model and multi-slice ensemble learning architecture based on 2D convolutional neural networks for Alzheimer’s disease diagnosis. Comput. Biol. Med. 2021, 136, 104678. [Google Scholar] [CrossRef]
  181. Basher, A.; Kim, B.C.; Lee, K.H.; Jung, H.Y. Volumetric feature-based Alzheimer’s disease diagnosis from sMRI data using a convolutional neural network and a deep neural network. IEEE Access 2021, 9, 29870–29882. [Google Scholar] [CrossRef]
  182. Singh, A.; Kharkar, N.; Priyanka, P.; Parvartikar, S. Alzheimer’s disease detection using deep learning-CNN. In Ambient Communications and Computer Systems: Proceedings of RACCCS 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 529–537. [Google Scholar]
  183. Davuluri, R.; Rengaswamy, R. Improved Classification Model using CNN for Detection of Alzheimer’s Disease. J. Comput. Sci. 2022, 18, 415–425. [Google Scholar] [CrossRef]
  184. Angkoso, C.V.; Tjahyaningtijas, H.P.A.; Purnama, I.; Purnomo, M.H. Multiplane Convolutional Neural Network (Mp-CNN) for Alzheimer’s Disease Classification. Int. J. Intell. Eng. Syst. 2022, 15, 329–340. [Google Scholar]
  185. Techa, C.; Ridouani, M.; Hassouni, L.; Anoun, H. Alzheimer’s disease multi-class classification model based on CNN and StackNet using brain MRI data. In International Conference on Advanced Intelligent Systems and Informatics; Springer: Berlin/Heidelberg, Germany, 2022; pp. 248–259. [Google Scholar]
  186. Divya, R.; Kumari, R.S.S.; Alzheimer’s Disease Neuroimaging Initiative. Detection of Alzheimer’s disease from temporal lobe grey matter slices using 3D CNN. Imaging Sci. J. 2022, 70, 578–587. [Google Scholar] [CrossRef]
  187. Dar, G.M.U.D.; Bhagat, A.; Ansarullah, S.I.; Ben Othman, M.T.; Hamid, Y.; Alkahtani, H.K.; Ullah, I.; Hamam, H. A novel framework for classification of different Alzheimer’s disease stages using CNN model. Electronics 2023, 12, 469. [Google Scholar] [CrossRef]
  188. Lanjewar, M.G.; Parab, J.S.; Shaikh, A.Y. Development of framework by combining CNN with KNN to detect Alzheimer’s disease using MRI images. Multimed. Tools Appl. 2023, 82, 12699–12717. [Google Scholar] [CrossRef]
  189. Khalid, A.; Senan, E.M.; Al-Wagih, K.; Al-Azzam, M.M.A.; Alkhraisha, Z.M. Automatic Analysis of MRI Images for Early Prediction of Alzheimer’s Disease Stages Based on Hybrid Features of CNN and Handcrafted Features. Diagnostics 2023, 13, 1654. [Google Scholar] [CrossRef] [PubMed]
  190. Ravikumar, R.; Sasipriyaa, N.; Thilagaraj, T.; Raj, R.H.; Abishek, A.; Kannan, G.G. Design and Implementation of Alzheimer’s Disease Detection using cGAN and CNN. In Proceedings of the 2023 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 23–25 January 2023; pp. 1–7. [Google Scholar]
  191. Shastri, S.S.; Bhadrashetty, A.; Kulkarni, S. Detection and Classification of Alzheimer’s Disease by Employing CNN. Int. J. Intell. Syst. Appl. 2023, 15, 14–22. [Google Scholar] [CrossRef]
  192. Venkatasubramanian, S.; Dwivedi, J.N.; Raja, S.; Rajeswari, N.; Logeshwaran, J.; Kumar, A.P. Prediction of Alzheimer’s Disease Using DHO-Based Pretrained CNN Model. Math. Probl. Eng. 2023, 2023, 1110500. [Google Scholar] [CrossRef]
  193. Ilesanmi, A.E.; Ilesanmi, T.; Idowu, O.P.; Torigian, D.A.; Udupa, J.K. Organ segmentation from computed tomography images using the 3D convolutional neural network: A systematic review. Int. J. Multimed. Inf. Retr. 2022, 11, 315–331. [Google Scholar] [CrossRef]
  194. Irshad, S.; Gomes, D.P.S.; Kim, S.T. Improved abdominal multi-organ segmentation via 3d boundary-constrained deep neural networks. IEEE Access 2023, 11, 35097–35110. [Google Scholar] [CrossRef]
  195. Rickmann, A.-M.; Senapati, J.; Kovalenko, O.; Peters, A.; Bamberg, F.; Wachinger, C. AbdomenNet: Deep neural network for abdominal organ segmentation in epidemiologic imaging studies. BMC Med. Imaging 2022, 22, 1–11. [Google Scholar] [CrossRef] [PubMed]
  196. Hassan, S.M.; Maji, A.K. Plant disease identification using a novel convolutional neural network. IEEE Access 2022, 10, 5390–5401. [Google Scholar] [CrossRef]
  197. Ashwinkumar, S.; Rajagopal, S.; Manimaran, V.; Jegajothi, B. Automated plant leaf disease detection and classification using optimal MobileNet based convolutional neural networks. Mater. Today Proc. 2022, 51, 480–487. [Google Scholar] [CrossRef]
  198. Jiang, J.; Liu, H.; Zhao, C.; He, C.; Ma, J.; Cheng, T.; Zhu, Y.; Cao, W.; Yao, X. Evaluation of Diverse Convolutional Neural Networks and Training Strategies for Wheat Leaf Disease Identification with Field-Acquired Photographs. Remote Sens. 2022, 14, 3446. [Google Scholar] [CrossRef]
  199. Nirthika, R.; Manivannan, S.; Ramanan, A.; Wang, R. Pooling in convolutional neural networks for medical image analysis: A survey and an empirical study. Neural Comput. Appl. 2022, 34, 5321–5347. [Google Scholar] [CrossRef]
  200. Mattia, G.M.; Sarton, B.; Villain, E.; Vinour, H.; Ferre, F.; Buffieres, W.; Le Lann, M.-V.; Franceries, X.; Peran, P.; Silva, S. Multimodal MRI-Based Whole-Brain Assessment in Patients In Anoxoischemic Coma by Using 3D Convolutional Neural Networks. Neurocritical Care 2022, 37 (Suppl. S2), 303–312. [Google Scholar] [CrossRef]
  201. Warren, S.L.; Moustafa, A.A. Functional magnetic resonance imaging, deep learning, and Alzheimer’s disease: A systematic review. J. Neuroimaging 2023, 33, 5–18. [Google Scholar] [CrossRef]
  202. Dan, T.; Huang, Z.; Cai, H.; Laurienti, P.J.; Wu, G. Learning brain dynamics of evolving manifold functional MRI data using geometric-attention neural network. IEEE Trans. Med. Imaging 2022, 41, 2752–2763. [Google Scholar] [CrossRef]
  203. Guo, X.; Zhou, B.; Pigg, D.; Spottiswoode, B.; Casey, M.E.; Liu, C.; Dvornek, N.C. Unsupervised inter-frame motion correction for whole-body dynamic PET using convolutional long short-term memory in a convolutional neural network. Med. Image Anal. 2022, 80, 102524. [Google Scholar] [CrossRef] [PubMed]
  204. Bin Tufail, A.; Anwar, N.; Ben Othman, M.T.; Ullah, I.; Khan, R.A.; Ma, Y.-K.; Adhikari, D.; Rehman, A.U.; Shafiq, M.; Hamam, H. Early-stage Alzheimer’s disease categorization using PET neuroimaging modality and convolutional neural networks in the 2d and 3d domains. Sensors 2022, 22, 4609. [Google Scholar] [CrossRef]
  205. Estudillo-Romero, A.; Haegelen, C.; Jannin, P.; Baxter, J.S.H. Voxel-based diktiometry: Combining convolutional neural networks with voxel-based analysis and its application in diffusion tensor imaging for Parkinson’s disease. Hum. Brain Mapp. 2022, 43, 4835–4851. [Google Scholar] [CrossRef]
  206. Liu, S.; Liu, Y.; Xu, X.; Chen, R.; Liang, D.; Jin, Q.; Liu, H.; Chen, G.; Zhu, Y. Accelerated cardiac diffusion tensor imaging using deep neural network. Phys. Med. Biol. 2022, 68, 025008. [Google Scholar] [CrossRef]
  207. Park, S.; Yu, J.; Woo, H.-H.; Park, C.G. A novel network architecture combining central-peripheral deviation with image-based convolutional neural networks for diffusion tensor imaging studies. J. Appl. Stat. 2022, 50, 3294–3311. [Google Scholar] [CrossRef]
  208. Ghazal, T.M.; Abbas, S.; Munir, S.; Ahmad, M.; Issa, G.F.; Zahra, S.B.; Khan, M.A.; Hasan, M.K. Alzheimer disease detection empowered with transfer learning. Comput. Mater. Contin. 2022, 70, 5005–5019. [Google Scholar] [CrossRef]
  209. Liu, S.; Masurkar, A.V.; Rusinek, H.; Chen, J.; Zhang, B.; Zhu, W.; Fernandez-Granda, C.; Razavian, N. Generalizable deep learning model for early Alzheimer’s disease detection from structural MRIs. Sci. Rep. 2022, 12, 17106. [Google Scholar] [CrossRef]
  210. Loddo, A.; Buttau, S.; Di Ruberto, C. Deep learning based pipelines for Alzheimer’s disease diagnosis: A comparative study and a novel deep-ensemble method. Comput. Biol. Med. 2021, 141, 105032. [Google Scholar] [CrossRef]
211. Cui, R.; Liu, M.; Li, G. Longitudinal analysis for Alzheimer’s disease diagnosis using RNN. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 1398–1401. [Google Scholar]
  212. Feng, C.; Elazab, A.; Yang, P.; Wang, T.; Lei, B.; Xiao, X. 3D convolutional neural network and stacked bidirectional recurrent neural network for Alzheimer’s disease diagnosis. In PRedictive Intelligence in MEdicine: First International Workshop, PRIME 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Proceedings 1; Springer: Berlin/Heidelberg, Germany, 2018; pp. 138–146. [Google Scholar]
  213. Nguyen, M.; Sun, N.; Alexander, D.C.; Feng, J.; Yeo, B.T.T. Modeling Alzheimer’s disease progression using deep recurrent neural networks. In Proceedings of the 2018 International Workshop on Pattern Recognition in Neuroimaging (PRNI), Singapore, 12–14 June 2018; pp. 1–4. [Google Scholar]
  214. Wang, T.; Qiu, R.G.; Yu, M. Predictive modeling of the progression of Alzheimer’s disease with recurrent neural networks. Sci. Rep. 2018, 8, 9161. [Google Scholar] [CrossRef]
  215. Liu, M.; Cheng, D.; Yan, W.; Alzheimer’s Disease Neuroimaging Initiative. Classification of Alzheimer’s disease by combination of convolutional and recurrent neural networks using FDG-PET images. Front. Aging Neurosci. 2018, 12, 35. [Google Scholar] [CrossRef]
  216. Cui, R.; Liu, M.; Alzheimer’s Disease Neuroimaging Initiative. RNN-based longitudinal analysis for diagnosis of Alzheimer’s disease. Comput. Med. Imaging Graph. 2019, 73, 1–10. [Google Scholar] [CrossRef]
  217. Li, F.; Liu, M.; The Alzheimer’s Disease Neuroimaging Initiative. A hybrid convolutional and recurrent neural network for hippocampus analysis in Alzheimer’s disease. J. Neurosci. Methods 2019, 323, 108–118. [Google Scholar] [CrossRef]
  218. Velazquez, M.; Anantharaman, R.; Velazquez, S.; Lee, Y. RNN-based Alzheimer’s disease prediction from prodromal stage using diffusion tensor imaging. In Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 18–21 November 2019; pp. 1665–1672. [Google Scholar]
219. Jabason, E.; Ahmad, M.O.; Swamy, M. Hybrid Feature Fusion Using RNN and Pre-trained CNN for Classification of Alzheimer’s Disease (Poster). In Proceedings of the 2019 22nd International Conference on Information Fusion (FUSION), Ottawa, ON, Canada, 2–5 July 2019. [Google Scholar]
  220. Hong, X.; Lin, R.; Yang, C.; Zeng, N.; Cai, C.; Gou, J.; Yang, J. Predicting Alzheimer’s disease using LSTM. IEEE Access 2019, 7, 80893–80901. [Google Scholar] [CrossRef]
  221. Feng, C.; Elazab, A.; Yang, P.; Wang, T.; Zhou, F.; Hu, H.; Xiao, X.; Lei, B. Deep learning framework for Alzheimer’s disease diagnosis via 3D-CNN and FSBi-LSTM. IEEE Access 2019, 7, 63605–63618. [Google Scholar] [CrossRef]
  222. Pan, Q.; Wang, S.; Zhang, J. Prediction of Alzheimer’s disease based on bidirectional LSTM. J. Phys. Conf. Ser. 2019, 1187, 052030. [Google Scholar] [CrossRef]
  223. Ghazi, M.M.; Nielsen, M.; Pai, A.; Cardoso, M.J.; Modat, M.; Ourselin, S.; Sørensen, L. Training recurrent neural networks robust to incomplete data: Application to Alzheimer’s disease progression modeling. Med. Image Anal. 2019, 53, 39–46. [Google Scholar] [CrossRef]
  224. Li, H.; Fan, Y. Early Prediction Of Alzheimer’s disease dementia based on baseline hippocampal mri and 1-year follow-up cognitive measures using deep recurrent neural networks. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI), Venice, Italy, 8–11 April 2019; pp. 368–371. [Google Scholar]
  225. Tabarestani, S.; Aghili, M.; Shojaie, M.; Freytes, C.; Cabrerizo, M.; Barreto, A.; Rishe, N.; Curiel, R.E.; Loewenstein, D.; Duara, R.; et al. Longitudinal prediction modeling of Alzheimer disease using recurrent neural networks. In Proceedings of the 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Chicago, IL, USA, 19–22 May 2019; pp. 1–4. [Google Scholar]
  226. Nguyen, M.; He, T.; An, L.; Alexander, D.C.; Feng, J.; Yeo, B.T. Predicting Alzheimer’s disease progression using deep recurrent neural networks. NeuroImage 2020, 222, 117203. [Google Scholar] [CrossRef]
  227. Jung, W.; Jun, E.; Suk, H.-I.; Alzheimer’s Disease Neuroimaging Initiative. Deep recurrent model for individualized prediction of Alzheimer’s disease progression. NeuroImage 2021, 237, 118143. [Google Scholar] [CrossRef]
  228. Liang, W.; Zhang, K.; Cao, P.; Liu, X.; Yang, J.; Zaiane, O. Rethinking modeling Alzheimer’s disease progression from a multi-task learning perspective with deep recurrent neural network. Comput. Biol. Med. 2021, 138, 104935. [Google Scholar] [CrossRef]
  229. Aqeel, A.; Hassan, A.; Khan, M.A.; Rehman, S.; Tariq, U.; Kadry, S.; Majumdar, A.; Thinnukool, O. A long short-term memory biomarker-based prediction framework for Alzheimer’s disease. Sensors 2022, 22, 1475. [Google Scholar] [CrossRef]
  230. Lin, K.; Jie, B.; Dong, P.; Ding, X.; Bian, W.; Liu, M. Convolutional recurrent neural network for dynamic functional MRI analysis and brain disease identification. Front. Neurosci. 2022, 16, 933660. [Google Scholar] [CrossRef]
  231. Algani, Y.M.A.; Vidhya, S.; Ghai, B.; Acharjee, P.B.; Kathiravan, M.N.; Dwivedi, V.K. Innovative Method for Alzheimer Disease Prediction using GP-ELM-RNN. In Proceedings of the 2023 2nd International Conference on Applied Artificial Intelligence and Computing (ICAAIC), Salem, India, 4–6 May 2023; pp. 723–728. [Google Scholar]
  232. Pan, Y.; Liu, M.; Lian, C.; Zhou, T.; Xia, Y.; Shen, D. Synthesizing missing PET from MRI with cycle-consistent generative adversarial networks for Alzheimer’s disease diagnosis. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, September 16–20, 2018, Proceedings, Part III 11; Springer: Berlin/Heidelberg, Germany, 2018; pp. 455–463. [Google Scholar]
  233. Bowles, C.; Gunn, R.; Hammers, A.; Rueckert, D. Modelling the progression of Alzheimer’s disease in MRI using generative adversarial networks. In Medical Imaging 2018: Image Processing; SPIE: Houston, TX, USA, 2018; Volume 10574, pp. 397–407. [Google Scholar]
  234. Wegmayr, V.; Hörold, M.; Buhmann, J.M. Generative aging of brain MR-images and prediction of Alzheimer progression. In Pattern Recognition: 41st DAGM German Conference, DAGM GCPR 2019, Dortmund, Germany, September 10–13, 2019, Proceedings 41; Springer: Berlin/Heidelberg, Germany, 2019; pp. 247–260. [Google Scholar]
  235. Zhao, Y.; Ma, B.; Jiang, P.; Zeng, D.; Wang, X.; Li, S. Prediction of Alzheimer’s disease progression with multi-information generative adversarial network. IEEE J. Biomed. Health Inform. 2020, 25, 711–719. [Google Scholar] [CrossRef] [PubMed]
  236. Hu, S.; Yu, W.; Chen, Z.; Wang, S. Medical image reconstruction using generative adversarial network for Alzheimer disease assessment with class-imbalance problem. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14 December 2020; pp. 1323–1327. [Google Scholar]
  237. Ma, D.; Lu, D.; Popuri, K.; Wang, L.; Beg, M.F.; Alzheimer’s Disease Neuroimaging Initiative. Differential diagnosis of frontotemporal dementia, Alzheimer’s disease, and normal aging using a multi-scale multi-type feature generative adversarial deep neural network on structural magnetic resonance images. Front. Neurosci. 2020, 14, 853. [Google Scholar] [CrossRef] [PubMed]
  238. Shin, H.C.; Ihsani, A.; Xu, Z.; Mandava, S.; Sreenivas, S.T.; Forster, C.; Cha, J.; Alzheimer’s Disease Neuroimaging Initiative. GANDALF: Generative adversarial networks with discriminator-adaptive loss fine-tuning for Alzheimer’s disease diagnosis from MRI. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part II 23; Springer: Berlin/Heidelberg, Germany, 2020; pp. 688–697. [Google Scholar]
  239. Kim, H.W.; Lee, H.E.; Lee, S.; Oh, K.T.; Yun, M.; Yoo, S.K. Slice-selective learning for Alzheimer’s disease classification using a generative adversarial network: A feasibility study of external validation. Eur. J. Nucl. Med. 2020, 47, 2197–2206. [Google Scholar] [CrossRef]
  240. Jung, E.; Luna, M.; Park, S.H. Conditional generative adversarial network for predicting 3d medical images affected by Alzheimer’s diseases. In Predictive Intelligence in Medicine: Third International Workshop, PRIME 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 8, 2020, Proceedings 3; Springer: Berlin/Heidelberg, Germany, 2020; pp. 79–90. [Google Scholar]
  241. Zhou, X.; Qiu, S.; Joshi, P.S.; Xue, C.; Killiany, R.J.; Mian, A.Z.; Chin, S.P.; Au, R.; Kolachalama, V.B. Enhancing magnetic resonance imaging-driven Alzheimer’s disease classification performance using generative adversarial learning. Alzheimer’s Res. Ther. 2021, 13, 497. [Google Scholar] [CrossRef]
  242. Logan, R.; Williams, B.G.; da Silva, M.F.; Indani, A.; Schcolnicov, N.; Ganguly, A.; Miller, S.J. Deep convolutional neural networks with ensemble learning and generative adversarial networks for Alzheimer’s disease image data classification. Front. Aging Neurosci. 2021, 13, 720226. [Google Scholar] [CrossRef]
  243. Sajjad, M.; Ramzan, F.; Khan, M.U.G.; Rehman, A.; Kolivand, M.; Fati, S.M.; Bahaj, S.A. Deep convolutional generative adversarial network for Alzheimer’s disease classification using positron emission tomography (PET) and synthetic data augmentation. Microsc. Res. Tech. 2021, 84, 3023–3034. [Google Scholar] [CrossRef]
  244. Sinha, S.; Thomopoulos, S.I.; Lam, P.; Muir, A.; Thompson, P.M. Alzheimer’s disease classification accuracy is improved by MRI harmonization based on attention-guided generative adversarial networks. In Proceedings of the 17th International Symposium on Medical Information Processing and Analysis, Campinas, Brazil, 17–19 November 2021; SPIE: Bellingham, WA, USA, 2021; pp. 180–189. [Google Scholar]
  245. Qu, C.; Zou, Y.; Ma, Y.; Chen, Q.; Luo, J.; Fan, H.; Jia, Z.; Gong, Q.; Chen, T. Diagnostic performance of generative adversarial network-based deep learning methods for Alzheimer’s disease: A systematic review and meta-analysis. Front. Aging Neurosci. 2022, 14, 841696. [Google Scholar] [CrossRef]
  246. Zhang, J.; He, X.; Qing, L.; Gao, F.; Wang, B. BPGAN: Brain PET synthesis from MRI using generative adversarial network for multi-modal Alzheimer’s disease diagnosis. Comput. Methods Programs Biomed. 2022, 217, 106676. [Google Scholar] [CrossRef] [PubMed]
  247. Ye, H.; Zhu, Q.; Yao, Y.; Jin, Y.; Zhang, D. Pairwise feature-based generative adversarial network for incomplete multi-modal Alzheimer’s disease diagnosis. Vis. Comput. 2022, 39, 2235–2244. [Google Scholar] [CrossRef]
  248. Cabreza, J.N.; Solano, G.A.; Ojeda, S.A.; Munar, V. Anomaly detection for Alzheimer’s disease in brain MRIS via unsupervised generative adversarial learning. In Proceedings of the 2022 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Jeju Island, Republic of Korea, 21–24 February 2022; pp. 1–5. [Google Scholar]
  249. Dolci, G.; Rahaman, A.; Chen, J.; Duan, K.; Fu, Z.; Abrol, A.; Menegaz, G.; Calhoun, V.D. A deep generative multimodal imaging genomics framework for Alzheimer’s disease prediction. In Proceedings of the 2022 IEEE 22nd International Conference on Bioinformatics and Bioengineering (BIBE), Taichung, Taiwan, 7–9 November 2022; pp. 41–44. [Google Scholar]
  250. Bi, X.-A.; Wang, Y.; Luo, S.; Chen, K.; Xing, Z.; Xu, L. Hypergraph Structural Information Aggregation Generative Adversarial Networks for Diagnosis and Pathogenetic Factors Identification of Alzheimer’s Disease With Imaging Genetic Data. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–15. [Google Scholar] [CrossRef]
  251. Xia, T.; Sanchez, P.; Qin, C.; Tsaftaris, S.A. Adversarial counterfactual augmentation: Application in Alzheimer’s disease classification. Front. Radiol. 2022, 2, 1039160. [Google Scholar] [CrossRef]
  252. Shi, R.; Sheng, C.; Jin, S.; Zhang, Q.; Zhang, S.; Zhang, L.; Ding, C.; Wang, L.; Wang, L.; Han, Y.; et al. Generative adversarial network constrained multiple loss autoencoder: A deep learning-based individual atrophy detection for Alzheimer’s disease and mild cognitive impairment. Hum. Brain Mapp. 2022, 44, 1129–1146. [Google Scholar] [CrossRef]
  253. Noella, R.S.N.; Priyadarshini, J. Diagnosis of Alzheimer’s, Parkinson’s disease and frontotemporal dementia using a generative adversarial deep convolutional neural network. Neural Comput. Appl. 2023, 35, 2845–2854. [Google Scholar] [CrossRef]
  254. Parisot, S.; Ktena, S.I.; Ferrante, E.; Lee, M.; Guerrero, R.; Glocker, B.; Rueckert, D. Disease prediction using graph convolutional networks: Application to autism spectrum disorder and Alzheimer’s disease. Med. Image Anal. 2018, 48, 117–130. [Google Scholar] [CrossRef]
  255. Zhou, H.; He, L.; Zhang, Y.; Shen, L.; Chen, B. Interpretable graph convolutional network of multi-modality brain imaging for Alzheimer’s disease diagnosis. In Proceedings of the 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), Kolkata, India, 28–31 March 2022; pp. 1–5. [Google Scholar]
  256. Zeng, L.; Li, H.; Xiao, T.; Shen, F.; Zhong, Z. Graph convolutional network with sample and feature weights for Alzheimer’s disease diagnosis. Inf. Process. Manag. 2022, 59, 102952. [Google Scholar] [CrossRef]
  257. Li, H.; Shi, X.; Zhu, X.; Wang, S.; Zhang, Z. FSNet: Dual Interpretable Graph Convolutional Network for Alzheimer’s Disease Analysis. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 7, 15–25. [Google Scholar] [CrossRef]
  258. Zhu, Y.; Ma, J.; Yuan, C.; Zhu, X. Interpretable learning based dynamic graph convolutional networks for Alzheimer’s disease analysis. Inf. Fusion 2022, 77, 53–61. [Google Scholar] [CrossRef]
  259. Yan, B.; Li, Y.; Li, L.; Yang, X.; Li, T.-Q.; Yang, G.; Jiang, M. Quantifying the impact of Pyramid Squeeze Attention mechanism and filtering approaches on Alzheimer’s disease classification. Comput. Biol. Med. 2022, 148, 105944. [Google Scholar] [CrossRef]
  260. Ebrahimi-Ghahnavieh, A.; Luo, S.; Chiong, R. Transfer learning for Alzheimer’s disease detection on MRI images. In Proceedings of the 2019 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), Bali, Indonesia, 1–3 July 2019; pp. 133–138. [Google Scholar]
  261. Khan, N.M.; Abraham, N.; Hon, M. Transfer learning with intelligent training data selection for prediction of Alzheimer’s disease. IEEE Access 2019, 7, 72726–72735. [Google Scholar] [CrossRef]
  262. Maqsood, M.; Nazir, F.; Khan, U.; Aadil, F.; Jamal, H.; Mehmood, I.; Song, O.-Y. Transfer learning assisted classification and detection of Alzheimer’s disease stages using 3D MRI scans. Sensors 2019, 19, 2645. [Google Scholar] [CrossRef]
  263. Agarwal, D.; Marques, G.; de la Torre-Díez, I.; Martin, M.A.F.; Zapirain, B.G.; Rodríguez, F.M. Transfer learning for Alzheimer’s disease through neuroimaging biomarkers: A systematic review. Sensors 2021, 21, 7259. [Google Scholar] [CrossRef]
  264. Martinez-Murcia, F.J.; Ortiz, A.; Gorriz, J.-M.; Ramirez, J.; Castillo-Barnes, D. Studying the manifold structure of Alzheimer’s disease: A deep learning approach using convolutional autoencoders. IEEE J. Biomed. Health Inform. 2019, 24, 17–26. [Google Scholar] [CrossRef]
  265. Mendoza-Léon, R.; Puentes, J.; Uriza, L.F.; Hoyos, M.H. Single-slice Alzheimer’s disease classification and disease regional analysis with Supervised Switching Autoencoders. Comput. Biol. Med. 2020, 116, 103527. [Google Scholar] [CrossRef]
  266. Ferri, R.; Babiloni, C.; Karami, V.; Triggiani, A.I.; Carducci, F.; Noce, G.; Lizio, R.; Pascarelli, M.T.; Soricelli, A.; Amenta, F.; et al. Stacked autoencoders as new models for an accurate Alzheimer’s disease classification support using resting-state EEG and MRI measurements. Clin. Neurophysiol. 2021, 132, 232–245. [Google Scholar] [CrossRef]
  267. Hedayati, R.; Khedmati, M.; Taghipour-Gorjikolaie, M. Deep feature extraction method based on ensemble of convolutional auto encoders: Application to Alzheimer’s disease diagnosis. Biomed. Signal Process. Control 2021, 66, 102397. [Google Scholar] [CrossRef]
  268. Jellinger, K.A. Neuropathology of the Alzheimer’s continuum: An update. Free Neuropathol. 2020, 1, 32. [Google Scholar] [CrossRef]
  269. Devi, G. A how-to guide for a precision medicine approach to the diagnosis and treatment of Alzheimer’s disease. Front. Aging Neurosci. 2023, 15. [Google Scholar] [CrossRef]
  270. Fiford, C.M. Disentangling the Relationship between White Matter Disease, Vascular Risk, Alzheimer’s Disease Pathology and Brain Atrophy: Timing of Events and Location of Changes. Ph.D. Dissertation, UCL (University College London), London, UK, 2019. [Google Scholar]
  271. Nbo, S.N. Unsupervised Discovery of Mental Disorder Factors Using MRI. Ph.D. Dissertation, National University of Singapore, Singapore, 2020. [Google Scholar]
  272. Veitch, D.P.; Weiner, M.W.; Miller, M.; Aisen, P.S.; Ashford, M.A.; Beckett, L.A.; Green, R.C.; Harvey, D.; Jack, C.R., Jr.; Jagust, W.; et al. The Alzheimer’s Disease Neuroimaging Initiative in the era of Alzheimer’s disease treatment: A review of ADNI studies from 2021 to 2022. Alzheimer’s Dement. 2024, 20, 652–694. [Google Scholar] [CrossRef] [PubMed]
  273. Xiao, J.; Li, J.; Wang, J.; Zhang, X.; Wang, C.; Peng, G.; Hu, H.; Liu, H.; Liu, J.; Shen, L.; et al. 2023 China Alzheimer’s disease: Facts and figures. Hum. Brain 2023, 2. [Google Scholar] [CrossRef]
  274. Tönges, L.; Buhmann, C.; Klebe, S.; Klucken, J.; Kwon, E.H.; Müller, T.; Pedrosa, D.J.; Schröter, N.; Riederer, P.; Lingor, P. Blood-based biomarker in Parkinson’s disease: Potential for future applications in clinical research and practice. J. Neural Transm. 2022, 129, 1201–1217. [Google Scholar] [CrossRef] [PubMed]
  275. Blasiak, J.; Sobczuk, P.; Pawlowska, E.; Kaarniranta, K. Interplay between aging and other factors of the pathogenesis of age-related macular degeneration. Ageing Res. Rev. 2022, 81, 101735. [Google Scholar] [CrossRef] [PubMed]
  276. Paolini Paoletti, F.; Simoni, S.; Parnetti, L.; Gaetani, L. The contribution of small vessel disease to neurodegeneration: Focus on Alzheimer’s disease, Parkinson’s disease and multiple sclerosis. Int. J. Mol. Sci. 2021, 22, 4958. [Google Scholar] [CrossRef] [PubMed]
  277. Erkkinen, M.G.; Kim, M.O.; Geschwind, M.D. Clinical neurology and epidemiology of the major neurodegenerative diseases. Cold Spring Harb. Perspect. Biol. 2018, 10, a033118. [Google Scholar] [CrossRef] [PubMed]
  278. van Oostveen, W.M.; de Lange, E.C. Imaging techniques in Alzheimer’s disease: A review of applications in early diagnosis and longitudinal monitoring. Int. J. Mol. Sci. 2021, 22, 2110. [Google Scholar] [CrossRef]
Figure 1. Illustration depicting the interconnected elements of the AD detection system.
Table 1. Overview of CNN studies for AD detection.
Reference | Input Data | Technique | Evaluation | Notes
[48] | MRI | CNNs, including both 2D and 3D models, as well as RNNs. | Accuracy, sensitivity, and specificity of the different models. | Combination of transfer learning and 3D voxel data led to improved AD classification accuracy.
[61] | MRI | Pre-trained CNN model (ResNet50) used for automatic feature extraction. | CNN model with Softmax, SVM, and RF classifiers achieved high accuracy (85.7% to 99%), outperforming other state-of-the-art models. | Deep learning with pre-trained CNN models improved AD diagnosis, with the aim of enhancing patient survival rates.
[158] | T1-weighted MRI | CNN-based AD classification algorithm using coronal slices from T1-weighted MRI images; evaluated on data from two populations (SNUBH and ADNI) using within-dataset and between-dataset validations with AUC. | The algorithm achieved AUCs of 0.91–0.94 (within-dataset) and 0.88–0.89 (between-dataset); processing time was 23–24 s per person. | The CNN-based algorithm demonstrated promising accuracy and generalization, with high AUC values.
[159] | sMRI, DTI | CNN integrating sMRI and DTI modalities. | Comparison with a single-modality approach, analysis of data augmentation for class balancing, and investigation of the impact of ROI size on classification results. | CNN-based fusion of sMRI and DTI modalities on a hippocampal ROI showed promising results on the ADNI dataset.
[160] | sMRI, DTI | 3D inception-based CNN with fusion of sMRI and DTI modalities. | Comparison with a conventional AlexNet-based network using the ADNI dataset. | The 3D inception-based CNN with multi-modal fusion outperformed conventional AlexNet-based networks on the ADNI dataset.
[161] | 3D MRI | Feature extraction using AlexNet and classification using machine learning (ML) algorithms. | Performance comparison of classification based on extracted features versus Softmax-based probability scores. | Patient classification using 3D MRI, i.e., extraction of 2D features and dimensionality reduction, led to improved accuracy.
[162] | MRI | 3D CNN (HadNet) using stacked convolutions. | Classification to segregate AD, MCI, and healthy individuals. | A deep learning approach for early Alzheimer’s diagnosis using a 3D CNN, with a reported accuracy of 88.31%.
[163] | MRI | Convolutional neural network (CNN) model. | Accuracy. | The CNN model for AD detection in MRI images achieved 80% accuracy on the OASIS dataset using Python’s Keras library but needs performance improvement.
[164] | 3D MRI | Combines content-based image retrieval (CBIR) with a 3D capsule network, a 3D convolutional neural network (CNN), and a pre-trained 3D autoencoder for early AD detection. | Performance evaluated using accuracy as the metric for AD classification, achieving up to 98.42%. | Validation of an ensemble of 3D capsule networks, CNNs, and a pre-trained 3D autoencoder for early AD detection, showing CapsNet’s potential for future improvements.
[165] | T1-weighted MRI, FDG-PET | CNN integrating multimodality information from T1-MR and FDG-PET images to diagnose AD; the CNN learns features directly from the 3D images without manually extracted features. | Evaluated on the ADNI dataset with T1-MR and FDG-PET images; accuracies were 90.10% for CN vs. AD, 87.46% for CN vs. pMCI, and 76.90% for sMCI vs. pMCI. | Integration of T1-MR and FDG-PET data improved CNN results, showcasing AI’s potential in AD diagnosis.
[166] | MRI | PFSECTL, a mathematical model using transfer learning with VGG-16, a CNN architecture; the VGG-16 model pre-trained on ImageNet is used as a feature extractor for classification. | Achieved 95.73% accuracy for three-way classification on the ADNI database, although the specific classes were not specified. | The PFSECTL model employed transfer learning with VGG-16 for feature extraction; the high accuracy demonstrates the method’s potential for AD detection.
[167] | sMRI | Convolutional neural networks (CNNs) for feature extraction and classification. | Accuracy of classification into AD, MCI, and CN groups. | The ADNet model for AD biomarker extraction and classification reported 52.3% accuracy in the CADDementia challenge, demonstrating potential for efficient early AD detection.
[168] | MRI | Amalgamation of deep learning models (CNN, RNN, long short-term memory (LSTM)) using ensemble and bagging approaches. | Accuracy, sensitivity, specificity, and precision. | An ensemble of CNN, RNN, and LSTM models with bagging for dementia-level determination in AD reported 92.22% accuracy, a notable enhancement in diagnostic accuracy on the OASIS Brain dataset.
[169] | MRI | A 12-layer CNN model for binary classification and detection of AD. | Evaluated using accuracy, precision, recall, F1-score, and receiver operating characteristic (ROC) curve analysis. | The 12-layer CNN model achieved 97.75% accuracy, outperforming existing models on the OASIS brain MRI dataset.
[170] | sMRI | Unified CNN framework combining a 3D CNN and a 3D convolutional long short-term memory (CLSTM). | Accuracy for AD detection. | The framework reported 94.19% accuracy for AD detection on the ADNI dataset.
[171] | T1-weighted MRI | CNNs employed to classify AD; the authors compared different CNN architectures, including 2D slice-level, 3D patch-level, ROI-based, and 3D subject-level approaches. | CNN models evaluated using accuracy, sensitivity, specificity, and AUC; rigorous validation and data integrity were ensured. | An open-source framework for AD classification ensuring reproducibility, transparency, and improved evaluation procedures.
[172] | sMRI | Improved 3D CNNs for early AD detection; techniques explored include instance normalization instead of batch normalization, avoiding early spatial downsampling, widening the model, and incorporating age information. | Improved CNN models evaluated on the ADNI dataset showed a 14% accuracy increase over existing models; similar performance was observed on an independent dataset. | Provided insights for improving 3D CNN models in AD detection, i.e., the effectiveness of normalization choices, avoiding early downsampling, and model widening.
[173] | sMRI | Multi-model deep learning framework for joint hippocampal segmentation and AD classification using structural MRI; includes a multi-task CNN model for segmentation and classification, along with a 3D DenseNet model for disease classification. | Achieved 87.0% Dice similarity for hippocampal segmentation; 88.9% accuracy and 92.5% AUC for AD vs. NC; 76.2% accuracy and 77.5% AUC for MCI vs. NC, outperforming other methods. | A multi-model deep learning (DL) framework for early-stage AD diagnosis, outperforming single-model methods and competitors.
[174] | MRI | The CNN-EL approach combines CNNs and ensemble learning for AD classification using MRI slices, identifying brain regions contributing to classification based on intersection points. | Ensemble performance evaluated using five-fold cross-validation for AD vs. HC, MCIc vs. HC, and MCIc vs. MCInc classification; contributing brain regions were identified based on intersection points. | The CNN-EL method improved AD and MCI classification using MRI data and identified key brain regions associated with early AD.
[175] | MRI | Proposed methodology: 2D-ACF for noise reduction, EP-CI for image enhancement, and EFCMAT for AD region segmentation. | The proposed method outperformed existing approaches in segmentation quality. | An efficient method for segmentation of AD-related regions in MR brain images, leading to improved diagnostic performance.
[176] | 3D MRI | Improved AD detection accuracy using dimensionality reduction methods (PCA, RP, FA), applying RF and CNN with the reduced features as inputs. | Evaluated using accuracy (93%) along with the confusion matrix, precision, recall, and F1-score. | Effective detection of Alzheimer’s using random forest and CNN, with the proposed RF model reaching 93% accuracy.
[177] | MRI | CNN-based framework for AD classification utilizing deep learning’s advantages over traditional methods. | Achieved classification accuracies of 99.6%, 99.8%, and 97.8% for binary AD vs. CN classification and 97.5% for multi-class classification on the ADNI dataset. | The CNN-based framework reported excellent accuracy in AD classification using brain MRI scans, showing potential for early AD diagnosis.
[178] | 2D MRI | Convolutional neural network (CNN). | Accuracy of the enhanced network (Alzheimer Network, AlzNet) in discriminating between Alzheimer’s patients and healthy subjects. | AlzNet, a CNN trained on 2D MRI slices from the OASIS dataset, reported 99.30% accuracy in AD recognition.
[179] | fMRI, PET | Converting 3D images to 2D and using a VGG-16 CNN for feature extraction; various classifiers were employed for image classification. | Experimental results show 99.95% accuracy for fMRI classification and 73.46% for PET; compared with existing methods, it exhibited superior performance across various parameters. | Enhanced AD diagnosis through preprocessing, CNN models, and diverse classifiers surpassed prior methods.
[180] | sMRI | Ensemble architecture using 2D CNNs: selects the top 11 coronal slices, trains VGG16, ResNet50, and GAN discriminator models, applies majority voting for multi-slice decisions, and uses transfer learning for domain adaptation. | Evaluated for AD vs. CN, AD vs. MCI, and MCI vs. CN accuracy. | The ensemble learning architecture reported high accuracy for AD classification with limited data, a long-standing problem for conventional deep learning models.
[181] | sMRI | Combines CNN and DNN models for hippocampal localization and classification; three-dimensional patches were extracted, two-dimensional slices obtained from them, and volumetric features extracted using DVE-CNN for classification. | Achieved accuracies of 94.82% and 94.02% for the left and right hippocampi, respectively, with AUC values of 92.54% and 90.62%. | The hybrid method reported high accuracy in Alzheimer’s diagnosis by combining CNN and DNN on localized hippocampal positions.
[182] | MRI | CNN for Alzheimer’s prediction from brain MRI scans; extracts disease-related features for accurate diagnosis. | Evaluated on accuracy, sensitivity, specificity, and AUC for Alzheimer’s prediction; outperformed existing methods in diagnostic accuracy. | The CNN system improved early Alzheimer’s detection, supported timely interventions, and reduced false negatives.
[183] | MRI | CNN with building components for AD classification; extracted essential features from MRI images, aiding disease classification. | Evaluated using accuracy for disease classification. | The CNN model achieved 97.8% accuracy in Alzheimer’s disease detection from brain MRI images using automated feature extraction.
[184] | MRI | Mp-CNN utilized three 2D CNNs to analyze discriminatory information from multiple planes of 3D MRI. | Achieved 93% accuracy for multiclass AD–MCI–NC classification, with precision of 93% for AD, 91% for MCI, and 95% for NC subjects. | Mp-CNN outperformed the single-plane approach, offering an effective method for early AD detection in 3D images.
[185] | MRI | Pre-trained CNNs (DenseNet196, VGG16, and ResNet50) used for feature extraction from MRI images; a stacking ensemble was employed for multi-class AD stage classification. | The proposed model achieved 89% accuracy on brain MRI data. | Pre-trained CNNs for feature extraction combined with a stacking ensemble classified Alzheimer’s disease stages with 89% accuracy.
[186] | MRI | 3D CNN. | Evaluated using accuracy, sensitivity, and specificity; on ADNI-2 MRI volumes, it achieved 88.06% accuracy, 94.03% sensitivity, and 82.09% specificity in classifying AD from normal controls. | A 3D CNN focused on the temporal lobe achieved high performance in AD classification from 3D MRI volumes.
[187] | MRI | CNN, specifically a pre-trained MobileNet model, for early AD prediction and classification; transfer learning applied to leverage pre-trained models for health data classification. | Achieved 96.6% accuracy for multi-class AD stage classification; compared with VGG16 and ResNet50 models on the same dataset. | The MobileNet-based framework enabled precise AD stage classification, contributing to early detection and classification.
[188] | MRI | Combined CNN and KNN for AD detection; the CNN extracted features from MRI images, which were used to train and validate the KNN model. | Evaluated using accuracy, precision, recall, F1-score, MCC, CKC, ROC curves, and stratified k-fold cross-validation. | The integrated CNN–KNN framework, with 99.58% accuracy in AD detection, surpassed existing deep CNN models for clinical diagnosis.
[189] | MRI | FFNN with various feature extraction methods, such as GoogLeNet, DenseNet-121, PCA, DWT, LBP, and GLCM, for classifying MRI images as AD or non-AD. | Evaluated using accuracy, sensitivity, AUC, precision, and specificity to measure effectiveness in detecting AD and predicting disease progression stages. | Combining a DL model with dedicated feature extraction improved AD detection to a promising 99.7% accuracy.
[190] | MRI | CNN and GAN for AD and MCI diagnosis; the GAN generated additional training instances, improving accuracy, while the CNN extracted brain features from 2D images. | Classification accuracy evaluated using Keras. | The hybrid CNN–cGAN model efficiently diagnosed AD and MCI and reported improved accuracy on the ADNI dataset.
[191] | MRI | A 12-layer CNN for early AD identification from brain MRI scans, leveraging CNNs’ effectiveness in image processing tasks. | Evaluated on accuracy in detecting AD; the model’s accuracy was 97.80%. | The CNN analyzed MRI scans for early AD detection with a reported accuracy of 97.80%, emphasizing the importance of timely diagnosis for both mental and physical health in AD patients.
[192] | sMRI | Deep learning framework with multi-task learning for hippocampus segmentation and AD classification; capsule network CNN model with hyperparameters optimized using deer hunting optimization (DHO). | MTDL model evaluation: 97.1% accuracy and 93.5% Dice coefficient for segmentation; 96% accuracy for binary classification (AD vs. non-AD) and 93% for multi-class classification (AD stages). | The method improved AD detection accuracy using hippocampus segmentation and AD categorization on ADNI datasets.
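Many of the subject-level pipelines summarized in Table 1 follow the same basic pattern: a preprocessed 3D T1-weighted volume is passed through a stack of 3D convolution, normalization, and pooling blocks, and a small fully connected head produces the binary (AD vs. CN) or multi-class decision. The following is a minimal sketch of such a 3D CNN classifier, assuming PyTorch and a hypothetical preprocessed input of 96 × 112 × 96 voxels; it is illustrative of the general design only and does not reproduce any specific architecture from the table.

```python
# Minimal illustrative 3D CNN for subject-level AD vs. CN classification.
# Assumptions: PyTorch is available; volumes are already skull-stripped,
# registered, and resampled to 96 x 112 x 96 voxels. This is a sketch of the
# generic pattern summarized in Table 1, not any specific published model.
import torch
import torch.nn as nn


class Simple3DCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()

        def block(c_in, c_out):
            # Conv -> InstanceNorm -> ReLU -> MaxPool; the normalization
            # choice varies across the surveyed 3D pipelines.
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                nn.InstanceNorm3d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(kernel_size=2),
            )

        self.features = nn.Sequential(
            block(1, 16), block(16, 32), block(32, 64), block(64, 128)
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),   # global average pooling over the volume
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


if __name__ == "__main__":
    model = Simple3DCNN(num_classes=2)
    dummy = torch.randn(4, 1, 96, 112, 96)   # batch of 4 synthetic volumes
    logits = model(dummy)                     # shape: (4, 2)
    print(logits.shape)
```

In practice, the studies in Table 1 differ mainly in how this backbone is fed (whole brain, patches, slices, or ROIs such as the hippocampus), whether weights are transferred from pre-trained 2D networks, and how multiple models or slices are combined by ensembling.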
Table 2. Overview of RNN studies for AD detection.
ReferenceInput DataTechniqueEvaluationNotes
[211] | T1-weighted sMRI | Combination of MLP and RNN for spatial and longitudinal feature extraction, respectively. | Classification accuracy. | The proposed method reported 89.7% accuracy for AD classification using T1-weighted sMR images, demonstrating potential for longitudinal AD diagnosis.
[212] | MRI, PET | Combination of 3D CNN and stacked bidirectional recurrent neural network (SBi-RNN). | Average accuracy for AD vs. normal control (NC), pMCI vs. NC, and MCI vs. NC classification. | Integration of 3D CNN and SBi-RNN for AD diagnosis; combining MRI and PET modalities improved performance on the ADNI dataset.
[213] | Multivariate time series data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) | RNN with strategies to handle missing data. | Performance comparison with baseline models in multi-step prediction of Alzheimer's disease progression. | Modelled AD progression with an RNN-LSTM and fully connected layers, reporting 88.24% accuracy on the ADNI dataset; missing-data handling strategies improved stage prediction.
[214] | Heterogeneous medical data of 5432 patients with probable Alzheimer's disease (AD) | Long short-term memory (LSTM) recurrent neural networks. | Accuracy comparison with classic baseline methods. | The enhanced RNN tracked AD progression with >99% accuracy, showcasing potential for chronic disease progression prediction.
[215] | FDG-PET | Combination of 2D CNN and RNNs. | AUC for AD vs. NC and MCI vs. NC classification. | The proposed 2D CNN-RNN framework achieved high AUC values without requiring image registration or segmentation during preprocessing.
[216] | sMRI | Combination of CNN and RNN for spatial and longitudinal feature extraction, respectively. | Classification accuracy. | The proposed method attained 91.33% accuracy for AD vs. NC and 71.71% for pMCI vs. sMCI, indicating promise.
[217] | sMRI | Hybrid convolutional and recurrent neural network using DenseNets and bidirectional gated recurrent units (BGRU). | Area under the ROC curve (AUC). | Combining CNN and RNN for AD diagnosis from hippocampal MR images yielded promising results (AUCs: 91.0%, 75.8%, and 74.6%).
[218] | Diffusion tensor imaging (DTI) | Recurrent neural network (RNN) model. | Classification accuracy for identifying individuals with early mild cognitive impairment (EMCI). | The proposed RNN model identified AD risk from DTI data with promising results and high prediction accuracy.
[219] | MRI | Combination of a pre-trained DenseNet with long short-term memory (LSTM). | Performance comparison with state-of-the-art deep learning methods using 5-fold cross-validation. | DenseNet-LSTM integration for AD classification improved over state-of-the-art methods on the OASIS dataset.
[220] | MRI, PET, DTI | Long short-term memory (LSTM) network with fully connected and activation layers. | Comparison of the predictive performance of the proposed LSTM model with existing models. | The LSTM-based model predicted AD progression and demonstrated strong performance on MRI/PET data.
[221] | MRI, PET | Combination of 3D CNNs and fully stacked bidirectional long short-term memory (FSBi-LSTM). | Average accuracies for AD vs. NC, pMCI vs. NC, and MCI vs. NC classification tasks, compared with existing algorithms. | The proposed 3D CNN + FSBi-LSTM framework reported higher classification rates than conventional algorithms.
[222] | MRI | Bidirectional long short-term memory (LSTM) with attention mechanism. | Prediction of AD development and classification into NL, MCI, and AD. | Bidirectional LSTM-based AD prediction using neuropsychological, genetic, and tomographic data.
[223] | MRI | LSTM networks with a generalized training rule for handling missing predictor and target values. | MAE for predicting MRI biomarkers and AUC for clinical AD diagnosis. | The LSTM model handled missing values, improving prediction of MRI biomarkers and AD diagnosis.
[224] | MRI | RNN-based model for AD progression prediction using cognitive measures and MRI scans; the RNN captures temporal patterns for prognostic prediction. | Accuracy in predicting AD progression early in individuals with MCI. | An RNN-based model tracked AD progression in MCI individuals using cognitive data and baseline MRI scans.
[225] | MRI, PET | RNNs with long short-term memory (LSTM) and gated recurrent unit (GRU) architectures. | Accuracy, F-score, sensitivity, and specificity for classification; low RMSE and high correlation coefficient for regression; outperformed SVM, SVR, and ridge regression models. | RNN models (LSTM and GRU) outperformed conventional methods (SVM, SVR, and ridge regression) for classification on multimodal image data.
[226] | MRI, PET | Minimal RNN model. | Prediction of diagnosis, cognition, and ventricular volume, compared with baseline algorithms and methods for handling missing data. | The proposed RNN model predicted AD diagnosis, cognition, and ventricular volume effectively from MRI/PET data, as demonstrated in the TADPOLE challenge.
[227] | MRI | Deep recurrent network for joint prediction of missing values, phenotypic measurements, trajectory estimation of cognitive scores, and clinical status. | Performance measured using various metrics, comparison with competing methods in the literature, exhaustive analyses, and ablation studies. | The deep RNN handled missing data to improve AD predictions in the TADPOLE challenge cohort.
[228] | Longitudinal data (MRI volumetric measurements, cognitive scores, clinical status) | Multi-task learning framework with adaptive imputation and prediction. | Improvement in mAUC, BCA, and MAE (ADAS-Cog13 and ventricles). | Multi-task learning for tracking Alzheimer's disease improved mAUC, BCA, and MAE (ADAS-Cog13 and ventricles).
[229] | Neuropsychological measures and MRI biomarkers | Recurrent neural network (RNN) with LSTM and fully connected neural network layers. | Accuracy of 88.24%. | The proposed framework predicted AD progression using RNN-LSTM and fully connected neural networks, reporting 88.24% accuracy on the ADNI dataset.
[230] | Resting-state fMRI (rs-fMRI) data, specifically dynamic functional connectivity (dFC) networks derived from the rs-fMRI data. | Convolutional recurrent neural network (CRNN) for brain disease classification using rs-fMRI data: sliding window strategy, convolutional and LSTM layers for feature extraction and temporal dynamics, and fully connected layers for classification. | The CRNN was evaluated on 174 subjects with 563 rs-fMRI scans for binary and multicategory classification tasks, demonstrating effective brain disease classification. | Using rs-fMRI data and dFC networks, the proposed method automated brain disease classification, but generalization to larger datasets is still required given the limited sample size.
[231] | CT | GP-ELM-RNN network (a combination of genetic programming, extreme learning machines, and recurrent neural networks). | Accuracy, specificity, and comparison with ELM and RNN models. | The proposed GP-ELM-RNN network achieved around 99.23% accuracy in classifying AD stages from CT brain scans, but validation on a larger dataset is required for generalization.
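Many of the hybrid architectures summarized in Table 2 follow the same pattern: a CNN encodes each visit's scan and an RNN models the visit sequence. The sketch below, written in PyTorch with toy shapes and a plain LSTM, is an illustrative simplification of that pattern rather than any specific study's model.

```python
# Minimal CNN + LSTM sketch for longitudinal scans, in the spirit of the
# CNN/RNN hybrids in Table 2 (a simplified stand-in, not a cited study's model).
import torch
import torch.nn as nn

class LongitudinalCNNRNN(nn.Module):
    """Encode each visit's 3D scan with a small CNN, then model the visit
    sequence with an LSTM and classify from the last hidden state."""
    def __init__(self, feat_dim=64, hidden=128, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(4),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)     # e.g., NC / MCI / AD

    def forward(self, scans):                      # scans: (B, T, 1, D, H, W)
        b, t = scans.shape[:2]
        feats = self.cnn(scans.flatten(0, 1)).view(b, t, -1)  # per-visit features
        _, (h, _) = self.rnn(feats)                # temporal modelling across visits
        return self.fc(h[-1])

logits = LongitudinalCNNRNN()(torch.randn(2, 4, 1, 32, 32, 32))  # 2 subjects, 4 visits
```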
Table 3. Overview of generative modeling studies for AD detection.
Reference | Input Data | Technique | Evaluation | Notes
[232] | MRI, PET | Two-stage deep learning for AD diagnosis using MRI and PET data. Stage 1: impute missing PET data from MRI using a 3D-cGAN. Stage 2: use a deep multi-instance neural network for AD diagnosis and MCI conversion prediction with complete MRI and PET data. | Quality of the PET images synthesized by the 3D-cGAN was assessed, and the two-stage framework was compared with state-of-the-art AD diagnosis methods. | The multi-instance neural network improved AD diagnosis by addressing missing data in stage 1, i.e., imputing PET data from MRI.
[233] | MRI | GANs model AD progression in MR images. Synthetic images with varying AD features were generated; image arithmetic manipulated AD-like features in specific brain regions, and a modified GAN training scheme handled extreme AD cases. | The GAN-based approach was evaluated through experiments and comparison with observed changes in AD-like features; the modified GAN training was assessed for encoding and reconstructing real images with high atrophy and unusual features. | GANs were trained on synthetic images to learn AD features and, subsequently, a modified GAN was deployed to make predictions on MR images.
[234] | MRI | Wasserstein GANs to artificially age individual brain images; a novel recursive generator model generates brain image time series from longitudinal data. | The brain ageing model was evaluated on healthy and demented subjects, predicting conversion from MCI to AD using the GAN and a pre-trained CNN classifier. | The method used Wasserstein GANs to model individual brain ageing from brain images and predict MCI-to-AD conversion.
[235] | 3D sMRI | Disease progression prediction framework: a 3D mi-GAN generates future brain MRI images, and a 3D DenseNet classifier with focal loss predicts the clinical stage. | Performance measured using SSIM for the quality of generated MRI images and accuracy improvement for differentiating pMCI from sMCI. | Future brain MRI was generated by the GAN and, subsequently, AD stage was classified using mi-GAN with focal loss optimization.
[236] | PET | GAN to reconstruct missing PET images; a densely connected convolutional network is then developed as the binary classification model. | The densely connected model was evaluated on the ADNI dataset; reconstructed images improved classification on class-imbalanced data, and the influence of noisy dimensions was assessed using several metrics. | The GAN-based augmentation method addressing missing PET data improved classification performance on imbalanced data, as demonstrated on the ADNI dataset.
[237] | sMRI | GAN data augmentation for differential diagnosis between normal ageing, AD, and FTD using multi-scale MRI features. | The proposed framework, evaluated with 10-fold cross-validation on 1954 images, achieved 88.28% accuracy. | The combination of multi-scale MRI features, GAN augmentation, and an ensemble classifier led to high classification accuracy for normal ageing, AD, and FTD samples.
[238] | MRI, PET | GAN for PET synthesis integrated with AD diagnosis; the architecture was fine-tuned for optimized AD classification. | State-of-the-art results in three- and four-class AD classification tasks using synthesized PET images, demonstrating effective AD diagnosis. | GAN integration with AD diagnosis for PET image synthesis led to state-of-the-art AD classification results.
[239] | [18F] FDG PET, CT | GAN (BEGAN) for slice-selective learning to address differences in PET imaging environments; the extracted unbiased features train an SVM classifier for AD vs. NC classification. | The model was evaluated on the Severance and ADNI datasets using accuracy, sensitivity, and specificity, and the results were statistically compared. | The proposed SVM classifier (based on GAN features) performed well on the ADNI dataset for AD vs. NC classification and was less sensitive to acquisition conditions.
[240] | MRI | cGAN architecture synthesizes MRI at various AD stages using a 2D generator and 2D/3D discriminators to assess image realism; optimization combines 2D and 3D GAN losses evaluated over consecutive 2D slices in 3D space. | Evaluated by generating synthetic 3D MR images under different conditions and comparing their quality with images generated by 2D or 3D cGANs. | GAN-based image synthesis of AD evolution (under different conditions), evaluated with combined 2D/3D losses.
[241] | T1-weighted MRI | GAN model generates synthetic 1.5T MRI images used by an FCN for AD status prediction. | Image quality evaluated using SNR, BRISQUE, and NIQE; classification performance measured using AUC on various datasets. | The GAN-based framework enhanced AD classification using synthetic 1.5T MRI images, improving performance across multiple datasets.
[242] | MRI, PET | CNNs and GANs for AD classification using neuroimaging data: 3D CNNs handle multimodal PET/MRI data, GANs address limited data by generating synthetic samples, and ensemble learning improves robustness by combining multiple models. | AD classification performance was evaluated using neuroimaging data and metrics including accuracy, sensitivity, specificity, and AUC. | The potential of CNNs for AD classification from neuroimaging was tested; ensemble learning with PET/MRI and GANs showed effectiveness for early detection and disease understanding.
[243] | PET | Deep GANs for synthesizing brain PET images across AD stages. | GAN-generated brain PET images were evaluated with a classification model (72% accuracy) across AD stages; image quality was measured with SSIM (avg. 82%, 72%, and 73%) and PSNR (avg. 25.6, 22.6, and 22.8 dB). | The GAN-based method generated Alzheimer's disease images from limited data, promising improved diagnosis model accuracy.
[244] | MRI | GAN to harmonize MRI images. | Performance was assessed by comparing AD classification accuracy using harmonized MR images versus the original non-harmonized datasets. | The proposed method used GAN-harmonized MR images to compare AD classification performance against the original dataset.
[245] | MRI, PET | GAN-based deep learning methods for AD classification, compared with non-GAN methods. | GAN-based methods were evaluated using accuracy, odds ratios (ORs), pooled sensitivity, pooled specificity, and AUC in a meta-analysis. | GAN-based methods for AD classification outperformed non-GAN methods, but improvement is still required for differentiating pMCI from sMCI.
[246] | T1-weighted sMRI, PET | 3D end-to-end generative adversarial network (BPGAN) that learns a mapping function to generate PET scans from MRI. | Performance of BPGAN evaluated using MAE, PSNR, and SSIM. | BPGAN generated high-quality PET images from MRI scans, enhancing AD diagnosis accuracy in multi-modal medical image analysis.
[247] | MRI, PET | GAN-based approach for AD diagnosis that generates PET features from brain images using attention mechanisms to retain structural information. | Effectiveness evaluated through extensive experiments, demonstrating promising results for AD diagnosis. | Pairwise feature-based GAN model for AD diagnosis; using an attention mechanism, the model generated PET features from MRI to diagnose AD.
[248] | sMRI | Unsupervised deep learning model using a deep convolutional generative adversarial network (DCGAN) on unlabelled brain MRIs. | The model achieved an AUROC of 0.7951, precision of 0.8228, recall of 0.7386, and accuracy of 74.44% for AD diagnosis. | DCGAN-based unsupervised learning for AD diagnosis from sMRI showed promising results in discriminating AD from non-AD cases with approximately 74% accuracy.
[249] | fMRI | Multimodal generative data fusion framework that addresses missing modalities using GANs for accurate predictions. | The proposed model excelled in AD vs. healthy control classification and handled missing modalities effectively. | Deep multimodal fusion of neuroimaging and genomics data handled missing modalities using a GAN for improved AD classification.
[250] | fMRI, SNP | HSIA-GAN uses hypergraph structural information aggregation, capturing low-order relations with vertex and edge graphs and extracting structural information using generator and discriminator components. | The HSIA-GAN model was evaluated in three AD neuroimaging classification tasks for accurate sample classification and feature extraction. | The HSIA-GAN model integrated multi-level structural information for AD analysis, improving disease classification with informative features.
[251] | MRI | Adversarial counterfactual augmentation scheme that addresses classifier weaknesses by leveraging a generative model. | The approach improves Alzheimer's disease classification and addresses spurious correlations and catastrophic forgetting. | The proposed work enhanced the AD classifier using adversarial counterfactual augmentation to mitigate spurious correlations and catastrophic forgetting.
[252] | sMRI | GANCMLAE, combining GANs and a multiple-loss autoencoder to depict individual atrophy patterns. | Trained on ADNI NCs and validated on the Xuanwu cohort; evaluated using SSIM, PSNR, and MSE for image reconstruction, MCI subtype atrophy pattern identification, and AUC-ROC for AD and MCI vs. NC classification. | The GANCMLAE model combined a GAN and an autoencoder for accurate atrophy pattern depiction in AD and MCI, outperforming the t-test model with promising precision.
[253] | FDG-PET | GAN-based DCNN for AD, PD, and FTD diagnosis, addressing distribution issues and handling feature learning and classification. | The model achieved an accuracy of 97.7%, with sensitivity and specificity both at 0.97. | A system for multi-type dementia classification using FDG-PET brain scans identified AD, FTD, and PD with 97.7% accuracy.
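Several studies in Table 3 use a conditional, image-to-image GAN to synthesize PET from MRI (or to impute missing scans) before classification. The following is a deliberately small 2D slice-level sketch of that training loop in PyTorch; the network sizes, L1 reconstruction term, and paired toy slices are assumptions for illustration and do not correspond to BPGAN, 3D-cGAN, or any other cited architecture.

```python
# Toy 2D slice-level GAN sketch for MRI-to-PET synthesis (illustrative simplification).
import torch
import torch.nn as nn

G = nn.Sequential(                                 # generator: MRI slice -> synthetic PET slice
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
)
D = nn.Sequential(                                 # discriminator: (MRI, PET) pair -> real/fake logit
    nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

mri, pet = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)  # toy paired slices

fake = G(mri)                                      # discriminator update
d_loss = bce(D(torch.cat([mri, pet], 1)), torch.ones(4, 1)) + \
         bce(D(torch.cat([mri, fake.detach()], 1)), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(torch.cat([mri, fake], 1)), torch.ones(4, 1)) + \
         nn.functional.l1_loss(fake, pet)          # adversarial + reconstruction term
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```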
Table 4. Overview of GCN studies for AD detection.
Reference | Input Data | Technique | Evaluation | Notes
[254] | MRI | GCNs combined with integration of imaging and non-imaging information in a sparse graph representation. | Framework evaluated on the ABIDE and ADNI datasets and assessed for disease prediction accuracy. | The method used GCNs to merge imaging and non-imaging data, improving classification accuracy to 70.4% on ABIDE and 80.0% on ADNI.
[255] | sMRI, FDG-PET, and AV45-PET | Interpretable graph convolutional network (GCN) framework extended with the Grad-CAM technique. | Evaluated using VBM-MRI, FDG-PET, and AV45-PET modalities on clinical score prediction, disease status identification, and biomarker identification for AD and MCI. | The proposed multi-modality imaging-based GCN provided AD classification with effective ROI quantification on the ADNI dataset.
[256] | MRI | Two-phase framework that iteratively assigns weights to samples and features to address training set bias and improve interpretability. | Compared with the state of the art in classification and interpretability. | The proposed two-phase framework for AD diagnosis reduced bias and improved interpretability for binary classification on the ADNI dataset.
[257] | MRI | FSNet, a dual interpretable graph convolutional network, for enhancing model performance and interpretability in medical diagnosis. | FSNet demonstrated superior classification performance and interpretability compared with recent state-of-the-art methods. | FSNet overcomes GCN limitations with a dual interpretable framework, outperforming state-of-the-art methods in ADNI dataset classification.
[258] | MRI | GCNs coupled with interpretable feature learning and dynamic graph learning. | Evaluated on diagnosis accuracy for early AD detection. | Integration of feature learning and dynamic graph learning into a GCN enabled robust and personalized disease diagnosis with improved accuracy.
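The GCN studies in Table 4 typically build a population graph in which subjects are nodes carrying imaging features and edges encode phenotypic or non-imaging similarity. A minimal, hypothetical GCN propagation layer in plain PyTorch (no graph library) is sketched below; the random toy graph and feature dimensions are assumptions.

```python
# Minimal GCN-style propagation sketch illustrating the population-graph idea
# behind Table 4 (subjects as nodes, imaging features on nodes, similarity edges).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))        # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(1).pow(-0.5))  # symmetric normalization
        return torch.relu(self.lin(d_inv_sqrt @ a_hat @ d_inv_sqrt @ x))

n_subjects, n_feats = 100, 32
x = torch.randn(n_subjects, n_feats)               # e.g., ROI measures per subject
adj = (torch.rand(n_subjects, n_subjects) > 0.9).float()
adj = ((adj + adj.t()) > 0).float()                # symmetric toy similarity graph
h = GCNLayer(n_feats, 16)(x, adj)
logits = nn.Linear(16, 2)(h)                       # node-level AD vs. NC prediction
```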
Table 5. Overview of attention mechanism studies for AD detection.
Reference | Input Data | Technique | Evaluation | Notes
[154] | MRI | Densely connected CNN with an attention mechanism. | Evaluated on ADNI MRI data for AD vs. healthy, MCI converter vs. healthy, and MCI converter vs. non-converter classification using accuracy. | The densely connected CNN with attention improved AD classification rates and MCI conversion prediction.
[259] | MRI | Combination of image filtering, a pyramid squeeze attention (PSA) mechanism, an FCN, and an MLP for improved image analysis. | Classification accuracy, considering the impact of image filtering approaches and attention mechanisms on AD diagnosis. | The study explored the impact of image filtering and PSA on AD classification, reporting 98.85% accuracy.
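As a rough illustration of the channel-attention idea behind Table 5, the block below implements a simple squeeze-and-excitation-style gate in PyTorch. It is a stand-in chosen for brevity, not the pyramid squeeze attention (PSA) module used in [259].

```python
# Simple squeeze-and-excitation-style channel attention (illustrative stand-in).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # squeeze: global spatial pooling
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, H, W) feature maps
        w = self.gate(x).view(x.size(0), -1, 1, 1)   # excitation: per-channel weights
        return x * w                                 # reweight informative channels

feats = torch.randn(2, 64, 28, 28)                   # toy CNN feature maps from MRI slices
attended = ChannelAttention(64)(feats)
```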
Table 6. Overview of transfer learning studies for AD detection.
Reference | Input Data | Technique | Evaluation | Notes
[208] | MRI | Transfer learning on a multiclass classification model using deep learning. | Accuracy of classifying MRI images into different Alzheimer's disease stages: MD, MOD, ND, and VMD. | The automated multi-class system achieved 91.70% accuracy in predicting the disease stage.
[260] | MRI | Transfer learning for 2D CNNs followed by an RNN. | Accuracy for AD detection from MRI scans, comparing a 2D CNN alone with a combined 2D CNN and RNN. | The method exploited the sequential relationship between slices, combining transfer learning and an RNN to improve Alzheimer's detection.
[261] | MRI | VGG as the pre-trained model for transfer learning on MRI images; fine-tuning with layer-wise tuning improves efficiency on smaller datasets. | Evaluated on AD vs. NC, AD vs. MCI, and MCI vs. NC classification tasks, outperforming state-of-the-art methods in accuracy and other metrics. | The study proposed transfer learning followed by layer-wise tuning for improved AD classification on small datasets.
[262] | MRI | Transfer learning with AlexNet for image classification, tested on both segmented and unsegmented images. | Evaluated using several metrics, including overall accuracy for binary (AD vs. non-AD) and multiclass (four dementia stages) classification. | Transfer learning was validated for AD detection on brain MRI with 92.85% accuracy on segmented and unsegmented imagery.
[263] | MRI | Deep learning models with transfer learning, including 3D CNNs and pretrained network-based architectures, to extract high-level features from neuroimaging data. | Models evaluated using accuracy, sensitivity, specificity, precision, and F1-score for AD classification and disease progression prediction. | Transfer learning improved AD detection accuracy up to 98.20% and prognostic prediction accuracy up to 87.78%; however, the dataset used was limited.
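The transfer-learning recipes in Table 6 generally freeze most of an ImageNet-pretrained backbone and retrain a new classification head on MRI slices. A minimal sketch of that recipe is shown below; the ResNet-18 backbone, four-class head, and toy inputs are assumptions (and the `weights` argument assumes torchvision 0.13 or later), not the exact models used in the cited studies.

```python
# Transfer-learning sketch: freeze a pretrained 2D backbone, retrain a new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="DEFAULT")          # ImageNet-pretrained backbone
for p in model.parameters():                        # freeze the pretrained layers
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 4)       # new head: e.g., 4 dementia stages

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

slices = torch.randn(8, 3, 224, 224)                # MRI slices replicated to 3 channels
labels = torch.randint(0, 4, (8,))
loss = criterion(model(slices), labels)             # only the new head receives gradients
loss.backward()
optimizer.step()
```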
Table 7. Overview of autoencoder studies for AD detection.
Reference | Input Data | Technique | Evaluation | Notes
[264] | MRI | Deep convolutional autoencoders for exploratory data analysis of Alzheimer's disease; they extract abstract features from MRI images, representing the data distribution in low-dimensional manifolds. | Extracted features were analyzed using regression, classification, and correlation techniques; their relationship with clinical variables and AD diagnosis accuracy were measured. | The deep convolutional autoencoders extracted AD-related imaging features with strong correlations (>0.6) to clinical data and achieved 80% diagnosis accuracy, showcasing deep learning's potential for understanding AD's clinical features.
[265] | sMRI | Supervised switching autoencoders (SSAs) for AD classification, trained on 2D slice patches while exploring patch sizes and parameters; patch-level classification identifies disease regions based on accuracy densities. | SSA accuracy was assessed for healthy vs. AD-demented subjects, comparing identified regions with prior studies and medical knowledge. | The proposed SSAs classified AD from a single MRI slice by combining patch representations and achieved high accuracy.
[266] | rsEEG, sMRI | Two ANNs with stacked hidden layers for input recreation classify AD dementia (ADD) using LORETA source estimates and sMRI variables; the task is discriminating AD patients from healthy controls. | The ANNs were evaluated on accuracy for classifying ADD patients and controls using rsEEG, sMRI, and combined features; specialized ANNs for ADD and controls were also assessed with the same features. | ANNs with stacked hidden layers effectively distinguished AD from healthy controls; combining rsEEG and sMRI features yielded improved accuracy.
[267] | 3D MRI | Two-step method: (1) extract image features using a pre-trained autoencoder ensemble; (2) diagnose Alzheimer's disease with a CNN. | Evaluation metrics included accuracy, sensitivity, and specificity; accuracy rates were 95% for AD/NC, 90% for AD/MCI, and 92.5% for MCI/NC classification. | The two-step approach achieved high accuracy for Alzheimer's disease diagnosis from 3D images.
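The autoencoder studies in Table 7 broadly follow a two-step pattern: learn a compact representation by reconstruction, then classify from the learned codes. The sketch below shows that pattern with a small 2D convolutional autoencoder in PyTorch; the architecture, latent size, and toy slices are illustrative assumptions rather than any study's exact design.

```python
# Convolutional autoencoder sketch: unsupervised feature learning, then classification.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, latent=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(32 * 16 * 16, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

ae = ConvAE()
slices = torch.rand(8, 1, 64, 64)                   # toy MRI slices scaled to [0, 1]
recon, codes = ae(slices)
recon_loss = nn.MSELoss()(recon, slices)            # step 1: unsupervised reconstruction
recon_loss.backward()

classifier = nn.Linear(64, 2)                       # step 2: classify AD vs. NC from codes
logits = classifier(codes.detach())
```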