Article

New Convolutional Neural Network and Graph Convolutional Network-Based Architecture for AI Applications in Alzheimer’s Disease and Dementia-Stage Classification

1 Department of Mathematical Sciences, The University of Texas at El Paso, 500 W. University Ave., El Paso, TX 79968, USA
2 Department of Public Health, The University of Texas at El Paso, 500 W. University Ave., El Paso, TX 79968, USA
* Author to whom correspondence should be addressed.
AI 2024, 5(1), 342-363; https://doi.org/10.3390/ai5010017
Submission received: 29 October 2023 / Revised: 29 January 2024 / Accepted: 29 January 2024 / Published: 1 February 2024

Abstract

Neuroimaging experts in biotech industries can benefit from using cutting-edge artificial intelligence techniques for Alzheimer’s disease (AD)- and dementia-stage prediction, even though anticipating the precise stage of dementia and AD is difficult. We therefore propose a computer-assisted method based on an advanced deep learning algorithm to differentiate between people with varying degrees of dementia: healthy, very mild dementia, mild dementia, and moderate dementia. In this paper, four separate models were developed for classifying the different dementia stages: convolutional neural networks (CNNs) built from scratch, pre-trained VGG16 with additional convolutional layers, graph convolutional networks (GCNs), and a CNN-GCN model. For the CNN-GCN architecture, the CNNs were implemented first, and the flattened layer output was then fed to the GCN classifier. A total of 6400 whole-brain magnetic resonance imaging scans were obtained from the Alzheimer’s Disease Neuroimaging Initiative database to train and evaluate the proposed methods. We applied the 5-fold cross-validation (CV) technique to all the models and report the results from the best of the five folds. For the best fold, the above-mentioned models achieved overall accuracies of 43.83%, 71.17%, 99.06%, and 100%, respectively. The CNN-GCN model, in particular, demonstrates excellent performance in classifying the different stages of dementia. Understanding the stages of dementia can assist biotech industry researchers in uncovering molecular markers and pathways connected with each stage.

1. Introduction

Dementia is a complex and debilitating condition that is not a single disease but a common term encompassing a range of specified medical conditions characterized by abnormal brain changes. The cognitive abilities of a person experiencing dementia decline significantly, substantially enough to impair daily life and the ability to perform self-sustaining tasks. In addition to affecting cognitive abilities, dementia can significantly impact a person’s behavior, feelings, and relationships. It can cause changes in personality and emotional state and impair the ability to form and maintain social connections. The loss of cognitive functioning associated with dementia can manifest in a variety of ways, including difficulties with memory, language, problem-solving, and attention. As the condition progresses, these difficulties become more pronounced and interfere with a person’s ability to perform daily activities. Some people with dementia may also experience changes in their behavior, such as increased agitation or aggression, and struggle to control their emotions [1].
Dementia refers to numerous cognitive problems that all involve a loss in cognitive function. The levels of dementia vary and can range from a healthy brain (no dementia) through mild dementia to severe dementia [2]. Because dementia is a progressive condition whose severity can range from very mild to severe, it is imperative to have models that can classify disease status and automate the classification process for both treatment purposes and drug development. Early in the condition’s development, a person may only experience minor difficulties with cognitive functioning and may still be able to carry out many daily activities independently. However, as the condition progresses, these difficulties become more pronounced, and a person may require more assistance with tasks such as dressing, bathing, and feeding themselves. When dementia has advanced to later stages, people may become completely dependent on others for their basic needs and require round-the-clock care to maintain their health and well-being. Clinicians cannot easily identify disease progression based on behavior and other outward manifestations of the disease. Hence, a tool for disease classification can advance early diagnoses and assist in drug development.
One pathway for developing models to assist in dementia classification is image analysis of the brain. This is a reasonable approach since dementia manifests through damage to brain cells, which disrupts communication between them. As a result, this disruption can negatively impact an individual’s behavior, emotions, and thought processes. Dementia is a prevalent condition that primarily affects older individuals, with a higher incidence rate among those over the age of 85 [3]. However, it is not considered a natural aspect of the aging process, since many people live well into their 90s without experiencing dementia symptoms. Alzheimer’s disease (AD) is the most frequent type of dementia, though there are many other types [4].
Mild cognitive impairment (MCI) is a medical disorder that manifests as mild impairments in cognitive functioning, such as memory or thinking. Although the symptoms are more severe than those typically expected for a healthy individual of the same age, they are not severe enough to impede daily life and, thus, are not classified as dementia. MCI is estimated to affect between 5% and 20% of individuals over the age of 65 [5]. While it is not a form of dementia, it may increase the likelihood of developing dementia in the future; it is considered to lie between regular age-related changes and dementia, and individuals with MCI can still perform their everyday tasks.
Symptoms of MCI include forgetfulness, trouble remembering appointments or events, difficulty finding words, and, in some cases, problems with movement and sense of smell. These symptoms do not significantly interfere with daily life but may indicate an increased risk of developing dementia. MCI does not have a singular cause, and the likelihood of developing it rises with age. Certain conditions like depression, diabetes, and stroke may also raise the risk of developing MCI [6].
AD is a neurological disorder that progressively impairs memory and cognitive abilities, ultimately hindering an individual’s ability to perform even basic tasks. Most people with Alzheimer’s are diagnosed with the late-onset type, which typically manifests in their mid-60s. Early-onset AD, which is much less common, occurs between the ages of 30 and the mid-60s. AD is the most common cause of dementia among older adults. Dr. Alois Alzheimer, for whom the illness is named, discovered changes in the brain tissue of a patient who died in 1906 from an uncommon mental ailment. She exhibited symptoms such as language problems, memory loss, and unpredictable behavior. After she passed away, Dr. Alzheimer examined her brain and discovered numerous unusual clumps, now known as amyloid plaques, and tangled bundles of fibers known as tau tangles, which are still considered among the main features of AD [7].
The hallmark signs of AD remain the buildup of amyloid plaques and neurofibrillary tangles in the brain. The disease also involves a decrease in the connections between neurons, which are responsible for sending signals between different parts of the brain and between the brain and other parts of the body. Additionally, various other intricate changes in the brain are believed to contribute to the development and progression of Alzheimer’s. Damage to certain kinds of brain cells in particular regions of the brain is linked to AD. Abnormally high levels of certain proteins inside and outside brain cells hinder the health of brain cells and disrupt communication between them. Memory loss is one of the early signs of AD because the hippocampus, which is responsible for learning and memory, is frequently the first area of the brain to experience damage. In the United States, AD is presently the seventh leading cause of death and the leading cause of dementia in elderly people [7].
Convolutional filters identify the characteristics of a complex AD image, such as its edges, corners, and textures. Each filter detects one feature and learns its values during training. CNNs detect numerous features concurrently using multiple convolutional filters; each filter focuses on a unique element, helping the network learn hierarchical features, from basic edges to complex patterns. Adding pooling layers after convolutional layers reduces the spatial size of the activation maps while retaining the important information. After numerous layers, convolutional filters can identify image attributes suitable for the classification of dementia stages [8]. Since convolution and pooling layers can learn the feature maps accurately, we use a CNN for feature selection from the different types of dementia magnetic resonance imaging (MRI) scans.
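To make this concrete, the following minimal TensorFlow sketch (illustrative only; the filter count and kernel size are arbitrary choices) shows how one convolution-plus-pooling stage transforms an image-sized input into a stack of downsampled feature maps:

```python
import tensorflow as tf

# A dummy batch containing one 224 x 224 RGB image.
x = tf.random.normal((1, 224, 224, 3))

# 32 learnable 4 x 4 filters, each producing one feature map.
conv = tf.keras.layers.Conv2D(32, 4, padding="same", activation="relu")
# 2 x 2 max pooling halves the spatial size while keeping the strongest responses.
pool = tf.keras.layers.MaxPooling2D(2)

features = pool(conv(x))
print(features.shape)  # (1, 112, 112, 32): 32 feature maps at half resolution
```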
The initial symptoms of AD differ among individuals. Scientists are investigating biomarkers, such as biological indicators of disease found in brain scans, cerebrospinal fluid, and blood, to identify changes in the brain at an earlier stage in those who suffer from MCI and those without cognitive impairment who may be at increased risk for AD. This study contributes to providing a mechanism for detecting changes in the brain consistent with dementia as a diagnostic and clinical tool. With this objective in mind, we developed CNNs, pre-trained VGG16 with additional convolutional layers, GCNs, and a fusion network of CNNs with GCNs for classifying different stages of dementia and AD. Knowing the stages of dementia helps healthcare professionals deliver the best treatments and medicines, since each stage of the disease has unique symptoms and patterns of progression. Hence, this paper introduces a CNN-GCN technique, which is a deep learning methodology, to predict various stages of dementia. The method combines a CNN for feature mapping and GCN layers for the final classification tasks. The deep learning approach we present may effectively integrate resilient feature selection and achieve accurate classification of various stages of dementia. The performance of the model is evaluated using a variety of criteria, including accuracy, precision, recall, and F1 score, on the test set when training is complete. With these measures, we can determine how well the CNN- and GCN-based methods can classify the distinct stages of dementia. We hope that by bringing CNN-GCN to the problem of dementia-stage classification, we will help improve the medical community’s ability to diagnose the condition early and help practitioners provide better care for those who suffer from it.

2. Literature Review

Lim et al. [9] presented a deep-learning-based model to predict the progression from MCI to AD using structural MRI scans. The methodology of the paper involved training a 3D CNN model on MRI scans of patients with MCI, aiming to predict whether a patient would progress to AD within a certain period. The authors evaluated the proposed model on a dataset of 352 patients with MCI and compared its performance with several baseline models. The results showed that the proposed deep learning model achieved a high accuracy of 89.5% in predicting the conversion from MCI to AD, outperforming several baseline models. However, one of the main drawbacks of the paper is the limited sample size of the dataset used for evaluation. Additionally, the study was conducted retrospectively, and the model was not validated on an external dataset, which could limit the generalizability of the results. Therefore, further studies with larger and more diverse datasets are needed to validate the effectiveness of their proposed model.
A deep-learning-based model was proposed by Basaia et al. for the automated classification of AD and MCI from a single MRI dataset [10]. The methodology of the paper involved training a deep neural network model on a dataset of MRI scans from patients with AD, MCI, and healthy controls. However, one of the main drawbacks of the paper is that the dataset used for evaluation was relatively small and homogeneous, which could limit the generalizability of the results to other populations. Additionally, the authors did not explore the interpretability of the model, which could be an important consideration for clinical applications. Therefore, further studies are needed to validate the effectiveness and interpretability of a deep learning model on larger and more diverse datasets.
Jiang et al. [11] presented a deep-learning-based approach for the diagnosis of MCI using structural MRI images. The methodology of the paper involved training a CNN model on a dataset of MRI scans from patients with MCI and healthy controls. The authors evaluated the proposed model on a separate dataset and compared its performance with several state-of-the-art methods for MCI diagnosis. Overall, 120 participants were tested using the publicly accessible Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. Using a relatively small dataset for model training and validation, they classified MCI versus healthy controls with 89.4% accuracy. Hence, higher accuracy may be attainable by applying an advanced graph-based deep learning model to a comparatively larger dataset.
Aderghal et al. [12] presented a deep-learning-based approach for the categorization of different stages of AD using different MRI modalities. The methodology of the paper involved using transfer learning and fine-tuning a pre-trained CNN model on a dataset of MRI scans from patients with AD at different stages. The authors evaluated the proposed model on a dataset and compared its performance with several state-of-the-art deep learning methods for AD stage categorization.
In a related study, Basheera et al. [13] proposed a deep-learning-based approach for the classification of AD using a hybrid enhanced independent component analysis (ICA) on the segmented gray matter of MRI. The methodology of the paper involved using a CNN model trained on a dataset of MRI scans from patients with AD and healthy controls. The authors preprocessed the MRI images using a hybrid enhanced ICA method to segment the gray matter regions of interest.
Acharya et al. proposed a system that uses automated techniques to detect AD through brain MRI images [14]. The study emphasized the importance of the early detection of AD to improve treatment and patient care. Features were extracted from test images, and the dominant features were identified using the Student’s t-test (ST). A KNN classifier was used to classify the test images based on their features. The ST + KNN technique provided better classification performance measures and outperformed the SVM (polynomial), RF, and Adaboost classifier methods. Hence, feature extraction plays a vital role in obtaining higher accuracy for classifying AD stages. So, we applied several feature selection techniques to our dataset before applying the proposed models.
Nagarathna et al. [15] proposed a method known as a multilayer feedforward neural network (MFNN) to categorize the stages of AD. The study used a dataset of MRI images obtained from the ADNI database. The images were preprocessed to remove noise and artifacts, and features were extracted. The extracted features were used to train and test the MFNN classifier. The feature extraction model consisted of five sets of convolutional blocks, and the classifier model used a multilayer feedforward network with three layers, including a hidden layer and an output layer. The results showed that the proposed model performed well on this dataset, even though the study had some limitations, such as a small dataset size and a lack of comparison with other classification techniques. In our study, we developed the CNN model for feature extraction, consisting of five blocks of convolutional layers.
Using medical imaging data, Kapadnis et al. [16] proposed an approach that explored the use of deep learning techniques for the detection of AD. The authors used CNNs for feature extraction and a support vector machine (SVM) for classification. The study first preprocessed the images and then extracted features using CNNs. The SVM was trained on these features to classify the images as either healthy or AD. The study highlights the potential of deep learning techniques for detecting AD, which can aid in early diagnosis and treatment. AI systems’ capacity to learn intricate details through nonlinear transformations could produce promising results for the identification of AD. The study concludes by emphasizing the need for further research to improve the accuracy and efficiency of AD detection using AI techniques.
To distinguish between patients with MCI and AD by incorporating information about the thickness and geometry of the cortex, Wee et al. [17] proposed a method based on a spectral graph CNN. The suggested method used a spectral graph CNN framework to differentiate AD from MCI and to predict when MCI will convert to AD. The spectral graph CNN outperformed voxel-based CNN models and achieved balanced prediction for imbalanced sample sizes. The framework was also effective in predicting MCI-to-AD conversion. The authors suggested that the model could be applied to other brain imaging data and be integrated with other classification approaches on multi-modal brain image data for further improvement.
Guo et al. [18] described a hierarchical GCN approach, called PETNet, that uses positron emission tomography (PET) imaging data to predict AD; PET images from the ADNI 2 dataset were used for validation and evaluation. The ResNet-50 network with pre-trained weights was used for training the model. PETNet achieved performance similar to the pre-trained ResNet-50 for binary classification (MCI/NC) but outperformed ResNet-50 for MCI staging (EMCI/LMCI/NC). Limitations remain: graph construction and inference are still open issues in neuroimaging and medical image analysis, and there are difficulties in choosing appropriate metrics and weights among different graph inference methods and in applying manifold learning.
Park et al. [19] used PET scans to create CNN-LSTM and 3D CNN models that distinguish people with MCI or AD from those who are cognitively unimpaired (CU). The features were extracted using CNN layers, and the flattened layer was then passed as input into the LSTM. The area under the curve (AUC) for classifying AD versus CU was 0.964 using the CNN-LSTM. Since their proposed CNN-LSTM performed very well, we propose the CNN-GCN architecture to obtain higher accuracy for classifying different dementia stages.
Tajammal et al. [20] constructed a deep-learning-based ensembling technique that aimed to effectively extract features from input data and attain optimal performance. The experimental findings indicated that their method achieved an overall average accuracy of 98.8% for the classification task, including AD, MCI, and CN. They applied binary classification techniques for different classes. In our study, we performed multi-class classification for different dementia stages.
Liu et al. [21] proposed a 3D deep CNN to distinguish between people with mild Alzheimer’s dementia, MCI, and CN by analyzing structural MRIs. The deep learning model demonstrated a high level of accuracy, achieving an AUC of 85.12 when differentiating between CN people and patients with either MCI or mild Alzheimer’s dementia. Even though much research has been based on CNN models for predicting AD dementia, active research is ongoing to develop more accurate prediction models by improving the CNN. So, in our study, we developed an advanced CNN-GCN model for accurate dementia-stage prediction.
Building off the aforementioned work, our proposed method contributes to the literature in the following ways. First, Aderghal et al. [12] and Basheera et al. [13] used CNN-based approaches, but each study’s purposes and methods differed. Aderghal et al. aimed to improve the categorization of AD stages through transfer learning using MRI data, while Basheera et al. focused on classifying AD using a hybrid enhanced ICA segmentation of gray matter in MRI images. In contrast, our study uses cutting-edge preprocessing methods and presents the pre-trained VGG16 model based on transfer learning with additional convolution layers to better identify the different stages of dementia. Moreover, we implement this advanced deep learning method on a more diverse multi-class dataset than in these previous studies. In a manner similar to [16,19], we utilize a CNN model for feature selection and GCNs for classifying the different AD stages, and we anticipate that our proposed CNN-GCN method has higher accuracy in detecting AD compared to the CNN-only models. Additionally, we work with a multi-class imbalanced dataset, as in [17], so GCNs could be a better classification technique for predicting different dementia stages. Finally, building upon [18], we use a highly accurate GCN model with appropriate graph construction techniques, metrics, and weights to improve the GCN model proposed there for dementia-stage classification.

3. Methodology

3.1. CNNs

CNNs are popular for image classification tasks due to their ability to automatically learn and extract features from images. CNNs have attained state-of-the-art performance in image classification tasks, often outperforming traditional machine learning algorithms. This is because CNNs can learn to identify complex features in images, such as edges, corners, and textures, without the need for explicit feature engineering. CNNs are designed to mimic the visual cortex of the brain and consist of several layers, including convolutional layers, pooling layers, and fully connected layers. Generally, convolutional layers apply a set of filters to the input image, which results in feature maps highlighting different aspects of the image. Pooling layers reduce the spatial dimensionality of the feature maps by selecting the most important features. Finally, the fully connected layers use the features to classify the image into different categories.
We applied a 5-fold cross-validation technique (a grid search on the validation set) to find the optimal layer size and network hyperparameters (learning rate, regularization, etc.). The original training set was divided into a new training set and a validation set with proportions of 80% and 20%, respectively. We used cross-validation to tune the hyperparameters and layer sizes on the grids $1 \times 10^{-k}$ for $k = 1, \dots, 5$ and $2^i$ for $i = 4, \dots, 10$, respectively. Additionally, we tried different hyperparameters using the KerasTuner, which eliminates the difficulties associated with the hyperparameter search through its user-friendly and extensible optimization framework [22]. We utilized various search methods to discover the optimal values for our model’s hyperparameters after configuring the search space using a define-by-run syntax.
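As an illustration of how such a search can be wired up, the sketch below uses KerasTuner’s random search over the two grids mentioned above; the toy model body, optimizer choice, and trial budget are placeholder assumptions rather than the exact configuration used in this study:

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    # Search space mirroring the grids in the text:
    # learning rates 1e-k (k = 1..5) and layer sizes 2^i (i = 4..10).
    units = hp.Choice("units", [2 ** i for i in range(4, 11)])
    lr = hp.Choice("learning_rate", [10.0 ** -k for k in range(1, 6)])
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(224, 224, 3)),
        tf.keras.layers.Dense(units, activation="relu"),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adagrad(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=10)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=10)
```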
The proposed CNN architecture consists of five convolutional layers, followed by ReLU activation, batch normalization (BN), and max pooling. The first two convolutional layers have 32 and 64 filters of size 4 × 4, respectively. The third convolutional layer has 128 filters of size 1 × 1. The fourth convolutional layer has 256 filters of size 1 × 1. The fifth convolutional layer has four filters of size 1 × 1, corresponding to the dataset’s number of classes. The output of the fifth convolutional layer is flattened and passed to the final layer, which is a softmax layer that outputs the class probabilities. The L2 regularization loss of the weights is used to prevent overfitting.
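A minimal Keras sketch of an architecture matching this description follows; the padding choice, the exact block ordering, and the regularization coefficient are assumptions where the text does not pin them down:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def conv_block(x, filters, kernel_size):
    # Convolution (ReLU) -> batch normalization -> max pooling,
    # with L2-regularized weights to prevent overfitting.
    x = layers.Conv2D(filters, kernel_size, padding="same", activation="relu",
                      kernel_regularizer=regularizers.l2(1e-3))(x)
    x = layers.BatchNormalization()(x)
    return layers.MaxPooling2D(2)(x)

inputs = layers.Input(shape=(224, 224, 3))
x = conv_block(inputs, 32, 4)   # first layer: 32 filters of size 4 x 4
x = conv_block(x, 64, 4)        # second layer: 64 filters of size 4 x 4
x = conv_block(x, 128, 1)       # third layer: 128 filters of size 1 x 1
x = conv_block(x, 256, 1)       # fourth layer: 256 filters of size 1 x 1
x = layers.Conv2D(4, 1)(x)      # fifth layer: 4 filters, one per class
x = layers.Flatten(name="flatten")(x)
outputs = layers.Dense(4, activation="softmax")(x)  # softmax output layer
cnn = models.Model(inputs, outputs)
```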

3.2. VGG16 with Additional Convolutional Layers

A pre-trained neural network architecture for image classification is utilized in our work to develop a more advanced CNN model. The network architecture follows VGGNet, a popular neural network architecture for image classification.
VGGNet [23] is a CNN architecture that was introduced in 2014. VGGNet is known for its straightforward architecture, which consists of 16–19 weight layers: 13–16 convolutional layers plus three fully connected layers. The convolutional layers are designed to extract features from the input image, while the fully connected layers are responsible for the classification task. VGGNet has been used in various computer vision applications, such as image classification, object detection, and semantic segmentation. One of the main benefits of VGGNet is its simplicity, which makes it easy to understand and implement. The uniform architecture of VGGNet also allows for easy experimentation with different layer configurations, which can be useful for fine-tuning the performance of the network. Additionally, VGGNet has achieved state-of-the-art results on several image classification benchmarks, such as the ImageNet Large Scale Visual Recognition Challenge. However, VGGNet also has disadvantages: the large number of parameters in the network can make it difficult to train, especially with limited computational resources, and VGGNet is relatively slow compared to some other convolutional neural network architectures due to its large number of parameters and layers.
To further improve the performance of the pre-trained VGG16 architecture, we added five additional convolutional layers, a max pooling layer, a BN layer, and a ReLU activation layer. The number of layers, the sizes of the layers, and several other hyperparameters were modified in accordance with the cross-validation methods described earlier.
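A hedged sketch of this transfer-learning setup is given below; the filter counts of the added layers and the decision to freeze the pre-trained backbone are our assumptions for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained VGG16 backbone without its original classification head.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained weights (assumption)

x = base.output
for filters in (512, 256, 128, 64, 32):  # five added conv layers; sizes hypothetical
    x = layers.Conv2D(filters, 3, padding="same")(x)
x = layers.MaxPooling2D(2)(x)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
x = layers.Flatten()(x)
outputs = layers.Dense(4, activation="softmax")(x)
vgg_model = models.Model(base.input, outputs)
```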

3.3. GCNs

In this part, we provide a comprehensive analysis of GCNs from distinct viewpoints. Additionally, we designed an upgrade to the current system by integrating CNNs with GCNs. This modification makes existing GCNs more suitable for the dementia and AD image classification problem.
At first, we cover some fundamentals of GCNs, such as the definitions and notation, as well as the formation of graphs. One-to-many relationships in non-Euclidean spaces can be explained through the use of graphs, which are highly nonlinear data structures. Here, an undirected graph is represented by $G = (V, E)$, where $V$ and $E$ stand for the sets of vertices and edges. The AD images serve as the vertex set, while the similarities between any pair of vertices ($V_i$ and $V_j$) constitute the edge set. The connections between vertices are specified by the adjacency matrix $A$. In our case, when two images are from the same class, we use the label 1; otherwise, we use the label 0 to generate the adjacency matrix. After obtaining $A$, the graph Laplacian matrix $L$ is computed as $L = D - A$, where the degrees of $A$ are represented by the diagonal matrix $D$, with $D_{i,i} = \sum_{j} A_{i,j}$.
The symmetric normalized Laplacian matrix, $L_{sym} = D^{-1/2} L D^{-1/2}$, can be utilized to improve the graph’s generalization. The propagation rule for GCNs is

$$H^{(l+1)} = h\left(D^{-1/2} L D^{-1/2} H^{(l)} W^{(l)} + b^{(l)}\right)$$

where $H^{(l)}$ denotes the output of the $l$th layer, $h(\cdot)$ is the ReLU activation function, and $W^{(l)}$ and $b^{(l)}$ are the weights and biases of the layers that must be learned.
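As a concrete illustration, the following small NumPy sketch builds the label-based adjacency matrix and the normalized propagation matrix described above; the toy labels are hypothetical stand-ins for the image classes:

```python
import numpy as np

# Hypothetical class labels for eight images (two per dementia stage).
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])

# Adjacency: A[i, j] = 1 when images i and j belong to the same class.
A = (labels[:, None] == labels[None, :]).astype(float)
np.fill_diagonal(A, 0.0)

# Degree matrix, graph Laplacian L = D - A, and symmetric normalization.
deg = A.sum(axis=1)
L = np.diag(deg) - A
d_inv_sqrt = deg ** -0.5
L_sym = d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :]  # D^{-1/2} L D^{-1/2}
```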

GCNs Architecture

The GCNs architecture is used to classify data represented as graphs, with each node representing a feature and each edge representing a relationship between nodes. The input layer receives the feature matrix of the graph as input. GCN layers use the graph’s Laplacian matrix and learnable weights to perform a graph convolution operation on the input. Finally, the output layer produces the final classification output.
The GCNs architecture defines several helper functions for creating placeholders, initializing parameters, performing GCN operations, and optimizing the network. It also defines a function for training the network and returning its accuracy. The training function takes in the training and validation data, as well as the Laplacian matrix of the graph. It initializes the network’s parameters, creates the placeholders for the input data, and defines the loss and optimization functions. We utilized the Xavier uniform initializer, then trained the network and returned the accuracy on the test set. We utilized the Proximal Adagrad optimizer as the optimization function, although we tried other optimizers to determine the best performer. With $K = 5$, we used a KNN-based graph to calculate the adjacency matrix $A$. Before feeding the features into the softmax layer, the GCNs implement a 128-unit graph convolutional hidden layer, similar to the CNN architecture described above.
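A minimal TensorFlow sketch of one graph convolutional layer implementing the propagation rule above follows, with the Xavier (Glorot) uniform initializer mentioned in the text; the two-input calling convention is an illustrative design choice:

```python
import tensorflow as tf

class GCNLayer(tf.keras.layers.Layer):
    """One graph convolution: h(L_sym X W + b), per the propagation rule above."""

    def __init__(self, units, activation=None):
        super().__init__()
        self.units = units
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        x_shape, _ = input_shape  # shapes of (feature matrix, Laplacian)
        # Xavier uniform initialization, as stated in the text.
        self.w = self.add_weight(shape=(int(x_shape[-1]), self.units),
                                 initializer="glorot_uniform")
        self.b = self.add_weight(shape=(self.units,), initializer="zeros")

    def call(self, inputs):
        x, l_sym = inputs  # node features and normalized graph Laplacian
        return self.activation(tf.matmul(l_sym, tf.matmul(x, self.w)) + self.b)

# 128-unit hidden graph convolution followed by a 4-node softmax output,
# matching the architecture described above.
hidden_layer = GCNLayer(128, activation="relu")
output_layer = GCNLayer(4, activation="softmax")
```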

3.4. CNN-GCN Architecture

Hong et al. [24] proposed the FuNet architectures, which use a range of models and/or features and train CNNs and GCNs simultaneously to classify hyperspectral images. Their study considered three fusion procedures for combining miniGCNs with the CNN model: additive (A), elementwise multiplicative (M), and concatenation (C). In this study, we propose a novel fusion network that integrates CNNs with GCNs, passing the features produced by the CNNs to the GCNs for the final classification of all the classes. The proposed fusion network architecture (CNN-GCN) is an implementation of our previously mentioned CNN model for feature extraction and the GCN model for classifying the graph nodes. In short, the features extracted from the CNN model are fed into the GCN classifier model to obtain the four nodes representing the class probabilities.
In Figure 1, the first through fourth convolution blocks each consist of a convolution layer, a BN layer, a 2D max pooling layer, and a ReLU layer. We utilize three-dimensional input images with a height of 224, a width of 224, and 3 channels. The receptive fields of the convolutional layers across the spatial and spectral domains are 4 × 4 × 32, 4 × 4 × 64, 1 × 1 × 128, and 1 × 1 × 256, respectively. We kept the number of filters in the early layers relatively low and progressively increased it in the deeper layers. The final block consists only of the convolution before flattening the features. Hence, we obtained the feature maps by utilizing these five convolution blocks. After obtaining the feature maps by training the CNNs, we determined the adjacency matrix and the training, validation, and test data to classify the four dementia stages. In the GCNs, V represents the vertices, W indicates the hidden features in the GCN layer, R denotes the hidden features through the ReLU layer, Z represents the hidden features in the softmax layer, and Y represents the outcomes. The model has a single hidden layer with 128 nodes, and the output layer has 4 nodes representing the class probabilities. The input data consist of two parts: a feature vector and an adjacency matrix representing the graph structure. The feature vector is passed through a GCN layer to produce the hidden representation. The hidden layer is concatenated with a convolutional layer to process the feature vector representation of the graph and pass it through graph convolutional layers before being flattened to a vector. The final output is obtained by passing the flattened vector through a fully connected layer with four nodes. We trained the CNN-GCN model in two stages: first, the CNN layers were trained to obtain robust feature maps, and then the GCN layers were trained to classify the dementia stages.
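Putting the pieces together, a hedged end-to-end sketch of the fusion is shown below; it reuses the cnn model and GCNLayer class from the earlier sketches, images is a placeholder input array, and the K = 5 KNN graph follows Section 3.3:

```python
import numpy as np
import tensorflow as tf
from sklearn.neighbors import kneighbors_graph

# 1. Reuse the trained CNN (Section 3.1) up to its flattened layer
#    as a feature extractor.
extractor = tf.keras.Model(cnn.input, cnn.get_layer("flatten").output)
features = extractor.predict(images)  # shape: (N, d)

# 2. Build a K = 5 KNN graph over the CNN features and normalize it.
A = kneighbors_graph(features, n_neighbors=5).toarray()
A = np.maximum(A, A.T)  # symmetrize the adjacency matrix
deg = A.sum(axis=1)
L = np.diag(deg) - A
d_inv_sqrt = deg ** -0.5
L_sym = tf.constant(d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :],
                    dtype=tf.float32)

# 3. Classify the graph nodes: 128-unit hidden GCN layer, then 4-class softmax.
h = GCNLayer(128, activation="relu")([tf.constant(features), L_sym])
probs = GCNLayer(4, activation="softmax")([h, L_sym])
```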
Gradient-weighted class activation mapping (Grad-CAM) [25] is a technique used to determine precisely which features of the input images a model captures. It achieves this by analyzing the gradients of a target concept as they flow into the final convolutional layer. Grad-CAM then generates a coarse localization map that highlights the regions of the image that are most important for the prediction. In Figure 2, we visualize how the Grad-CAM method clarifies the CNN-GCN model’s outputs for the input images, supporting the proposed CNN-GCN model’s ability to predict the stages of dementia. Hence, Grad-CAM uses targeted processing and captures the key image features, supporting the CNN-GCN model’s accuracy and robustness.
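For reference, a compact sketch of the Grad-CAM computation [25] in TensorFlow; the layer name and class index arguments are placeholders to be supplied by the caller:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    """Coarse localization map highlighting regions driving one class score."""
    grad_model = tf.keras.Model(model.input,
                                [model.get_layer(conv_layer_name).output,
                                 model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)
    # Global-average-pool the gradients to weight each feature map.
    weights = tf.reduce_mean(grads, axis=(1, 2))
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)
    return tf.nn.relu(cam)[0].numpy()  # keep only positive influence
```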

4. Results

We assess several performance indicators, including overall accuracy, F1 score, recall, and precision, to determine each model’s performance in classifying the different dementia stages. The F1 score is a metric that calculates the weighted average of recall and precision, whereas accuracy measures the proportion of correctly classified individuals. Following the development of the models, several metrics were utilized to adjust the parameters. The 5-fold cross-validation method is utilized to determine the performance of each model. Performance assessments are conducted using a multi-class approach and are represented by the confusion matrix. In this section, we display the loss curves, accuracy curves, confusion matrices, and classification-score tables based on the best-fold outputs from the 5-fold cross-validation for all models. Rather than displaying the outputs from every fold, we report only the best fold to keep the presentation of the results readable. Moreover, AD- and dementia-stage prediction is crucial, since knowing the stage enables physicians to understand more comprehensively how the disease affects the patient.

4.1. Data Description

The ADNI database is the most widely used collection of structural and functional brain imaging scans in AD research, intended to accelerate understanding of the disease and the development of treatments. We collected 6400 raw images from the ADNI database, spanning four categories: healthy people and patients with very mild, mild, and moderate dementia.
In Figure 3, we can see that we have 3200 healthy cases, 2240 very mild cases, 896 mild cases, and 64 moderate cases for training and testing the models.

4.2. Preprocessing

The dataset used for dementia and AD analysis primarily consisted of MRI image data; however, the available data were imbalanced: samples from a single class overwhelmingly dominated those from the other classes. Data preprocessing techniques were employed to augment the dataset and address this issue. Two distinct augmentation methods, namely, Gaussian noise addition and rotation, were applied. The purpose of data augmentation was to increase the quantity of data in the dataset, thereby preventing overfitting of the model. The Gaussian noise addition process adds randomly generated noise, with a specified standard deviation, to each input image. This creates noisy versions of the original images, allowing for a larger and more diverse dataset and, ultimately, more accurate predictions. By increasing the amount of data and introducing randomness, the model trained on this augmented dataset is less likely to overfit the available data, leading to potentially improved performance and generalization. Rotation is another augmentation method, performed by applying random rotations to the image dataset: a random rotation angle within a specified range is generated, and the image is rotated accordingly, creating variations in orientation. The augmented images are saved in a structured manner, replicating the directory structure of the original dataset. This process enhances the dataset by generating diverse versions of the images, which can be beneficial for subsequent analysis or training purposes.
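The two augmentation operations can be sketched as follows; the noise standard deviation, intensity range, and rotation bounds are illustrative assumptions rather than the exact settings used in this study:

```python
import numpy as np
from scipy.ndimage import rotate

def add_gaussian_noise(image, sigma=0.05):
    """Add zero-mean Gaussian noise; assumes intensities scaled to [0, 1]."""
    noisy = image + np.random.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

def random_rotation(image, max_angle=15.0):
    """Rotate by a random angle drawn uniformly from [-max_angle, max_angle]."""
    angle = np.random.uniform(-max_angle, max_angle)
    return rotate(image, angle, reshape=False, mode="nearest")
```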

4.3. Network Implementation

The TensorFlow platform is utilized to build the CNN and GCN networks, and the Adagrad optimizer [26] is employed to optimize them. The “exponential” learning rate strategy dynamically updates the current learning rate by multiplying a base learning rate (such as 0.001) by a decay factor every epoch. The maximum number of epochs allowed during network training is 100. A momentum of 0.8 is used with BN [27], and the batch size during the training phase is 32. Additionally, the weights are subject to 2-norm regularization with a coefficient of 0.001 to stabilize network training and minimize overfitting.
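These settings can be expressed in TensorFlow roughly as below, where model stands for any of the networks described earlier; the per-epoch decay factor and steps_per_epoch value are assumptions, since the text specifies only the base rate, optimizer, epoch count, batch size, and regularization:

```python
import tensorflow as tf

steps_per_epoch = 160  # hypothetical: training-set size divided by batch size 32

# "Exponential" schedule: the base rate 0.001 is multiplied by a decay
# factor once per epoch (the factor itself is an assumption).
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,
    decay_steps=steps_per_epoch,
    decay_rate=0.96,
    staircase=True)

model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=lr_schedule),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=100, batch_size=32)
```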

4.4. Evaluation Metrics

For each class of the dataset and each model, we calculated the accuracy (1), precision (2), recall (3), and F1 score (4) to evaluate the performance of the proposed techniques. True positives (TPs) are images correctly classified as belonging to a particular class. False positives (FPs) are images that belong to another class but are mistakenly assigned to that class. False negatives (FNs) are images that are part of a class but are mistakenly assigned to another class. True negatives (TNs) are images correctly classified as belonging to a different class.
$$\text{accuracy} = \frac{TP + TN}{TP + FP + FN + TN} \times 100 \quad (1)$$

$$\text{precision} = \frac{TP}{TP + FP} \quad (2)$$

$$\text{recall} = \frac{TP}{TP + FN} \quad (3)$$

$$\text{F1-score} = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \quad (4)$$
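In practice, these per-class scores can be computed directly from the predictions; a short sketch follows, where y_true and y_pred are placeholder arrays of integer class labels:

```python
from sklearn.metrics import classification_report, confusion_matrix

class_names = ["healthy", "very mild", "mild", "moderate"]

# y_true: ground-truth labels; y_pred: model predictions (placeholders).
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=class_names, digits=2))
```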
In Figure 4, the value of the loss function is plotted against the number of completed iterations. Both the training and testing losses decreased over the first fifty iterations, after which the testing loss plateaued. The deviation between the training and testing phases beyond approximately 50 iterations reflects the complexity of the model: the CNN has enough capacity to fit the training data closely, which obstructs its ability to generalize to the testing data.
The accuracy vs. iterations curve provides us with the information we need to validate the performance of the CNN model. In Figure 5, we can see that both the training and testing accuracy are increasing. This indicates that the learning progress of this model is rising to the point of 100 iterations, beyond which point it stays constant when we run it for a greater number of iterations. So, in the case of accuracy, there was not a huge deviation between the training and testing phases.
The confusion matrix produced for the CNN model is shown in Figure 6 for the classification of the healthy, very mild, mild, and moderate classes. Although there are no correct predictions for the moderate class, the number of correctly predicted healthy cases is 445, the number of correctly predicted very mild cases is 100, and the number of correctly predicted mild cases is 16. Because there are only 64 images of the moderate class, it is difficult to make precise predictions for this class.
The F1 score, precision, and recall for each of the classes under the CNN model are detailed in Table 1. The overall accuracy of this model is calculated to be 43.83%. The F1 scores for the healthy, very mild, mild, and moderate classes are 0.59, 0.30, 0.10, and 0, respectively. The CNN properly predicted 561 out of 1280 test images. According to Table 1, the category predicted with the greatest precision was healthy, while the category predicted with the least precision was moderate dementia.
In Figure 7, the value of the loss function is plotted against the number of completed iterations. Initially, the loss was high due to random parameters, but it started decreasing when the number of iterations increased. After iteration 40, both the training and testing losses were stable for the VGG16 with additional convolution layers.
We can verify the performance of the VGG16 with additional convolutional layers model from the accuracy versus iteration curve. In Figure 8, we can see that the training and testing accuracy is around 80%, which shows that the learning progress of this model increases until 40 iterations and remains fixed thereafter.
In Figure 9, the confusion matrix for the VGG16 with additional convolutional layers model for the classes healthy, very mild, mild, and moderate is shown. The number of correctly predicted healthy classes is 505, the very mild class is 383, and the mild class is 23, even though there is no correct prediction of the moderate class out of 13 moderate class test images. Since there are only 64 moderate class images, there is very little chance for a correct prediction of the moderate class.
Table 2 describes the F1 score, precision, and recall of all the classes for the VGG16 with additional convolutional layers. The overall accuracy is 71.17% for this model. The F1 scores for the healthy, very mild, mild, and moderate classes are 0.83, 0.68, 0.23, and 0. Out of 1280 test images, our model correctly predicts 911. Table 2 shows that the class most accurately predicted is healthy, while the class least accurately predicted is moderate dementia. Since there are a very small number of moderate class images, the model performance for this class is very low.
The loss function’s value versus the number of iterations is shown in Figure 10. The GCN model’s loss decreased for the training and testing images, even though the loss was very high initially. After iteration 18, the loss was fixed at around zero for both the training and testing images.
The accuracy versus iteration curve gives us the evidence we need to verify the GCN model’s efficacy. Figure 11 shows a rise in accuracy throughout both training and testing until it becomes fixed close to 1. This shows that the model’s learning rate increases up to about 20 iterations, after which it levels off and remains constant.
For the categorization of the healthy, very mild, mild, and moderate classes, the GCN model’s resulting confusion matrix is shown in Figure 12. The total number of correctly predicted healthy cases is 640; for very mild cases, this is 448; and for mild cases, this is 180, although no moderate cases were identified among the 12 moderate test images. Even though the GCN model was able to predict all of the other category images accurately, it was unable to predict the moderate dementia cases.
Table 3 shows the F1 score, precision, and recall of the GCN model for each class. The overall accuracy of this model is 99.06%. The F1 scores for the healthy, very mild, mild, and moderate classes are 1, 0.99, 1, and 0, respectively. Out of a total of 1280 test images, the GCN algorithm successfully predicted 1268. The most accurate predictions were made for the healthy and mild categories, while the least accurate prediction was made for the moderate dementia category.
Figure 13 depicts the loss function’s value versus the number of iterations. Even though the loss was initially quite high for the test images, the CNN-GCN model’s loss decreased abruptly. After the first couple of iterations, the loss for both the training and testing images became fixed at approximately zero.
The accuracy versus iteration curve provides the evidence necessary to confirm the efficacy of the CNN-GCN model. Figure 14 depicts a rise in accuracy during both training and testing until it approaches 1 and stabilizes. This demonstrates that the model’s learning rate increases after a couple of iterations before leveling off and remaining constant.
Figure 15 displays the confusion matrix produced by the CNN-GCN model for the categorization of the healthy, very mild, mild, and moderate classes. The number of correctly predicted healthy cases is 641, very mild cases is 448, mild cases is 179, and moderate cases is 12. All category images are predicted correctly using the CNN-GCN model.
Table 4 shows the F1 score, precision, and recall of the CNN-GCN model for each class. We obtained an overall accuracy of 100% from the CNN-GCN model. The F1 scores for the healthy, very mild, mild, and moderate classes are 1, 1, 1, and 1, respectively. The CNN-GCN algorithm successfully predicts all of the 1280 images.
Our proposed CNN-GCN model achieves 100% accuracy on both the training and test data and may therefore overfit the training data by capturing even irrelevant and abnormal patterns, such as noise and outliers. Consequently, the model may exhibit sub-par performance when presented with novel, unfamiliar data. The high accuracy may indicate a lack of generalizability of the model to novel contexts or datasets, which is particularly crucial when dealing with health-related data, since such data can exhibit significant variability. Hence, it is important to verify the accuracy of the CNN-GCN model by using distinct test datasets and ensuring its efficacy in real-world scenarios rather than relying solely on controlled experimental settings. We therefore collected a separate dataset on which to implement our proposed CNN-GCN model. Neeraj [28] provided a dataset on Kaggle consisting of 2D images derived from the ADNI baseline dataset, which was originally in NIfTI format. The dataset has three distinct classes: AD, MCI, and CN. After implementing our proposed CNN-GCN model, we present the confusion matrix in Figure 16 for categorizing the AD, MCI, and CN classes. There are 8 accurately predicted instances of AD, 21 of MCI, and 15 of CN. The CNN-GCN model accurately predicted all of the category images, which illustrates its potential for use with novel and unfamiliar data.

5. Discussion

Dementia encompasses the very mild, mild, and moderate cognitive impairment phases, which may or may not evolve into AD, the most common form of dementia. Because the cognitive impairment stages are the period during which AD may or may not develop, it is of the utmost importance to identify individuals appropriately during this stage [29]. Identifying and diagnosing the illness at its various phases helps doctors devise more effective therapy and management solutions.
So, in this study, we developed a CNN model, a transfer-learning-based CNN model, a GCN model, and the proposed fusion network model (CNN-GCN) for identifying AD and dementia stages.
In Table 5, HC, CN, MCI, EMCI, and AD represent healthy control, normal control, mild cognitive impairment, early mild cognitive impairment, and Alzheimer’s disease, respectively. The works listed are based on different datasets and different methods; some authors considered multi-class classification, and others considered binary classification. Since our CNN-GCN approach is new for classifying the healthy, very mild dementia, mild dementia, and moderate dementia classes, to our knowledge this technique has not previously been utilized for such classification tasks. We present each method’s accuracy, defined as the percentage of accurate predictions on the test set.
In Table 6, we present the GPU times in seconds for the four methods. The CNN, pre-trained VGG16 with additional convolutional layers, GCN, and CNN-GCN models took 411, 2364, 64, and 95 s, respectively, to complete 100 epochs on an NVIDIA T4 Tensor Core GPU. The CNN-based methods are more computationally expensive than the GCN model; our proposed CNN-GCN model is less computationally expensive and provides better accuracy.
The performance of the models differed significantly, and the CNN model performed very poorly compared to the others. We tried several optimizers, including gradient descent, Adam, RMSprop, and the proximal Adagrad optimizer, and through hyperparameter tuning we found Adagrad to be the best. Since the AD MRI dataset is imbalanced, the performance of the CNN model was not sufficiently accurate in all cases.
We incorporated the VGG16 transfer learning model with the five additional convolutional layers to improve the performance of the CNN model. By utilizing the VGG16 model with additional convolutional layers, we obtained better accuracy compared to our developed CNN model. To determine the best transfer learning model, we also implemented the DenseNet, MobileNet, InceptionNet, and ResNet transfer learning models and obtained overall accuracies of 70.70%, 70.37%, 65%, and 59.69%, respectively. We found that VGG16 is the best transfer learning model in our case. Although the performance of the VGG16 with additional convolutional layers model seems to lag behind that of competing approaches, our findings are on par with other CNN models developed so far for biomedical imaging.
Even though CNN-based models are very popular for analyzing image datasets, we developed the GCN model for AD- and dementia-stage prediction because GCNs provide a strong and flexible representation of the connections between the many components of a complex AD image. AD MRI images often include complicated structures, patterns, and interactions, which graph-based methods can comprehend and examine to a greater degree. The graph-based methods were helpful in the prediction because they take into account the global context and linkages, which helps mitigate the impact of class imbalance. For the GCN and CNN-GCN models, we applied hyperparameter tuning and the above-mentioned optimizers and obtained almost perfect accuracy in each case. The GCN model’s overall accuracy was 99.06%; it correctly predicted the healthy, very mild, and mild dementia classes but could not correctly predict any of the moderate dementia test images. We then utilized our proposed CNN-GCN fusion network to classify the AD and dementia stages: we developed the same GCN model but utilized the features from the CNN model as the input data. In this way, we obtained an overall accuracy of 100% from the CNN-GCN model, which is supplied with features extracted by the CNNs. Only the CNN-GCN model was able to predict the moderate-class test images accurately, whereas all the other previously mentioned models could not. This study provides evidence that a CNN-GCN modeling approach can work well and obtain high accuracy for applications involving imbalanced data.

6. Conclusions

To sum up, we created four models for identifying healthy, very mild, mild, and moderate dementia patients by utilizing both CNN- and GCN-based algorithms. Compared to the CNN model, the performance of the pre-trained VGG16 with additional convolutional layers model is superior, and when measured against other works in the literature, its performance is satisfactory. Even though we achieved an overall accuracy of 99.06% by utilizing the GCN model, it could not accurately predict any of the 12 test images from the moderate stage of dementia. The CNN-GCN model demonstrated the highest F1 score, precision, and recall, as well as an overall accuracy of 100%, of all the algorithms that we examined, and it was able to predict all the classes accurately, including the 12 test images from the moderate class. The major limitation of this project is the imbalanced dataset. Although low accuracy is common for imbalanced datasets in the field of biomedical imaging, our proposed GCN and CNN-GCN models worked excellently with the imbalanced dataset. The model’s performance may not yet translate directly to clinical diagnosis, but it marks a significant development in the classification of AD and the different stages of dementia.

Author Contributions

Data curation, M.E.H.; methodology, M.E.H.; writing, M.E.H. and A.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Institutional Review Board Statement

The data for this article were collected from ADNI. The ADNI study was conducted according to Good Clinical Practice guidelines, the Declaration of Helsinki, US 21 CFR Part 50—Protection of Human Subjects, and Part 56—Institutional Review Boards, and pursuant to state and federal HIPAA regulations. Each participating site obtained ethical approval from its Institutional Review Board before commencing subject enrollment. Therefore, separate Institutional Review Board approval does not apply to this manuscript.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. The human data were acquired from the publicly available ADNI database, which meets the ethics requirements.

Data Availability Statement

The subject data used in this study were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu, accessed on 10 August 2023). Meanwhile, data supporting the findings of this study are available from the corresponding authors upon reasonable request.

Acknowledgments

The authors express their gratitude to the anonymous reviewers who contributed to raising the paper’s standard.

Conflicts of Interest

The authors declare that this study was conducted without any commercial or financial relationship that could be considered a potential conflict of interest.

References

1. WHO. Dementia; WHO: Geneva, Switzerland, 2023.
2. Javeed, A.; Dallora, A.; Berglund, J.; Ali, A.; Ali, L.; Anderberg, P. Machine Learning for Dementia Prediction: A Systematic Review and Future Research Directions. J. Med. Syst. 2023, 47, 17.
3. NIH. What Is Dementia; NIH: Bethesda, MD, USA, 2022.
4. Alzheimer’s Society. What Is the Difference between Dementia and Alzheimer’s Disease? Alzheimer’s Society: London, UK, 2023.
5. NIH. Mild Cognitive Impairment; NIH: Bethesda, MD, USA, 2022.
6. NIH. Mild Cognitive Impairment; NIH: Bethesda, MD, USA, 2021.
7. NIH. What Is Alzheimer’s; NIH: Bethesda, MD, USA, 2021.
8. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 25.
9. Lim, B.Y.; Lai, K.W.; Haiskin, K.; Kulathilake, K.; Ong, Z.C.; Hum, Y.C.; Dhanalakshmi, S.; Wu, X.; Zuo, X. Deep learning model for prediction of progressive mild cognitive impairment to Alzheimer’s disease using structural MRI. Front. Aging Neurosci. 2022, 14, 560.
10. Basaia, S.; Agosta, F.; Wagner, L.; Canu, E.; Magnani, G.; Santangelo, R.; Filippi, M.; Alzheimer’s Disease Neuroimaging Initiative. Automated classification of Alzheimer’s disease and mild cognitive impairment using a single MRI and deep neural networks. Neuroimage Clin. 2019, 21, 101645.
11. Jiang, J.; Kang, L.; Huang, J.; Zhang, T. Deep learning based mild cognitive impairment diagnosis using structure MR images. Neurosci. Lett. 2020, 730, 134971.
12. Aderghal, K.; Afdel, K.; Benois-Pineau, J.; Catheline, G. Improving Alzheimer’s stage categorization with Convolutional Neural Network using transfer learning and different magnetic resonance imaging modalities. Heliyon 2020, 6, e05652.
13. Basheera, S.; Ram, M.S.S. A novel CNN based Alzheimer’s disease classification using hybrid enhanced ICA segmented gray matter of MRI. Comput. Med. Imaging Graph. 2020, 81, 101713.
14. Acharya, U.R.; Fernandes, S.L.; WeiKoh, J.E.; Ciaccio, E.J.; Fabell, M.K.M.; Tanik, U.J.; Rajinikanth, V.; Yeong, C.H. Automated detection of Alzheimer’s disease using brain MRI images—A study with various feature extraction techniques. J. Med. Syst. 2019, 43, 1–14.
15. Nagarathna, C.; Kusuma, M.; Seemanthini, K. Classifying the stages of Alzheimer’s disease by using multi layer feed forward neural network. Procedia Comput. Sci. 2023, 218, 1845–1856.
16. Kapadnis, M.N.; Bhattacharyya, A.; Subasi, A. Artificial intelligence based Alzheimer’s disease detection using deep feature extraction. In Applications of Artificial Intelligence in Medical Imaging; Elsevier: Amsterdam, The Netherlands, 2023; pp. 333–355.
17. Wee, C.Y.; Liu, C.; Lee, A.; Poh, J.S.; Ji, H.; Qiu, A.; Alzheimer’s Disease Neuroimaging Initiative. Cortical graph neural network for AD and MCI diagnosis and transfer learning across populations. Neuroimage Clin. 2019, 23, 101929.
18. Guo, J.; Qiu, W.; Li, X.; Zhao, X.; Guo, N.; Li, Q. Predicting Alzheimer’s disease by hierarchical graph convolution from positron emission tomography imaging. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 5359–5363.
19. Park, S.; Yeo, N.; Kim, Y.; Byeon, G.; Jang, J. Deep learning application for the classification of Alzheimer’s disease using 18F-flortaucipir (AV-1451) tau positron emission tomography. Sci. Rep. 2023, 13, 8096.
20. Tajammal, T.; Khurshid, S.; Jaleel, A.; Wahla, S.Q.; Ziar, R.A. Deep Learning-Based Ensembling Technique to Classify Alzheimer’s Disease Stages Using Functional MRI. J. Healthc. Eng. 2023, 2023, 6961346.
21. Liu, S.; Masurkar, A.; Rusinek, H.; Chen, J.; Zhang, B.; Zhu, W.; Fernandez-Granda, C.; Razavian, N. Generalizable deep learning model for early Alzheimer’s disease detection from structural MRIs. Sci. Rep. 2022, 12, 17106.
22. O’Malley, T.; Bursztein, E.; Long, J.; Chollet, F.; Jin, H.; Invernizzi, L. KerasTuner. 2019. Available online: https://github.com/keras-team/keras-tuner (accessed on 1 August 2023).
23. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
24. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph convolutional networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5966–5978.
25. Selvaraju, R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
26. Duchi, J.; Hazan, E.; Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 2011, 12, 2121–2159.
27. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 448–456.
28. Kumar, N. ADNI-Extracted-Axial. 2021. Available online: https://www.kaggle.com/ds/1830702 (accessed on 10 August 2023).
29. Gupta, Y.; Lama, R.K.; Kwon, G.R.; Alzheimer’s Disease Neuroimaging Initiative. Prediction and classification of Alzheimer’s disease based on combined features from apolipoprotein-E genotype, cerebrospinal fluid, MR, and FDG-PET imaging biomarkers. Front. Comput. Neurosci. 2019, 13, 72.
30. Payan, A.; Montana, G. Predicting Alzheimer’s disease: A neuroimaging study with 3D convolutional neural networks. arXiv 2015, arXiv:1502.02506.
31. Khvostikov, A.; Aderghal, K.; Benois-Pineau, J.; Krylov, A.; Catheline, G. 3D CNN-based classification using sMRI and MD-DTI images for Alzheimer disease studies. arXiv 2018, arXiv:1801.05968.
32. Valliani, A.; Soni, A. Deep residual nets for improved Alzheimer’s diagnosis. In Proceedings of the 8th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, Boston, MA, USA, 20–23 August 2017; p. 615.
33. Helaly, H.; Badawy, M.; Haikal, A. Deep learning approach for early detection of Alzheimer’s disease. Cogn. Comput. 2021, 14, 1711–1727.
Figure 1. The architecture of the proposed CNN-GCN model.
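Alongside Figure 1, a compact sketch may help readers see how a CNN feature extractor can feed a GCN classifier. The following TensorFlow/Keras code is an illustrative reconstruction only: the layer sizes, the 128 x 128 input shape, the k-nearest-neighbor graph construction, and the names make_cnn, normalized_adjacency, and GCNLayer are all assumptions made for this example, not the authors' implementation.

    # Minimal sketch of a CNN-to-GCN pipeline (illustrative assumptions throughout).
    import numpy as np
    import tensorflow as tf

    def make_cnn(input_shape=(128, 128, 1)):
        # Small CNN backbone; its pooled output acts as a per-scan feature vector.
        return tf.keras.Sequential([
            tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
        ])

    def normalized_adjacency(features, k=8):
        # Connect each scan to its k nearest neighbors in feature space, then
        # symmetrically normalize: A_hat = D^{-1/2} (A + I) D^{-1/2}.
        dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
        neighbors = np.argsort(dist, axis=1)[:, 1:k + 1]   # skip self (column 0)
        a = np.zeros_like(dist)
        a[np.repeat(np.arange(len(dist)), k), neighbors.ravel()] = 1.0
        a = np.maximum(a, a.T) + np.eye(len(dist))         # symmetrize, add self-loops
        d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
        return (a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]).astype("float32")

    class GCNLayer(tf.keras.layers.Layer):
        # One graph convolution: H' = relu(A_hat @ H @ W).
        def __init__(self, units):
            super().__init__()
            self.dense = tf.keras.layers.Dense(units, use_bias=False)

        def call(self, a_hat, h):
            return tf.nn.relu(tf.matmul(a_hat, self.dense(h)))

In this sketch, one would train the CNN, compute features = cnn.predict(images) for all scans, build a_hat = normalized_adjacency(features), and stack two GCNLayer instances followed by a four-way softmax Dense layer to obtain the dementia-stage predictions.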
Figure 2. Moderate dementia image: (a) original and (b) Grad-CAM image.
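The heatmap in panel (b) of Figure 2 is the kind of output produced by Grad-CAM [25]. As a minimal sketch of how such a map is commonly computed in TensorFlow/Keras (the grad_cam function and its conv_layer_name argument are illustrative assumptions, not the authors' code):

    import tensorflow as tf

    def grad_cam(model, image, conv_layer_name):
        # Re-route the model so it returns both the chosen conv feature maps
        # and the final predictions for a single input image.
        grad_model = tf.keras.Model(
            model.inputs,
            [model.get_layer(conv_layer_name).output, model.output],
        )
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[None, ...])
            top_class_score = preds[:, tf.argmax(preds[0])]
        # Gradient of the top class score w.r.t. each feature map, pooled
        # to one importance weight per channel.
        grads = tape.gradient(top_class_score, conv_out)
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))
        # Weighted sum of feature maps; keep positive evidence, normalize.
        cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
        cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)
        return cam.numpy()

The resulting low-resolution map is then resized to the slice dimensions and overlaid on the original image, as in panel (b).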
Figure 3. Total number of images by each category.
Figure 4. Epoch versus cost for the CNN model.
Figure 5. Epoch versus accuracy for the CNN model.
Figure 6. Confusion matrix for the CNN model.
Figure 7. Epoch versus cost for the VGG16 with additional convolutional layers model.
Figure 8. Epoch versus accuracy for the VGG16 with additional convolutional layers model.
Figure 9. Confusion matrix for the VGG16 with additional convolutional layers model.
Figure 10. Epoch versus cost for the GCN model.
Figure 11. Epoch versus accuracy for the GCN model.
Figure 12. Confusion matrix for the GCN model.
Figure 13. Epoch versus cost for the CNN-GCN model.
Figure 14. Epoch versus accuracy for the CNN-GCN model.
Figure 15. Confusion matrix for the CNN-GCN model.
Figure 16. Confusion matrix for the CNN-GCN model for classifying AD, MCI, and CN.
Table 1. Classification scores for the CNN model.

Class       n (Classified)   n (Truth)   F1 Score   Recall   Precision
Healthy     853              645         0.59       0.69     0.52
Very Mild   223              452         0.30       0.22     0.45
Mild        152              173         0.10       0.09     0.11
Moderate    52               10          0.00       0.00     0.00
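As a consistency check on Tables 1–4 (our own arithmetic from the standard definitions, not an additional result of the paper), the columns of each row are tied together by

\[
\mathrm{Precision}_c = \frac{TP_c}{n_{\mathrm{classified}}(c)}, \qquad
\mathrm{Recall}_c = \frac{TP_c}{n_{\mathrm{truth}}(c)}, \qquad
F1_c = \frac{2\,\mathrm{Precision}_c\,\mathrm{Recall}_c}{\mathrm{Precision}_c + \mathrm{Recall}_c},
\]

where \(TP_c\) is the number of correctly classified scans of class \(c\). For the Healthy row of Table 1, \(TP \approx 0.52 \times 853 \approx 444\), so \(\mathrm{Recall} \approx 444/645 \approx 0.69\) and \(F1 \approx 2(0.52)(0.69)/(0.52 + 0.69) \approx 0.59\), matching the reported values.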
Table 2. Classification scores for the VGG16 with additional convolutional layers model.

Class       n (Classified)   n (Truth)   F1 Score   Recall   Precision
Healthy     553              667         0.83       0.76     0.91
Very Mild   701              429         0.68       0.87     0.55
Mild        26               171         0.23       0.13     0.88
Moderate    0                13          0.00       0.00     0.00
Table 3. Classification scores for the GCN model.

Class       n (Classified)   n (Truth)   F1 Score   Recall   Precision
Healthy     640              640         1          1        1
Very Mild   460              448         0.99       1        0.97
Mild        180              180         1          1        1
Moderate    0                12          0.00       0.00     0.00
Table 4. Classification scores for the CNN-GCN model.

Class       n (Classified)   n (Truth)   F1 Score   Recall   Precision
Healthy     641              641         1          1        1
Very Mild   448              448         1          1        1
Mild        179              179         1          1        1
Moderate    12               12          1          1        1
Table 5. Review of selected existing works for the classification of AD and MCI.

Paper                    No. of Classes                                     CNNs                     Pre-Trained VGG   GCNs     CNN-GCN   ResNet-50   VGG16-SVM
Our work                 Multi-class (4 way)                                43.83%                   71.17%            99.06%   100%      59.69%      -
Lim et al. [9]           Multi-class (3 way: CN vs. MCI vs. AD)             72.70%                   78.57%            -        -         -           75.71%
Jiang et al. [11]        Binary (EMCI vs. NC)                               -                        89.4%             -        -         -           -
Payan et al. [30]        Binary (AD vs. HC, MCI vs. HC, AD vs. MCI)         95.39%, 92.13%, 86.84%   -                 -        -         -           -
Khvostikov et al. [31]   Binary (AD vs. HC, MCI vs. HC, AD vs. MCI)         93.3%, 73.3%, 86.7%      -                 -        -         -           -
Valliani et al. [32]     Multi-class (3 way: AD vs. MCI vs. CN)             49.2%                    -                 -        -         50.8%       -
Helaly et al. [33]       Multi-class (4 way: AD vs. EMCI vs. LMCI vs. NC)   93%                      -                 -        -         -           -
Table 6. Numerical comparison of the GPU times for all the models.

Model          CNNs   VGG16 with Additional Convolutional Layers   GCNs   CNN-GCN
GPU time (s)   411    2364                                          64     95
