Article

Zoom-In Neural Network Deep-Learning Model for Alzheimer’s Disease Assessments

Department of Computer Science, Gachon University, Sujeong-gu, Seongnam-si 13557, Gyeonggi-do, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2022, 22(22), 8887; https://doi.org/10.3390/s22228887
Submission received: 18 October 2022 / Revised: 9 November 2022 / Accepted: 11 November 2022 / Published: 17 November 2022
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)

Abstract

Deep neural networks have been successfully applied to generate predictive patterns from medical and diagnostic data. This paper presents an approach for assessing persons with Alzheimer’s disease (AD) and mild cognitive impairment (MCI), compared with normal control (NC) persons, using the zoom-in neural network (ZNN) deep-learning algorithm. The ZNN stacks a set of zoom-in learning units (ZLUs) in a feedforward hierarchy without backpropagation. The resting-state fMRI (rs-fMRI) dataset for the AD assessments was obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). The Automated Anatomical Labeling (AAL-90) atlas, which provides 90 neuroanatomical functional regions, was used to assess and detect the regions implicated in the course of AD. The ZNN features are extracted from the 140-time-point series of rs-fMRI voxel values in each brain region. The ZNN achieves classification accuracies of 97.7%, 84.8%, and 72.7% for AD versus MCI and NC, NC versus AD and MCI, and MCI versus AD and NC, respectively, using the seven discriminative regions of interest (ROIs) in the AAL-90.

1. Introduction

Neuroscience has provided inspiration for and insight into artificial neural networks, including deep-learning networks. However, the field of artificial neural networks has been optimized through mathematics rather than neuroscientific findings [1]. In addition, there is a view that the backpropagation (backprop for short) rule itself is neurobiologically unrealistic [2]. Backprop networks are neuron-like supervised learning systems that adjust the synaptic weights of the connections between neuron units [3]. However, whether and how such a mechanism could be implemented in the brain remains difficult to establish [4].
Deep-learning algorithms with a hierarchical architecture are exploited as representation learning schemes, including restricted Boltzmann machines (RBMs), which use local unsupervised pretraining of layers from the bottom to the top [5,6,7]. A deep belief network (DBN) is composed of a stack of RBMs, after which full backpropagation fine-tuning is performed. Recurrent neural networks (RNNs) are suitable for modeling sequential data [8]. A convolutional neural network (CNN) [9] consists of one or more convolutional and subsampling layers, followed by a multilayer neural network trained with supervised backprop. Deep Q-learning [10] demonstrates deep reinforcement learning in which a deep model is trained by backprop. Although backprop is the main training algorithm for deep models, it introduces obstacles to learning, such as local minima, slow convergence [11], and the vanishing gradient problem [12].
A simple feedback alignment mechanism, which adjusts weights by multiplying errors by fixed random synaptic weights, performs as effectively as backprop [13]. The algorithm uses random feedback weights instead of the symmetric backward connectivity pattern required by backpropagation.
One characteristic of deep-learning algorithms that use unsupervised pretraining is that the pretraining starts from random weights. Biological learning is better suited to optimizing weight initialization because it draws on a vast number of trials and errors. In contrast, because backprop consumes considerable time to correct the weights after initialization, optimization over many different weight initializations is difficult to achieve.
An intuitive way to consider diverse brain learning paradigms is to find another subpattern that differs from the main pattern. For example, ambiguous forms of a particular pattern can be learned by taking a more detailed look, as with a magnifying glass. This allows more accurate judgments to be made by learning the causes of a misjudgment, prejudice, or misclassification. In other words, learning can come from mistakes that would otherwise be discarded or ignored. At the same time, it is possible to learn more precisely from only the refined instances. This paradigm reinforces learning efficiency through rewards from both mistakes and well-learned domains simultaneously.
According to Wikipedia, metacognition is an awareness of one’s own thought processes and an understanding of the patterns behind them. Applied to a person, it means that the person understands his or her own learning, judgment, or thinking process. We apply this concept to the learning algorithm in this paper. We propose a feedforward zoom-in neural network (ZNN) deep-learning algorithm that implements a learning paradigm called metacognitive learning. A ZNN stacks layers, each of which consists of a set of zoom-in learning units (ZLUs), in a feedforward manner. In a ZLU, metacognitive information is accumulated through subpattern learning and fine-tuning learning according to the results of the standard learning. Through subpattern learning, another pattern is found as metacognitive information from the errors of the standard learning. Through fine-tuning learning, a more detailed pattern for the standard learning is found as metacognitive information.
The outputs of the ZLUs in a layer are forwarded to a higher layer to achieve a gradual improvement in the discrimination power. This brings about a dimensionality reduction with structural flexibility by decreasing the number of ZLUs as the learning of the ZNN moves to higher-level layers [14].

2. Zoom-In Neural Network with Subpattern and Refined Learning

In this paper, a deep-learning model based on the neural network with weighted fuzzy membership functions (NEWFM) [15] is proposed. The NEWFM is a supervised feedforward neuro-fuzzy system that uses the bounded sum of weighted fuzzy membership functions (BSWFMs) for classification. The structure of the NEWFM is composed of three layers: the input layer, the hyperbox layer, and the output layer. The NEWFM reinforces the BSWFMs in the hyperbox layer during training, adjusting the BSWFM assigned to each feature. The output values of the output layer are defuzzified by the Takagi–Sugeno method [16], a type of fuzzy model that generates defuzzified values in a non-linear manner.
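To make the defuzzification step concrete, the following minimal Python sketch illustrates the idea of a bounded sum of weighted fuzzy memberships and a zero-order Takagi–Sugeno-style defuzzification. The triangular membership functions, the weights, and the function names are illustrative assumptions; the actual BSWFM construction and learning rules of the NEWFM [15] are more elaborate.

```python
import numpy as np

def triangular(x, left, center, right):
    """Triangular fuzzy membership of x in the fuzzy set (left, center, right)."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

def bswfm(x, fuzzy_sets, weights):
    """Bounded sum of weighted fuzzy memberships for one feature value x.
    fuzzy_sets: list of (left, center, right) triples; weights: one weight per set."""
    total = sum(w * triangular(x, *fs) for fs, w in zip(fuzzy_sets, weights))
    return min(1.0, total)  # bounded sum: cap the activation at 1

def takagi_sugeno_defuzzify(feature_values, feature_fuzzy_sets, feature_weights, consequents):
    """Zero-order Takagi-Sugeno-style defuzzification: a weighted average of crisp
    consequent values, weighted by the per-feature BSWFM activations."""
    strengths = np.array([bswfm(x, fs, w) for x, fs, w in
                          zip(feature_values, feature_fuzzy_sets, feature_weights)])
    consequents = np.asarray(consequents, dtype=float)
    if strengths.sum() == 0.0:
        return 0.0
    return float(strengths @ consequents / strengths.sum())

# Example: one feature with three triangular sets over [0, 1]
sets = [(0.0, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.0)]
print(bswfm(0.4, sets, [0.2, 0.9, 0.4]))  # bounded weighted activation of the value 0.4
```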

2.1. Structure of Zoom-In Neural Network

The proposed ZNN is a supervised feedforward deep-learning system that uses ZLUs implemented by NEWFMs. The structure of the ZNN, illustrated in Figure 1, is composed of an input layer, multiple ZLU layers, and an output layer. The input layer is split into subsets of input features according to their characteristics [17]. Each ZLU in a ZLU layer performs standard learning followed by supervised subpattern and refined learning to generate an output for the next ZLU layer or the output layer. The main contribution and advantage of the ZNN is that it learns without backpropagation, and hence without the vanishing gradient problem, while using metacognitive information during the learning process. This is because every ZLU layer executes supervised subpattern and refined learning for the next ZLU layer.

2.2. Zoom-In Learning Unit Processes

The basic processes of the ZNN are executed by the neuronal ZLU module. Each ZLU generates a subpattern output and a refined defuzzified output for the next layer; both outputs carry metacognitive information.
A ZLU consists of two-part learning, as shown in Figure 2. The bottom NEWFM performs standard training on the split input features. The input instances are then classified into two groups, misclassified instances (MIs) and correctly classified instances (CCIs), by the bottom NEWFM, which is called instance grouping in Figure 2. The top two NEWFMs perform subpattern and refined training on the MIs and CCIs, respectively, called zoom-in training, to find a subpattern of the MIs and a refined pattern of the CCIs. These patterns are used as metacognitive information. The subpattern training on the MIs attempts to find another pattern ignored by the standard training, whereas the refined training learns in detail from the noise-reduced CCIs. For the test process after training, the input instances of a ZLU are fed directly to the top two NEWFMs to produce the output for the next layer.
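The following Python sketch illustrates the ZLU flow described above. A scikit-learn logistic regression is used only as a stand-in for the three NEWFMs (standard, subpattern, and refined); the class and method names are our own assumptions and do not reproduce the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in for a NEWFM (assumption)

class ZoomInLearningUnit:
    """One ZLU: standard training, instance grouping, then zoom-in training."""

    def __init__(self):
        self.standard = LogisticRegression(max_iter=1000)  # bottom NEWFM stand-in
        self.sub = LogisticRegression(max_iter=1000)       # subpattern NEWFM, trained on MIs
        self.refined = LogisticRegression(max_iter=1000)   # refined NEWFM, trained on CCIs

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        # 1) Standard training on this ZLU's split of the input features
        self.standard.fit(X, y)
        pred = self.standard.predict(X)
        # 2) Instance grouping: misclassified instances (MI) vs. correctly classified (CCI)
        mi, cci = pred != y, pred == y
        # 3) Zoom-in training: a subpattern from the MIs and a refined pattern from the CCIs
        self.sub.fit(*self._usable(X, y, mi))
        self.refined.fit(*self._usable(X, y, cci))
        return self

    def transform(self, X):
        # Test path (dotted lines in Figure 2): only the two zoom-in models produce
        # the outputs for the next layer, analogous to the TSDs {T_M} and {T_C}.
        t_m = self.sub.predict_proba(X)[:, 1]
        t_c = self.refined.predict_proba(X)[:, 1]
        return np.column_stack([t_m, t_c])

    @staticmethod
    def _usable(X, y, mask):
        # Guard: a group that lacks both classes cannot train a classifier on its own
        if mask.sum() < 2 or len(np.unique(y[mask])) < 2:
            return X, y
        return X[mask], y[mask]
```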
The first advantage of a ZLU module is that the load of learning can be distributed across the ZLUs according to the divided input features. Second, ZLU modules can be assembled flexibly to find a pattern of functional connectivity of neural networks. Third, the cost function of each ZLU module can first be applied locally, which may mirror the cognitive processes of the human brain, where hierarchical inferences are made by fusing or filtering out local decisions [18].
The main contribution of the ZLU is its zoom-in learning process. A neuroscientific basis for this learning scheme is the metacognitive learning of the human brain, which monitors uncertainty regarding a decision [19,20], analogous to the standard training of a ZLU followed by a revision of the decision in zoom-in training.

2.3. Zoom-In Neural Network Algorithm

The ZLU is the basic unit module of the ZNN algorithm. Each ZLU module generates two outputs, a subpattern output and a refined output, for the next ZLU layer. Layerwise supervised learning in the ZNN improves the learning ability by zooming in on the input patterns using the ZLUs. The feedforward learning of a ZNN, without the backpropagation used in other deep-learning algorithms, provides an efficient learning method and eliminates the vanishing gradient problem.
The ZNN algorithm is divided into a training part and a testing part; Algorithms 1 and 2 show the training and test algorithms, respectively (a Python sketch of this flow is given after Algorithm 2). Once the standard training, instance grouping, and zoom-in training are completed in a ZLU, as indicated by the solid lines in Figure 2, testing only runs the zoom-in test along the dotted lines in Figure 2. Based on the feature selection function built into a NEWFM, features can be selected during the standard training in a ZLU to reduce the input dimensionality of every ZLU layer. The output of the j-th ZLU in the i-th layer is represented by the Takagi–Sugeno defuzzifications (TSDs) {Ti,j,M} and {Ti,j,C} for the subpattern and refined outputs, respectively. The detailed processes of the ZNN algorithm are as follows (Algorithms 1 and 2):
Algorithm 1: Train Algorithm of ZNN
1 Input Layer
 1.1 Split train input data with n features into the k subsets with
   n/k features such that {I0,1}, {I0,2}, …, {I0,k}
2 i-th ZLU layer (initially i = 1)
 2.1 Let ZLUi,j be j-th ZLU in i-th ZLU layer
 2.2 For each {Ii−1,j}, where j = 1 to k
  2.2.1 Assign {Ii−1,j} to ZLUi,j as input
 2.3 For each ZLUi,j, where j = 1 to k
  2.3.1 Standard training
   2.3.1.1 Standard training using NEWFMi,j from {Ii−1,j}
   2.3.1.2 Instance grouping: divide {Ii−1,j} instances into
      misclassified instances (MI) {Mi,j} and correctly
      classified instances (CCI) {Ci,j}
  2.3.2 Zoom-in training
   2.3.2.1 Subpattern training from {Mi,j} using
      NEWFMi,j,M
   2.3.2.2 Refine training from {Ci,j} using NEWFMi,j,C
  2.3.3 Zoom-in output
 2.3.3.1 Output TSD {Ti,j,M} using NEWFMi,j,M from {Ii−1,j}
 2.3.3.2 Output TSD {Ti,j,C} using NEWFMi,j,C from {Ii−1,j}
 2.4 Split the TSDs {Ti,1,M, Ti,1,C, Ti,2,M, Ti,2,C, …, Ti,k,M, Ti,k,C}
    into {Ii,1}, {Ii,2}, …, {Ii,n}, where n is the number of ZLUs
    in the (i + 1)-th ZLU layer
 2.5 i = i + 1, k = n, and go to 2 until predefined i is reached
3 Output Layer
 3.1 Train by the NEWFMi using {Ii−1,1}, {Ii−1,2}, …, {Ii−1,k}
Algorithm 2: Test Algorithm of ZNN
1 Input Layer
 1.1 Split test input data with n features into the k subsets with
   n/k features such that: {I0,1}, {I0,2}, …, {I0,k} as in 1.1 of
   Train Algorithm of ZNN
2 i-th ZLU Layer (initially i = 1)
 2.1 For each {Ii−1,j}, where j = 1 to k
  2.1.1 Assign {Ii−1,j} to ZLUi,j as input
  2.1.2 Zoom-in test:
    2.1.2.1 Output TSD {Ti,j,M} using NEWFMi,j,M from {Ii−1,j}
    2.1.2.2 Output TSD {Ti,j,C} using NEWFMi,j,C from {Ii−1,j}
 2.2 Split the TSDs {Ti,1,M, Ti,1,C, Ti,2,M, Ti,2,C, …, Ti,k,M, Ti,k,C}
    into {Ii,1}, {Ii,2}, …, {Ii,n}, where n is the number of ZLUs
    in the (i + 1)-th ZLU layer
 2.3 i = i + 1, k = n, and go to 2 until the output layer is reached
3 Output Layer
 3.1 Output TSD {Ti} by the NEWFMi using {Ii−1,1}, {Ii−1,2}, …,
   {Ii−1,k}
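As a rough illustration of Algorithms 1 and 2, the sketch below stacks ZLUs into feedforward layers and trains an output classifier, reusing the ZoomInLearningUnit stand-in from Section 2.2. The feature splitting with np.array_split, the layer widths, and the logistic-regression output layer are simplifying assumptions rather than the NEWFM-based implementation used in this paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in for the output-layer NEWFM

class ZoomInNeuralNetwork:
    """Feedforward stack of ZLU layers plus an output classifier (Algorithm 1/2 sketch).
    ZoomInLearningUnit from the Section 2.2 sketch is assumed to be in scope."""

    def __init__(self, layer_sizes):
        # layer_sizes[i] = number of ZLUs in the i-th ZLU layer, e.g. [16, 8]
        self.layers = [[ZoomInLearningUnit() for _ in range(k)] for k in layer_sizes]
        self.output = LogisticRegression(max_iter=1000)

    def fit(self, X, y):
        inputs = self._split(X, len(self.layers[0]))   # step 1.1: k subsets of ~n/k features
        for i, layer in enumerate(self.layers):        # step 2: layer-wise supervised training
            outputs = [zlu.fit(Xj, y).transform(Xj) for zlu, Xj in zip(layer, inputs)]
            tsd = np.hstack(outputs)                   # TSDs {T_M, T_C} of every ZLU in the layer
            next_k = len(self.layers[i + 1]) if i + 1 < len(self.layers) else 1
            inputs = self._split(tsd, next_k)          # step 2.4: re-split for the next layer
        self.output.fit(np.hstack(inputs), y)          # step 3: output layer
        return self

    def predict(self, X):
        inputs = self._split(X, len(self.layers[0]))
        for i, layer in enumerate(self.layers):        # zoom-in test path only (Algorithm 2)
            outputs = [zlu.transform(Xj) for zlu, Xj in zip(layer, inputs)]
            next_k = len(self.layers[i + 1]) if i + 1 < len(self.layers) else 1
            inputs = self._split(np.hstack(outputs), next_k)
        return self.output.predict(np.hstack(inputs))

    @staticmethod
    def _split(X, k):
        return np.array_split(np.asarray(X), k, axis=1)  # n features -> k column subsets
```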

3. Experimental Results

In this section, experimental results for assessing persons with Alzheimer’s disease (AD) and mild cognitive impairment (MCI), compared with normal control (NC) persons, are presented to evaluate the proposed ZNN deep-learning algorithm with the discriminative regions of interest in the Automated Anatomical Labeling (AAL-90) atlas.

3.1. Dataset

The experimental dataset contains the resting-state functional magnetic resonance imaging (rs-fMRI) data of 34 AD patients, 89 MCI patients, and 45 NC persons, obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database [21]. The rs-fMRI data for each subject consist of three-dimensional brain images with a 140-time-point series of voxel values. Based on the AAL-90 atlas, which was derived from 27 high-resolution T1-weighted scans of a single young male subject [22], each brain image was divided into 90 functional regions of interest (ROIs).
To capture the overall change in each ROI, which contains 2000 to 3000 voxels, the average of voxels (AOV) of each ROI was computed as a representative signal of the ROI. Features were extracted from the 140-time-point series of AOVs using the Haar wavelet transform (HWT). In addition, the local and global graph measures [23] listed in Table 1 were used.
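A minimal sketch of this per-ROI preprocessing is given below, assuming the voxel time series of an ROI have already been extracted into an array. The use of PyWavelets and NetworkX, and the truncation of the 140-point series to 128 samples (which yields 16 + 8 + 8 = 32 d3/d4/a4 coefficients, matching the 32 HWT features reported in Section 3.2), are our assumptions rather than the authors' exact pipeline.

```python
import numpy as np
import pywt       # PyWavelets, for the Haar wavelet transform
import networkx as nx  # for graph measures of the kind listed in Table 1

def average_of_voxels(roi_voxels):
    """roi_voxels: array of shape (n_voxels, 140) for one ROI.
    Returns the AOV: a single 140-point representative time series."""
    return roi_voxels.mean(axis=0)

def haar_features(aov, n_samples=128):
    """Haar wavelet features from one AOV series. Truncating the 140-point series
    to 128 samples (an assumption) makes d3, d4, and a4 contain 16 + 8 + 8 values."""
    a4, d4, d3, _, _ = pywt.wavedec(aov[:n_samples], "haar", level=4)
    return np.concatenate([d3, d4, a4])

def example_graph_features(adjacency):
    """A few of the Table 1 connectivity measures for one ROI (node 0), computed
    from a thresholded functional-connectivity adjacency matrix."""
    g = nx.from_numpy_array(adjacency)
    return {
        "degree": g.degree[0],
        "node_strength": g.degree(weight="weight")[0],
        "betweenness": nx.betweenness_centrality(g)[0],
        "eigenvector": nx.eigenvector_centrality_numpy(g)[0],
        "pagerank": nx.pagerank(g)[0],
    }
```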

3.2. Experimental Structure for Alzheimer’s Disease Assessments Using ZNN

The experimental pipeline proceeds from the 140-time-point ROI series to the AOVs, feature extraction, feature selection, region selection, and ZNN classification, as shown in Figure 3.
From the 140-time-point series of AOVi for each ROIi (Figure 3a,b), the 32 HWT features of the d3, d4, and a4 coefficients and the 10 graph features (Table 1) were extracted (Figure 3c). Then, 20 features were selected from the 42 extracted features by the NEWFMi model (Figure 3d), producing a classification accuracy for each ROIi.
Finally, highly accurate ROIs, the candidates for discriminative ROIs for the AD assessments, were selected as inputs to the ZNN model (Figure 3e,f). In practice, the 16 best ROIs, each with 20 features, formed the final input of the ZNN; a sketch of this selection step follows.
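The sketch below illustrates this two-stage selection, per-ROI feature selection followed by ranking the ROIs by their individual classification accuracy, using generic scikit-learn components as stand-ins for the NEWFM's built-in feature selection; the helper name rank_rois and the cross-validation scoring are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def rank_rois(roi_features, y, n_features=20, n_rois=16):
    """roi_features: dict {roi_name: array of shape (n_subjects, 42)}.
    Selects 20 of the 42 features per ROI, scores each ROI by its own
    classification accuracy, and keeps the 16 most discriminative ROIs."""
    scored = []
    for name, X in roi_features.items():
        # Stand-in for the NEWFM's built-in feature selection (assumption)
        X20 = SelectKBest(f_classif, k=n_features).fit_transform(X, y)
        acc = cross_val_score(LogisticRegression(max_iter=1000), X20, y, cv=5).mean()
        scored.append((acc, name, X20))
    scored.sort(reverse=True)                 # highest per-ROI accuracy first
    best = scored[:n_rois]                    # the 16 best ROIs (Figure 3e)
    names = [name for _, name, _ in best]
    znn_input = np.hstack([X20 for _, _, X20 in best])  # 16 ROIs x 20 features each
    return names, znn_input
```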
The AD assessments by the proposed ZNN model comprise three pairwise classifications: AD versus NC-MCI (A-NM), NC versus AD-MCI (N-AM), and MCI versus AD-NC (M-AN). The ZNN model was evaluated with a hold-out verification method, using 23 subjects from each class as the training set and the remaining subjects (11 AD, 66 MCI, and 22 NC) as the test set.
Figure 4 shows the ZNN model for the Alzheimer’s disease classifications, implemented from the inputs of the 16 selected ROIs. The first ZLU layer is composed of 16 ZLUs, each of which receives the 20 features of one ROI and produces outputs for the second ZLU layer. Each ZLU in the first ZLU layer produces the TSDs {T1,j,M} and {T1,j,C} as input to the second ZLU layer. The same processes are then executed in the second ZLU layer. Finally, the NEWFM of the output layer carries out the AD assessment using the second ZLU layer outputs. All of the detailed classification processes follow the training and test algorithms of the ZNN described in the previous section.
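For illustration, the usage sketch below configures the ZoomInNeuralNetwork stand-in from Section 2.3 to mirror Figure 4, with 16 first-layer ZLUs each fed 20 features of one ROI. The random placeholder data, the 69/99 train/test split (23 subjects per class for training), and the assumed width of 8 ZLUs in the second layer (which the text does not specify) are illustrative assumptions.

```python
import numpy as np

# Placeholder data only: shapes follow the experiment (16 ROIs x 20 features = 320
# inputs per subject); real rs-fMRI features would replace the random arrays.
n_train, n_test, n_features = 69, 99, 16 * 20
rng = np.random.default_rng(0)
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.integers(0, 2, size=n_train)   # e.g. 1 = AD, 0 = NC or MCI (A-NM task)
X_test = rng.normal(size=(n_test, n_features))

# 16 first-layer ZLUs (one per ROI); the second-layer width of 8 is an assumption.
znn = ZoomInNeuralNetwork(layer_sizes=[16, 8])
znn.fit(X_train, y_train)
predictions = znn.predict(X_test)
```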

3.3. AD, MCI, and NC Classifications

The ZNN yielded classification accuracies of 97.7%, 84.8%, and 72.7% for A-NM, N-AM, and M-AN, respectively, as shown in Table 2. As ZLU layers were added to the ZNN, the accuracies improved for all three classifications, as shown in Table 3.
The increases in accuracy with the stacking of ZLU layers are shown in Figure 5. On average, the second ZLU layer improved the test accuracy by 7.5% over the first ZLU layer, and the output layer improved it by 16.3% over the second ZLU layer.
Ref. [33] reports an accuracy of 96.85% for classifying AD and SCI using a CNN. Although the experimental environments differ, the comparison is meaningful in that the same ADNI data were used.
The 16 most prominent ROIs in the AAL-90 were selected according to the per-ROI accuracies in Figure 3d and serve as the input layer of the ZNN for each classification in Figure 4. The final 16 ROIs for the three AD classifications of the ZNN are listed in Table 4.
Some ROIs appear in more than one column of Table 4. The seven repeated discriminative ROIs, namely cingulum_mid_r, caudate_l, parietal_sup_l, frontal_mid_r, frontal_mid_l, parietal_inf_l, and postcentral_r, are taken as the discriminative regions for the AD assessments.
The most discriminative ROI in Table 4 is cingulum_mid_r, which plays a remarkable role in the progressive development of MCI and AD [34,35]. Eigenvector centrality shows significant differences in caudate_l between NC and AD [36], and the subcortical region of the caudate plays a key role in MCI and AD [37]. Other analyses reveal that effective connectivity from the right middle frontal gyrus to the left superior parietal (parietal_sup_l) cortex, as well as from the right to the parietal_sup_l gyrus, is decreased in prodromal AD patients [38]. In AD patients, atrophy in AD-specific regions related to cognitive performance includes the caudal middle frontal gyrus, corresponding to frontal_mid_r and frontal_mid_l [39]. Functional connectivity of the postcentral gyrus within the sensory-motor system of the default-mode network is affected in early-onset AD [40]. As these references indicate, the seven discriminative ROIs obtained by the ZNN are supported as biomarkers for AD assessments.
Figure 6 shows the locations of the seven overlapping discriminative ROIs in the AAL-90 from the three AD classifications. The experiments indicate that these seven areas are closely related to the classification results. The BrainNet Viewer software package (http://nitrc.org/projects/bnv, accessed in January 2020) was used to create the ROI images.

4. Discussion

In this paper, we proposed the ZNN, a new feedforward deep-learning model for classifying Alzheimer’s disease that uses brain-inspired metacognitive learning. The ZNN, a model free of the gradient problems of backprop, simulates activity-dependent learning using ZLU units, analogous to synaptic plasticity in the brain. A neuroscientific basis for metacognitive learning in the human brain [20] is implemented by the subpattern and refined learning in a ZLU. The absence of backprop, together with metacognitive learning, constitutes the main technical originality of this paper.
In addition, the ZLU units enhance the connectivity of the neural networks by connecting the block-type neural network units in a flexible manner. In the proposed model, by using the 10 graph features and the Haar wavelet coefficients together as classification features, the regional characteristics of the brain were reflected more clearly in the classification. The proposed model can also be used to evaluate other diseases: the features used in this method can be extracted from images and numerical data, and because the data used for disease classification are mostly image or numerical data, the model is broadly applicable.
The proposed algorithm yields classification accuracies of 97.7% for AD versus NC-MCI, 84.8% for NC versus AD-MCI, and 72.7% for MCI versus AD-NC, using the discriminative ROI specifications in the AAL-90. Thus, the ZNN learns more efficiently through its zoom-in learning processes, following a human-like approach. A further consideration is that AD and NC are classes whose symptoms are clearly distinguished, whereas MCI is an intermediate state between AD and NC; consequently, the accuracy of classifying MCI against the other classes is lower than that of classifying AD or NC. Ref. [33] likewise reports relatively low MCI classification results.
The importance of classification of MCI is gradually increasing. Accurate classification of MCI enables early diagnosis or prevention of dementia. This can slow the progression of dementia, alleviate symptoms, or increase the probability of a cure. Therefore, improving the classification accuracy of MCI is a future research task.
One limitation of this study is that the dataset was not balanced across classes. We need to balance the data with additional up-to-date data from ADNI; recent data are continuously accumulating on the ADNI site, and experiments with these accumulated data are necessary. A further limitation is that, because the datasets collected from ADNI were not all acquired on the same scanner [41], they may contain measurement errors.

Author Contributions

Conceptualization, J.S.L. and B.W.; methodology, J.S.L.; software, J.S.L.; validation, B.W.; formal analysis, J.S.L.; investigation, J.S.L.; writing—original draft preparation, J.S.L.; writing—review and editing, B.W.; visualization, B.W.; supervision, B.W.; project administration, J.S.L.; funding acquisition, J.S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by (1) the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2020R1I1A1A01066599), and (2) the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2017-0-01630) supervised by the IITP (Institute for Information & Communications Technology Promotion).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

https://adni.loni.usc.edu/data-samples/ accessed on 17 October 2022.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sutskever, I.; Martens, J. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML), Atlanta, GA, USA, 16–21 June 2013. [Google Scholar]
  2. Crick, F. The recent excitement about neural networks. Nature 1989, 337, 129–132. [Google Scholar] [CrossRef] [PubMed]
  3. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  4. Schiess, M.; Urbanczik, R.; Senn, W. Somato-dendritic Synaptic Plasticity and Error-backpropagation in Active Dendrites. PLoS Comput. Biol. 2016, 12, e1004638. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Bengio, Y. Learning deep architectures for AI. Found. Trends Mach. Learn. 2009, 2, 1–127. [Google Scholar] [CrossRef]
  6. Hinton, G.E.; Salakhutdinov, R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Hinton, G.E.; Osindero, S.; Teh, Y.-W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef]
  8. Sutskever, I.; Martens, J.; Hinton, G. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, WA, USA, 28 June–2 July 2011. [Google Scholar]
  9. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
  10. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef]
  11. Lu, J.; Yuan, X.; Yahagi, T. A method of face recognition based on fuzzy clustering and parallel neural networks. Signal Process. 2006, 86, 2026–2039. [Google Scholar] [CrossRef]
  12. Hochreiter, S. The Vanishing Gradient Problem during Learning Recurrent Neural Nets and Problem Solutions. Int. J. Unc. Fuzz. Knowl. Based Syst. 1998, 6, 107. [Google Scholar] [CrossRef]
  13. Lillicrap, T.P.; Cownden, D.; Tweed, D.B.; Akerman, C.J. Random synaptic feedback weights support error backpropagation for deep learning. Nat. Commun. 2016, 7, 13276. [Google Scholar] [CrossRef] [PubMed]
  14. Qu, L.; Lim, J.S. Alzheimer’s Disease and Mild Cognitive Impairment Detection Using Zoom-in Neural Network. Basic Clin. Pharmacol. Toxicol. 2019, 124, S3. [Google Scholar]
  15. Lim, J.S. Finding Features for Real-Time Premature Ventricular Contraction Detection Using a Fuzzy Neural Network System. IEEE Trans. Neural Netw. 2009, 20, 522–527. [Google Scholar] [CrossRef] [PubMed]
  16. Takagi, T.; Sugeno, M. Fuzzy Identification of Systems and Its Applications to Modeling and Control. IEEE Trans. Syst. Man. Cybern. 1985, 15, 116–132. [Google Scholar] [CrossRef]
  17. Son, S.-Y.; Lee, S.-H.; Chung, K.; Lim, J.S. Feature selection for daily peak load forecasting using a neuro-fuzzy system. Multimed. Tools Appl. 2015, 74, 2321–2336. [Google Scholar] [CrossRef]
  18. Cao, Y.; Summerfield, C.; Park, H.; Giordano, B.L.; Kayser, C. Causal Inference in the Multisensory Brain. Neuron 2019, 102, 1076–1087.e8. [Google Scholar] [CrossRef] [Green Version]
  19. DiCarlo, J.J.; Zoccolan, D.; Rust, N.C. How does the brain solve visual object recognition? Neuron 2012, 73, 415–434. [Google Scholar] [CrossRef] [Green Version]
  20. Qiu, L.; Su, J.; Ni, Y.; Bai, Y.; Zhang, X.; Li, X.; Wan, X. The neural system of metacognition accompanying decision-making in the prefrontal cortex. PLoS Biol. 2018, 16, e2004037. [Google Scholar] [CrossRef]
  21. Jack, C.R., Jr.; Bernstein, M.A.; Fox, N.C.; Thompson, P.; Alexander, G.; Harvey, D.; Borowski, B.; Britson, P.J.; Whitwell, J.L.; Ward, C.; et al. The Alzheimer’s Disease Neuroimaging Initiative (ADNI): MRI Methods. J. Magn. Reson. Imaging 2008, 27, 685–691. [Google Scholar] [CrossRef] [Green Version]
  22. Tzourio-Mazoyer, N.; Landeau, B.; Papathanassiou, D.; Crivello, F.; Etard, O.; Delcroix, N.; Mazoyer, B.; Joliot, M. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 2002, 15, 273–289. [Google Scholar] [CrossRef]
  23. Khazaee, A.; Ebrahimzadeh, A.; Babajani-Feremi, A. Application of advanced machine learning methods on resting-state fMRI network for identification of mild cognitive impairment and Alzheimer’s disease. Brain Imaging Behav. 2016, 10, 799–817. [Google Scholar] [CrossRef] [PubMed]
  24. Guimera, R.; Sales-Pardo, M.; Amaral, L.A. Classes of complex networks defined by role-to-role connectivity profiles. Nat. Phys. 2007, 3, 63–69. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Rubinov, M.; Sporns, O. Weight-conserving characterization of complex functional brain networks. NeuroImage 2011, 56, 2068–2079. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Brandes, U. A faster algorithm for betweenness centrality. J. Math. Sociol. 2001, 25, 163–177. [Google Scholar] [CrossRef]
  27. Hagmann, P.; Cammoun, L.; Gigandet, X.; Meuli, R.; Honey, C.J.; Wedeen, V.J.; Sporns, O. Mapping the structural core of human cerebral cortex. PLoS Biol. 2008, 6, e159. [Google Scholar] [CrossRef]
  28. Estrada, E.; Higham, D.J. Network properties revealed through matrix functions. SIAM Rev. 2010, 52, 696–714. [Google Scholar] [CrossRef]
  29. Newman, M.E.J. Mathematics of networks. In The New Palgrave Dictionary of Economics; Durlauf, S.N., Blume, L.E., Eds.; Palgrave Macmillan: London, UK, 2008. [Google Scholar]
  30. Boldi, P.; Santini, M.; Vigna, S. PageRank: Functional dependencies. ACM Trans. Inf. Syst. 2009, 27, 19. [Google Scholar] [CrossRef]
  31. Foster, J.G.; Foster, D.V.; Grassberger, P.; Paczuski, M. Edge direction and the structure of networks. Proc. Natl. Acad. Sci. USA 2010, 107, 10815–10820. [Google Scholar] [CrossRef] [Green Version]
  32. Humphries, M.D.; Gurney, K. Network ‘small-world-ness’: A quantitative method for determining canonical network equivalence. PLoS ONE 2008, 3, e0002051. [Google Scholar] [CrossRef]
  33. Saleem, T.J.; Zahra, S.R.; Wu, F.; Alwakeel, A.; Alwakeel, M.; Jeribi, F.; Hijji, M. Deep Learning-Based Diagnosis of Alzheimer’s Disease. J. Pers. Med. 2022, 12, 815. [Google Scholar] [CrossRef]
  34. Bozzali, M.; Giulietti, G.; Basile, B.; Serra, L.; Spano, B.; Perri, R.; Giubilei, F.; Marra, C.; Caltagirone, C.; Cercignani, M. Damage to the cingulum contributes to Alzheimer’s disease pathophysiology by deafferentation mechanism. Hum. Brain. Mapp. 2012, 33, 1295–1308. [Google Scholar] [CrossRef] [PubMed]
  35. Bubb, E.J.; Metzler-Baddeley, C.; Aggleton, J.P. The cingulum bundle: Anatomy, function, and dysfunction. Neurosci. Biobehav. Rev. 2018, 92, 104–127. [Google Scholar] [CrossRef] [PubMed]
  36. Son, S.-J.; Kim, J.; Park, H. Structural and functional connectional fingerprints in mild cognitive impairment and Alzheimer’s disease patients. PLoS ONE 2017, 12, e0173426. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Zhao, H.; Li, X.; Wu, W.; Li, Z.; Qian, L.; Li, S.; Zhang, B.; Xu, Y. Atrophic patterns of the frontal-subcortical circuits in patients with mild cognitive impairment and Alzheimer’s disease. PLoS ONE 2015, 10, 1–14. [Google Scholar] [CrossRef] [Green Version]
  38. Neufang, S.; Akhrif, A.; Riedl, V.; Förstl, H.; Kurz, A.; Zimmer, C.; Sorg, C.; Wohlschläger, A.M. Disconnection of Frontal and Parietal Areas Contributes to Impaired Attention in Very Early Alzheimer’s Disease. J. Alzheimer’s Dis. 2011, 25, 309–321. [Google Scholar] [CrossRef] [PubMed]
  39. Bakkour, A.; Morris, J.C.; Wolk, D.A.; Dickerson, B.C. The effects of aging and Alzheimer’s disease on cerebral cortical anatomy: Specificity and differential relationships with cognition. Neuroimage 2013, 76, 332–344. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Adriaanse, S.M.; Binnewijzend, M.A.; Ossenkoppele, R.; Tijms, B.M.; van der Flier, W.M.; Koene, T.; Smits, L.L.; Wink, A.M.; Scheltens, P.; van Berckel, B.N.; et al. Widespread Disruption of Functional Brain Organization in Early-Onset Alzheimer’s Disease. PLoS ONE 2014, 9, e102995. [Google Scholar] [CrossRef]
  41. Jitsuishi, T.; Yamaguchi, A. Searching for optimal machine learning model to classify mild cognitive impairment (MCI) subtypes using multimodal MRI data. Sci. Rep. 2022, 12, 4284. [Google Scholar] [CrossRef]
Figure 1. Structure of the Zoom-in Neural Network (ZNN).
Figure 2. Structure of the Zoom-in Learning Unit (ZLU) (solid lines denote training and dotted lines denote tests).
Figure 3. Schematic diagram for Alzheimer’s disease assessments.
Figure 4. ZNN model for Alzheimer’s disease classification.
Figure 5. Performance improvements by stacking layers of the ZNN for the three AD assessments (%).
Figure 6. The seven overlapping discriminative ROI specifications in the AAL-90 from the three AD classifications.
Table 1. The 10 graph features representing the connectivity of ROIs.

Feature Name | Reference | Meaning
Degree | [24] | The number of edges incident to the vertex
Node strength | [24] | The sum of the weights of the edges incident to the vertex
Diversity coefficient | [25] | A coefficient measuring the diversity of a vertex’s connections
Betweenness centrality | [26] | The number of times a node acts as a bridge along the shortest path between two other nodes
K-coreness centrality | [27] | Identifies the most important vertices within a graph using the idea of the k-core
Subgraph centrality | [28] | Identifies the most important vertices within a subgraph
Eigenvector centrality | [29] | A measure of the influence of a node in a network
PageRank centrality | [30] | Identifies the most important vertices within a graph using the idea of PageRank
Assortativity | [31] | Correlations between nodes of similar degree
Network small-worldness | [32] | A measure of a small-world network
Table 2. Performance comparisons of different classifiers for the three AD assessments (%).

Classification | ZNN (Train/Test) | NEWFM (Train/Test) | SVM (Train/Test)
A-NM | 98.9/97.7 | 88.6/87.3 | 86.9/83.8
N-AM | 93.4/84.8 | 77.1/72.7 | 83.6/81.8
M-AN | 82.6/72.7 | 67.6/63.8 | 76.1/71.7
Table 3. Accuracies by layer of the ZNN for the three AD assessments (%).

Classification | 1st ZLU Layer (Train/Test) | 2nd ZLU Layer (Train/Test) | Output Layer (Train/Test)
A-NM | 82.9/79.7 | 93.7/82.8 | 98.9/97.7
N-AM | 75.0/64.8 | 84.7/72.45 | 93.4/84.8
M-AN | 64.1/59.5 | 73.8/64.1 | 82.6/72.7
Table 4. The 16 selected discriminative ROIs for the three AD assessments.

Accuracy Rank of ROI | A-NM | N-AM | M-AN
1 | cingulum_mid_r | cingulum_mid_r | cingulum_mid_r
2 | caudate_r | caudate_r | postcentral_l
3 | caudate_l | amygdala_l | caudate_l
4 | parietal_sup_l | frontal_sup_orb_l | parietal_sup_l
5 | frontal_mid_r | caudate_l | frontal_mid_r
6 | parietal_inf_l | parietal_sup_l | parietal_inf_l
7 | frontal_mid_l | lingual_l | frontal_mid_l
8 | cuneus_r | parahippocampal_l | lingual_l
9 | postcentral_r | frontal_mid_r | postcentral_r
10 | cuneus_l | parietal_inf_l | cuneus_l
11 | frontal_inf_oper_r | thalamus_l | frontal_inf_oper_r
12 | frontal_inf_tri_r | frontal_inf_oper_l | frontal_inf_tri_r
13 | temporal_inf_r | frontal_mid_l | temporal_inf_r
14 | parietal_sup_r | cuneus_r | parietal_sup_r
15 | cingulum_mid_l | insula_l | cingulum_mid_l
16 | frontal_sup_medial_r | postcentral_r | thalamus_l
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
