
Advanced Applications of Deep Learning Methods: Interdisciplinary Perspectives

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E3: Mathematical Biology".

Deadline for manuscript submissions: closed (31 March 2026) | Viewed by 29849

Special Issue Editors

Guest Editor
School of Computer Science and Engineering, Beihang University, Beijing, China
Interests: mobile computing; complex networks; machine learning; data mining; image processing

Guest Editor
School of Electronic Engineering, Dublin City University, Collins Avenue Extension, D09 D209 Dublin, Ireland
Interests: green communication and networking; network security; hardware acceleration for ML and AI; AI technology in education

Special Issue Information

Dear Colleagues,

A wealth of medical data from patients provides valuable information for diagnosing various diseases, but examining these data manually is cumbersome for clinicians. In recent years, advances in computer technology and mathematical methods have led to the widespread use of deep learning in medical diagnosis, making deep-learning-based medical diagnosis an important research direction.

This Special Issue, entitled "Advanced Applications of Deep Learning Methods in Medical Diagnosis", aims to highlight the latest advances in the field of deep learning for medical diagnosis. We invite authors to submit original research articles as well as review articles, focusing on (but not limited to) the following topics:

  • Deep learning of multimodal medical data;
  • Deep learning for lesion recognition and localization in medical images;
  • Deep learning for medical image processing;
  • Interpretability of medical diagnosis in deep learning;
  • Semi-supervised learning in medical diagnosis;
  • Medical generative models;
  • Computer-aided diagnosis systems based on deep learning.

Dr. Chao Tong
Dr. Xiaojun Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • machine learning
  • medical diagnosis
  • medical data
  • medical image processing
  • medical system

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

15 pages, 1712 KB  
Article
Decoding Cognitive States via Riemannian Geometry-Informed Channel Clustering for EEG Transformers
by Luoyi Feng and Gangxing Yan
Mathematics 2026, 14(8), 1327; https://doi.org/10.3390/math14081327 - 15 Apr 2026
Viewed by 122
Abstract
Electroencephalography (EEG) provides a non-invasive and high-temporal-resolution modality for decoding cognitive states, but high-density recordings remain challenging for Transformer-based models because self-attention scales quadratically with the number of channels. In addition, conventional Euclidean representations do not fully capture the intrinsic geometry of EEG covariance features, which may limit robustness in cross-subject settings. To address these issues, we propose EEG-RCformer, a Riemannian geometry-informed channel clustering Transformer for EEG decoding. The model first computes per-channel symmetric positive definite (SPD) covariance matrices from windowed EEG features and uses the affine-invariant Riemannian metric (AIRM) to identify trial-specific functional hubs. These hubs are then integrated with capacity-constrained spatial clustering to generate anatomically plausible and computationally efficient channel groups, which are encoded as tokens for a Transformer classifier. We evaluated EEG-RCformer on the MODMA and SEED datasets under both subject-dependent and -independent paradigms, achieving area under the curve (AUC) values of 0.9802 and 0.7154 on MODMA and 0.8541 and 0.8011 on SEED, respectively. Paired statistical tests further showed significant gains for MODMA in both the subject-dependent and -independent settings and for SEED in the subject-dependent setting, while SEED still showed a positive but non-significant mean improvement in the subject-independent setting.
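The affine-invariant metric at the core of this abstract can be illustrated compactly. The sketch below (plain NumPy; function names are our own and not the authors' code) computes a ridge-regularized SPD covariance estimate from windowed EEG features and the AIRM distance used to compare such matrices:

```python
import numpy as np

def _spd_inv_sqrt(A):
    """Inverse matrix square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F, computed via eigenvalues."""
    P = _spd_inv_sqrt(A)
    w = np.linalg.eigvalsh(P @ B @ P)
    return float(np.sqrt(np.sum(np.log(w) ** 2)))

def channel_covariance(X, eps=1e-6):
    """Covariance of windowed features, X of shape (channels, samples);
    a small ridge keeps the estimate strictly positive definite."""
    return np.cov(X) + eps * np.eye(X.shape[0])
```

A defining property of this metric, and one reason it suits covariance features, is affine invariance: `airm_distance(G @ A @ G.T, G @ B @ G.T)` equals `airm_distance(A, B)` for any invertible `G`, so the distance is unaffected by linear mixing of the channels.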

20 pages, 1156 KB  
Article
Enhancing Graph Summarization Using Node Importance and Graph Attention Networks
by Krista Rizman Žalik, Domen Mongus and Mitja Žalik
Mathematics 2026, 14(8), 1283; https://doi.org/10.3390/math14081283 - 12 Apr 2026
Viewed by 331
Abstract
As the scale of graph-structured data continues to grow, graph summarization has become an important technique for storage efficiency and high-level visualization. This study investigates a Node Importance (NI) approach to graph summarization that prioritizes structural integrity over simple size reduction. The NI approach selects super nodes by ranking vertices through centrality and propagation metrics. Experimental results demonstrate that the proposed NI method achieves compression rates comparable to or slightly lower than traditional Minimum Description Length (MDL) methods across various datasets while maintaining structural integrity. Today, however, the high dimensionality and complexity of modern graph data are making deep learning techniques more popular. Great progress in deep learning summarization has been achieved with Graph Neural Networks (GNNs). This study investigates the structure and suitability of different GNN architectures for graph summarization using the NI approach. Graph Attention Networks (GATs) and their variants are discussed as a flexible, learned notion of node importance via attention. We present an examination of GATs, covering both diverse approaches and improvements. This study also discusses extensions that enhance the concept of node importance established by the GAT model, GAT variants for node importance estimation, and application-specific GAT research.
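The ranking-then-grouping idea described in this abstract can be sketched in a few lines. The code below is an illustrative stand-in, not the authors' method: it scores vertices with a simple PageRank-style propagation metric and assigns each remaining vertex to an adjacent high-importance seed.

```python
import numpy as np

def node_importance(adj, alpha=0.85, iters=50):
    """Rank nodes by a propagation score (PageRank-style power iteration).
    adj: dict mapping node -> list of neighbours."""
    nodes = sorted(adj)
    idx = {n: i for i, n in enumerate(nodes)}
    n = len(nodes)
    score = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1.0 - alpha) / n)
        for u in nodes:
            deg = len(adj[u]) or 1
            for v in adj[u]:
                new[idx[v]] += alpha * score[idx[u]] / deg
        score = new
    return {node: score[idx[node]] for node in nodes}

def select_super_nodes(adj, k):
    """Pick the k highest-importance vertices as super-node seeds and
    assign every other vertex to an adjacent seed where possible."""
    imp = node_importance(adj)
    seeds = sorted(adj, key=imp.get, reverse=True)[:k]
    assign = {s: s for s in seeds}
    for u in adj:
        if u in assign:
            continue
        nbr_seeds = [v for v in adj[u] if v in seeds]
        assign[u] = max(nbr_seeds, key=imp.get) if nbr_seeds else max(seeds, key=imp.get)
    return assign
```

On a star graph, for example, the hub receives the highest score and absorbs all leaves into a single super node; a GAT replaces the fixed propagation score with attention weights learned end to end.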

42 pages, 551 KB  
Article
AI Reasoning in Deep Learning Era: From Symbolic AI to Neural–Symbolic AI
by Baoyu Liang, Yuchen Wang and Chao Tong
Mathematics 2025, 13(11), 1707; https://doi.org/10.3390/math13111707 - 23 May 2025
Cited by 23 | Viewed by 26236
Abstract
The pursuit of Artificial General Intelligence (AGI) demands AI systems that not only perceive but also reason in a human-like manner. While symbolic systems pioneered early breakthroughs in logic-based reasoning, such as MYCIN and DENDRAL, they suffered from brittleness and poor scalability. Conversely, modern deep learning architectures have achieved remarkable success in perception tasks, yet continue to fall short in interpretable and structured reasoning. This dichotomy has motivated growing interest in Neural–Symbolic AI, a paradigm that integrates symbolic logic with neural computation to unify reasoning and learning. This survey provides a comprehensive and technically grounded overview of AI reasoning in the deep learning era, with a particular focus on Neural–Symbolic AI. Beyond a historical narrative, we introduce a formal definition of AI reasoning and propose a novel three-dimensional taxonomy that organizes reasoning paradigms by representation form, task structure, and application context. We then systematically review recent advances—including Differentiable Logic Programming, abductive learning, program induction, logic-aware Transformers, and LLM-based symbolic planning—highlighting their technical mechanisms, capabilities, and limitations. In contrast to prior surveys, this work bridges symbolic logic, neural computation, and emergent generative reasoning, offering a unified framework to understand and compare diverse approaches. We conclude by identifying key open challenges such as symbolic–continuous alignment, dynamic rule learning, and unified architectures, and we aim to provide a conceptual foundation for future developments in general-purpose reasoning systems.
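The Differentiable Logic Programming idea mentioned in this abstract rests on relaxing Boolean connectives to operations on truth values in [0, 1], so that rule evaluation becomes differentiable. A minimal sketch (ours, not from the survey) using the product t-norm:

```python
def soft_and(a, b):
    """Product t-norm: a differentiable stand-in for logical conjunction
    over truth values in [0, 1]."""
    return a * b

def soft_or(a, b):
    """Probabilistic sum: differentiable disjunction."""
    return a + b - a * b

def rule_head(body_truths):
    """Soft forward chaining: the head of a rule such as
    mortal(x) <- human(x) AND finite(x) takes the conjunction of its body,
    so gradients can flow back into learned predicate scores."""
    out = 1.0
    for t in body_truths:
        out = soft_and(out, t)
    return out
```

With crisp inputs (0 or 1) these operators reduce to classical logic; with neural predicate scores in between, gradient descent can tune the predicates against downstream supervision, which is the core mechanism that unifies reasoning and learning in this paradigm.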
16 pages, 2866 KB  
Article
Novel Automatic Classification of Human Adult Lung Alveolar Type II Cells Infected with SARS-CoV-2 through the Deep Transfer Learning Approach
by Turki Turki, Sarah Al Habib and Y-h. Taguchi
Mathematics 2024, 12(10), 1573; https://doi.org/10.3390/math12101573 - 17 May 2024
Viewed by 2097
Abstract
Transmission electron microscopy imaging provides a unique opportunity to inspect the detailed structure of lung cells infected with SARS-CoV-2. Unlike previous studies, this study investigates COVID-19 classification at the lung cellular level in response to SARS-CoV-2, in particular differentiating between healthy human alveolar type II (hAT2) cells and those infected with SARS-CoV-2. Hence, we explore the feasibility of deep transfer learning (DTL) and introduce a highly accurate approach that works as follows: First, we downloaded and processed 286 images pertaining to healthy and infected hAT2 cells obtained from the electron microscopy public image archive. Second, we provided the processed images to two DTL computations to induce ten DTL models. The first DTL computation employs five pre-trained models (including DenseNet201 and ResNet152V2) trained on more than one million images from the ImageNet database to extract features from hAT2 images; it then flattens and provides the output feature vectors to a trained, densely connected classifier with the Adam optimizer. The second DTL computation works in a similar manner, with the minor difference that we freeze the first layers of the pre-trained models for feature extraction while unfreezing and jointly training the subsequent layers. The results using five-fold cross-validation demonstrated that TFeDenseNet201 is 12.37× faster and superior, yielding the highest average ACC of 0.993 (F1 of 0.992 and MCC of 0.986) with statistical significance (p < 2.2×10⁻¹⁶ from a t-test), compared to an average ACC of 0.937 (F1 of 0.938 and MCC of 0.877) for the counterpart (TFtDenseNet201), which showed no significant difference (p = 0.093 from a t-test).
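The first DTL computation described here, a frozen pre-trained backbone feeding a trained dense classifier, can be caricatured in a few lines. In the sketch below the real DenseNet201 features are replaced by a fixed random projection and the Adam-trained head by plain logistic regression, so it illustrates only the training regime, not the paper's models or results:

```python
import numpy as np

rng = np.random.default_rng(42)

def frozen_features(images, W):
    """Stand-in for a frozen pre-trained backbone: a fixed projection
    followed by ReLU. W is never updated during training."""
    return np.maximum(images @ W, 0.0)

def train_head(feats, labels, lr=0.5, epochs=300):
    """Densely connected classifier head fit by logistic regression,
    mirroring the regime where only the head's weights are learned."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
        grad = p - labels          # gradient of the cross-entropy loss
        w -= lr * feats.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Toy stand-in data: two separable "image" classes, 8 features each.
X = np.concatenate([rng.normal(-1, 0.3, (50, 8)), rng.normal(1, 0.3, (50, 8))])
y = np.concatenate([np.zeros(50), np.ones(50)])
W = rng.standard_normal((8, 16))   # frozen "pre-trained" weights
F = frozen_features(X, W)
w, b = train_head(F, y)
acc = float(((F @ w + b > 0) == (y == 1)).mean())
```

The second DTL computation would additionally unfreeze and update the later layers of the backbone jointly with the head; in this sketch that would correspond to also taking gradients with respect to `W`.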
