Advances in Neural Network/Deep Learning and Symmetry/Asymmetry

A special issue of Symmetry (ISSN 2073-8994). This special issue belongs to the section "Computer".

Deadline for manuscript submissions: 31 October 2025 | Viewed by 1653

Special Issue Editors


Prof. Dr. Roberto Celio Limao de Oliveira
Guest Editor
Department of Computer Engineering, Federal University of Pará, Belém 66075-110, Brazil
Interests: artificial intelligence; artificial neural networks and evolutionary computing

Prof. Dr. José Alfredo F. Costa
Guest Editor
Department of Electrical Engineering, Federal University of Rio Grande do Norte, Natal 59072-97, Brazil
Interests: computational intelligence; data clustering; neural networks and bio-inspired algorithms; strategy and management; digital law in society

Prof. Dr. Rafael Stubs Parpinelli
Guest Editor
Department of Computer Science, Santa Catarina State University, Joinville 89219-710, Brazil
Interests: computer science; bio-inspired algorithms; evolutionary computation; swarm intelligence

Prof. Dr. Eduardo F. Simas Filho
Guest Editor
Department of Electrical and Computer Engineering, Federal University of Bahia, Salvador 40210-630, Brazil
Interests: signal processing and electronics; intelligent classification systems; deep learning; independent component analysis; online filtering in high energy physics; digital and power electronics

Special Issue Information

Dear Colleagues,

Artificial Intelligence techniques are now used profoundly across human society. At the core of this epochal shift stands Deep Learning, serving as a pivotal enabling technology. Deep neural networks thrive on the abundance of extensive datasets and accessible computing resources: Deep Learning grapples with vast volumes of data, extracting the pertinent information and latent knowledge embedded within them. Its pervasive influence extends across virtually all facets of contemporary society, notably revolutionizing voice and image recognition, healthcare, and, more recently, natural language processing.

Human language acquisition is innate and evolves continuously over a lifetime; machines, however, lack this ability and depend on advanced Deep Learning algorithms to develop it. Efforts to improve machine language comprehension have accordingly transitioned from statistical to neural language models. More recently, the expansion of pre-trained language models, including Transformer-based models, has notably boosted Deep Learning's prowess in natural language processing (NLP) tasks by leveraging extensive datasets, increasing model capacity, and refining performance. The emergence of Large Language Models (LLMs) has significantly impacted both the AI community and the broader public, offering the potential for transformative advances in the development and application of AI algorithms.

On the other hand, symmetry/asymmetry is a fundamental tool for exploring a broad range of complex systems. In deep learning, symmetry has been explored in both models and data. Models that respect the symmetries of a problem are not only consistent with its structure but can also produce predictions with smaller errors from fewer training points.
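As a purely illustrative aside, the sketch below shows one common way of building a symmetry-respecting model: averaging a base predictor over a finite symmetry group, which makes its output invariant by construction. The toy network, the sign-flip group, and all names are illustrative assumptions, not a method prescribed by this Special Issue.

```python
# Illustrative sketch: averaging a base predictor over a finite symmetry group
# yields a model whose output is invariant to that group by construction.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # weights of a small, non-symmetric base model
w2 = rng.normal(size=8)

def base_model(x):
    """A plain one-hidden-layer scorer with no built-in symmetry."""
    return np.tanh(x @ W1) @ w2

# Symmetry group: all sign flips of a 2-D input (a simple reflection group).
group = [np.diag([sx, sy]) for sx in (1, -1) for sy in (1, -1)]

def symmetrized_model(x):
    """Average the base model over the group, so g(Tx) == g(x) for every T."""
    return np.mean([base_model(T @ x) for T in group])

x = rng.normal(size=2)
print(base_model(x), base_model(-x))                 # generally different
print(symmetrized_model(x), symmetrized_model(-x))   # identical by construction
```

Equivariant architectures achieve the same effect by constraining the layers themselves rather than averaging the outputs.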

However, even in the modern era, many ponder: what lies beyond? What breakthrough awaits us next? Despite the considerable advancements in Deep Learning, numerous obstacles remain. Deep Learning algorithms typically hinge on extensive datasets for training. Furthermore, the energy consumption required to run these algorithms on conventional computers is significant. Training such networks for practical applications presents a formidable challenge, requiring the utilization of cloud computing resources and powerful GPUs. Thus, if future demand involves deploying these algorithms, for example, in mobile devices, autonomous vehicles, and everyday sensors crucial to digital transformation, the field will have to overcome several substantial hurdles and consider symmetry/asymmetry concepts.

This Special Issue aims to provide a platform for researchers to share their latest advances in neural networks, deep learning, generative adversarial networks, symmetry/asymmetry, and their applications in solving real-world problems.

Topics of interest for this Special Issue include, but are not limited to:

  • New architectures and algorithms for neural networks and deep learning;
  • Advances in fuzzy neural networks, deep learning, and ensemble methods;
  • Advances of symmetry/asymmetry in neural networks and deep learning;
  • Applications of neural networks and deep learning in computer vision, speech recognition, natural language processing, and robotics;
  • Transfer learning techniques in neural networks and deep learning;
  • Interpretable and explainable neural networks and deep learning models;
  • Neural network optimization and regularization techniques;
  • Automation of training of neural networks and deep learning (automated machine learning, including evolutionary algorithms);
  • Deep learning for data analysis and prediction;
  • Extracting understanding from large-scale and heterogeneous data;
  • Collection of datasets and training of deep learning models;
  • Generative Adversarial Networks and their applications;
  • Trustworthy AI.

We invite researchers to submit their original research articles, reviews, and short communications related to the above topics. All submissions will undergo a rigorous peer-review process, and accepted papers will be published in the Special Issue of Symmetry.

Prof. Dr. Roberto Celio Limao de Oliveira
Prof. Dr. José Alfredo F. Costa
Prof. Dr. Rafael Stubs Parpinelli
Prof. Dr. Eduardo F. Simas Filho
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • applicable neural networks theory
  • augmented intelligence
  • computer vision and image processing
  • deep learning
  • deep learning applications
  • ethical deep learning
  • explainable deep learning
  • generative adversarial network
  • generative pre-trained transformer
  • hybrid intelligent systems
  • large language models
  • neural networks
  • neuro-fuzzy systems
  • symmetry/asymmetry
  • prediction analysis
  • supervised and unsupervised learning methods
  • transfer learning
  • trustworthy deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

20 pages, 1301 KB  
Article
Divide-and-Merge Parallel Hierarchical Ensemble DNNs with Local Knowledge Augmentation
by Zhibin Jiang, Shuai Dong, Kaining Liu, Jie Zhou and Xiongtao Zhang
Symmetry 2025, 17(8), 1362; https://doi.org/10.3390/sym17081362 - 20 Aug 2025
Viewed by 445
Abstract
Traditional deep neural networks (DNNs) often suffer from a time-consuming training process caused by the accumulation of excessive network layers and a large number of parameters, as more neural units must be stacked to achieve the desired performance. In particular, when dealing with large-scale datasets, a single DNN can hardly achieve the best performance with limited computing resources. To address these issues, this paper proposes a novel Parallel Hierarchical Ensemble Deep Neural Network (PH-E-DNN) to improve the accuracy and efficiency of deep networks. Firstly, the fuzzy C-means (FCM) algorithm is adopted to separate the large-scale dataset into several small data partitions. Thanks to the fuzzy partitioning of the FCM, several sub-models can be obtained, each learning its respective data partition in isolation from the others. Secondly, the prediction results of each sub-model in the current level are used as discriminative knowledge appended to the original regional subsets: in the PH-E-DNN architecture, predictions from each level symmetrically augment the inputs of the next level, creating a symmetrical flow of discriminative knowledge across the hierarchical structure. Finally, the multiple regional subsets are merged to form a global augmented dataset, while the multi-level parallel sub-models are stacked to organize a large-scale deep ensemble network. More importantly, only the multiple DNNs in the last level are ensembled to generate the decision result of the proposed PH-E-DNN. Extensive experiments demonstrate that the PH-E-DNN is superior to several traditional and deep learning models while requiring only a few parameters to be set, which demonstrates its efficiency and flexibility. Full article
(This article belongs to the Special Issue Advances in Neural Network/Deep Learning and Symmetry/Asymmetry)
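The snippet below is a rough, illustrative sketch of the general divide-and-augment idea described in the abstract, not the authors' implementation: the data are partitioned, one sub-model is trained per partition, each level's predictions are appended to the features of the next level, and only the last level is ensembled. KMeans stands in for fuzzy C-means, small scikit-learn MLPs stand in for the DNN sub-models, and all names and parameter values are assumptions.

```python
# Rough sketch of the divide-and-merge idea (not the authors' code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_partitions, n_levels = 4, 2
n_classes = len(np.unique(y_tr))
A_tr, A_te = X_tr, X_te            # feature matrices, augmented level by level

for level in range(n_levels):
    # Hard KMeans partitioning stands in for the fuzzy C-means step.
    parts = KMeans(n_clusters=n_partitions, n_init=10, random_state=level).fit(A_tr)
    level_models = []
    for k in range(n_partitions):
        mask = parts.labels_ == k
        m = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=level)
        level_models.append(m.fit(A_tr[mask], y_tr[mask]))
    # This level's class probabilities act as "discriminative knowledge" features
    # (assumes every partition contains samples of both classes).
    P_tr = np.hstack([m.predict_proba(A_tr) for m in level_models])
    P_te = np.hstack([m.predict_proba(A_te) for m in level_models])
    A_tr, A_te = np.hstack([A_tr, P_tr]), np.hstack([A_te, P_te])

# Only the last level's sub-models are ensembled: average their probability blocks.
last = P_te.reshape(len(X_te), n_partitions, n_classes).mean(axis=1)
print("test accuracy:", (last.argmax(axis=1) == y_te).mean())
```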

32 pages, 2983 KB  
Article
TS-SMOTE: An Improved SMOTE Method Based on Symmetric Triangle Scoring Mechanism for Solving Class-Imbalanced Problems
by Shihao Song and Sibo Yang
Symmetry 2025, 17(8), 1326; https://doi.org/10.3390/sym17081326 - 14 Aug 2025
Viewed by 464
Abstract
The imbalanced classification problem is a key research topic in machine learning, since learning algorithms tend to focus on the features and patterns of the majority class while learning the minority class insufficiently, resulting in unsatisfactory performance. Scholars have attempted to solve this problem and have proposed many ideas at both the data and algorithm levels. The SMOTE (Synthetic Minority Over-sampling Technique) method is an effective approach at the data level. In this paper, we propose an oversampling method based on SMOTE and a symmetric regular-triangle scoring mechanism. The method tiles the plane with symmetric triangles and then establishes a suitable scoring mechanism to select the minority samples that participate in synthesis. After selecting the minority samples, it performs multiple linear interpolations according to the established rules to generate new minority samples. In the experimental section, we select 30 imbalanced datasets to compare the performance of the proposed method and several classical oversampling methods under different indicators. To demonstrate how these oversampling methods perform in combination with classifiers, we select three different classifiers and test their performance. The experimental results show that the TS-SMOTE method performs best. Full article
(This article belongs to the Special Issue Advances in Neural Network/Deep Learning and Symmetry/Asymmetry)
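For orientation, the sketch below shows only the classic SMOTE-style interpolation step that TS-SMOTE builds on: a synthetic minority sample is placed on the segment between a minority sample and one of its minority-class nearest neighbours. The paper's symmetric-triangle tiling and scoring mechanism is not reproduced here, and all names below are illustrative assumptions.

```python
# Minimal sketch of SMOTE-style linear interpolation (not the TS-SMOTE scoring).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_interpolate(X_min, n_new, k=5, seed=None):
    """Generate n_new synthetic minority samples by linear interpolation."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)   # +1: each point is its own neighbour
    _, idx = nn.kneighbors(X_min)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))          # pick a minority sample
        j = rng.choice(idx[i][1:])            # one of its k minority neighbours
        lam = rng.random()                    # interpolation factor in [0, 1]
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(synthetic)

# Toy usage: oversample a 2-D minority class.
rng = np.random.default_rng(0)
X_min = rng.normal(loc=0.0, scale=0.3, size=(20, 2))   # 20 minority samples
X_new = smote_interpolate(X_min, n_new=80, k=5, seed=1)
print(X_new.shape)                                      # (80, 2)
```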
