
Deep Learning Approaches for Natural Language Processing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 March 2026) | Viewed by 12816

Special Issue Editors


Prof. Dr. Aleksandra Świetlicka
Guest Editor
Institute of Automatic Control and Robotics, Faculty of Control, Robotics and Electrical Engineering, Poznan University of Technology, ul. Piotrowo 3A, 60-965 Poznań, Poland
Interests: machine learning; deep learning; artificial neural networks; natural language processing; graph neural networks

Prof. Dr. Aleksandra Kawala-Sterniuk
Guest Editor
Department of Artificial Intelligence, Wroclaw University of Science and Technology, 50-370 Wroclaw, Poland
Interests: machine learning; artificial intelligence; biomedical data processing; brain–computer interfaces; neurocomputing

Dr. Dariusz Mikołajewski
Guest Editor
Faculty of Computer Science, Kazimierz Wielki University, 85-064 Bydgoszcz, Poland
Interests: artificial intelligence; biomedical data processing; brain–computer interfaces; healthcare informatics

Special Issue Information

Dear Colleagues,

Rapid advancements in deep learning have revolutionized the field of natural language processing (NLP), enabling unprecedented capabilities in understanding, generating, and interacting with human language. This Special Issue of Electronics focuses on the latest developments, challenges, and applications of deep learning in NLP, bringing together researchers, practitioners, and industry experts to share cutting-edge methodologies, innovative models, and transformative insights.

Deep learning has introduced powerful architectures, such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), and transformers, as well as large-scale pre-trained language models, including GPT, BERT, and T5, that have significantly advanced core NLP tasks. These include machine translation, text summarization, question answering, sentiment analysis, and named entity recognition. While these techniques have reshaped the boundaries of NLP performance, they also present challenges related to computational demands, data scarcity, interpretability, and fairness.

This issue invites contributions that address these challenges and expand the horizons of deep learning in NLP. Topics of interest include, but are not limited to, the following:

  • Novel neural architectures and optimization techniques for NLP;
  • Advances in pre-training, fine-tuning, and transfer learning for linguistic tasks;
  • Resource-efficient deep learning methods for NLP on edge devices;
  • Multilingual and cross-lingual models for diverse language applications;
  • Ethical concerns, including bias mitigation, fairness, and transparency in NLP systems;
  • Case studies highlighting real-world applications in industries such as healthcare, education, and finance.

Additionally, this issue encourages submissions that bridge deep learning and linguistics, offering insights into how neural models align with, or diverge from, human language processing. Explorations of hybrid systems that integrate symbolic reasoning and deep learning for more robust language understanding are also welcome.

By providing a platform for groundbreaking research and practical advancements, this Special Issue will foster innovation and collaboration, driving the next generation of NLP systems. Researchers and practitioners are invited to submit original research articles, comprehensive reviews, and insightful case studies to contribute to this vibrant area of study.

Prof. Dr. Aleksandra Świetlicka
Prof. Dr. Aleksandra Kawala-Sterniuk
Dr. Dariusz Mikołajewski
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 

Keywords

  • natural language processing
  • large language models
  • machine learning
  • speech analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research


33 pages, 2017 KB  
Article
GTHL-Emo: Adaptive Imbalance-Aware and Correlation-Aligned Training for Arabic Multi-Label Emotion Detection
by Mashary N. Alrasheedy, Sabrina Tiun and Fariza Fauzi
Electronics 2026, 15(6), 1169; https://doi.org/10.3390/electronics15061169 - 11 Mar 2026
Viewed by 351
Abstract
Multi-label emotion detection (MLED) suffers from long-tailed label distributions and structured inter-label correlations, which jointly suppress rare label recall and yield incoherent predictions. We present Graph Neural Network-Enhanced Transformer with Hybrid Loss Weighting (GTHL-Emo), a unified framework that addresses both challenges without heavy additional machinery. First, an adaptive imbalance-aware training scheme combines binary cross-entropy, asymmetric focal, and pairwise ranking losses under a learned batch-wise controller, emphasizing rare labels while stabilizing thresholding. Second, a lightweight correlation alignment module learns transformer-based label embeddings and aligns their predicted affinities with empirical co-occurrence via Kullback–Leibler (KL) regularization, smoothing rare label predictions through correlated frequent labels. A transformer encoder with learnable attention pooling provides semantic representations, and a dynamic GraphSAGE layer captures inter-instance structural dependencies. Comprehensive evaluation across three Arabic benchmarks—SemEval-2018-Ec-Ar, ExaAEC, and SemEval-2025 (Track A, Arq)—demonstrates competitive or leading performance. On SemEval-2018-Ec-Ar, GTHL-Emo attained a Jaccard accuracy of 58.70%, micro-F1 score of 71.02%, and macro-F1 score of 60.48%. On ExaAEC, it achieved a Jaccard accuracy of 65.99%, micro-F1 score of 70.72%, and macro-F1 score of 68.71%. On SemEval-2025-Arq, it obtained a Jaccard accuracy of 41.47%, micro-F1 score of 56.78%, and macro-F1 score of 56.69%. Ablation studies revealed that the GraphSAGE structure and ranking loss contributed most significantly (1.45% and 1.46% Jaccard accuracy drops, respectively), while label correlation alignment provided consistent improvements across the scales. These findings demonstrate that jointly optimizing imbalance-aware objectives and label dependencies yields robust Arabic MLED with minimal overhead. Full article
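The correlation-alignment idea in this abstract — pulling predicted label affinities toward the empirical co-occurrence distribution with a KL term — can be illustrated with a minimal NumPy sketch. The embeddings, counts, and label dimensionality below are random stand-ins invented for illustration; the paper's actual module learns transformer-based label embeddings inside the training loop:

```python
import numpy as np

def row_softmax(m):
    e = np.exp(m - m.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def kl_alignment_loss(label_emb, cooccurrence):
    """Mean KL(empirical co-occurrence || predicted affinity) over labels.

    label_emb: (L, d) label embeddings; the predicted affinity is the
    row-wise softmax of their pairwise dot products.
    cooccurrence: (L, L) empirical label co-occurrence counts.
    """
    pred = row_softmax(label_emb @ label_emb.T)
    emp = cooccurrence / cooccurrence.sum(axis=1, keepdims=True)
    eps = 1e-12  # numerical floor to avoid log(0)
    return float(np.mean(np.sum(emp * (np.log(emp + eps) - np.log(pred + eps)), axis=1)))

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))                        # 5 emotion labels, 8-dim embeddings
co = rng.integers(1, 20, size=(5, 5)).astype(float)
co = (co + co.T) / 2                                 # symmetric co-occurrence counts
loss = kl_alignment_loss(emb, co)
print(loss)
```

Minimizing this term encourages rare labels to borrow statistical strength from the frequent labels they co-occur with, which is the smoothing effect the abstract describes.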
(This article belongs to the Special Issue Deep Learning Approaches for Natural Language Processing)

38 pages, 4310 KB  
Article
Designing Trustworthy Recommender Systems: A Glass-Box, Interpretable, and Auditable Approach
by Parisa Vahdatian, Majid Latifi and Mominul Ahsan
Electronics 2025, 14(24), 4890; https://doi.org/10.3390/electronics14244890 - 12 Dec 2025
Cited by 1 | Viewed by 848
Abstract
Recommender systems are widely deployed across digital platforms, yet their opacity raises concerns about auditability, fairness, and user trust. This study proposes a glass-box architecture for trustworthy recommendation, designed to reconcile predictive performance with interpretability. The framework integrates interpretable tree ensemble models (Random Forest, XGBoost) with an NLP sub-model for tag sentiment, prioritising transparency from feature engineering through to explanation. Additionally, a Reality Check mechanism enforces strict temporal separation and removes already-popular items, compelling the model to forecast latent growth signals rather than mimic popularity thresholds. Evaluated on the MovieLens dataset, the glass-box architectures demonstrated superior discrimination capabilities, with the Random Forest and XGBoost models achieving ROC-AUC scores of 0.92 and 0.91, respectively. These tree ensembles notably outperformed the standard Logistic Regression (0.89) and the neural baseline (MLP model with 0.86). Beyond accuracy, the design implements governance through a multi-layered Governance Stack: (i) attribution and traceability via exact TreeSHAP values, (ii) stability verification using ICE plots and sensitivity analysis across policy configurations, and (iii) fairness audits detecting genre and temporal bias. Dynamic threshold optimisation further improves recall for emerging items under severe class imbalance. Cross-domain validation on the Amazon Electronics test dataset confirmed architectural generalisability (AUC = 0.89), demonstrating robustness in sparse, high-friction environments. These findings challenge the perceived trade-off between accuracy and interpretability, offering a practical blueprint for Safe-by-Design recommender systems that embed fairness, accountability, and auditability as intrinsic properties rather than post hoc add-ons. Full article
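The dynamic threshold optimisation mentioned in the abstract can be sketched as a simple validation-set sweep. The recall-weighted F-beta objective and the synthetic scores below are our own assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def best_threshold(scores, labels, beta=2.0):
    """Sweep candidate thresholds and pick the one maximizing F-beta.

    beta > 1 weights recall more heavily than precision, which favours
    surfacing emerging (rare-positive) items under class imbalance.
    """
    best_t, best_f = 0.5, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        if prec + rec:
            f = (1 + beta**2) * prec * rec / (beta**2 * prec + rec)
            if f > best_f:
                best_t, best_f = float(t), f
    return best_t, best_f

rng = np.random.default_rng(1)
labels = (rng.random(500) < 0.05).astype(int)                 # ~5% positives: imbalanced
scores = np.clip(0.6 * labels + rng.normal(0.3, 0.15, 500), 0, 1)
t, f = best_threshold(scores, labels)
print(t, f)
```

With a fixed 0.5 cutoff, most rare positives would be missed; tuning the cutoff on held-out data is one standard way to recover recall under imbalance.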
(This article belongs to the Special Issue Deep Learning Approaches for Natural Language Processing)

63 pages, 3502 KB  
Article
A Novel Architecture for Understanding, Context Adaptation, Intentionality and Experiential Time in Emerging Post-Generative AI Through Sophimatics
by Gerardo Iovane and Giovanni Iovane
Electronics 2025, 14(24), 4812; https://doi.org/10.3390/electronics14244812 - 7 Dec 2025
Cited by 2 | Viewed by 1011
Abstract
Contemporary artificial intelligence is dominated by generative systems that excel at extracting patterns but fail to grasp meaning, sense, context, and experiential temporality. This limitation highlights the need for new computational wisdom that combines philosophical insights with advanced models to produce AI systems capable of authentic understanding. Sophimatics, as elaborated upon in this article, is introduced as a science of computational wisdom that rejects the purely syntactic manipulation of symbols characteristic of classical physical symbolic systems and addresses the shortcomings of generative statistical approaches. Building on philosophical foundations of dynamic ontology, intentionality, and dialectical reasoning, Sophimatics integrates complex temporality, multidimensional semantic modeling, hybrid symbolic–connectionist logic, and layered memory structures so that the AI can perceive, remember, reason, and act in ethically grounded ways. This article, which is part of a set of papers, summarizes the theoretical framework underlying Sophimatics and outlines the conceptual results of the materials and methods, illustrating the potential of this approach to improve interpretability, contextual adaptation, and ethical deliberation compared to basic generative models. This is followed by a methodology and a complete formal model for translating philosophical categories into an operational model and specific architecture. This article represents Phase 1 of a six-phase research program, providing mathematical foundations for the architectural implementation and empirical validation presented in companion publications. Following this, several use cases are outlined, and then the Discussion Section anticipates the main results and perspectives for post-generative AI solutions within the Sophimatic paradigm. Full article
(This article belongs to the Special Issue Deep Learning Approaches for Natural Language Processing)

23 pages, 884 KB  
Article
Large Language Models for Structured Information Processing in Construction and Facility Management
by Kyrylo Buga, Ratko Tesic, Elif Koyuncu and Thomas Hanne
Electronics 2025, 14(20), 4106; https://doi.org/10.3390/electronics14204106 - 20 Oct 2025
Cited by 1 | Viewed by 1354
Abstract
This study examines how the integration of structured information affects the performance of large language models (LLMs) in the context of facility management. The aim is to determine to what extent structured data such as maintenance schedules, room information, and asset inventories can improve the accuracy, correctness, and contextual relevance of LLM-generated responses. We focused on scenarios involving function calling of a database with building information. Three use cases were developed to reflect different combinations of structured and unstructured input and output. The research follows a design science methodology and includes the implementation of a modular testing prototype, incorporating empirical experiments using various LLMs (Gemini, Llama, Qwen, and Mistral). The evaluation pipeline consists of three steps: user query translation (natural language into SQL), query execution, and final response (translating the SQL query results into natural language). The evaluation was based on defined criteria such as SQL execution validity, semantic correctness, contextual relevance, and hallucination rate. The study found that the use cases involving function calling were mostly successful, and execution validity improved by up to 67% when schema information was provided. Full article
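The three-step evaluation pipeline (query translation, execution, response generation) can be sketched with Python's built-in sqlite3, replacing both LLM calls with hard-coded stubs. The assets table, schema string, function names, and query are all invented for illustration, not taken from the paper's implementation:

```python
import sqlite3

def llm_translate_to_sql(question, schema):
    """Stand-in for the LLM translation step: returns a canned SQL query.
    A real pipeline would prompt a model with the question and the schema."""
    return "SELECT room, next_service FROM assets WHERE asset = 'HVAC unit'"

def llm_render_answer(question, rows):
    """Stand-in for the response step: verbalize the SQL query results."""
    room, date = rows[0]
    return f"The HVAC unit in room {room} is scheduled for service on {date}."

# Toy building-information database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (asset TEXT, room TEXT, next_service TEXT)")
conn.executemany("INSERT INTO assets VALUES (?, ?, ?)",
                 [("HVAC unit", "B102", "2025-11-03"),
                  ("Elevator", "A001", "2025-12-15")])

schema = "assets(asset, room, next_service)"
question = "When is the HVAC unit due for maintenance?"
sql = llm_translate_to_sql(question, schema)   # step 1: natural language -> SQL
rows = conn.execute(sql).fetchall()            # step 2: execute against the database
answer = llm_render_answer(question, rows)     # step 3: SQL results -> natural language
print(answer)
```

Passing the schema string into the translation step mirrors the finding that providing schema information improves execution validity.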
(This article belongs to the Special Issue Deep Learning Approaches for Natural Language Processing)

14 pages, 1646 KB  
Article
Arabic WikiTableQA: Benchmarking Question Answering over Arabic Tables Using Large Language Models
by Fawaz Alsolami and Asmaa Alrayzah
Electronics 2025, 14(19), 3829; https://doi.org/10.3390/electronics14193829 - 27 Sep 2025
Viewed by 1180
Abstract
Table-based question answering (TableQA) has made significant progress in recent years; however, most advancements have focused on English datasets and SQL-based techniques, leaving Arabic TableQA largely unexplored. This gap is especially critical given the widespread use of structured Arabic content in domains such as government, education, and media. The main challenge lies in the absence of benchmark datasets and the difficulty that large language models (LLMs) face when reasoning over long, complex tables in Arabic, due to token limitations and morphological complexity. To address this, we introduce Arabic WikiTableQA, the first large-scale dataset for non-SQL Arabic TableQA, constructed from the WikiTableQuestions dataset and enriched with natural questions and gold-standard answers. We developed three methods to evaluate this dataset: a direct input approach, a sub-table selection strategy using SQL-like filtering, and a knowledge-guided framework that filters the table using semantic graphs. Experimental results with an LLM show that the graph-guided approach outperforms the others, achieving 74% accuracy, compared to 64% for sub-table selection and 45% for direct input, demonstrating its effectiveness in handling long and complex Arabic tables. Full article
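As a rough illustration of sub-table selection, the sketch below filters table rows by token overlap with the question before any model call, so a long table fits within a context window. This crude keyword filter merely stands in for the paper's SQL-like and graph-guided methods, and the table contents are invented:

```python
def select_subtable(rows, question, max_rows=3):
    """Keep only rows sharing tokens with the question — a crude
    stand-in for SQL-like sub-table selection before prompting an LLM."""
    q_tokens = {tok.strip("?.,!") for tok in question.lower().split()}
    scored = []
    for row in rows:
        overlap = sum(1 for cell in row
                      for tok in str(cell).lower().split()
                      if tok in q_tokens)
        scored.append((overlap, row))
    scored.sort(key=lambda pair: -pair[0])     # most relevant rows first
    return [row for overlap, row in scored[:max_rows] if overlap > 0]

# Toy table: city, year, population
rows = [["Riyadh", 2020, 7676654],
        ["Jeddah", 2020, 3751722],
        ["Mecca", 2020, 2042106],
        ["Medina", 2020, 1488782]]
sub = select_subtable(rows, "What was the population of Jeddah?")
print(sub)  # → [['Jeddah', 2020, 3751722]]
```

Shrinking the table before prompting addresses the token-limit problem the abstract highlights for long Arabic tables, though real systems need semantics-aware filtering rather than surface matching.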
(This article belongs to the Special Issue Deep Learning Approaches for Natural Language Processing)

15 pages, 5650 KB  
Article
Enhancing Interprofessional Communication in Healthcare Using Large Language Models: Study on Similarity Measurement Methods with Weighted Noun Embeddings
by Ji-Young Yeo, Sungkwan Youm and Kwang-Seong Shin
Electronics 2025, 14(11), 2240; https://doi.org/10.3390/electronics14112240 - 30 May 2025
Viewed by 1104
Abstract
Large language models (LLMs) are increasingly applied to specialized domains like medical education, necessitating tailored approaches to evaluate structured responses such as SBAR (Situation, Background, Assessment, Recommendation). This study developed an evaluation tool for nursing student responses using LLMs, focusing on word-based learning and assessment methods to align automated scoring with expert evaluations. We propose a three-stage biasing approach: (1) integrating reference answers into the training corpus; (2) incorporating high-scoring student responses; (3) applying domain-critical token weighting through Weighted Noun Embeddings to enhance similarity measurements. By assigning higher weights to critical medical nouns and lower weights to less relevant terms, the embeddings prioritize domain-specific terminology. Employing Word2Vec and FastText models trained on general conversation, medical, and reference answer corpora alongside Sentence-BERT for comparison, our results demonstrate that biasing with reference answers, high-scoring responses, and weighted embeddings improves alignment with human evaluations. Word-based models, particularly after biasing, effectively distinguish high-performing responses from lower ones, as evidenced by increased cosine similarity differences. These findings validate that the proposed methodology enhances the precision and objectivity of evaluating descriptive answers, offering a practical solution for educational settings where fairness and consistency are paramount. Full article
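The weighted-embedding similarity idea can be sketched in a few lines of NumPy: each answer is embedded as a weight-averaged bag of word vectors, with domain-critical nouns upweighted before the cosine comparison. The vectors here are random stand-ins rather than trained Word2Vec/FastText embeddings, and the vocabulary and weight values are invented:

```python
import numpy as np

def weighted_similarity(tokens_a, tokens_b, vectors, noun_weights):
    """Cosine similarity between two answers, each embedded as a
    weight-averaged bag of word vectors; domain-critical nouns carry
    higher weights so they dominate the comparison."""
    def embed(tokens):
        vecs = [noun_weights.get(t, 1.0) * vectors[t] for t in tokens if t in vectors]
        return np.mean(vecs, axis=0)
    a, b = embed(tokens_a), embed(tokens_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
vocab = ["patient", "dyspnea", "oxygen", "stable", "lunch", "weather"]
vectors = {w: rng.normal(size=16) for w in vocab}          # stand-in word vectors
weights = {"dyspnea": 3.0, "oxygen": 3.0, "patient": 2.0}  # domain-critical nouns

reference = ["patient", "dyspnea", "oxygen"]               # gold-standard answer tokens
good = ["patient", "oxygen", "dyspnea", "stable"]          # covers the key nouns
poor = ["lunch", "weather", "stable"]                      # off-topic response
print(weighted_similarity(reference, good, vectors, weights))
print(weighted_similarity(reference, poor, vectors, weights))
```

Because the shared medical nouns carry 2–3x weight, a response covering them scores much closer to the reference than an off-topic one, which is the separation effect the abstract reports.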
(This article belongs to the Special Issue Deep Learning Approaches for Natural Language Processing)

Review


24 pages, 1006 KB  
Review
Adaptation of Fuzzy Systems Based on Ordered Fuzzy Numbers: A Review of Applications and Development Prospects
by Olga Małolepsza, Dariusz Mikołajewski and Piotr Prokopowicz
Electronics 2025, 14(12), 2341; https://doi.org/10.3390/electronics14122341 - 7 Jun 2025
Cited by 1 | Viewed by 1718
Abstract
This paper presents a comprehensive overview of the adaptation of fuzzy systems based on Ordered Fuzzy Numbers (OFNs), an extension of classical fuzzy set theory that allows for more accurate modeling of uncertainty and variability across diverse domains. Key adaptation techniques—including genetic algorithms, evolutionary programming, learning algorithms, reinforcement learning, and online adaptation—are systematically analyzed and compared in terms of their strengths, limitations, and application areas. The analysis reveals that, despite the considerable potential of OFN-based systems in fields such as engineering and the social sciences, current adaptation methods encounter challenges related to computational complexity, scalability, and real-time implementation. This work aims to provide a comprehensive overview of the state of the art in the field and inspire further research on OFN applications in various areas of science and technology. Full article
(This article belongs to the Special Issue Deep Learning Approaches for Natural Language Processing)

20 pages, 912 KB  
Review
Deep Learning Approaches to Natural Language Processing for Digital Twins of Patients in Psychiatry and Neurological Rehabilitation
by Emilia Mikołajewska and Jolanta Masiak
Electronics 2025, 14(10), 2024; https://doi.org/10.3390/electronics14102024 - 16 May 2025
Cited by 6 | Viewed by 4276
Abstract
Deep learning (DL) approaches to natural language processing (NLP) offer powerful tools for creating digital twins (DTs) of patients in psychiatry and neurological rehabilitation by processing unstructured textual data such as clinical notes, therapy transcripts, and patient-reported outcomes. Techniques such as transformer models (e.g., BERT, GPT) enable the analysis of nuanced language patterns to assess mental health, cognitive impairment, and emotional states. These models can capture subtle linguistic features that correlate with symptoms of degenerative disorders (e.g., aMCI) and mental disorders such as depression or anxiety, providing valuable insights for personalized treatment. In neurological rehabilitation, NLP models help track progress by analyzing a patient’s language during therapy, such as recovery from aphasia or cognitive decline caused by neurological deficits. DL methods integrate multimodal data by combining NLP with speech, gesture, and sensor data to create holistic DTs that simulate patient behavior and health trajectories. Recurrent neural networks (RNNs) and attention mechanisms are commonly used to analyze time-series conversational data, enabling long-term tracking of a patient’s mental health. These approaches support predictive analytics and early diagnosis by predicting potential relapses or adverse events by identifying patterns in patient communication over time. However, it is important to note that ethical considerations such as ensuring data privacy, avoiding bias, and ensuring explainability are crucial when implementing NLP models in clinical settings to ensure patient trust and safety. NLP-based DTs can facilitate collaborative care by summarizing patient insights and providing actionable recommendations to medical staff in real time. By leveraging DL, these DTs offer scalable, data-driven solutions to promote personalized care and improve outcomes in psychiatry and neurological rehabilitation. Full article
(This article belongs to the Special Issue Deep Learning Approaches for Natural Language Processing)
