Topic Editors

Department of Informatics and Computer Engineering, University of West Attica, 12243 Athens, Greece
Department of Informatics, Ionian University, 491 32 Corfu, Greece
Department of Informatics, Ionian University, 491 32 Corfu, Greece

Artificial Intelligence Models, Tools and Applications: 2nd Edition

Abstract submission deadline
30 September 2026
Manuscript submission deadline
30 November 2026
Viewed by
4045

Topic Information

Dear Colleagues,

In recent years, the need for efficient Artificial Intelligence models, tools, and applications has become increasingly evident. Machine Learning and data science, together with the vast amounts of data they generate, offer a powerful source of valuable insights, and new, innovative approaches are needed to address the complex research challenges in this field. Within this context, Artificial Intelligence stands out as one of the most important research areas of our time. The research community in particular faces significant data-management challenges, which it must resolve by integrating emerging information-processing disciplines and by developing related tools and applications.

This Topic aims to bring together interdisciplinary approaches that focus on innovative applications and established Artificial Intelligence methodologies. Given that data is often heterogeneous and dynamic in nature, computer science researchers are encouraged to develop new or adapt existing AI models, tools, and applications to effectively address these challenges. The Topic welcomes submissions from anyone wishing to contribute relevant research work.

Dr. Phivos Mylonas
Dr. Katia Lida Kermanidis
Prof. Dr. Manolis Maragoudakis
Topic Editors

Keywords

  • artificial intelligence
  • machine learning
  • smart tools and applications
  • computational logic
  • multi-agent systems
  • cross-disciplinary AI applications

Participating Journals

Journal Name (code)           Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
Applied Sciences (applsci)    2.5            5.5        2011           16 Days                  CHF 2400
Computers (computers)         4.2            7.5        2012           17.5 Days                CHF 1800
Digital (digital)             -              4.8        2021           27.7 Days                CHF 1200
Electronics (electronics)     2.6            6.1        2012           16.4 Days                CHF 2400
Smart Cities (smartcities)    5.5            14.7       2018           25.2 Days                CHF 2000

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: Disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: Protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: Increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: Receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (8 papers)

14 pages, 1388 KB  
Article
Table-Aware Row-Level RAG for Classical Chinese Understanding
by Zhihao Liu and Waiyie Leong
Computers 2026, 15(4), 221; https://doi.org/10.3390/computers15040221 - 2 Apr 2026
Viewed by 201
Abstract
The classical Chinese language is characterized by a high density of meaning, wide use of polysemy, and strong dependence on history and culture, which pose challenges to existing large language models (LLMs). Retrieval-augmented generation (RAG) can address these issues without retraining the model, but most existing RAG systems treat structured tables as unstructured text, encoding a whole table into a single vector. This schema hides row-level semantic information and raises the reasoning cost for LLMs. In this study, we propose a table-aware row-wise retrieval system in which each row of a table is treated as an individual semantic unit, making reasoning at generation time explicit rather than implicit. We organize each table into row-level vector representations, which makes retrieval more deterministic and semantically interpretable, particularly for pedagogical or philological datasets. Built on LangChain and integrated with Qwen LLMs, the system is evaluated on classical Chinese learning tasks, where we find that, compared with traditional RAG systems, it improves retrieval performance, semantic consistency, and explainability, with no model training or extra computation time required. Full article
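The row-as-unit idea can be sketched in a few lines of Python. This is a hedged illustration, not the authors' code: the table contents, field names, and chunk format are invented, and a real system would embed each chunk with a vector model before indexing.

```python
# Minimal sketch: treat each row of a (header, rows) table as its own
# retrieval unit, prefixing the column header so every chunk is
# self-describing rather than hidden inside one whole-table embedding.

def table_to_row_chunks(header, rows):
    """Turn a table into row-level documents for indexing.

    Each chunk pairs the column header with a single row, so a
    retriever can match a query against individual rows.
    """
    chunks = []
    for i, row in enumerate(rows):
        pairs = ", ".join(f"{h}: {v}" for h, v in zip(header, row))
        chunks.append({"row_id": i, "text": pairs})
    return chunks

# Invented example data for illustration.
header = ["Term", "Dynasty", "Gloss"]
rows = [
    ["仁", "Zhou", "benevolence"],
    ["礼", "Zhou", "ritual propriety"],
]
chunks = table_to_row_chunks(header, rows)
# chunks[0]["text"] == "Term: 仁, Dynasty: Zhou, Gloss: benevolence"
```

Because each chunk carries the header, a match on one row stays interpretable on its own, which is the determinism and explainability benefit the abstract describes.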

18 pages, 37747 KB  
Article
Factually Consistent Prompting with LLMs for Cross-Lingual Dialogue Summarization
by Zhongtian Bao, Wenjian Ding, Yao Zhang, Jun Wang, Zhe Sun, Andrzej Cichocki and Zhenglu Yang
Computers 2026, 15(3), 197; https://doi.org/10.3390/computers15030197 - 21 Mar 2026
Viewed by 275
Abstract
Recent breakthroughs in large language models have made it feasible to effectively summarize cross-lingual dialogue information, proving essential for the global communication context. However, existing methodologies encounter difficulties in maintaining factual consistency across multiple dialogue exchanges and lack clear explanations of the summarization process. This paper presents a novel factually consistent prompting technology with large language models to address these challenges in cross-lingual dialogue summarization. First, we propose a factual replacement mechanism to enhance information analysis by incorporating noise information into summarization candidates. We adopt a self-guidance framework to enforce factual consistency, enhancing information flow tracking in cross-lingual hybrid dialogue scenarios with the assistance of GPT-based models. Furthermore, we introduce a view-aware chain-of-thought-driven architecture to improve the interpretability and transparency of the cross-lingual dialogue summarization process. Comprehensive experimental evaluations on cross-lingual summarization tasks, spanning English, French, Spanish, Russian, Chinese, and Arabic, and hybrid cross-lingual tasks substantiate that the proposed model achieves superior performance relative to state-of-the-art baselines. Full article

16 pages, 5787 KB  
Article
USTGCN: A Unified Spatio-Temporal Graph Convolutional Network for Stock-Ranking Prediction
by Wenjie Yao, Lele Gao, Xiangzhou Zhang, Haotao Chen, Mingzhe Liu and Yong Hu
Electronics 2026, 15(6), 1317; https://doi.org/10.3390/electronics15061317 - 21 Mar 2026
Viewed by 259
Abstract
Stock-ranking prediction is an important task in quantitative finance because it directly influences portfolio construction and alpha generation. Recent Graph Neural Network (GNN) models provide a promising way to describe inter-stock dependencies, but many existing methods still have difficulty balancing rapidly changing market interactions with relatively stable structural relationships. They are also easily affected by financial micro-structure noise. To address these issues, this paper proposes USTGCN, a Unified Spatio-Temporal Graph Convolutional Network for stock-ranking prediction. USTGCN adopts a dual-stream temporal encoder based on ALSTM and GRU to capture short-term dynamic patterns and longer-horizon structural information, respectively. We further introduce a rolling-window correlation smoothing strategy to build a more stable dynamic graph, and then integrate the dynamic and structural graph views through a shared fusion layer. Skip connections are used to preserve original temporal information during spatial aggregation. Experiments on the CSI100 and CSI300 benchmark datasets show that USTGCN achieves IC values of 0.141 and 0.154, respectively, and exhibits improved drawdown control during stressed market periods, indicating its practical value for quantitative trading. Full article
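The rolling-window correlation smoothing can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the window length and the simple averaging rule are assumptions, and a real system would compute this over return series for every stock pair to weight graph edges.

```python
# Sketch: stabilise a pairwise stock correlation by averaging Pearson
# correlations over rolling windows, instead of relying on one noisy
# full-sample (or single-window) estimate.
from statistics import mean, pstdev

def pearson(x, y):
    """Population Pearson correlation of two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    sx, sy = pstdev(x), pstdev(y)
    return cov / (sx * sy) if sx and sy else 0.0

def smoothed_correlation(r1, r2, window=5):
    """Average windowed correlations to build a stable dynamic-graph edge."""
    cors = [
        pearson(r1[t : t + window], r2[t : t + window])
        for t in range(len(r1) - window + 1)
    ]
    return mean(cors)
```

Averaging across overlapping windows damps the micro-structure noise the abstract mentions, at the cost of reacting more slowly to genuine regime changes.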

18 pages, 1924 KB  
Article
A Hybrid Optimization Model for Transformer Fault Diagnosis Based on Gas Classification
by Junju Lai, Dongpeng Weng, Feng Xian, Yuandong Xie, Yujie Chen, Qian Zhou and Chao Yuan
Digital 2026, 6(1), 24; https://doi.org/10.3390/digital6010024 - 10 Mar 2026
Viewed by 416
Abstract
Dissolved gas analysis (DGA) provides valuable information for transformer condition monitoring, yet accurate multi-class fault identification remains challenging due to overlapping gas patterns and the sensitivity of classifier hyperparameters. This study proposes a hybrid optimization framework that combines Particle Swarm Optimization and Grey Wolf Optimization to tune the hyperparameters of a Support Vector Machine (SVM) for transformer fault diagnosis based on gas classification. The model is evaluated on a DGA dataset using a strict protocol that separates cross-validation–based tuning from held-out test assessment. Experimental results show that the proposed hybrid PSO-GWO-SVM achieves superior diagnostic performance and more stable convergence compared with representative single-optimizer baselines, demonstrating its potential for practical transformer fault identification. Full article
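The hyperparameter-search idea can be sketched with a small particle swarm. This is not the paper's hybrid PSO-GWO code: it shows plain PSO only, the inertia and acceleration constants are common defaults rather than the paper's settings, and a toy quadratic with a known minimum stands in for the real objective (cross-validated SVM error over C and gamma) so the loop is self-contained.

```python
# Hedged sketch of swarm-based hyperparameter tuning over (C, gamma).
import random

def pso_minimise(objective, bounds, n_particles=12, iters=60, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]       # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy stand-in for cross-validated SVM error, minimised at (1.0, 0.1).
toy_error = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.1) ** 2
best, err = pso_minimise(toy_error, bounds=[(0.01, 10.0), (0.001, 1.0)])
```

The hybrid in the paper additionally injects Grey Wolf Optimization's leader-guided updates; in a sketch like this that would amount to mixing the velocity term with attraction toward the top few particles rather than the single global best.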

17 pages, 761 KB  
Article
Obstacle Avoidance in Mobile Robotics: A CNN-Based Approach Using CMYD Fusion of RGB and Depth Images
by Chaymae El Mechal, Mostefa Mesbah and Najiba El Amrani El Idrissi
Digital 2026, 6(1), 20; https://doi.org/10.3390/digital6010020 - 2 Mar 2026
Viewed by 374
Abstract
Over the last few years, deep neural networks have achieved outstanding results in computer vision, and have been widely integrated into mobile robot obstacle avoidance systems, where perception-driven classification supports navigation decisions. Most existing approaches rely on either color images (RGB) or depth images (D) as the primary source of information, which limits their ability to jointly exploit appearance and geometric cues. This paper proposes a deep learning-based classification approach that simultaneously exploits RGB and depth information for mobile robot obstacle avoidance. The method adopts an early-stage fusion strategy in which RGB images are first converted into the CMYK color space, after which the K (black) channel is replaced by a normalized depth map to form a four-channel CMYD representation. This representation preserves chromatic information while embedding geometric structure in an intensity-consistent channel and is used as input to a convolutional neural network (CNN). The proposed method is evaluated using locally acquired data under different training options and hyperparameter settings. Experimental results show that, when using the baseline CNN architecture, the proposed fusion strategy achieves an overall classification accuracy of 93.3%, outperforming depth-only inputs (86.5%) and RGB-only images (92.9%). When the refined CNN architecture is employed, classification accuracy is further improved across all tested input representations, reaching approximately 93.9% for RGB images, 91.0% for depth-only inputs, 94.6% for the CMYK color space, and 96.2% for the proposed CMYD fusion. These results demonstrate that combining appearance and depth information through CMYD fusion is beneficial regardless of the network variant, while the refined CNN architecture further enhances the effectiveness of the fused representation for robust obstacle avoidance. Full article
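The CMYD construction can be sketched per pixel using the standard RGB-to-CMYK conversion. The depth normalisation shown (divide by the sensor's maximum range) is an assumption; the paper's exact scaling may differ.

```python
# Sketch: build a four-channel CMYD pixel by converting RGB to CMYK and
# replacing the K (black) channel with a normalised depth value.

def rgb_to_cmyk(r, g, b):
    """r, g, b in [0, 1]. Returns (c, m, y, k), each in [0, 1]."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:                      # pure black: chroma is undefined
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

def cmyd_pixel(r, g, b, depth, d_max):
    """Drop K, substituting depth normalised to [0, 1] as the 4th channel."""
    c, m, y, _ = rgb_to_cmyk(r, g, b)
    return c, m, y, depth / d_max
```

The appeal of this fusion is that C, M, and Y retain the chromatic content while the fourth channel carries geometry in the same intensity range, so a stock CNN first layer can consume it without architectural changes.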

22 pages, 4427 KB  
Article
Target Detection in Underground Mines Based on Low-Light Image Enhancement
by Haodong Guo, Kaibo Lu, Shanning Zhan, Jiangtao Li and Zhifei Wu
Digital 2026, 6(1), 13; https://doi.org/10.3390/digital6010013 - 25 Feb 2026
Viewed by 504
Abstract
The complex environments of underground mines, with dim lighting, heavy dust, and high humidity, hamper feature extraction and reduce detection accuracy. To address this, we propose a low-light image enhancement-based target detection algorithm. Firstly, LIENet enhances low-light image quality and brightness via a dual-gamma curve and non-reference loss function-guided iterations. Secondly, the hierarchical feature extraction (HFE) method with a dual-branch structure captures long-term and local correlations, focusing on critical corner regions. Finally, HFE is combined with a feature pyramid structure for comprehensive feature representation through a top-down global adjustment. Our method, validated on a self-built dataset, outperforms other algorithms with an mAP@0.5 of 96.96% and mAP@0.5:0.95 of 71.1%, proving excellent low-light detection performance in mines. Full article

25 pages, 6684 KB  
Article
Physics-Guided Dynamic Sparse Attention Network for Gravitational Wave Detection Across Ground and Space-Based Observatories
by Tiancong Zhang and Wei Bian
Electronics 2026, 15(4), 838; https://doi.org/10.3390/electronics15040838 - 15 Feb 2026
Viewed by 382
Abstract
Ground-based and space-based gravitational wave (GW) detectors cover complementary frequency bands, laying the foundation for future multi-band collaborative observations. Detecting weak signals within non-stationary noise remains challenging. To address this, we propose a Physics-Guided Dynamic Sparse Attention (PGDSA) framework. The framework introduces a differentiable wavelet layer to explicitly embed sensitive frequency bands and time–frequency priors while utilizing intra-block Top-K sparse attention for efficient long-range temporal modeling. Training is performed on space-based simulation data with joint optimization for signal detection and waveform reconstruction. We then evaluate detection performance and zero-shot transfer capability on ground-based data. Experimental results show that PGDSA achieves an ROC-AUC of 0.886 on the Kaggle G2Net private leaderboard. On GWOSC O3 real data, the model yields high confidence scores for confirmed binary black hole events. On LISA simulation data, the framework achieves detection rates exceeding 99% for multiple signal types (SNR = 50, FAR = 1%) with waveform reconstruction Overlap comparable to baseline methods. These results demonstrate that PGDSA enables unified modeling across both space-based and ground-based scenarios. Full article
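The intra-block Top-K sparse attention step can be illustrated for a single query: keep only the K largest attention logits, softmax over those, and zero the rest. This is a simplified scalar sketch, not the framework's batched Transformer implementation.

```python
# Sketch: Top-K sparse attention weights for one query position.
import math

def topk_sparse_attention(scores, k):
    """scores: attention logits for one query over all keys.
    Returns a weight per key; all but the top-k are exactly zero."""
    keep = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    m = max(scores[i] for i in keep)          # subtract max for stability
    exps = {i: math.exp(scores[i] - m) for i in keep}
    z = sum(exps.values())
    return [exps.get(i, 0.0) / z for i in range(len(scores))]
```

Restricting each query to its top-k keys cuts the cost of long-range temporal modeling while concentrating weight on the most relevant time steps, which is the efficiency argument the abstract makes.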

21 pages, 15857 KB  
Article
LogPPO: A Log-Based Anomaly Detector Aided with Proximal Policy Optimization Algorithms
by Zhihao Wang, Jiachen Dong and Chuanchuan Yang
Smart Cities 2026, 9(1), 5; https://doi.org/10.3390/smartcities9010005 - 26 Dec 2025
Cited by 1 | Viewed by 841
Abstract
Cloud-based platforms form the backbone of smart city ecosystems, powering essential services such as transportation, energy management, and public safety. However, their operational complexity generates vast volumes of system logs, making manual anomaly detection infeasible and raising reliability concerns. This study addresses the challenge of data scarcity in log anomaly detection by leveraging Large Language Models (LLMs) to enhance domain-specific classification tasks. We empirically validate that domain-adapted classifiers preserve strong natural language understanding, and introduce a Proximal Policy Optimization (PPO)-based approach to align semantic patterns between LLM outputs and classifier preferences. Experiments were conducted using three Transformer-based baselines under few-shot conditions across four public datasets. Results indicate that integrating natural language analyses improves anomaly detection F1-Scores by 5–86% over the baselines, while iterative PPO refinement boosts the classifier’s “confidence” in label prediction. This research pioneers a novel framework for few-shot log anomaly detection, establishing an innovative paradigm in resource-constrained diagnostic systems in smart city infrastructures. Full article
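The PPO component rests on the standard clipped surrogate objective, which can be shown per sample. This is illustrative only, not the paper's training code; a real PPO loop batches this over trajectories and adds value and entropy terms.

```python
# Sketch: PPO's clipped surrogate loss for a single (state, action) sample.

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """ratio = pi_new(a|s) / pi_old(a|s); negated because we minimise.
    Clipping caps how far one update can push the policy."""
    clipped = min(max(ratio, 1.0 - eps), 1.0 + eps)
    return -min(ratio * advantage, clipped * advantage)
```

Taking the minimum of the unclipped and clipped terms makes the objective pessimistic: the policy gains nothing from moving the probability ratio beyond the 1 ± eps band, which is what keeps the refinement iterations stable.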
