Search Results (503)

Search Parameters:
Keywords = open source intelligence

29 pages, 1588 KB  
Review
A Review of Dynamic Traffic Flow Prediction Methods for Global Energy-Efficient Route Planning
by Pengyang Qi, Chaofeng Pan, Xing Xu, Jian Wang, Jun Liang and Weiqi Zhou
Sensors 2025, 25(17), 5560; https://doi.org/10.3390/s25175560 - 5 Sep 2025
Abstract
Urbanization and the traffic congestion caused by the surge in car ownership have exacerbated energy consumption and carbon emissions; dynamic traffic flow prediction and energy-saving route planning have become key to solving this problem. Dynamic traffic flow prediction accurately captures the spatio-temporal changes of traffic flow through advanced algorithms and models, providing prospective information for traffic management and travel decision-making. Energy-saving route planning optimizes travel routes based on the prediction results and reduces the time vehicles spend on congested road sections, thereby reducing fuel consumption and exhaust emissions. However, current research still has notable shortcomings: existing studies are mostly isolated and apply a single model; there is a lack of systematic comparison of the adaptability, generalization ability, and fusion potential of different models across scenarios; and the advantages of heterogeneous graph neural networks in integrating multi-source heterogeneous traffic data have yet to be exploited. This paper systematically reviews the relevant global studies from 2020 to 2025, focusing on the integration of dynamic traffic flow prediction methods with energy-saving route planning. By reviewing the application of statistical models, machine learning, deep learning, and hybrid methods in traffic forecasting, and comparing their performance using RMSE, MAPE, and other indicators, it highlights the advantages of LSTM, graph neural networks, and other models in capturing spatio-temporal features, and points out that the potential of heterogeneous graph neural networks for multi-source heterogeneous data integration has not been fully explored. To address the disconnection between traffic prediction and path planning, an integrated framework is constructed in which real-time prediction results are fed into path algorithms such as A* and Dijkstra through multi-objective cost functions that balance distance, time, and energy consumption. Finally, the challenges of data quality, algorithm efficiency, and multimodal adaptation are analyzed, and the development of standardized evaluation platforms and open-source toolkits is proposed, providing theoretical support and a practical path toward the sustainable development of intelligent transportation systems. Full article
(This article belongs to the Section Vehicular Sensing)
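The abstract couples traffic-flow forecasts to route search through a multi-objective cost function over A* or Dijkstra. As a rough sketch of that idea (not the authors' implementation; the weights, edge attributes, and toy graph are all assumptions), a Dijkstra search over a weighted sum of distance, time, and energy might look like:

```python
import heapq

def multi_objective_cost(edge, w_dist=0.3, w_time=0.4, w_energy=0.3):
    # Weighted sum of edge attributes; weights are illustrative, and in practice
    # "time" and "energy" would come from the real-time traffic prediction.
    return (w_dist * edge["distance"] +
            w_time * edge["time"] +
            w_energy * edge["energy"])

def dijkstra(graph, source, target):
    # graph: node -> list of (neighbor, edge-attribute dict)
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, edge in graph.get(u, []):
            nd = d + multi_objective_cost(edge)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # reconstruct the route back from the target
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]
```

With predicted congestion baked into the `time` and `energy` attributes, a slightly longer but free-flowing route can win over the geometrically shortest one.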

34 pages, 999 KB  
Review
Robotic Prostheses and Neuromuscular Interfaces: A Review of Design and Technological Trends
by Pedro Garcia Batista, André Costa Vieira and Pedro Dinis Gaspar
Machines 2025, 13(9), 804; https://doi.org/10.3390/machines13090804 - 3 Sep 2025
Viewed by 175
Abstract
Neuromuscular robotic prostheses have emerged as a critical convergence point between biomedical engineering, machine learning, and human–machine interfaces. This work provides a narrative state-of-the-art review regarding recent developments in robotic prosthetic technology, emphasizing sensor integration, actuator architectures, signal acquisition, and algorithmic strategies for intent decoding. Special focus is given to non-invasive biosignal modalities, particularly surface electromyography (sEMG), as well as invasive approaches involving direct neural interfacing. Recent developments in AI-driven signal processing, including deep learning and hybrid models for robust classification and regression of user intent, are also examined. Furthermore, the integration of real-time adaptive control systems with surgical techniques like Targeted Muscle Reinnervation (TMR) is evaluated for its role in enhancing proprioception and functional embodiment. Finally, this review highlights the growing importance of modular, open-source frameworks and additive manufacturing in accelerating prototyping and customization. Progress in this domain will depend on continued interdisciplinary research bridging artificial intelligence, neurophysiology, materials science, and real-time embedded systems to enable the next generation of intelligent prosthetic devices. Full article

34 pages, 1992 KB  
Article
Future Skills in the GenAI Era: A Labor Market Classification System Using Kolmogorov–Arnold Networks and Explainable AI
by Dimitrios Christos Kavargyris, Konstantinos Georgiou, Eleanna Papaioannou, Theodoros Moysiadis, Nikolaos Mittas and Lefteris Angelis
Algorithms 2025, 18(9), 554; https://doi.org/10.3390/a18090554 - 2 Sep 2025
Viewed by 158
Abstract
Generative Artificial Intelligence (GenAI) is widely recognized for its profound impact on labor market demand, supply, and skill dynamics. However, due to its transformative nature, GenAI increasingly overlaps with traditional AI roles, blurring boundaries and intensifying the need to reassess workforce competencies. To address this challenge, this paper introduces KANVAS (Kolmogorov–Arnold Network Versatile Algorithmic Solution)—a framework based on Kolmogorov–Arnold Networks (KANs), which utilize B-spline-based, compact, and interpretable neural units—to distinguish between traditional AI roles and emerging GenAI-related positions. The aim of the study is to develop a reliable and interpretable labor market classification system that differentiates these roles using explainable machine learning. Unlike prior studies that emphasize predictive performance, our work is the first to employ KANs as an explanatory tool for labor classification, to reveal how GenAI-related and European Skills, Competences, Qualifications, and Occupations (ESCO)-aligned skills differentially contribute to distinguishing modern from traditional AI job roles. Using raw job vacancy data from two labor market platforms, KANVAS implements a hybrid pipeline combining a state-of-the-art Large Language Model (LLM) with Explainable AI (XAI) techniques, including Shapley Additive Explanations (SHAP), to enhance model transparency. The framework achieves approximately 80% classification consistency between traditional and GenAI-aligned roles, while also identifying the most influential skills contributing to each category. Our findings indicate that GenAI positions prioritize competencies such as prompt engineering and LLM integration, whereas traditional roles emphasize statistical modeling and legacy toolkits. By surfacing these distinctions, the framework offers actionable insights for curriculum design, targeted reskilling programs, and workforce policy development. Overall, KANVAS contributes a novel, interpretable approach to understanding how GenAI reshapes job roles and skill requirements in a rapidly evolving labor market. Finally, the open-source implementation of KANVAS is flexible and well-suited for HR managers and relevant stakeholders. Full article

19 pages, 15830 KB  
Article
LARS: A Light-Augmented Reality System for Collective Robotic Interaction
by Mohsen Raoufi, Pawel Romanczuk and Heiko Hamann
Sensors 2025, 25(17), 5412; https://doi.org/10.3390/s25175412 - 2 Sep 2025
Viewed by 218
Abstract
Collective robotics systems hold great potential for future education and public engagement; however, only a few are utilized in these contexts. One reason is the lack of accessible tools to convey their complex, embodied interactions. In this work, we introduce the Light-Augmented Reality System (LARS), an open-source, marker-free, cross-platform tool designed to support experimentation, education, and outreach in collective robotics. LARS employs Extended Reality (XR) to project dynamic visual objects into the physical environment. This enables indirect robot–robot communication through stigmergy while preserving the physical and sensing constraints of the real robots, and enhances robot–human interaction by making otherwise hidden information visible. The system is low-cost, easy to deploy, and platform-independent without requiring hardware modifications. By projecting visible information in real time, LARS facilitates reproducible experiments and bridges the gap between abstract collective dynamics and observable behavior. We demonstrate that LARS can serve both as a research tool and as a means to motivate students and the broader public to engage with collective robotics. Its accessibility and flexibility make it an effective platform for illustrating complex multi-robot interactions, promoting hands-on learning, and expanding public understanding of collective, embodied intelligence. Full article

12 pages, 842 KB  
Article
Developing a Local Generative AI Teaching Assistant System: Utilizing Retrieval-Augmented Generation Technology to Enhance the Campus Learning Environment
by Jing-Wen Wu and Ming-Hseng Tseng
Electronics 2025, 14(17), 3402; https://doi.org/10.3390/electronics14173402 - 27 Aug 2025
Viewed by 350
Abstract
The rapid advancement of AI technologies and the emergence of large language models (LLMs) such as ChatGPT have facilitated the integration of intelligent question-answering systems into education. However, students often hesitate to ask questions, which negatively affects learning outcomes. To address this issue, this study proposes a closed, locally deployed generative AI teaching assistant system that enables instructors to upload course PDFs to generate customized Q&A platforms. The system is based on a Retrieval-Augmented Generation (RAG) architecture and was developed through a comparative evaluation of components, including open-source large language models, embedding models, and vector databases to determine the optimal setup. The implementation integrates RAG with responsive web technologies and is evaluated using a standardized test question bank. Experimental results demonstrate that the system achieves an average answer accuracy of up to 86%, indicating a strong performance in an educational context. These findings suggest the feasibility of the system as an effective, privacy-preserving AI teaching aid, offering a scalable technical solution to improve digital learning in on-premise environments. Full article
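For readers unfamiliar with the retrieval step of a RAG pipeline such as the one described, a minimal sketch may help. This is not the authors' system: the toy bag-of-words cosine similarity below stands in for a real embedding model and vector database, and all names and documents are invented.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real RAG system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank course-material chunks by similarity to the student's question.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query, chunks):
    # Ground the local LLM's answer in the retrieved context only.
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The prompt produced here would then be sent to the locally deployed LLM, keeping both course PDFs and student questions on-premise.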

23 pages, 2723 KB  
Article
Dairy DigiD: An Edge-Cloud Framework for Real-Time Cattle Biometrics and Health Classification
by Shubhangi Mahato and Suresh Neethirajan
AI 2025, 6(9), 196; https://doi.org/10.3390/ai6090196 - 22 Aug 2025
Viewed by 605
Abstract
Digital livestock farming faces a critical deployment challenge: bridging the gap between cutting-edge AI algorithms and practical implementation in resource-constrained agricultural environments. While deep learning models demonstrate exceptional accuracy in laboratory settings, their translation to operational farm systems remains limited by computational constraints, connectivity issues, and user accessibility barriers. Dairy DigiD addresses these challenges through a novel edge-cloud AI framework integrating YOLOv11 object detection with DenseNet121 physiological classification for cattle monitoring. The system employs YOLOv11-nano architecture optimized through INT8 quantization (achieving 73% model compression with <1% accuracy degradation) and TensorRT acceleration, enabling 24 FPS real-time inference on NVIDIA Jetson edge devices while maintaining 94.2% classification accuracy. Our key innovation lies in intelligent confidence-based offloading: routine detections execute locally at the edge, while ambiguous cases trigger cloud processing for enhanced accuracy. An entropy-based active learning pipeline using Roboflow reduces the annotation overhead by 65% while preserving 97% of the model performance. The Gradio interface democratizes system access, reducing technician training requirements by 84%. Comprehensive validation across ten commercial dairy farms in Atlantic Canada demonstrates robust performance under diverse environmental conditions (seasonal, lighting, weather variations). The framework achieves mAP@50 of 0.947 with balanced precision-recall across four physiological classes, while consuming 18% less energy than baseline implementations through attention-based optimization. Rather than proposing novel algorithms, this work contributes a systems-level integration methodology that transforms research-grade AI into deployable agricultural solutions. Our open-source framework provides a replicable blueprint for precision livestock farming adoption, addressing practical barriers that have historically limited AI deployment in agricultural settings. Full article
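The confidence-based offloading policy described (routine detections stay on the edge device; ambiguous ones escalate to the cloud) reduces to a threshold test per detection. A minimal sketch, with the 0.8 cutoff assumed rather than taken from the paper:

```python
def triage(detections, threshold=0.8):
    """Split detections between local (edge) handling and cloud escalation.

    Each detection is a dict with a "confidence" score in [0, 1]. The 0.8
    threshold is illustrative; the paper describes the policy but this exact
    cutoff is an assumption.
    """
    edge, cloud = [], []
    for det in detections:
        (edge if det["confidence"] >= threshold else cloud).append(det)
    return edge, cloud
```

The benefit is that only the uncertain tail of detections pays the latency and bandwidth cost of a cloud round trip.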

31 pages, 433 KB  
Review
A Comprehensive Survey of 6G Simulators: Comparison, Integration, and Future Directions
by Evgeniya Evgenieva, Atanas Vlahov, Antoni Ivanov, Vladimir Poulkov and Agata Manolova
Electronics 2025, 14(16), 3313; https://doi.org/10.3390/electronics14163313 - 20 Aug 2025
Viewed by 831
Abstract
Modern wireless networks are rapidly advancing through research into novel applications that push the boundaries of information and communication systems to satisfy the increasing user demand. To facilitate this process, the development of communication network simulators is necessary due to the high cost and difficulty of real-world testing, with many new simulation tools having emerged in recent years. This paper surveys the latest developments in simulators that support Sixth-Generation (6G) technologies, which aim to surpass the current wireless standards by delivering Artificial Intelligence (AI) empowered networks with ultra-low latency, terabit-per-second data rates, high mobility, and extended reality. Novel features such as Reconfigurable Intelligent Surfaces (RISs), Open Radio Access Network (O-RAN), and Integrated Space–Terrestrial Networks (ISTNs) need to be integrated into the simulation environment. The reviewed simulators and emulators are classified as general-purpose or specialized, and further categorized as link-level, system-level, or network-level tools. They are then compared based on scalability, computational efficiency, and 6G-specific technological considerations, with specific emphasis on open-source solutions as they are growing in prominence. The study highlights the strengths and limitations of the reviewed simulators, as well as the use cases in which they are applied, offering insights into their suitability for 6G system design. Based on the review, the challenges and future directions for simulators’ development are described, aiming to facilitate the accurate and effective modeling of future communication networks. Full article
(This article belongs to the Special Issue 6G and Beyond: Architectures, Challenges, and Opportunities)

24 pages, 2009 KB  
Article
Artificial Intelligence and Sustainable Practices in Coastal Marinas: A Comparative Study of Monaco and Ibiza
by Florin Ioras and Indrachapa Bandara
Sustainability 2025, 17(16), 7404; https://doi.org/10.3390/su17167404 - 15 Aug 2025
Viewed by 541
Abstract
Artificial intelligence (AI) is playing an increasingly important role in driving sustainable change across coastal and marine environments. Artificial intelligence offers strong support for environmental decision-making by helping to process complex data, anticipate outcomes, and fine-tune day-to-day operations. In busy coastal zones such as the Mediterranean, where tourism and boating place significant strain on marine ecosystems, AI can be an effective means for marinas to reduce their ecological impact without sacrificing economic viability. This research examines the contribution of artificial intelligence toward the development of environmental sustainability in marina management. It investigates how AI can potentially reconcile economic imperatives with ecological conservation, especially in high-traffic coastal areas. Through a focus on the impact of social and technological context, this study emphasizes the way in which local conditions constrain the design, deployment, and reach of AI systems. The marinas of Ibiza and Monaco are used as a comparative backdrop to depict these dynamics. In Monaco, efforts like the SEA Index® and predictive maintenance for superyachts contributed to a 28% drop in CO2 emissions between 2020 and 2025. In contrast, Ibiza focused on circular economy practices, reaching an 85% landfill diversion rate using solar power, AI-assisted waste systems, and targeted biodiversity conservation initiatives. This research organizes AI tools into three main categories: supervised learning, anomaly detection, and rule-based systems. Their effectiveness is assessed using statistical techniques, including t-test results contextualized with Cohen’s d to convey practical effect sizes. Regression R2 values are interpreted in light of real-world policy relevance, such as thresholds for energy audits or emissions certification. In addition to measuring technical outcomes, this study considers the ethical concerns, the role of local communities, and comparisons to global best practices. The findings highlight how artificial intelligence can meaningfully contribute to environmental conservation while also supporting sustainable economic development in maritime contexts. However, the analysis also reveals ongoing difficulties, particularly in areas such as ethical oversight, regulatory coherence, and the practical replication of successful initiatives across diverse regions. In response, this study outlines several practical steps forward: promoting AI-as-a-Service models to lower adoption barriers, piloting regulatory sandboxes within the EU to test innovative solutions safely, improving access to open-source platforms, and working toward common standards for the stewardship of marine environmental data. Full article
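The abstract reports t-tests contextualized with Cohen's d. For reference, the pooled-standard-deviation form of that effect size divides the difference of group means by the pooled sample standard deviation; a small sketch:

```python
import math

def cohens_d(sample_a, sample_b):
    # Cohen's d using the pooled standard deviation (Bessel-corrected variances).
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd
```

By common rules of thumb, |d| near 0.2 is a small effect, 0.5 medium, and 0.8 large, which is what lets the study translate significant t-tests into practical relevance.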

18 pages, 1160 KB  
Review
Machine Learning for the Optimization of the Bioplastics Design
by Neelesh Ashok, Pilar Garcia-Diaz, Marta E. G. Mosquera and Valentina Sessini
Macromol 2025, 5(3), 38; https://doi.org/10.3390/macromol5030038 - 14 Aug 2025
Viewed by 354
Abstract
Biodegradable polyesters have gained attention due to their sustainability benefits, considering the escalating environmental challenges posed by synthetic polymers. Advances in artificial intelligence (AI), including machine learning (ML) and deep learning (DL), are expected to significantly accelerate research in polymer science. This review article explores “bio” polymer informatics by harnessing insights from the AI techniques used to predict structure–property relationships and to optimize the synthesis of bioplastics. This review also discusses PolyID, a machine learning-based tool that employs message-passing graph neural networks to provide a framework capable of accelerating the discovery of bioplastics. An extensive literature review is conducted on explainable AI (XAI) and generative AI techniques, as well as on benchmarking data repositories in polymer science. The current state of the art in ML methods for ring-opening polymerizations and the synthesizability of biodegradable polyesters is also presented. This review offers in-depth insight and comprehensive knowledge of current AI-based models for polymerizations, molecular descriptors, structure–property relationships, predictive modeling, and open-source benchmarked datasets for sustainable polymers. This study serves as a reference and provides critical insights into the capabilities of AI for the accelerated design and discovery of green polymers aimed at achieving a sustainable future. Full article

21 pages, 5215 KB  
Article
A Cyber-Physical Integrated Framework for Developing Smart Operations in Robotic Applications
by Tien-Lun Liu, Po-Chun Chen, Yi-Hsiang Chao and Kuan-Chun Huang
Electronics 2025, 14(15), 3130; https://doi.org/10.3390/electronics14153130 - 6 Aug 2025
Viewed by 333
Abstract
The traditional manufacturing industry is facing the challenge of digital transformation, which involves the enhancement of intelligence and production efficiency. Many robotic applications have been discussed to enable collaborative robots to perform operations smartly rather than just automatically. This article tackles the issues of intelligent robots with cognitive and coordination capability by introducing cyber-physical integration technology. The authors propose a system architecture with open-source software and low-cost hardware based on the 5C hierarchy and then conduct experiments to verify the proposed framework. These experiments involve the collection of real-time data using a depth camera, object detection to recognize obstacles, simulation of collision avoidance for a robotic arm, and cyber-physical integration to perform a robotic task. The proposed framework realizes the scheme of the 5C architecture of Industry 4.0 and establishes a digital twin in cyberspace. By utilizing connection, conversion, calculation, simulation, verification, and operation, the robotic arm is capable of making independent judgments and appropriate decisions to successfully complete the assigned task, thereby verifying the proposed framework. Such a cyber-physical integration system is characterized by low cost but good effectiveness. Full article
(This article belongs to the Topic Innovation, Communication and Engineering)

18 pages, 1582 KB  
Article
Design of an ASIC Vector Engine for a RISC-V Architecture
by Miguel Bucio-Macías, Luis Pizano-Escalante and Omar Longoria-Gandara
Chips 2025, 4(3), 33; https://doi.org/10.3390/chips4030033 - 5 Aug 2025
Viewed by 782
Abstract
Nowadays, Graphics Processing Units (GPUs) are a great technology to implement Artificial Intelligence (AI) processes; however, a challenge arises when the inclusion of a GPU is not feasible due to the cost, power consumption, or the size of the hardware. This issue is particularly relevant for portable devices, such as laptops or smartphones, where the inclusion of a dedicated GPU is not the best option. One possible solution to that problem is the use of a CPU with AI capabilities, i.e., parallelism and high performance. In particular, the RISC-V architecture is considered a good open-source candidate to support such tasks. These capabilities are based on vector operations that, by definition, operate over many elements at the same time, allowing for the execution of SIMD instructions that can be used to implement typical AI routines and procedures. In this context, the main purpose of this proposal is to develop an ASIC Vector Engine compliant with the RISC-V architecture that implements a minimum set of the Vector Extension, capable of parallel processing of multiple data elements with a single instruction. These instructions operate on vectors and cover addition, multiplication, logical, comparison, and permutation operations; the multiplication, in particular, was implemented using the Vedic multiplication algorithm. Contributions include the description of the design, synthesis, and validation processes used to develop the ASIC, and a performance comparison between the FPGA implementation and the ASIC across different nanometric technologies, where 7 nm technology achieved both the best performance, 110 MHz, and the smallest silicon area. Full article
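The Vedic (Urdhva Tiryagbhyam, "vertically and crosswise") algorithm mentioned in the abstract forms all digit-pair cross products in parallel columns and then propagates carries, which is what makes it attractive for hardware multipliers. A software sketch of the digit-level scheme (digit lists are least-significant first; this illustrates the column structure, not the authors' RTL):

```python
def urdhva(a, b, base=10):
    """Urdhva Tiryagbhyam multiplication on equal-length digit lists
    (least-significant digit first)."""
    assert len(a) == len(b)
    n = len(a)
    # Crosswise step: column i+j accumulates every product a[i] * b[j].
    # In hardware, all these partial products are generated in parallel.
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a[i] * b[j]
    # Carry propagation, as an adder stage would do after the parallel columns.
    digits, carry = [], 0
    for c in cols:
        c += carry
        digits.append(c % base)
        carry = c // base
    while carry:
        digits.append(carry % base)
        carry //= base
    return digits
```

With `base=2` the same column structure describes a bit-level multiplier, which is closer to what an ASIC datapath would implement.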

31 pages, 3464 KB  
Article
An Intelligent Method for C++ Test Case Synthesis Based on a Q-Learning Agent
by Serhii Semenov, Oleksii Kolomiitsev, Mykhailo Hulevych, Patryk Mazurek and Olena Chernyk
Appl. Sci. 2025, 15(15), 8596; https://doi.org/10.3390/app15158596 - 2 Aug 2025
Viewed by 440
Abstract
Ensuring software quality during development requires effective regression testing. However, test suites in open-source libraries often grow large, redundant, and difficult to maintain. Most traditional test suite optimization methods treat test cases as atomic units, without analyzing the utility of individual instructions. This paper presents an intelligent method for test case synthesis using a Q-learning agent. The agent learns to construct compact test cases by interacting with an execution environment and receives rewards based on branch coverage improvements and simultaneous reductions in test case length. The training process includes a pretraining phase that transfers knowledge from the original test suite, followed by adaptive learning episodes on individual test cases. As a result, the method requires no formal documentation or API specifications and uses only execution traces of the original test cases. An explicit synthesis algorithm constructs new test cases by selecting API calls from a learned policy encoded in a Q-table. Experiments were conducted on two open-source C++ libraries of differing API complexity and original test suite size. The results show that the proposed method can reach up to 67% test suite reduction while preserving branch coverage, confirming its effectiveness for regression test suite minimization in resource-constrained or specification-limited environments. Full article
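The reward shaping described (branch-coverage gain offset by a length penalty, with values stored in a Q-table keyed by state and API call) can be illustrated on a toy environment. Everything below is invented for illustration, including the three numeric "API calls", the sequential branch model, and the hyperparameters; the paper's agent learns from real execution traces of C++ test suites.

```python
import random

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    rng = random.Random(seed)
    actions = [0, 1, 2]          # stand-ins for three API calls
    Q = {}                       # Q-table: (branches_covered, action) -> value

    def step(state, action):
        # Toy coverage model: branch i only becomes reachable once
        # branches 0..i-1 are covered.
        if action == state:
            return state + 1, 1.0          # reward: a new branch covered
        return state, -0.2                 # wasted call: length penalty

    for _ in range(episodes):
        state = 0
        for _ in range(6):                 # cap the synthesized test-case length
            if rng.random() < epsilon:
                action = rng.choice(actions)
            else:
                action = max(actions, key=lambda a: Q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(Q.get((nxt, a), 0.0) for a in actions)
            q = Q.get((state, action), 0.0)
            Q[(state, action)] = q + alpha * (reward + gamma * best_next - q)
            state = nxt
            if state == 3:                 # all branches covered
                break
    return Q

def synthesize(Q, actions=(0, 1, 2)):
    # Greedy rollout over the learned policy; the action sequence is the test case.
    state, calls = 0, []
    while state < 3 and len(calls) < 6:
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))
        calls.append(action)
        if action == state:
            state += 1
    return calls
```

The length penalty is what drives the agent toward compact test cases: a call that covers no new branch lowers the value of the state-action pair that produced it.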

20 pages, 1253 KB  
Article
Multimodal Detection of Emotional and Cognitive States in E-Learning Through Deep Fusion of Visual and Textual Data with NLP
by Qamar El Maazouzi and Asmaa Retbi
Computers 2025, 14(8), 314; https://doi.org/10.3390/computers14080314 - 2 Aug 2025
Viewed by 696
Abstract
In distance learning environments, learner engagement directly impacts attention, motivation, and academic performance. Signs of fatigue, negative affect, or critical remarks can warn of growing disengagement and potential dropout. However, most existing approaches rely on a single modality, visual or text-based, without providing a general view of learners’ cognitive and affective states. We propose a multimodal system that integrates three complementary analyses: (1) a CNN-LSTM model augmented with warning signs such as PERCLOS and yawning frequency for fatigue detection, (2) facial emotion recognition by EmoNet and an LSTM to handle temporal dynamics, and (3) sentiment analysis of feedback by a fine-tuned BERT model. It was evaluated on three public benchmarks: DAiSEE for fatigue, AffectNet for emotion, and MOOC Review (Coursera) for sentiment analysis. The results show a precision of 88.5% for fatigue detection, 70% for emotion detection, and 91.5% for sentiment analysis. Aggregating these cues enables accurate identification of disengagement periods and triggers individualized pedagogical interventions. These results, although based on independently sourced datasets, demonstrate the feasibility of an integrated approach to detecting disengagement and open the door to emotionally intelligent learning systems, with potential for future work in real-time content personalization and adaptive learning assistance. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
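The fatigue branch of the system above relies on PERCLOS, the fraction of frames over a window in which the eyes are judged closed. As a rough illustrative sketch (not the paper's implementation; the 0.2 aperture threshold and frame values are assumed for the example):

```python
def perclos(eye_openness, closed_thresh=0.2):
    """Fraction of frames in which the eye is considered closed.

    eye_openness: per-frame eye-aperture ratios in [0, 1]
    closed_thresh: aperture below this counts as 'closed' (assumed cut-off)
    """
    if not eye_openness:
        return 0.0
    closed = sum(1 for r in eye_openness if r < closed_thresh)
    return closed / len(eye_openness)

# A drowsy learner: eyes closed in 6 of 10 sampled frames.
frames = [0.9, 0.1, 0.05, 0.8, 0.1, 0.1, 0.1, 0.7, 0.1, 0.85]
print(perclos(frames))  # → 0.6
```

In a real pipeline the aperture ratios would come from facial-landmark tracking, and the PERCLOS score would be one input feature alongside yawning frequency.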

16 pages, 1873 KB  
Systematic Review
A Systematic Review of GIS Evolution in Transportation Planning: Towards AI Integration
by Ayda Zaroujtaghi, Omid Mansourihanis, Mohammad Tayarani, Fatemeh Mansouri, Moein Hemmati and Ali Soltani
Future Transp. 2025, 5(3), 97; https://doi.org/10.3390/futuretransp5030097 - 1 Aug 2025
Abstract
Previous reviews have examined specific facets of Geographic Information Systems (GIS) in transportation planning, such as transit-focused applications and open-source geospatial tools. This study offers the first systematic, PRISMA-guided longitudinal evaluation of GIS integration in transportation planning, spanning thematic domains, data models, methodologies, and outcomes from 2004 to 2024. Through a mixed-methods analysis of 241 peer-reviewed articles, it delineates major trends, such as an increased emphasis on sustainability, equity, stakeholder involvement, and the incorporation of advanced technologies. Prominent domains include land use–transportation coordination, accessibility, artificial intelligence, real-time monitoring, and policy evaluation. Expanded data sources, such as real-time sensor feeds and 3D models, alongside sophisticated modeling techniques, enable evidence-based, multifaceted decision-making. However, challenges such as data limitations, ethical concerns, and the need for specialized expertise persist, particularly in developing regions. Future geospatial innovations should prioritize the responsible adoption of emerging technologies, inclusive capacity building, and environmental justice to foster equitable and efficient transportation systems. This review highlights GIS’s evolution from a supplementary tool to a cornerstone of data-driven, sustainable urban mobility planning, offering insights for researchers, practitioners, and policymakers seeking transportation strategies aligned with equity and sustainability goals.
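Accessibility analysis of the kind surveyed here often reduces to geodesic distance queries against transit infrastructure. A minimal hypothetical sketch (the haversine formula is standard; the 0.8 km walk-shed radius and the coordinates are illustrative, not drawn from the review):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two WGS84 points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def stops_within(origin, stops, radius_km=0.8):
    """Count transit stops inside an assumed 800 m walk-shed of an origin."""
    return sum(1 for s in stops if haversine_km(*origin, *s) <= radius_km)

# One nearby stop (~0.56 km) and one beyond the walk-shed (~2.2 km).
print(stops_within((0.0, 0.0), [(0.0, 0.005), (0.0, 0.02)]))  # → 1
```

Production GIS tools would use network (street-graph) distance rather than straight-line distance, which is precisely the kind of methodological refinement the review tracks over time.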

29 pages, 482 KB  
Review
AI in Maritime Security: Applications, Challenges, Future Directions, and Key Data Sources
by Kashif Talpur, Raza Hasan, Ismet Gocer, Shakeel Ahmad and Zakirul Bhuiyan
Information 2025, 16(8), 658; https://doi.org/10.3390/info16080658 - 31 Jul 2025
Abstract
The growth and sustainability of today’s global economy rely heavily on smooth maritime operations. Marine environments face complex security challenges, such as smuggling, illegal fishing, human trafficking, and environmental threats, that exceed the limitations of traditional surveillance methods. Artificial intelligence (AI), particularly deep learning, offers strong capabilities for automating object detection, anomaly identification, and situational awareness in maritime environments. In this paper, we review the state-of-the-art deep learning models proposed in recent literature (2020–2025), including convolutional neural networks, recurrent neural networks, Transformers, and multimodal fusion architectures. We highlight their success in processing diverse data sources such as satellite imagery, AIS, SAR, radar, and sensor inputs from UxVs. Multimodal data fusion techniques further enhance robustness by integrating complementary data, yielding higher detection accuracy. Challenges remain in detecting small or occluded objects, handling cluttered scenes, and interpreting unusual vessel behaviours, especially under adverse sea conditions; the explainability and real-time deployment of AI models in operational settings are also open research areas. Overall, the review suggests that deep learning is rapidly transforming maritime domain awareness and response, with significant potential to improve global maritime security and operational efficiency. We also provide key datasets for deep learning models in the maritime security domain.
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Information Systems)
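The vessel-behaviour work this review covers uses deep sequence models over AIS tracks; as a far simpler illustrative baseline (a z-score rule, not any model from the survey; the 2.5 threshold and the track values are assumed), speed outliers in an AIS track can be flagged as:

```python
import statistics

def flag_speed_anomalies(speeds_knots, z_thresh=2.5):
    """Flag AIS speed reports that deviate strongly from the track's norm.

    z_thresh is an assumed cut-off, not a value from the reviewed papers.
    """
    mean = statistics.fmean(speeds_knots)
    sd = statistics.pstdev(speeds_knots)
    if sd == 0:
        return [False] * len(speeds_knots)
    return [abs(s - mean) / sd > z_thresh for s in speeds_knots]

# A vessel cruising at 12 kn that suddenly stops mid-ocean
# (a pattern associated with loitering or rendezvous behaviour).
track = [12.0] * 9 + [0.0]
print(flag_speed_anomalies(track))  # only the final report is flagged
```

A statistical baseline like this cannot capture the contextual cues (position, heading, sea state) that the reviewed LSTM and Transformer models exploit, which is why the literature has moved to learned representations.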
