Search Results (753)

Search Parameters:
Keywords = Edge AI

21 pages, 2865 KiB  
Perspective
Toward Sustainable Mars Exploration: A Perspective on Collaborative Intelligent Systems
by Thomas Janssen, Ritesh Kumar Singh, Phil Reiter, Anuj Justus Rajappa, Priyesh Pappinisseri Puluckul, Mohmmadsadegh Mokhtari, Mohammad Hasan Rahmani, Erik Mannens, Jeroen Famaey and Maarten Weyn
Aerospace 2025, 12(5), 432; https://doi.org/10.3390/aerospace12050432 - 13 May 2025
Viewed by 67
Abstract
Mars has long captivated the human imagination as a potential destination for settlement and scientific exploration. Following the deployment of individual rovers, the next step in our journey to Mars is the autonomous exploration of the Red Planet using a collaborative swarm of rovers, drones, and satellites. This concept paper envisions a sustainable Mars exploration scenario featuring energy-aware, collaborative, and autonomous vehicles, including rovers, drones, and satellites, operating on and around Mars. The proposed framework is designed to address key challenges in energy management, edge intelligence, communication, sensing, resource-aware task scheduling, and radiation hardening. This work not only identifies these critical areas of research but also proposes novel technological solutions drawn from terrestrial advancements to extend their application to extraterrestrial exploration.

18 pages, 9071 KiB  
Article
Spatiotemporal Dynamics of Ecosystem Service Value and Its Linkages with Landscape Pattern Changes in Xiong’an New Area, China (2014–2022)
by Xinyang Ji, Dong Chen, Guangwei Li, Jingkai Guo, Jiafeng Liu, Jing Tong, Xiyong Sun, Xiaomin Du and Wenkai Zhang
Appl. Sci. 2025, 15(10), 5399; https://doi.org/10.3390/app15105399 - 12 May 2025
Viewed by 110
Abstract
As China’s third national-level new area, Xiong’an New Area plays a pivotal strategic role in relocating non-capital functions from Beijing while serving as a model for sustainable urban development. This study investigates the spatiotemporal evolution of ecosystem service value (ESV) and landscape patterns in Xiong’an before (2014–2016) and after (2017–2022) its establishment, assessing the policy-driven impacts of green development initiatives. Using remote sensing data, random forest classification, and landscape pattern analysis, we quantified land use dynamics, landscape indices, and ESV variations. Key findings reveal significant land use transformations, with cultivated land declining by 7.51% and coniferous forest expanding by 189.84%, driven by urbanization and afforestation efforts. The comprehensive land use dynamic degree reached 4.96% (2014–2022), while the land use intensity index decreased by 20.95%. Concurrently, fragmentation indices increased significantly (Shannon Diversity Index (SHDI) +45%; Edge Density (ED) +66.23%). Despite these changes, ESV surged by 57.51% (CNY 334.63 billion), primarily due to wetland and forest expansion. Statistical analysis revealed positive correlations between ESV and the fragmentation indices (ED, Number of Patches (NP), and SHDI), whereas the aggregation indices (CONTAG and AI) exhibited negative correlations. The findings substantiate the policy effectiveness of Xiong’an’s ecological initiatives, revealing how strategic landscape planning can balance urban development with ecosystem protection and offering valuable guidance for sustainable urbanization in Xiong’an and comparable regions.
(This article belongs to the Section Ecology Science and Engineering)
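
The landscape metrics cited in this abstract (SHDI, ED) have standard FRAGSTATS-style definitions; the sketch below shows how they are typically computed from land-cover class areas, using hypothetical numbers rather than the study's data.

```python
import numpy as np

def shannon_diversity_index(class_areas):
    """SHDI = -sum(p_i * ln(p_i)) over land-cover class area proportions p_i."""
    areas = np.asarray(class_areas, dtype=float)
    p = areas / areas.sum()
    p = p[p > 0]                      # empty classes contribute nothing
    return float(-(p * np.log(p)).sum())

def edge_density(total_edge_length_m, landscape_area_ha):
    """ED: total patch edge length (m) per hectare of landscape."""
    return total_edge_length_m / landscape_area_ha

# Hypothetical class areas (ha): cultivated land, forest, wetland, built-up
print(shannon_diversity_index([1200.0, 150.0, 300.0, 350.0]))   # earlier year
print(shannon_diversity_index([1110.0, 435.0, 520.0, 640.0]))   # later year
print(edge_density(total_edge_length_m=5.2e6, landscape_area_ha=2000.0))
```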

26 pages, 9421 KiB  
Article
Machine-Learning-Based Classification of Electronic Devices Using an IoT Smart Meter
by Paulo Eugênio da Costa Filho, Leonardo Augusto de Aquino Marques, Israel da S. Felix de Lima, Ewerton Leandro de Sousa, Márcio Eduardo Kreutz, Augusto V. Neto, Eduardo Nogueira Cunha and Dario Vieira
Informatics 2025, 12(2), 48; https://doi.org/10.3390/informatics12020048 - 12 May 2025
Viewed by 180
Abstract
This study investigates the implementation of artificial intelligence (AI) algorithms on resource-constrained edge devices, such as ESP32 and Raspberry Pi, within the context of smart grid (SG) applications. Specifically, it proposes a smart-meter-based system capable of classifying and detecting Internet of Things (IoT) electronic devices at the extreme edge. The smart meter developed in this work acquires real-time voltage and current signals from connected devices, which are used to train and deploy lightweight machine learning models—Multi-Layer Perceptron (MLP) and K-Nearest Neighbor (KNN)—directly on edge hardware. The proposed system is integrated into the Artificial Intelligence in the Internet of Things for Smart Grids (IAIoSGT) architecture, which supports edge–cloud processing and real-time decision-making. A literature review highlights the key gaps in existing approaches, particularly the lack of embedded intelligence for load identification at the edge. The experimental results emphasize the importance of data preprocessing—especially normalization—in optimizing model performance, revealing distinct behavior between the MLP and KNN models depending on the platform. The findings confirm the feasibility of performing accurate, low-latency classification directly on smart meters, reinforcing the potential of scalable AI-powered energy monitoring systems in SGs.
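
As a rough illustration of the preprocessing-plus-KNN stage described in this abstract, the sketch below normalizes per-device electrical features and classifies them with scikit-learn; the features (RMS voltage, RMS current, power factor), labels, and data are hypothetical, not the authors' smart-meter pipeline.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical feature vectors: [rms_voltage_V, rms_current_A, power_factor]
led_lamps = rng.normal([220.0, 0.30, 0.95], [2.0, 0.05, 0.02], size=(100, 3))
motors    = rng.normal([220.0, 2.50, 0.60], [2.0, 0.30, 0.05], size=(100, 3))
X_train = np.vstack([led_lamps, motors])
y_train = np.array(["led_lamp"] * 100 + ["motor_load"] * 100)

# Normalization mattered in the study; StandardScaler stands in for that step here.
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
clf.fit(X_train, y_train)
print(clf.predict([[220.0, 2.4, 0.62]]))   # -> ['motor_load'] for this unseen sample
```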

12 pages, 345 KiB  
Article
NeuroAdaptiveNet: A Reconfigurable FPGA-Based Neural Network System with Dynamic Model Selection
by Achraf El Bouazzaoui, Omar Mouhib and Abdelkader Hadjoudja
Chips 2025, 4(2), 24; https://doi.org/10.3390/chips4020024 - 8 May 2025
Viewed by 141
Abstract
This paper presents NeuroAdaptiveNet, an FPGA-based neural network framework that dynamically self-adjusts its architectural configurations in real time to maximize performance across diverse datasets. The core innovation is a Dynamic Classifier Selection mechanism, which harnesses the k-Nearest Centroid algorithm to identify the most competent neural network model for each incoming data sample. By adaptively selecting the most suitable model configuration, NeuroAdaptiveNet achieves significantly improved classification accuracy and optimized resource usage compared to conventional, statically configured neural networks. Experimental results on four datasets demonstrate that NeuroAdaptiveNet can reduce FPGA resource utilization by as much as 52.85%, increase classification accuracy by 4.31%, and lower power consumption by up to 24.5%. These gains illustrate the clear advantage of real-time, per-input reconfiguration over static designs. These advantages are particularly crucial for edge computing and embedded applications, where computational constraints and energy efficiency are paramount. The ability of NeuroAdaptiveNet to tailor its neural network parameters and architecture on a per-input basis paves the way for more efficient and accurate AI solutions in resource-constrained environments.
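
A minimal sketch of the Dynamic Classifier Selection idea summarized above, simplified to one centroid per candidate model: each incoming sample is routed to the model whose competence-region centroid is nearest. The centroids, stand-in models, and data are hypothetical, and the FPGA reconfiguration side of NeuroAdaptiveNet is not modeled.

```python
import numpy as np

class CentroidModelSelector:
    def __init__(self, centroids, models):
        self.centroids = np.asarray(centroids, dtype=float)  # one centroid per model
        self.models = models                                  # callables: x -> label

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        distances = np.linalg.norm(self.centroids - x, axis=1)
        best = int(np.argmin(distances))      # index of the most "competent" model
        return self.models[best](x), best

# Two stand-in models (in the paper these would be distinct network configurations).
small_net = lambda x: int(x.sum() > 1.0)
large_net = lambda x: int(x.mean() > 0.4)

selector = CentroidModelSelector(centroids=[[0.2, 0.2], [0.8, 0.8]],
                                 models=[small_net, large_net])
label, model_idx = selector.predict([0.9, 0.7])
print(label, model_idx)   # this sample is routed to the second model
```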

32 pages, 1346 KiB  
Article
Rough Set Theory and Soft Computing Methods for Building Explainable and Interpretable AI/ML Models
by Sami Naouali and Oussama El Othmani
Appl. Sci. 2025, 15(9), 5148; https://doi.org/10.3390/app15095148 - 6 May 2025
Viewed by 171
Abstract
This study introduces a novel framework leveraging Rough Set Theory (RST)-based feature selection—MLReduct, MLSpecialReduct, and MLFuzzyRoughSet—to enhance machine learning performance on uncertain data. Applied to a private cardiovascular dataset, our MLSpecialReduct algorithm achieves a peak Random Forest accuracy of 0.99 (versus 0.85 without feature selection), while MLFuzzyRoughSet improves accuracy to 0.83, surpassing our MLVarianceThreshold (0.72–0.77), an adaptation of the traditional VarianceThreshold method. We integrate these RST techniques with preprocessing (discretization, normalization, encoding) and compare them against traditional approaches across classifiers like Random Forest and Naive Bayes. The results underscore RST’s edge in accuracy, efficiency, and interpretability, with MLSpecialReduct leading in minimal attribute reduction. Against baseline classifiers without feature selection and MLVarianceThreshold, our framework delivers significant improvements, establishing RST as a vital tool for explainable AI (XAI) in healthcare diagnostics and IoT systems. These findings open avenues for future hybrid RST-ML models, providing a robust, interpretable solution for complex data challenges.
(This article belongs to the Special Issue Data and Text Mining: New Approaches, Achievements and Applications)
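
For readers unfamiliar with RST-based feature selection, the sketch below illustrates the generic notions of attribute dependency (positive region) and a greedy reduct search on a hypothetical decision table; it is not the authors' MLReduct, MLSpecialReduct, or MLFuzzyRoughSet algorithm.

```python
def positive_region_size(rows, cond_idx, dec_idx):
    """Count objects whose indiscernibility class (w.r.t. cond_idx) maps to a
    single decision value, i.e. is classified consistently."""
    classes = {}
    for row in rows:
        key = tuple(row[i] for i in cond_idx)
        classes.setdefault(key, set()).add(row[dec_idx])
    return sum(len([r for r in rows if tuple(r[i] for i in cond_idx) == key])
               for key, decisions in classes.items() if len(decisions) == 1)

def dependency(rows, cond_idx, dec_idx):
    return positive_region_size(rows, cond_idx, dec_idx) / len(rows)

def greedy_reduct(rows, all_cond_idx, dec_idx):
    """Add the attribute that raises dependency most until the full-set level is reached."""
    target = dependency(rows, all_cond_idx, dec_idx)
    reduct = []
    while dependency(rows, reduct, dec_idx) < target:
        best = max((a for a in all_cond_idx if a not in reduct),
                   key=lambda a: dependency(rows, reduct + [a], dec_idx))
        reduct.append(best)
    return reduct

# Hypothetical decision table: [age_band, blood_pressure, cholesterol, decision]
table = [
    ["young", "high", "normal", "sick"],
    ["young", "low",  "normal", "healthy"],
    ["old",   "high", "high",   "sick"],
    ["old",   "low",  "high",   "sick"],
    ["old",   "low",  "normal", "healthy"],
]
print(greedy_reduct(table, all_cond_idx=[0, 1, 2], dec_idx=3))   # -> [1, 2]
```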

5 pages, 155 KiB  
Editorial
New Advances in Low-Energy Processes for Geo-Energy Development
by Daoyi Zhu
Energies 2025, 18(9), 2357; https://doi.org/10.3390/en18092357 - 6 May 2025
Viewed by 226
Abstract
The development of geo-energy resources, including oil, gas, and geothermal reservoirs, is being transformed through the creation of low-energy processes and innovative technologies. This Special Issue compiles cutting-edge research aimed at enhancing efficiency, sustainability, and recovery during geo-energy extraction. The published studies explore a diverse range of methodologies, such as the nanofluidic analysis of shale oil phase transitions, deep electrical resistivity tomography for geothermal exploration, and hybrid AI-driven production prediction models. Key themes include hydraulic fracturing optimization, CO2 injection dynamics, geothermal reservoir simulation, and competitive gas–water adsorption in ultra-deep reservoirs. These studies combine advanced numerical modeling, experimental techniques, and field applications to address challenges in unconventional reservoirs, geothermal energy exploitation, and enhanced oil recovery. By bridging theoretical insights with practical engineering solutions, this Special Issue provides a comprehensive foundation for future innovations in low-energy geo-energy development.
(This article belongs to the Special Issue New Advances in Low-Energy Processes for Geo-Energy Development)
25 pages, 982 KiB  
Review
Harnessing Data Analytics for Enhanced Public Programming in Archives and Museums: A Scoping Review
by Mthokozisi Masumbika Ncube and Patrick Ngulube
Heritage 2025, 8(5), 163; https://doi.org/10.3390/heritage8050163 - 5 May 2025
Viewed by 390
Abstract
A notable lacuna exists in the extant research regarding the application of data analytics (DA) to augment public programming and cultivate robust connections between archives, museums, and their constituent communities. This scoping review aimed to address this gap by mapping the available literature at the intersection of data analytics, archives, and museums. Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, a two-stage selection process was employed, utilising a comprehensive search strategy across four databases and seven specialised journals. This search identified 37 publications that met the pre-defined inclusion criteria. Findings revealed a growing interest in data-driven approaches, with nearly half of the reviewed studies explicitly linking data analytics to public programming. The review identified diverse data analytics techniques employed, ranging from traditional methods to cutting-edge artificial intelligence (AI) applications, and highlighted the various data sources utilised. Furthermore, this study examined the transformative potential of data analytics across several key dimensions of public programming, including access, archival management, user experience, public engagement, and research methodologies. The review noted ethical considerations, data quality issues, preservation challenges, and accessibility concerns associated with leveraging data analytics in archives and museums.

34 pages, 2078 KiB  
Review
Understanding the Functionality of Probiotics on the Edge of Artificial Intelligence (AI) Era
by Remziye Asar, Sinem Erenler, Dilara Devecioglu, Humeyra Ispirli, Funda Karbancioglu-Guler, Hale Inci Ozturk and Enes Dertli
Fermentation 2025, 11(5), 259; https://doi.org/10.3390/fermentation11050259 - 5 May 2025
Viewed by 690
Abstract
This review focuses on the potential utilization of artificial intelligence (AI) tools to deepen our understanding of probiotics, their mode of action, and technological characteristics such as survival. To that end, this review provides an overview of the current knowledge on probiotics as well as next-generation probiotics. AI-aided omics technologies, including genomics, transcriptomics, and proteomics, offer new insights into the genetic and functional properties of probiotics. Furthermore, AI can be used to elucidate key probiotic activities such as microbiota modulation, metabolite production, and immune system interactions to enable an improved understanding of their health impacts. Additionally, AI technologies facilitate precision in identifying probiotic health impacts, including their role in gut health, anticancer activity, and antiaging effects. Beyond health applications, AI can expand the technological use of probiotics, optimizing storage survival and broadening biotechnological approaches. In this context, this review addresses how AI-driven approaches can strengthen the evaluation of probiotic characteristics, explain their mechanisms of action, and enhance their technological applications. Moreover, the potential of AI to enhance the precision of probiotic health impact assessments and optimize industrial applications is highlighted, concluding with future perspectives on the transformative role of AI in probiotic research.

22 pages, 25979 KiB  
Article
Advancing Early Wildfire Detection: Integration of Vision Language Models with Unmanned Aerial Vehicle Remote Sensing for Enhanced Situational Awareness
by Leon Seidel, Simon Gehringer, Tobias Raczok, Sven-Nicolas Ivens, Bernd Eckardt and Martin Maerz
Drones 2025, 9(5), 347; https://doi.org/10.3390/drones9050347 - 3 May 2025
Viewed by 309
Abstract
Early wildfire detection is critical for effective suppression efforts, necessitating rapid alerts and precise localization. While computer vision techniques offer reliable fire detection, they often lack contextual understanding. This paper addresses this limitation by utilizing Vision Language Models (VLMs) to generate structured scene descriptions from Unmanned Aerial Vehicle (UAV) imagery. UAV-based remote sensing provides diverse perspectives on potential wildfires, and state-of-the-art VLMs enable rapid and detailed scene characterization. We evaluated both cloud-based (OpenAI, Google DeepMind) and open-weight, locally deployed VLMs on a novel evaluation dataset specifically curated for understanding forest fire scenes. Our results demonstrate that relatively compact, fine-tuned VLMs can provide rich contextual information, including forest type, fire state, and fire type. Specifically, our best-performing model, ForestFireVLM-7B (fine-tuned from Qwen2.5-VL-7B), achieved a 76.6% average accuracy across all categories, surpassing the strongest closed-weight baseline (Gemini 2.0 Pro at 65.5%). Furthermore, zero-shot evaluation on the publicly available FIgLib dataset demonstrated state-of-the-art smoke detection accuracy using VLMs. Our findings highlight the potential of fine-tuned, open-weight VLMs for enhanced wildfire situational awareness via detailed scene interpretation.
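
A small sketch of the category-wise evaluation implied by the reported average accuracy: structured scene attributes predicted by a VLM (forest type, fire state, fire type) are compared against reference annotations and averaged across categories. The records and category names below are hypothetical, not the authors' dataset schema.

```python
# Hypothetical structured outputs for two UAV frames.
predictions = [
    {"forest_type": "coniferous", "fire_state": "active", "fire_type": "surface"},
    {"forest_type": "mixed",      "fire_state": "none",   "fire_type": "none"},
]
references = [
    {"forest_type": "coniferous", "fire_state": "active", "fire_type": "crown"},
    {"forest_type": "mixed",      "fire_state": "none",   "fire_type": "none"},
]

categories = ["forest_type", "fire_state", "fire_type"]
per_category = {
    cat: sum(p[cat] == r[cat] for p, r in zip(predictions, references)) / len(references)
    for cat in categories
}
average_accuracy = sum(per_category.values()) / len(per_category)
print(per_category, average_accuracy)   # per-category and averaged accuracy
```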

21 pages, 2806 KiB  
Article
A Computer-Aided Approach to Canine Hip Dysplasia Assessment: Measuring Femoral Head–Acetabulum Distance with Deep Learning
by Pedro Franco-Gonçalo, Pedro Leite, Sofia Alves-Pimenta, Bruno Colaço, Lio Gonçalves, Vítor Filipe, Fintan McEvoy, Manuel Ferreira and Mário Ginja
Appl. Sci. 2025, 15(9), 5087; https://doi.org/10.3390/app15095087 - 3 May 2025
Viewed by 219
Abstract
Canine hip dysplasia (CHD) screening relies on radiographic assessment, but traditional scoring methods often lack consistency due to inter-rater variability. This study presents an AI-driven system for automated measurement of the femoral head center to dorsal acetabular edge (FHC/DAE) distance, a key metric in CHD evaluation. Unlike most AI models that directly classify CHD severity using convolutional neural networks, this system provides an interpretable, measurement-based output to support a more transparent evaluation. The system combines a keypoint regression model for femoral head center localization with a U-Net-based segmentation model for acetabular edge delineation. It was trained on 7967 images for hip joint detection, 571 for keypoints, and 624 for acetabulum segmentation, all from ventrodorsal hip-extended radiographs. On a test set of 70 images, the keypoint model achieved high precision (Euclidean Distance = 0.055 mm; Mean Absolute Error = 0.0034 mm; Mean Squared Error = 2.52 × 10⁻⁵ mm²), while the segmentation model showed strong performance (Dice Score = 0.96; Intersection over Union = 0.92). Comparison with expert annotations demonstrated strong agreement (Intraclass Correlation Coefficients = 0.97 and 0.93; Weighted Kappa = 0.86 and 0.79; Standard Error of Measurement = 0.92 to 1.34 mm). By automating anatomical landmark detection, the system enhances standardization, reproducibility, and interpretability in CHD radiographic assessment. Its strong alignment with expert evaluations supports its integration into CHD screening workflows for more objective and efficient diagnosis and CHD scoring.
(This article belongs to the Special Issue Research on Machine Learning in Computer Vision)
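
The Dice Score and Intersection over Union reported above are standard segmentation metrics; a minimal sketch for binary masks follows, using toy arrays rather than acetabular-edge segmentations from the study.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

pred   = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
target = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
print(dice_score(pred, target), iou(pred, target))   # ~0.857 and 0.75 on these toy masks
```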

29 pages, 6623 KiB  
Article
Exploring Smartphone-Based Edge AI Inferences Using Real Testbeds
by Matías Hirsch, Cristian Mateos and Tim A. Majchrzak
Sensors 2025, 25(9), 2875; https://doi.org/10.3390/s25092875 - 2 May 2025
Viewed by 308
Abstract
The increasing availability of lightweight pre-trained models and AI execution frameworks is causing edge AI to become ubiquitous. Particularly, deep learning (DL) models are being used in computer vision (CV) for performing object recognition and image classification tasks in various application domains requiring prompt inferences. Regarding edge AI task execution platforms, some approaches show a strong dependency on cloud resources to complement the computing power offered by local nodes. Other approaches distribute workload horizontally, i.e., by harnessing the power of nearby edge nodes. Many of these efforts experiment with real settings comprising SBC (Single-Board Computer)-like edge nodes only, but few of them consider nomadic hardware such as smartphones. Given the huge popularity of smartphones worldwide and the many scenarios where smartphone clusters could be exploited for providing computing power, this paper sheds some light on the following question: Is smartphone-based edge AI a competitive approach for real-time CV inferences? To empirically answer this, we use three pre-trained DL models and eight heterogeneous edge nodes, including five low/mid-end smartphones and three SBCs, and compare the performance achieved using workloads from three image stream processing scenarios. Experiments were run with the help of a toolset designed for reproducing battery-driven edge computing tests. We compared the latency and energy efficiency achieved by using either several smartphone cluster testbeds or SBCs only. Additionally, for battery-driven settings, we include metrics to measure how workload execution impacts smartphone battery levels. Based on the computing capability shown in our experiments, we conclude that smartphone-cluster-based edge AI can provide valuable resources for expanding edge AI into application scenarios requiring real-time performance.
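
As a hedged illustration of the kind of latency benchmarking described above, the sketch below times a placeholder inference callable over an image-stream workload and reports mean latency, 95th-percentile latency, and throughput; it is not the authors' battery-driven toolset, and energy or battery-level metrics are out of scope here.

```python
import time
import statistics

def benchmark(infer, frames, warmup=5):
    for frame in frames[:warmup]:            # warm-up runs excluded from statistics
        infer(frame)
    latencies_ms = []
    for frame in frames[warmup:]:
        start = time.perf_counter()
        infer(frame)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    mean_ms = statistics.mean(latencies_ms)
    return {
        "mean_ms": mean_ms,
        "p95_ms": statistics.quantiles(latencies_ms, n=20)[18],  # 95th percentile
        "fps": 1000.0 / mean_ms,
    }

# Placeholder "model" and workload standing in for a real DL classifier and camera stream.
fake_model = lambda frame: sum(frame) % 2
workload = [[i % 255] * 1000 for i in range(105)]
print(benchmark(fake_model, workload))
```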

18 pages, 2613 KiB  
Review
Research Advances in Underground Bamboo Shoot Detection Methods
by Wen Li, Qiong Shao, Fan Guo, Fangyuan Bian and Huimin Yang
Agronomy 2025, 15(5), 1116; https://doi.org/10.3390/agronomy15051116 - 30 Apr 2025
Viewed by 150
Abstract
Underground winter bamboo shoots, prized for their high nutritional value and economic significance, face harvesting challenges owing to inefficient manual methods and the lack of specialized detection technologies. This review systematically evaluates current detection approaches, including manual harvesting, microwave detection, resistivity methods, and biomimetic techniques. While manual methods remain dominant, they suffer from labor shortages, low efficiency, and high damage rates. Microwave-based technologies demonstrate high accuracy and good detection depths but are hindered by high costs and soil moisture interference. Resistivity methods show feasibility in controlled environments but struggle with field complexity and low resolution. Biomimetic approaches, though innovative, face limitations in odor sensitivity and real-time data processing. Key challenges include heterogeneous soil conditions, performance loss, and a lack of standardized protocols. To address these, an integrated intelligent framework is proposed: (1) three-dimensional modeling via multi-sensor fusion for subsurface mapping; (2) artificial intelligence (AI)-driven harvesting robots with adaptive excavation arms and obstacle avoidance; (3) standardized cultivation systems to optimize soil conditions; (4) convolutional neural network–transformer hybrid models for visual-aided radar image analysis; and (5) aeroponic AI systems for controlled growth monitoring. These advancements aim to enhance detection accuracy, reduce labor dependency, and increase yields. Future research should prioritize edge-computing solutions, cost-effective sensor networks, and cross-disciplinary collaborations to bridge technical and practical gaps. The integration of intelligent technologies is poised to transform traditional bamboo forestry into automated, sustainable “smart forest farms”, addressing global supply demands while preserving ecological integrity.
(This article belongs to the Section Precision and Digital Agriculture)

41 pages, 3199 KiB  
Review
Enhancing Safety in Autonomous Maritime Transportation Systems with Real-Time AI Agents
by Irmina Durlik, Tymoteusz Miller, Ewelina Kostecka, Polina Kozlovska and Wojciech Ślączka
Appl. Sci. 2025, 15(9), 4986; https://doi.org/10.3390/app15094986 - 30 Apr 2025
Viewed by 431
Abstract
The maritime transportation sector is undergoing a profound shift with the emergence of autonomous vessels powered by real-time artificial intelligence (AI) agents. This article investigates the pivotal role of these agents in enhancing the safety, efficiency, and sustainability of autonomous maritime systems. Following a structured literature review, we examine the architecture of real-time AI agents, including sensor integration, communication systems, and computational infrastructure. We distinguish maritime AI agents from conventional systems by emphasizing their specialized functions, real-time processing demands, and resilience in dynamic environments. Key safety mechanisms—such as collision avoidance, anomaly detection, emergency coordination, and fail-safe operations—are analyzed to demonstrate how AI agents contribute to operational reliability. The study also explores regulatory compliance, focusing on emission control, real-time monitoring, and data governance. Implementation challenges, including limited onboard computational power, legal and ethical constraints, and interoperability issues, are addressed with practical solutions such as edge AI and modular architectures. Finally, the article outlines future research directions involving smart port integration, scalable AI models, and emerging technologies like federated and explainable AI. This work highlights the transformative potential of AI agents in advancing autonomous maritime transportation.
(This article belongs to the Section Marine Science and Engineering)

36 pages, 2990 KiB  
Review
Advances in Multi-Source Navigation Data Fusion Processing Methods
by Xiaping Ma, Peimin Zhou and Xiaoxing He
Mathematics 2025, 13(9), 1485; https://doi.org/10.3390/math13091485 - 30 Apr 2025
Viewed by 145
Abstract
In recent years, the field of multi-source navigation data fusion has witnessed substantial advancements, propelled by the rapid development of multi-sensor technologies, Artificial Intelligence (AI) algorithms and enhanced computational capabilities. On one hand, fusion methods based on filtering theory, such as Kalman Filtering (KF), Particle Filtering (PF), and Federated Filtering (FF), have been continuously optimized, enabling effective handling of non-linear and non-Gaussian noise issues. On the other hand, the introduction of AI technologies like deep learning and reinforcement learning has provided new solutions for multi-source data fusion, particularly enhancing adaptive capabilities in complex and dynamic environments. Additionally, methods based on Factor Graph Optimization (FGO) have also demonstrated advantages in multi-source data fusion, offering better handling of global consistency problems. In the future, with the widespread adoption of technologies such as 5G, the Internet of Things, and edge computing, multi-source navigation data fusion is expected to evolve towards real-time processing, intelligence, and distributed systems. So far, fusion methods mainly include optimal estimation methods, filtering methods, uncertain reasoning methods, Multiple Model Estimation (MME), AI, and so on. To analyze the performance of these methods and provide a reliable theoretical reference and basis for the design and development of a multi-source data fusion system, this paper summarizes the characteristics of these fusion methods and their corresponding application scenarios. These results can provide references for theoretical research, system development, and application in the fields of autonomous driving, unmanned vehicle navigation, and intelligent navigation.
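
Of the fusion methods surveyed above, Kalman Filtering (KF) is the most compact to illustrate; the sketch below runs a 1-D constant-velocity predict/update cycle over noisy position fixes, with illustrative noise settings rather than values from any cited system.

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=0.01, r=0.5):
    """One predict/update cycle for state [position, velocity] and a position measurement z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2, 5.1]:         # noisy position measurements
    x, P = kalman_step(x, P, np.array([z]))
print(x)                                    # fused position/velocity estimate
```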

29 pages, 4842 KiB  
Article
Assessing Agri-Food Digitalization: Insights from Bibliometric and Survey Analysis in Andalusia
by José Ramón Luque-Reyes, Ali Zidi, Adolfo Peña-Acevedo and Rosa Gallardo-Cobos
World 2025, 6(2), 57; https://doi.org/10.3390/world6020057 - 30 Apr 2025
Viewed by 312
Abstract
The agri-food sector is going through a massive digital transformation thanks to new technologies such as the Internet of Things (IoT), big data, and Artificial Intelligence (AI). Regional disparities and implementation barriers prevent widespread uptake despite significant research advances. Drawing on bibliometric and survey data collected up to the end of 2023, this study examines global research trends and stakeholder perceptions in Andalusia (Spain) to identify challenges and opportunities in agricultural digitalization. Bibliometric analysis revealed that research has moved from early remote sensing to precision agriculture, IoT, robotics and big data, and that AI has recently taken over in predictive analytics, automation, and decision-support systems. However, our survey of Andalusian stakeholders highlighted limited adoption of cutting-edge tools such as AI, blockchain, and predictive models due to economic constraints, technical challenges, and skepticism. Participants emphasized the importance of trust-building, as well as the use of simple tools that require minimal input and provide immediate benefits. Respondents also prioritized improving market transparency, optimizing resource use, and system interoperability. The findings show that closing the gap between research and practice requires developing digital solutions that are user-centered, simplified, and context-adapted, especially when dealing with complex technologies like AI and predictive systems. This must be supported by targeted public policies and collaborative innovation ecosystems, all essential elements to accelerate the integration of smart agricultural technologies and align scientific innovation with real-world needs.
