Search Results (5,751)

Search Parameters:
Keywords = artificial intelligence algorithm

31 pages, 1225 KB  
Review
Integrating Artificial Intelligence into Smart Infrastructure Management for Sustainable Urban Planning
by Abdulaziz I. Almulhim
Technologies 2025, 13(11), 481; https://doi.org/10.3390/technologies13110481 (registering DOI) - 23 Oct 2025
Abstract
This paper systematically reviewed studies on the integration of Artificial Intelligence (AI) into infrastructure management to support sustainable urban planning across three primary domains: predictive maintenance and energy optimization, traffic and mobility systems, and public participation with ethical considerations. Findings from thirty peer-reviewed studies underscore how AI-driven models enhance operational efficiency, sustainability, and governance in smart cities. Effective management of AI-driven smart infrastructure can transform urban planning by optimizing resource efficiency and predictive maintenance, with reported gains including 15% energy savings, 25-30% cost reductions, a 25% reduction in congestion, and an 18% decrease in travel times. Similarly, participatory digital twins and citizen-centric approaches are found to enhance public participation and help address ethical issues. The findings further reveal that AI-based predictive maintenance frameworks improve system reliability, while deep learning and hybrid models achieve up to 92% accuracy in traffic forecasting. Nonetheless, obstacles to equitable implementation, including the digital divide, privacy infringements, and algorithmic bias, persist. Establishing ethical and participatory frameworks, anchored in responsible AI governance, is therefore vital to promote transparency, accountability, and inclusivity. This study demonstrates that AI-enabled smart infrastructure management strengthens urban planning by enhancing efficiency, sustainability, and social responsiveness. It concludes that achieving sustainable and socially accepted smart cities depends on striking a balance between technological innovation, ethical responsibility, and inclusive governance. Full article
24 pages, 1273 KB  
Article
Awareness of the Impact of IT/AI on Energy Consumption in Enterprises: A Machine Learning-Based Modelling Towards a Sustainable Digital Transformation
by Jolanta Słoniec, Monika Kulisz, Marta Małecka-Dobrogowska, Zhadyra Konurbayeva and Łukasz Sobaszek
Energies 2025, 18(21), 5573; https://doi.org/10.3390/en18215573 (registering DOI) - 23 Oct 2025
Abstract
The integration of artificial intelligence (AI) and information technology (IT) is transforming business operations while increasing energy demand. A scalable and nonintrusive method for assessing the adoption of energy-conscious IT governance without direct measurements of energy use is lacking. To address this gap, a machine learning framework is developed and validated that infers the presence of energy-conscious IT governance from five indicators of digital maturity and AI adoption. Enterprise survey data were used to train five classification algorithms—support vector machine, logistic regression, decision tree, neural network, and k-nearest neighbors—to identify organizations implementing energy-efficient IT/AI management. All models achieved strong predictive performance, with SVM achieving 90% test accuracy and an F1 score of 89.8%. The findings demonstrate that an enterprise’s technological profile can serve as a reliable proxy for assessing sustainable IT/AI practices, enabling rapid assessment, benchmarking, and targeted support for green digital transformation. This approach offers significant implications for policy design, ESG reporting, and managerial decision-making in energy-conscious governance, supporting the alignment of digital innovation with environmental objectives. Full article
(This article belongs to the Special Issue Energy Markets and Energy Economy)

28 pages, 3758 KB  
Article
A Lightweight, Explainable Spam Detection System with Rüppell’s Fox Optimizer for the Social Media Network X
by Haidar AlZeyadi, Rıdvan Sert and Fecir Duran
Electronics 2025, 14(21), 4153; https://doi.org/10.3390/electronics14214153 (registering DOI) - 23 Oct 2025
Abstract
Effective spam detection systems are essential in online social media networks (OSNs) and cybersecurity, and they directly influence the quality of security-related decision-making. In today's digital communications, unsolicited spam degrades user experiences and threatens platform security. Machine learning-based spam detection systems offer an automated defense. Despite their effectiveness, such methods are frequently hindered by the "black box" problem: an interpretability deficiency that constrains their deployment in security applications, where understanding the rationale behind classification decisions is crucial for efficient threat evaluation and response. Moreover, their effectiveness hinges on selecting an optimal feature subset. To address these issues, we propose a lightweight, explainable spam detection model that integrates a nature-inspired optimizer. The approach employs data preprocessing and feature selection using the swarm-based, nature-inspired meta-heuristic Rüppell's Fox Optimization (RFO) algorithm. To the best of our knowledge, this is the first time the algorithm has been adapted to the field of cybersecurity. The resulting minimal feature set is used to train a supervised classifier that achieves high detection rates and accuracy with respect to spam accounts. For the interpretation of model predictions, Shapley values are computed and illustrated through swarm and summary charts. The proposed system was empirically assessed using two datasets, achieving accuracies of 99.10%, 98.77%, 96.57%, and 92.24% on Dataset 1 using RFO with DT, KNN, AdaBoost, and LR, respectively, and 98.94%, 98.67%, 95.04%, and 94.52% on Dataset 2. The results validate the efficacy of the suggested approach, providing an accurate and understandable model for spam account identification. This study represents notable progress in the field, offering a thorough and dependable resolution for spam account detection issues. Full article
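The wrapper-style feature selection the abstract describes can be sketched generically. Since RFO's update rules are not given here, this stand-in scores random binary feature masks with a trivial nearest-centroid classifier; all function names and data are illustrative, not the paper's method.

```python
import random

# Illustrative stand-in for metaheuristic (e.g. RFO-style) wrapper feature
# selection: candidate feature masks are scored by classifier accuracy.
def centroid_accuracy(X, y, mask):
    feats = [i for i, keep in enumerate(mask) if keep]
    if not feats:
        return 0.0
    def centroid(label):
        rows = [x for x, lab in zip(X, y) if lab == label]
        return [sum(r[i] for r in rows) / len(rows) for i in feats]
    c0, c1 = centroid(0), centroid(1)
    correct = 0
    for x, lab in zip(X, y):
        d0 = sum((x[i] - a) ** 2 for i, a in zip(feats, c0))
        d1 = sum((x[i] - b) ** 2 for i, b in zip(feats, c1))
        correct += int((d1 < d0) == (lab == 1))
    return correct / len(y)

def select_features(X, y, n_iters=200, seed=0):
    rng = random.Random(seed)
    n = len(X[0])
    best_mask, best_acc = [1] * n, centroid_accuracy(X, y, [1] * n)
    for _ in range(n_iters):
        mask = [rng.randint(0, 1) for _ in range(n)]
        acc = centroid_accuracy(X, y, mask)
        # Prefer higher accuracy; break ties toward fewer features.
        if acc > best_acc or (acc == best_acc and sum(mask) < sum(best_mask)):
            best_mask, best_acc = mask, acc
    return best_mask, best_acc
```

A real swarm optimizer would replace the random mask proposal with guided position updates, but the evaluate-and-keep-best loop has the same shape.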

27 pages, 29561 KB  
Article
UAV Remote Sensing for Integrated Monitoring and Model Optimization of Citrus Leaf Water Content and Chlorophyll
by Weiqi Zhang, Shijiang Zhu, Yun Zhong, Hu Li, Aihua Sun, Yanqun Zhang and Jian Zeng
Agriculture 2025, 15(21), 2197; https://doi.org/10.3390/agriculture15212197 - 23 Oct 2025
Abstract
Leaf water content (LWC) and chlorophyll content (CHL) are pivotal physiological indicators for assessing citrus growth and stress responses. However, conventional measurement techniques—such as fresh-to-dry weight ratio and spectrophotometry—are destructive, time-consuming, and limited in spatial and temporal resolution, making them unsuitable for large-scale monitoring. To achieve efficient large-scale monitoring, this study proposes a synergistic inversion framework integrating UAV multispectral remote sensing with intelligent optimization algorithms. Field experiments during the 2024 growing season (April–October) in western Hubei collected 263 ground measurements paired with multispectral images. Sensitive spectral bands and vegetation indices for LWC and CHL were identified through Pearson correlation analysis. Five modeling approaches—Partial Least Squares Regression (PLS); Extreme Learning Machine (ELM); and ELM optimized by Particle Swarm Optimization (PSO-ELM), Artificial Hummingbird Algorithm (AHA-ELM), and Grey Wolf Optimizer (GWO-ELM)—were evaluated. Results demonstrated that (1) VI-based models outperformed raw spectral band models; (2) the PSO-ELM synergistic inversion model using sensitive VIs achieved optimal accuracy (validation R2: 0.790 for LWC, 0.672 for CHL), surpassing PLS by 15.16% (LWC) and 53.78% (CHL), and standard ELM by 20.80% (LWC) and 25.84% (CHL), respectively; and (3) AHA-ELM and GWO-ELM also showed significant enhancements. This research provides a robust technical foundation for precision management of citrus orchards in drought-prone regions. Full article
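The Pearson-correlation screening of sensitive bands and vegetation indices mentioned above can be sketched as follows; the function names and the 0.6 threshold are illustrative assumptions, not values from the paper.

```python
import math

# Screen candidate vegetation indices by the strength of their Pearson
# correlation with a measured target variable (e.g. leaf water content).
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def screen_indices(index_samples, target, threshold=0.6):
    # Keep only indices whose |r| with the target exceeds the threshold.
    return {name: pearson(vals, target)
            for name, vals in index_samples.items()
            if abs(pearson(vals, target)) >= threshold}
```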
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

23 pages, 7295 KB  
Article
An Artificial Intelligence-Driven Precipitation Downscaling Method Using Spatiotemporally Coupled Multi-Source Data
by Chao Li, Long Ma, Xing Huang, Chenyue Wang, Xinyuan Liu, Bolin Sun and Qiang Zhang
Atmosphere 2025, 16(11), 1226; https://doi.org/10.3390/atmos16111226 - 22 Oct 2025
Abstract
Addressing the challenges posed by sparse ground meteorological stations and the insufficient resolution and accuracy of reanalysis and satellite precipitation products, this study establishes a multi-source environmental feature system that precisely matches the target precipitation data resolution (1 km × 1 km). Based on this foundation, it innovatively proposes a Random Forest-based Dual-Spectrum Adaptive Threshold algorithm (RF-DSAT) for key factor screening and subsequently integrates Convolutional Neural Network (CNN) with Gated Recurrent Unit (GRU) to construct a Spatiotemporally Coupled Bias Correction Model for multi-source data (CGBCM). Furthermore, by integrating these technological components, it presents an Artificial Intelligence-driven Multi-source data Precipitation Downscaling method (AIMPD), capable of downscaling precipitation fields from 0.1° × 0.1° to high-precision 1 km × 1 km resolution. Taking the bend region of the Yellow River Basin in China as a case study, AIMPD demonstrates superior performance compared to bicubic interpolation, eXtreme Gradient Boosting (XGBoost), CNN, and Long Short-Term Memory (LSTM) networks, achieving improvements of approximately 1.73% to 40% in Nash-Sutcliffe Efficiency (NSE). It exhibits exceptional accuracy, particularly in extreme precipitation downscaling, while significantly enhancing computational efficiency, thereby offering novel insights for global precipitation downscaling research. Full article
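The improvements above are reported in Nash-Sutcliffe Efficiency (NSE). For reference, a minimal sketch of the metric (the function name is ours): 1.0 is a perfect match, 0.0 means no better than predicting the observed mean, and negative values are worse than the mean.

```python
# Nash-Sutcliffe Efficiency: 1 - (sum of squared errors) / (variance of
# the observations about their mean).
def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var
```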
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)

40 pages, 33354 KB  
Review
Artificial Intelligence in Urban Planning: A Bibliometric Analysis and Hotspot Prediction
by Shuyu Si, Yeduozi Yao and Jing Wu
Land 2025, 14(11), 2100; https://doi.org/10.3390/land14112100 - 22 Oct 2025
Abstract
The accelerating global urbanization process has posed new challenges to urban planning. With the rapid advancement of artificial intelligence (AI) technology, the application of AI in urban planning has gradually emerged as a prominent research focus. This study systematically reviews the current state, development trends, and challenges of AI applications in urban planning through a combination of bibliometric analysis using Citespace, AI-assisted reading based on generative models, and predictive analysis via support vector machine (SVM) algorithms. The findings reveal the following: (1) The application of AI in urban planning has undergone three stages—namely, the budding stage (January 1984 to January 2017), the rapid development stage (January 2017 to January 2023), and the explosive growth stage (January 2023 to January 2025). (2) Research hotspots have shifted from early-stage basic data integration and fundamental technology exploration to a continuous fusion and iteration of foundational and emerging technologies. (3) Globally, China, the United States, and India are the leading contributors to research in this field, with inter-country collaborations demonstrating regional clustering. (4) High-frequency keywords such as “deep learning,” “machine learning,” and “smart city” are prevalent in the literature, reflecting the application of AI technologies across both macro and micro urban planning scenarios. (5) Based on current research and predictive analysis, the application scenarios of technologies like deep learning and machine learning are expected to continue expanding. At the same time, emerging technologies, including generative AI and explainable AI, are also projected to become focal points of future research. 
This study offers a technical application guide for urban planning, promotes the scientific integration of AI technologies within the field, and provides both theoretical support and practical guidance for achieving efficient and sustainable urban development. Full article
(This article belongs to the Section Land Innovations – Data and Machine Learning)
31 pages, 8000 KB  
Review
Enhancing Biomedical Metal 3D Printing with AI and Nanomaterials Integration
by Jackie Liu, Jaison Jeevanandam and Michael K. Danquah
Metals 2025, 15(10), 1163; https://doi.org/10.3390/met15101163 - 21 Oct 2025
Abstract
The integration of artificial intelligence (AI) with nanomaterials is rapidly transforming metal three-dimensional (3D) printing for biomedical applications due to their unprecedented precision, customization, and functionality. This article discusses the role of AI in optimizing design parameters, predicting material behaviors, and controlling additive manufacturing processes for metal-based implants and prosthetics. Nanomaterials, particularly metallic nanoparticles, enhance the mechanical strength, biocompatibility, and functional properties of 3D-printed structures. AI-driven models, including machine learning (ML) and deep learning algorithms, are increasingly used to forecast print quality, detect defects in real time, and reduce material waste. Moreover, data-driven design approaches enable patient-specific implant development and predictive modeling of biological responses. We highlight recent advancements in AI-guided material discovery through microstructure–property correlations and multi-scale simulation. Challenges such as data scarcity, standardization, and integration across interdisciplinary domains are also discussed, along with emerging solutions based on federated learning and the digital twinning approach. Further, the article emphasizes the potential of AI and nanomaterials to revolutionize metal 3D printing to fabricate smarter, safer, and more effective biomedical devices. Future perspectives covering the need for robust datasets, explainable AI frameworks, and regulatory frameworks to ensure the clinical translation of AI-enhanced additive manufacturing technologies are discussed. Full article
(This article belongs to the Special Issue Metal 3D Printing Techniques for Biomedical Applications)

17 pages, 1373 KB  
Article
TOXOS: Spinning Up Nonlinearity in On-Vehicle Inference with a RISC-V CORDIC Coprocessor
by Luigi Giuffrida, Guido Masera and Maurizio Martina
Technologies 2025, 13(10), 479; https://doi.org/10.3390/technologies13100479 - 21 Oct 2025
Abstract
The rapid advancement of artificial intelligence in automotive applications, particularly in Advanced Driver-Assistance Systems (ADAS) and smart battery management on electric vehicles, increases the demand for efficient near-sensor processing. While linear algebra workloads in machine learning are well addressed by existing accelerators, the computation of nonlinear activation functions is usually delegated to the host CPU, resulting in energy inefficiency and high computational costs. This paper introduces TOXOS, a RISC-V-compliant coprocessor designed to address this challenge. TOXOS implements the COordinate Rotation DIgital Computer (CORDIC) algorithm to efficiently compute nonlinear functions. Taking advantage of RISC-V modularity and extendability, TOXOS integrates seamlessly with existing computing architectures. The coprocessor's configurability enables fine-tuning of the area-performance tradeoff by adjusting the internal parallelism, the CORDIC iteration count, and the overall latency. Our implementation in a 65 nm technology demonstrates a significant improvement over CPU-based solutions, showcasing a considerable speedup compared to the glibc implementation of nonlinear functions. To validate TOXOS's real-world impact, we integrated TOXOS into an actual RISC-V microcontroller targeting the on-vehicle execution of machine learning models. This work addresses a critical gap in transcendental function computation for AI, enabling real-time decision-making for autonomous driving systems while maintaining the power efficiency crucial for electric vehicles. Full article
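The CORDIC algorithm that TOXOS implements in hardware can be sketched in software. This rotation-mode example (an illustration of the algorithm, not the TOXOS implementation; the function name and iteration count are ours) recovers sine and cosine using only shift-and-add-style updates plus a precomputed arctangent table.

```python
import math

# CORDIC in rotation mode: repeatedly rotate a unit vector by angles
# atan(2^-i), choosing the rotation direction that drives the residual
# angle z toward zero. Each rotation needs only shifts and adds in hardware.
def cordic_sin_cos(angle, iterations=32):
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    # The uncorrected rotations scale the vector; k undoes that gain.
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, angle      # start on the x-axis, residual angle z
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return y * k, x * k            # (sin, cos), gain-corrected

s, c = cordic_sin_cos(0.5)
```

The same iteration structure, with different table entries and modes, also yields exp, log, and hyperbolic functions, which is why a single CORDIC datapath covers many activation functions.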
(This article belongs to the Section Manufacturing Technology)

18 pages, 24042 KB  
Article
Remote Sensing and AI Coupled Approach for Large-Scale Archaeological Mapping in the Andean Arid Highlands: Case Study in Altos Arica, Chile
by Maria Elena Castiello, Jürgen Landauer and Thibault Saintenoy
Remote Sens. 2025, 17(20), 3499; https://doi.org/10.3390/rs17203499 - 21 Oct 2025
Abstract
Artificial intelligence algorithms for automated archaeological site detection have rarely been applied in the Andean highlands, regions that preserve a significant amount of surface archaeological architecture but have not yet been fully explored or mapped due to the difficult terrain. This paper presents a case study of the application of convolutional neural networks (CNNs) to automatically identify archaeological architecture in the Azapa valley in the Arica y Parinacota region of Chile. Using a high-resolution, large regional-scale archaeological geodatabase created through a systematic and detailed photo-interpretation survey of satellite imagery and fieldwork, our study demonstrates the efficiency of CNN-based automated detection in identifying archaeological stone structures such as roundhouses and corrals in the Chilean highlands. After outlining the technical protocol for automated detection, we present the results and discuss the potential of our AI model for archaeological mapping in arid highland environments, from a regional to a more extended and global perspective. Full article

24 pages, 2308 KB  
Review
Review on Application of Machine Vision-Based Intelligent Algorithms in Gear Defect Detection
by Dehai Zhang, Shengmao Zhou, Yujuan Zheng and Xiaoguang Xu
Processes 2025, 13(10), 3370; https://doi.org/10.3390/pr13103370 - 21 Oct 2025
Abstract
Gear defect detection directly affects the operational reliability of critical equipment in fields such as automotive and aerospace. Gear defect detection technology based on machine vision, leveraging the advantages of non-contact measurement, high efficiency, and cost-effectiveness, has become a key support for quality control in intelligent manufacturing. However, it still faces challenges including difficulties in semantic alignment of multimodal data, the imbalance between real-time detection requirements and computational resources, and poor model generalization in few-shot scenarios. This paper takes the paradigm evolution of gear defect detection technology as the main line, systematically reviews its development from traditional image processing to deep learning, and focuses on the innovative application of intelligent algorithms. A research framework of “technical bottleneck-breakthrough path-application verification” is constructed: for the problem of multimodal fusion, the cross-modal feature alignment mechanism based on Transformer network is deeply analyzed, clarifying its technical path of realizing joint embedding of visual and vibration signals by establishing global correlation mapping; for resource constraints, the performance of lightweight models such as MobileNet and ShuffleNet is quantitatively compared, verifying that these models reduce Parameters by 40–60% while maintaining the mean Average Precision essentially unchanged; for small-sample scenarios, few-shot generation models based on contrastive learning are systematically organized, confirming that their accuracy in the 10-shot scenario can reach 90% of that of fully supervised models, thus enhancing generalization ability. Future research can focus on the collaboration between few-shot generation and physical simulation, edge-cloud dynamic scheduling, defect evolution modeling driven by multiphysics fields, and standardization of explainable artificial intelligence. 
These directions aim to construct a gear detection system with autonomous perception capabilities, promoting the development of industrial quality inspection toward high-precision, high-robustness, and low-cost intelligence. Full article

18 pages, 1825 KB  
Article
Fast Deep Belief Propagation: An Efficient Learning-Based Algorithm for Solving Constraint Optimization Problems
by Shufeng Kong, Feifan Chen, Zijie Wang and Caihua Liu
Mathematics 2025, 13(20), 3349; https://doi.org/10.3390/math13203349 - 21 Oct 2025
Abstract
Belief Propagation (BP) is a fundamental heuristic for solving Constraint Optimization Problems (COPs), yet its practical applicability is constrained by slow convergence and instability in loopy factor graphs. While Damped BP (DBP) improves convergence by using manually tuned damping factors, its reliance on labor-intensive hyperparameter optimization limits scalability. Deep Attentive BP (DABP) addresses this by automating damping through recurrent neural networks (RNNs), but introduces significant memory overhead and sequential computation bottlenecks. To reduce memory usage and accelerate deep belief propagation, this paper introduces Fast Deep Belief Propagation (FDBP), a deep learning framework that improves COP solving through online self-supervised learning and graphics processing unit (GPU) acceleration. FDBP decouples the learning of damping factors from BP message passing, inferring all parameters for an entire BP iteration in a single step, and leverages mixed precision to further optimize GPU memory usage. This approach substantially improves both the efficiency and scalability of BP optimization. Extensive evaluations on synthetic and real-world benchmarks highlight the superiority of FDBP, especially for large-scale instances where DABP fails due to memory constraints. Moreover, FDBP achieves an average speedup of 2.87× over DABP with the same restart counts. Because BP for COPs is a mathematically grounded GPU-parallel message-passing framework that bridges applied mathematics, computing, and machine learning, and is widely applicable across science and engineering, our work offers a promising step toward more efficient solutions to these problems. Full article
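The damping that DBP manually tunes and FDBP learns can be illustrated with a toy fixed-point iteration; the map below is a deliberately simple stand-in for a BP message update, not the paper's equations. Without damping the update oscillates forever; blending the new value with the old one converges.

```python
# Damped update: m_new = (1 - lam) * f(m) + lam * m, where f is a
# stand-in for one round of BP message passing. lam = 0 is undamped.
def iterate(m0, lam, steps=50):
    f = lambda m: 1.0 - m          # oscillating update: m, 1-m, m, ...
    m = m0
    for _ in range(steps):
        m = (1.0 - lam) * f(m) + lam * m
    return m

undamped = iterate(0.2, lam=0.0, steps=51)   # still oscillating: 0.8
damped = iterate(0.2, lam=0.5)               # settles at the fixed point 0.5
```

DABP and FDBP differ in *how* per-message damping factors like `lam` are produced (an RNN per iteration versus a single-step inference for the whole round), not in this basic mechanism.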
(This article belongs to the Special Issue Applied Mathematics, Computing, and Machine Learning)

27 pages, 879 KB  
Review
A Literature Review of Automated Roadside Parking Monitoring Using Artificial Intelligence Algorithms
by Christina Georgopoulou and Panagiotis Papantoniou
Electronics 2025, 14(20), 4119; https://doi.org/10.3390/electronics14204119 - 21 Oct 2025
Abstract
The issue of parking has been a major concern in urban centers, primarily due to the increasing demand and daily traffic congestion. This paper endeavors to explore, process, and evaluate the existing literature on parking space detection methodologies, integrating photogrammetric techniques with deep learning models. Towards that end, various existing systems, applications, and models that have been studied were evaluated, and their impact on different test cases was assessed. The literature review was based on the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA). Results indicated that smart parking systems significantly enhance dynamic parking management by leveraging deep learning techniques, particularly convolutional neural networks (CNNs). These systems process visual data from monitoring sources to generate statistics, diagrams, and maps that highlight occupied and available parking spaces, allowing for more efficient parking management and improved traffic flow. These methods contributed to improved urban mobility by providing real-time information to drivers about parking conditions along their routes. This not only enhanced convenience but also supported the development of smarter and more sustainable urban transportation solutions. Full article
(This article belongs to the Special Issue Automated Driving Systems: Latest Advances and Prospects)

25 pages, 5280 KB  
Article
Obstacle Avoidance Path Planning for Unmanned Aerial Vehicle in Workshops Based on Parameter-Optimized Artificial Potential Field A* Algorithm
by Xiaoling Meng, Zhikang Zhang, Xijing Zhu, Jing Zhao, Xiao Wu, Xiaoqiang Zhang and Jing Yang
Machines 2025, 13(10), 967; https://doi.org/10.3390/machines13100967 - 20 Oct 2025
Abstract
As the intelligent transformation of manufacturing accelerates, Unmanned Aerial Vehicles are increasingly being deployed for workshop operations, making efficient obstacle avoidance path planning a critical requirement. This paper introduces a parameter-optimized path planning method for the Unmanned Aerial Vehicle, termed the Artificial Potential Field A* algorithm, which enhances the standard A* approach through the integration of an artificial potential field and a variable step size strategy. The variable step size mechanism allows dynamic adjustment of the search step size, while potential field values from the artificial potential field are embedded into the cost function to improve planning accuracy. Key parameters of the hybrid algorithm are subsequently optimized using response surface methodology, with a regression model built to analyze parameter interactions and determine the optimal configuration. Simulation results across multiple performance indicators confirm that the proposed Artificial Potential Field A* algorithm delivers superior outcomes in path length, attitude angle variation, and flight altitude stability. This approach provides an effective solution for enhancing Unmanned Aerial Vehicle operational efficiency in production workshops. Full article
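The idea of embedding artificial-potential-field values into the A* cost function can be sketched on a grid as follows. The repulsion gain `eta`, influence radius, and unit step cost are illustrative assumptions, not the paper's response-surface-optimized parameters, and the variable-step-size mechanism is omitted.

```python
import heapq, itertools, math

# Grid A* whose edge cost adds a repulsive artificial-potential term,
# so the planner prefers paths that keep clearance from obstacles.
def apf_astar(grid, start, goal, eta=2.0, influence=2.0):
    rows, cols = len(grid), len(grid[0])
    obstacles = [(r, c) for r in range(rows)
                 for c in range(cols) if grid[r][c]]

    def potential(cell):
        # Repulsive potential rises sharply as the cell nears an obstacle.
        u = 0.0
        for obs in obstacles:
            d = math.hypot(cell[0] - obs[0], cell[1] - obs[1])
            if 0 < d <= influence:
                u += 0.5 * eta * (1.0 / d - 1.0 / influence) ** 2
        return u

    def heuristic(cell):
        return math.hypot(goal[0] - cell[0], goal[1] - cell[1])

    tie = itertools.count()                  # tiebreaker for the heap
    frontier = [(heuristic(start), next(tie), 0.0, start, None)]
    parents, best_g = {}, {start: 0.0}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in parents:
            continue                         # stale entry
        parents[cur] = parent
        if cur == goal:
            path = [cur]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]):
                ng = g + 1.0 + potential(nxt)  # potential folded into cost
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(
                        frontier,
                        (ng + heuristic(nxt), next(tie), ng, nxt, cur))
    return None  # goal unreachable
```

Because every step still costs at least 1, the Euclidean heuristic remains admissible even with the added potential term.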
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

12 pages, 573 KB  
Article
An Ensemble Model for Fundus Images to Aid in Age-Related Macular Degeneration Grading
by Roberto Romero-Oraá, María Herrero-Tudela, María Isabel López, Roberto Hornero, Pere Romero-Aroca and María García
Diagnostics 2025, 15(20), 2644; https://doi.org/10.3390/diagnostics15202644 - 20 Oct 2025
Abstract
Background: Age-related macular degeneration (AMD) is a leading cause of visual impairment in the elderly population. Periodic examinations through fundus image analysis are paramount for early diagnosis and adequate treatment. Automatic artificial intelligence algorithms have proven useful for AMD grading, with ensemble strategies recently gaining special attention. Methods: This study presents an ensemble model that combines two individual models of different natures. The first model was based on the ResNetRS architecture and supervised learning. The second model, known as RETFound, was based on a vision transformer architecture and self-supervised learning. Results: Our experiments were conducted using 149,819 fundus images from the Age-Related Eye Disease Study (AREDS) public dataset. An additional private dataset of 1679 images was used to validate our approach. The results on AREDS achieved a quadratic weighted kappa of 0.7364 and an accuracy of 66.03%, outperforming previous methods in the literature. Conclusions: The ensemble strategy presented in this study could be useful for AMD screening in a clinical setting. Consequently, eye care for AMD patients would improve while clinical costs and workload would be reduced. Full article
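The abstract above reports performance as a quadratic weighted kappa, and combining the two models' outputs is the step worth sketching. The abstract does not state the fusion rule, so `ensemble_predict` below assumes one common scheme — a weighted average of the two models' per-class probabilities; `quadratic_weighted_kappa` is the standard definition of the reported metric, not the authors' code.

```python
def ensemble_predict(probs_a, probs_b, w=0.5):
    """Fuse two models' class-probability vectors by weighted averaging
    (an assumed fusion rule; the abstract does not specify one)."""
    fused = [w * a + (1.0 - w) * b for a, b in zip(probs_a, probs_b)]
    return max(range(len(fused)), key=fused.__getitem__)  # argmax class index

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Standard quadratic weighted kappa for ordinal grades 0..n_classes-1."""
    # Observed agreement matrix.
    O = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    n = len(y_true)
    # Marginal histograms for the chance-agreement (expected) matrix.
    hist_t = [sum(row) for row in O]
    hist_p = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w_ij = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic penalty
            num += w_ij * O[i][j]
            den += w_ij * (hist_t[i] * hist_p[j] / n)
    return 1.0 - num / den
```

The quadratic penalty `(i - j)^2` is what makes the metric suitable for ordinal AMD grades: predicting a neighboring grade is penalized far less than predicting a distant one.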

21 pages, 2556 KB  
Article
Comparison of Machine Learning Models in Nonlinear and Stochastic Signal Classification
by Elzbieta Olejarczyk and Carlo Massaroni
Appl. Sci. 2025, 15(20), 11226; https://doi.org/10.3390/app152011226 - 20 Oct 2025
Abstract
This study aims to compare different classifiers in the context of distinguishing two classes of signals: nonlinear electrocardiography (ECG) signals and stochastic artifacts occurring in ECG signals. The ECG signals from a single-lead wearable Movesense device were analyzed with a set of eight features: variance (VAR), three fractal measures (Higuchi fractal dimension (HFD), Katz fractal dimension (KFD), and Detrended Fluctuation Analysis (DFA)), and four entropy measures (approximate entropy (ApEn), sample entropy (SampEn), and multiscale entropy (MSE) at scales 1 and 2). The minimum-redundancy maximum-relevance algorithm was applied to evaluate feature importance. A broad spectrum of machine learning models was considered for classification. The proposed approach allowed for comparison of classifier features and provided broader insight into the characteristics of the signals themselves. The most important features for classification were VAR, DFA, ApEn, and HFD. The best performance among 34 classifiers was obtained using an optimized RUSBoosted Trees ensemble classifier (sensitivity, specificity, and positive and negative predictive values of 99.8%, 73.7%, 99.8%, and 74.3%, respectively). The accuracy of the Movesense device was very high (99.6%). Moreover, the multifractality of ECG during sleep was observed in the relationship between SampEn (or ApEn) and MSE. Full article
(This article belongs to the Special Issue New Advances in Electrocardiogram (ECG) Signal Processing)
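One of the eight features named in the abstract above, the Higuchi fractal dimension, can be computed directly from its standard definition. This is a generic sketch of that textbook definition, not the authors' implementation; the maximum scale `kmax` is an assumed hyperparameter.

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal: ~1 for smooth/monotone
    signals, approaching ~2 for white noise."""
    n = len(x)
    logk, logl = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):  # one subseries per starting offset m
            terms = (n - 1 - m) // k  # number of increments at this scale
            if terms < 1:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, terms + 1))
            # Normalization factor from Higuchi's definition of curve length.
            lengths.append(dist * (n - 1) / (terms * k) / k)
        if lengths:
            logk.append(math.log(1.0 / k))
            logl.append(math.log(sum(lengths) / len(lengths)))
    # Least-squares slope of log L(k) vs log(1/k) estimates the dimension.
    mk = sum(logk) / len(logk)
    ml = sum(logl) / len(logl)
    num = sum((a - mk) * (b - ml) for a, b in zip(logk, logl))
    den = sum((a - mk) ** 2 for a in logk)
    return num / den
```

For a straight line the curve length scales exactly as 1/k, so the slope (and thus the dimension) is 1; a stochastic artifact segment yields a value near 2, which is why features of this kind can separate the two signal classes.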