Search Results (10,122)

Search Parameters:
Keywords = random processes

22 pages, 735 KB  
Article
Enhancing ESG Risk Assessment with Litigation Signals: A Legal-AI Hybrid Approach for Detecting Latent Risks
by Minjung Park
Systems 2025, 13(9), 783; https://doi.org/10.3390/systems13090783 - 5 Sep 2025
Abstract
Environmental, Social, and Governance (ESG) ratings are widely used for investment and regulatory decision-making, yet they often suffer from symbolic compliance and information asymmetry. To address these limitations, this study introduces a hybrid ESG risk assessment model that integrates court ruling data with traditional ESG ratings to detect latent sustainability risks. Using a dataset of 213 ESG-related U.S. court rulings from January 2023 to May 2025, we apply natural language processing (TF-IDF, Legal-BERT) and explainable AI (SHAP) techniques to extract structured features from legal texts. We construct and compare classification models—including Random Forest, XGBoost, and a Legal-BERT-based hybrid model—to predict firms’ litigation risk. The hybrid model significantly outperforms the baseline ESG-only model in all key metrics: F1-score (0.81), precision (0.79), recall (0.84), and AUC-ROC (0.87). SHAP analysis reveals that legal features such as regulatory sanctions and governance violations are the most influential predictors. This study demonstrates the empirical value of integrating adjudicated legal evidence into ESG modeling and offers a transparent, verifiable framework to enhance ESG risk evaluation and reduce information asymmetry in sustainability assessments. Full article
(This article belongs to the Special Issue Systems Analysis of Enterprise Sustainability: Second Edition)

27 pages, 2800 KB  
Article
A Hierarchical Multi-Feature Point Cloud Lithology Identification Method Based on Feature-Preserved Compressive Sampling (FPCS)
by Xiaolei Duan, Ran Jing, Yanlin Shao, Yuangang Liu, Binqing Gan, Peijin Li and Longfan Li
Sensors 2025, 25(17), 5549; https://doi.org/10.3390/s25175549 - 5 Sep 2025
Abstract
Lithology identification is a critical technology for geological resource exploration and engineering safety assessment. However, traditional methods suffer from insufficient feature representation and low classification accuracy due to challenges such as weathering, vegetation cover, and spectral overlap in complex sedimentary rock regions. This study proposes a hierarchical multi-feature random forest algorithm based on Feature-Preserved Compressive Sampling (FPCS). Using 3D laser point cloud data from the Manas River outcrop in the southern margin of the Junggar Basin as the test area, we integrate graph signal processing and multi-scale feature fusion to construct a high-precision lithology identification model. The FPCS method establishes a geologically adaptive graph model constrained by geodesic distance and gradient-sensitive weighting, employing a three-tier graph filter bank (low-pass, band-pass, and high-pass) to extract macroscopic morphology, interface gradients, and microscopic fracture features of rock layers. A dynamic gated fusion mechanism optimizes multi-level feature weights, significantly improving identification accuracy in lithological transition zones. Experimental results on five million test samples demonstrate an overall accuracy (OA) of 95.6% and a mean accuracy (mAcc) of 94.3%, representing improvements of 36.1% and 20.5%, respectively, over the PointNet model. These findings confirm the robust engineering applicability of the FPCS-based hierarchical multi-feature approach for point cloud lithology identification. Full article
(This article belongs to the Section Remote Sensors)
23 pages, 2435 KB  
Article
Explainable Deep Kernel Learning for Interpretable Automatic Modulation Classification
by Carlos Enrique Mosquera-Trujillo, Juan Camilo Lugo-Rojas, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(9), 372; https://doi.org/10.3390/computers14090372 - 5 Sep 2025
Abstract
Modern wireless communication systems increasingly rely on Automatic Modulation Classification (AMC) to enhance reliability and adaptability, especially in the presence of severe signal degradation. However, despite significant progress driven by deep learning, many AMC models still struggle with high computational overhead, suboptimal performance under low-signal-to-noise conditions, and limited interpretability, factors that hinder their deployment in real-time, resource-constrained environments. To address these challenges, we propose the Convolutional Random Fourier Features with Denoising Thresholding Network (CRFFDT-Net), a compact and interpretable deep kernel architecture that integrates Convolutional Random Fourier Features (CRFFSinCos), an automatic threshold-based denoising module, and a hybrid time-domain feature extractor composed of CNN and GRU layers. Our approach is validated on the RadioML 2016.10A benchmark dataset, encompassing eleven modulation types across a wide signal-to-noise ratio (SNR) spectrum. Experimental results demonstrate that CRFFDT-Net achieves an average classification accuracy that is statistically comparable to state-of-the-art models, while requiring significantly fewer parameters and offering lower inference latency. This highlights an exceptional accuracy–complexity trade-off. Moreover, interpretability analysis using GradCAM++ highlights the pivotal role of the Convolutional Random Fourier Features in the representation learning process, providing valuable insight into the model’s decision-making. These results underscore the promise of CRFFDT-Net as a lightweight and explainable solution for AMC in real-world, low-power communication systems. Full article
(This article belongs to the Special Issue AI in Complex Engineering Systems)
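The random Fourier feature map that CRFFSinCos-style layers build on can be illustrated in a few lines of NumPy: random sin/cos projections whose inner products approximate an RBF kernel. The dimensions and kernel width below are illustrative, not taken from the paper:

```python
# Minimal random Fourier feature (RFF) sketch: a sin/cos feature map whose
# dot products approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
import numpy as np

rng = np.random.default_rng(0)
d, D, gamma = 8, 2048, 0.5          # input dim, feature count, kernel width

# For exp(-gamma * ||delta||^2), frequencies are drawn from N(0, 2 * gamma).
W = rng.normal(0.0, np.sqrt(2 * gamma), size=(d, D))

def rff(x):
    # z(x) = sqrt(1/D) * [cos(x W), sin(x W)];  z(x) . z(y) ~ k(x, y)
    proj = x @ W
    return np.sqrt(1.0 / D) * np.concatenate([np.cos(proj), np.sin(proj)])

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.sum((x - y) ** 2))   # true RBF kernel value
approx = rff(x) @ rff(y)                        # Monte Carlo approximation
print(exact, approx)
```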

25 pages, 1035 KB  
Article
A Strength Allocation Bayesian Game Method for Swarming Unmanned Systems
by Lingwei Li and Bangbang Ren
Drones 2025, 9(9), 626; https://doi.org/10.3390/drones9090626 - 5 Sep 2025
Abstract
This paper investigates a swarming strength allocation Bayesian game approach under incomplete information to address the high-value targets protection problem of swarming unmanned systems. The swarming strength allocation Bayesian game model is established by analyzing the non-zero sum incomplete information game mechanism during the protection process, considering high-tech and low-tech interception players. The model incorporates a game benefit quantification method based on an improved Lanchester equation. The method regards massive swarm individuals as a collective unit for overall cost calculation, thus avoiding the curse of dimensionality from increasing numbers of individuals. Based on it, a Bayesian Nash equilibrium solving approach is presented to determine the optimal swarming strength allocation for the protection player. Finally, compared with random allocation, greedy heuristic, rule-based assignment, and Colonel Blotto game, the simulations demonstrate the proposed method’s robustness in large-scale strength allocation. Full article
(This article belongs to the Collection Drones for Security and Defense Applications)
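For context, the classical Lanchester square law that the paper's improved equation extends can be sketched as a forward-Euler integration; the attrition rates and force sizes are illustrative, and the paper's collective-unit benefit quantification is not reproduced here:

```python
# Textbook Lanchester square law (dR/dt = -b*B, dB/dt = -r*R), integrated
# with forward Euler. The paper uses an *improved* Lanchester equation for
# game-benefit quantification; this shows only the classical form.

def lanchester(R0, B0, r, b, dt=0.01, steps=10000):
    R, B = float(R0), float(B0)
    for _ in range(steps):
        R, B = R - b * B * dt, B - r * R * dt   # old values on the RHS
        if R <= 0 or B <= 0:
            break
    return max(R, 0.0), max(B, 0.0)

# Equal per-unit effectiveness: the larger swarm should prevail, with
# survivors close to sqrt(R0**2 - B0**2) = 60 under the square law.
R_end, B_end = lanchester(R0=100, B0=80, r=0.05, b=0.05)
print(R_end, B_end)
```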

0 pages, 832 KB  
Proceeding Paper
Heart Failure Prediction Through a Comparative Study of Machine Learning and Deep Learning Models
by Mohid Qadeer, Rizwan Ayaz and Muhammad Ikhsan Thohir
Eng. Proc. 2025, 107(1), 61; https://doi.org/10.3390/engproc2025107061 - 4 Sep 2025
Abstract
The heart is essential to human life, so it is important to protect it and to understand the damage it can sustain; diseases of the heart can ultimately lead to heart failure. To help address this, a tool for predicting survival is needed. This study explores the use of several classification models for forecasting heart failure outcomes using the Heart Failure Clinical Records dataset. The study contrasts a deep learning (DL) model, the Convolutional Neural Network (CNN), with several machine learning models, including Random Forest (RF), K-Nearest Neighbors (KNN), Decision Tree (DT), and Naïve Bayes (NB). Various data processing techniques, such as standard scaling and the Synthetic Minority Oversampling Technique (SMOTE), are used to improve prediction accuracy. The CNN model performs best, achieving 99% accuracy, while the best-performing ML model, Naïve Bayes, reaches 92.57%. This shows that deep learning provides better predictions of heart failure, making it a useful tool for early detection and better patient care. Full article
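A minimal sketch of this kind of model comparison, on synthetic data rather than the Heart Failure Clinical Records dataset; SMOTE and the CNN are omitted, so this only illustrates the scaling-plus-classifier pattern with two of the ML baselines:

```python
# Hedged sketch of a heart-failure-style model comparison. The data below is
# generated, not clinical; SMOTE and the CNN from the study are omitted.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for name, model in [("NB", GaussianNB()),
                    ("RF", RandomForestClassifier(random_state=0))]:
    # Standard scaling before each classifier, as in the study's pipeline.
    pipe = make_pipeline(StandardScaler(), model)
    scores[name] = pipe.fit(X_tr, y_tr).score(X_te, y_te)
print(scores)
```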

33 pages, 21287 KB  
Article
Interactive, Shallow Machine Learning-Based Semantic Segmentation of 2D and 3D Geophysical Data from Archaeological Sites
by Lieven Verdonck, Michel Dabas and Marc Bui
Remote Sens. 2025, 17(17), 3092; https://doi.org/10.3390/rs17173092 - 4 Sep 2025
Abstract
In recent decades, technological developments in archaeological geophysics have led to growing data volumes, so that an important bottleneck is now at the stage of data interpretation. The manual delineation and classification of anomalies are time-consuming, and different methods for (semi-)automatic image segmentation have been proposed, based on explicitly formulated rulesets or deep convolutional neural networks (DCNNs). So far, these have not been used widely in archaeological geophysics because of the complexity of the segmentation task (due to the low contrast between archaeological structures and background and the low predictability of the targets). Techniques based on shallow machine learning (e.g., random forests, RFs) have been explored very little in archaeological geophysics, although they are less case-specific than most rule-based methods, do not require large training sets as is the case for DCNNs, and can easily handle 3D data. In this paper, we show their potential for geophysical data analysis. For the classification on the pixel level, we use ilastik, an open-source segmentation tool developed in medical imaging. Algorithms for object classification, manual reclassification, post-processing, vectorisation, and georeferencing were brought together in a Jupyter Notebook, available on GitHub (version 7.3.2). To assess the accuracy of the RF classification applied to geophysical datasets, we compare it with manual interpretation. A quantitative evaluation using the mean intersection over union metric results in scores of ~60%, which only slightly increases after the manual correction of the RF classification results. Remarkably, a similar score results from the comparison between independent manual interpretations. This observation illustrates that quantitative metrics are not a panacea for evaluating machine-generated geophysical data interpretation in archaeology, which is characterised by a significant degree of uncertainty. It also raises the question of how the semantic segmentation of geophysical data (whether carried out manually or with the aid of machine learning) can best be evaluated. Full article
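The mean intersection-over-union metric used in this evaluation can be computed as below; the label maps are tiny made-up arrays, not geophysical data:

```python
# Mean intersection over union (mIoU) for semantic segmentation maps,
# the metric used to compare RF output with manual interpretation.
import numpy as np

def mean_iou(pred, truth, n_classes):
    ious = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union:                      # skip classes absent from both maps
            ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 3x3 label maps (made-up, not geophysical data).
pred  = np.array([[0, 0, 1], [1, 1, 2], [2, 2, 2]])
truth = np.array([[0, 0, 1], [0, 1, 2], [2, 2, 2]])
print(mean_iou(pred, truth, n_classes=3))
```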

21 pages, 5406 KB  
Article
Optimizing Dam Detection in Large Areas: A Hybrid RF-YOLOv11 Framework with Candidate Area Delineation
by Chenyao Qu, Yifei Liu, Zhimin Wu and Wei Wang
Sensors 2025, 25(17), 5507; https://doi.org/10.3390/s25175507 - 4 Sep 2025
Abstract
As critical infrastructure for flood control and disaster mitigation, the completeness of a dam spatial database directly impacts regional emergency disaster response. However, existing dam data in some developing countries suffer from severe gaps and outdated information, particularly concerning small- and medium-sized dams, hindering rapid response during disasters. There is an urgent need to improve the physical dam database and implement dynamic monitoring. Yet, current remote sensing identification methods face limitations, including a lack of diverse dam samples, limited analysis of geographical factors, and low efficiency in full-image processing, making it difficult to efficiently enhance dam databases. To address these issues, this study proposes a dam extraction framework integrating comprehensive geographical factor analysis with deep learning detection, validated in Sindh Province, Pakistan. Firstly, multiple geographical factors were fused using the Random Forest algorithm to generate a dam existence probability map. High-probability candidate areas were delineated using dynamic threshold segmentation (precision: 0.90, recall: 0.76, AUC: 0.86). Subsequently, OpenStreetMap (OSM) water body data excluded non-dam potential areas, further narrowing the candidate areas. Finally, a dam image dataset was constructed to train a dam identification model based on YOLOv11, achieving an mAP50 of 0.85. This trained model was then applied to high-resolution remote sensing imagery of the candidate areas for precise identification. Ultimately, 16 previously unrecorded small and medium-sized dams were identified in Sindh Province, enhancing its dam location database. Experiments demonstrate that this method, through the synergistic optimization of geographical constraints and deep learning, significantly improves the efficiency and reliability of dam identification. It provides high-precision data support for dam disaster emergency response and water resource management, exhibiting strong practical utility and regional scalability. Full article

18 pages, 8670 KB  
Article
Reconstructing Net Primary Productivity in Northern Greater Khingan Range Using Tree Rings
by Yuhang Yang, Yongchun Hua, Qiuliang Zhang and Fei Wang
Plants 2025, 14(17), 2768; https://doi.org/10.3390/plants14172768 - 4 Sep 2025
Abstract
As critically important global carbon sinks, the net primary productivity (NPP) of boreal forests is crucial for understanding the terrestrial carbon cycle. However, a lack of long-term, high-resolution data has hindered progress in this field. In this study, we used a standardized tree ring chronology of Larix gmelinii to identify the dominant factors driving NPP changes in the Northern Greater Khingan Range, applying both Pearson correlation coefficients and SHAP importance values. We then integrated XGBoost and Extreme Random Forest (ERF) models to reconstruct interannual forest NPP across the region from 1968 to 2020. Our results reveal a significant correlation between NPP and tree radial growth, with both processes dominated by growing season drought. The combination of machine learning and tree ring methods proved to be a reliable approach, with the XGBoost model achieving higher reconstruction accuracy than the ERF model. The reconstructed NPP series showed strong regional correlation with MODIS NPP products (r > 0.6) and revealed interdecadal cycles of 10, 28, and 49 years, as well as shorter periodicities of 2–8 and 15–18 years. This study establishes a novel framework for high-resolution NPP reconstruction and clarifies the response mechanisms of the boreal forest carbon cycle to climate change. Full article
(This article belongs to the Section Plant Ecology)

22 pages, 4183 KB  
Article
Estimation of PM2.5 Vertical Profiles from MAX-DOAS Observations Based on Machine Learning Algorithms
by Qihua Li, Jinyi Luo, Hanwen Qin, Shun Xia, Zhiguo Zhang, Chengzhi Xing, Wei Tan, Haoran Liu and Qihou Hu
Remote Sens. 2025, 17(17), 3063; https://doi.org/10.3390/rs17173063 - 3 Sep 2025
Abstract
The vertical profile of PM2.5 is important for understanding its secondary formation, transport, and deposition at high altitudes; it also provides important data support for studying the causes and sources of PM2.5 near the ground. Based on machine learning methods, this study fully utilized simultaneous Multi-Axis Differential Optical Absorption Spectroscopy measurements of multiple air pollutants in the atmosphere and employed the measured vertical profiles of aerosol extinction—as well as the vertical profiles of precursors such as NO2 and SO2—to evaluate the vertical distribution of PM2.5 concentration. Three machine learning models (eXtreme Gradient Boosting, Random Forest, and back-propagation neural network) were evaluated using Multi-Axis Differential Optical Absorption Spectroscopy instruments in four typical cities in China: Beijing, Lanzhou, Guangzhou, and Hefei. According to the comparison between estimated PM2.5 and in situ measurements on the ground surface in the four cities, the eXtreme Gradient Boosting model has the best estimation performance, with the Pearson correlation coefficient reaching 0.91. In addition, the in situ instrument mounted on the meteorological observation tower in Beijing was used to validate the estimated PM2.5 profile, and the Pearson correlation coefficient at each height was greater than 0.7. The average PM2.5 vertical profiles in the four typical cities all show an exponential pattern. In Beijing and Guangzhou, PM2.5 can diffuse to high altitudes between 500 and 1000 m; in Lanzhou, it can diffuse to around 1500 m, while it is primarily distributed between the near surface and 500 m in Hefei. Based on the vertical distribution of PM2.5 mass concentration in Beijing, a high-altitude PM2.5 pollutant transport event was identified from January 19th to 21st, 2021, which was not detected by ground-based in situ instruments. During this process, PM2.5 was transported within the 200–1500 m altitude range and then sank to the near surface, causing the ground-surface concentration to increase continuously. The sinking process contributed approximately 7% of the ground-surface PM2.5 per hour. Full article
(This article belongs to the Section AI Remote Sensing)

19 pages, 553 KB  
Article
Enhancing Pre-Service Teachers’ Reflective Competence Through Structured Video Annotation
by Tim Rogge and Bardo Herzig
Educ. Sci. 2025, 15(9), 1146; https://doi.org/10.3390/educsci15091146 - 3 Sep 2025
Abstract
We examined the effects of a digital reflection and feedback intervention for pre-service teachers during a five-month school placement (Praxissemester) in Germany. Three reflection formats were compared: text-based memory protocols (control), unguided viewing of self-recorded lessons, and a structured digital video annotation (DVA) format. Fifty-five secondary teacher candidates were randomized into the three conditions and completed a validated, video-based Analysis-Competence Test before and after the semester. Repeated-measures ANOVA and mixed models showed robust overall improvement in global analysis competence across all groups. For process-oriented reasoning (whole-lesson reflection), both video-based formats showed significant within-group gains that were descriptively larger than those of the text-based control, although between-condition differences were not statistically significant; for synthetic competence (focused on specific lesson situations), the annotation group and the text-only control improved significantly, whereas the video-only condition did not, with the structured annotation group achieving the largest within-group gains and a trend-level advantage in higher-order reflection. Between-group effects did not reach conventional significance in either rmANOVA or the mixed models, though trends favored the annotation scaffold. These findings suggest that time-stamped, theory-aligned scaffolds can help pre-service teachers move beyond surface-level description toward deeper, theory-informed reflection in practicum settings. Full article
(This article belongs to the Special Issue The Role of Reflection in Teaching and Learning)

26 pages, 2735 KB  
Article
Time Series Classification of Autism Spectrum Disorder Using the Light-Adapted Electroretinogram
by Sergey Chistiakov, Anton Dolganov, Paul A. Constable, Aleksei Zhdanov, Mikhail Kulyabin, Dorothy A. Thompson, Irene O. Lee, Faisal Albasu, Vasilii Borisov and Mikhail Ronkin
Bioengineering 2025, 12(9), 951; https://doi.org/10.3390/bioengineering12090951 - 2 Sep 2025
Abstract
The clinical electroretinogram (ERG) is a non-invasive diagnostic test used to assess the functional state of the retina by recording changes in the bioelectric potential following brief flashes of light. The recorded ERG waveform offers ways for diagnosing both retinal dystrophies and neurological disorders such as autism spectrum disorder (ASD), attention deficit hyperactivity disorder (ADHD), and Parkinson’s disease. In this study, different time-series-based machine learning methods were used to classify ERG signals from ASD and typically developing individuals with the aim of interpreting the decisions made by the models to understand the classification process made by the models. Among the time-series classification (TSC) algorithms, the Random Convolutional Kernel Transform (ROCKET) algorithm showed the most accurate results with the fewest number of predictive errors. For the interpretation analysis of the model predictions, the SHapley Additive exPlanations (SHAP) algorithm was applied to each of the models’ predictions, with the ROCKET and KNeighborsTimeSeriesClassifier (TS-KNN) algorithms showing more suitability for ASD classification as they provided better-defined explanations by discarding the uninformative non-physiological part of the ERG waveform baseline signal and focused on the time regions incorporating the clinically significant a- and b-waves of the ERG. With the potential broadening scope of practice for visual electrophysiology within neurological disorders, TSC may support the identification of important regions in the ERG time series to support the classification of neurological disorders and potential retinal diseases. Full article
(This article belongs to the Special Issue Retinal Biomarkers: Seeing Diseases in the Eye)
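The core ROCKET idea, convolving a series with many random kernels and keeping a couple of summary statistics per kernel, can be sketched in NumPy. Kernel count and length here are illustrative; real ROCKET also randomizes dilation, padding, and bias scaling, and feeds the features to a linear classifier:

```python
# Toy version of the ROCKET transform: random convolutional kernels with two
# pooled statistics per kernel (max and proportion of positive values, PPV).
import numpy as np

rng = np.random.default_rng(0)

def rocket_features(series, n_kernels=100, klen=9):
    feats = []
    for _ in range(n_kernels):
        w = rng.normal(size=klen)                      # random kernel weights
        b = rng.uniform(-1, 1)                         # random bias
        conv = np.convolve(series, w, mode="valid") + b
        feats.extend([conv.max(), (conv > 0).mean()])  # max + PPV per kernel
    return np.array(feats)

# Stand-in for a light-adapted ERG trace (not real clinical data).
waveform = np.sin(np.linspace(0, 6 * np.pi, 200))
f = rocket_features(waveform)
print(f.shape)
```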

30 pages, 4526 KB  
Article
Multi-Strategy Honey Badger Algorithm for Global Optimization
by Delong Guo and Huajuan Huang
Biomimetics 2025, 10(9), 581; https://doi.org/10.3390/biomimetics10090581 - 2 Sep 2025
Abstract
The Honey Badger Algorithm (HBA) is a recently proposed metaheuristic optimization algorithm inspired by the foraging behavior of honey badgers. The search mechanism of this algorithm is divided into two phases: a mining phase and a honey-seeking phase, effectively emulating the processes of exploration and exploitation within the search space. Despite its innovative approach, the Honey Badger Algorithm (HBA) faces challenges such as slow convergence rates, an imbalanced trade-off between exploration and exploitation, and a tendency to become trapped in local optima. To address these issues, we propose an enhanced version of the Honey Badger Algorithm (HBA), namely the Multi-Strategy Honey Badger Algorithm (MSHBA), which incorporates a Cubic Chaotic Mapping mechanism for population initialization. This integration aims to enhance the uniformity and diversity of the initial population distribution. In the mining and honey-seeking stages, the position of the honey badger is updated based on the best fitness value within the population. This strategy may lead to premature convergence due to population aggregation around the fittest individual. To counteract this tendency and enhance the algorithm’s global optimization capability, we introduce a random search strategy. Furthermore, an elite tangential search and a differential mutation strategy are employed after three iterations without detecting a new best value in the population, thereby enhancing the algorithm’s efficacy. A comprehensive performance evaluation, conducted across a suite of established benchmark functions, reveals that the MSHBA excels in 26 out of 29 IEEE CEC 2017 benchmarks. Subsequent statistical analysis corroborates the superior performance of the MSHBA. Moreover, the MSHBA has been successfully applied to four engineering design problems, highlighting its capability for addressing constrained engineering design challenges and outperforming other optimization algorithms in this domain. Full article
(This article belongs to the Special Issue Advances in Biological and Bio-Inspired Algorithms)
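The cubic chaotic mapping initialization step can be sketched as follows; the map form x_{k+1} = rho * x_k * (1 - x_k^2) with rho = 2.59 is one common choice, and the paper's exact parameters may differ:

```python
# Cubic chaotic map population initializer, sketching the MSHBA
# initialization step. For rho = 2.59 and x0 in (0, 1) the iterates stay
# in (0, 1), giving a chaotic (non-uniform-random) spread of positions.
import numpy as np

def cubic_chaos_init(pop_size, dim, lower, upper, rho=2.59, x0=0.3):
    x = x0
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = rho * x * (1.0 - x * x)      # chaotic sequence in (0, 1)
            pop[i, j] = lower + x * (upper - lower)
    return pop

pop = cubic_chaos_init(pop_size=30, dim=10, lower=-100.0, upper=100.0)
print(pop.shape, pop.min(), pop.max())
```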

26 pages, 1299 KB  
Article
Integrated Information System for Parking Facilities Operations and Management
by Vasile Dragu, Eugenia Alina Roman, Mircea Augustin Roşca, Floriana Cristina Oprea, Andrei-Bogdan Mironescu and Oana Maria Dinu
Systems 2025, 13(9), 769; https://doi.org/10.3390/systems13090769 - 2 Sep 2025
Abstract
Parking management and operation represent a major challenge for both users and administrators, who seek to ensure efficient utilization, accommodate as many demands as possible, and reduce maintenance costs. This paper presents a theoretical model for an integrated IT system designed for parking management and administration. The modeling process involved designing a parking facility using the AutoCAD Vehicle Tracking v25.00.2775 software package, in accordance with current design standards. To simulate system operation, a dedicated Python v2025.12.0 program was developed to assign parking spaces to arriving vehicles based on specific allocation criteria. Three allocation strategies were applied: random allocation, allocation aimed at minimizing the driving distance within the parking lot, and allocation aimed at reducing the walking distance from the assigned space to the destination. The simulation results show that, in the absence of allocation criteria, parking spaces are utilized in a quasi-uniform manner. The calculated values of variance and standard deviation are significantly lower in this case, increasing as allocation restrictions are introduced, but then returning to reduced values as the occupancy rate grows, since under intensive use the potential for controlled allocation decreases. The relationship between the number of allocations of each parking space and the applied allocation strategies was examined using Pearson and Spearman correlation coefficients. The results reveal a direct linear dependence under moderate demand and an inverse dependence under high demand—patterns consistent with situations observed in practice. The proposed software application provides a practical tool for effective parking management, contributing to the rational use of parking spaces, reduced travel distances within the facility, lower fuel consumption, and consequently, reduced pollution. Full article
(This article belongs to the Special Issue Modelling and Simulation of Transportation Systems)
18 pages, 3463 KB  
Article
EMG-Based Recognition of Lower Limb Movements in Athletes: A Comparative Study of Classification Techniques
by Kudratjon Zohirov, Sarvar Makhmudjanov, Feruz Ruziboev, Golib Berdiev, Mirjakhon Temirov, Gulrukh Sherboboyeva, Firuza Achilova, Gulmira Pardayeva and Sardor Boykobilov
Signals 2025, 6(3), 45; https://doi.org/10.3390/signals6030045 - 2 Sep 2025
Abstract
In this article, electromyography (EMG) signals arising from movements of the lower limb of the leg (LLL) (walking, sitting, and walking up and down stairs) were classified. In the data collection process, 25 athletes aged 15–22 were involved, and two datasets (DS) were formed using FreeEMG and Biosignalsplux devices. Six important time- and frequency-domain features were extracted from the EMG signals: RMS (Root Mean Square), MAV (Mean Absolute Value), WL (Waveform Length), ZC (Zero Crossings), MDF (Median Frequency), and SSC (Slope Sign Changes). Several classification algorithms were used to detect and classify movements, including RF (Random Forest), NN (Neural Network), SVM (Support Vector Machine), k-NN (k-Nearest Neighbors), and LR (Logistic Regression). Analysis of the experimental results showed that the RF algorithm achieved the highest accuracy, 98.7%, on the DS collected with the Biosignalsplux device, demonstrating superior performance in motion recognition. The results obtained from the open systems used in signal processing enable real-time monitoring of athletes' physical condition, which plays a crucial role in accurately and rapidly determining the degree of muscle fatigue and the level of physical stress experienced during training sessions, thereby allowing more effective performance control and timely prevention of injuries. Full article
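The time-domain features named in the abstract can be computed directly from a signal window. The sketch below is illustrative: the zero-crossing threshold and the example window are assumptions, and MDF is omitted because it requires a power-spectrum estimate rather than pure time-domain arithmetic.

```python
import math

def emg_features(x, zc_thresh=0.01):
    """Common time-domain features of one EMG window (list of floats).
    Thresholds and windowing here are assumptions, not the paper's settings."""
    n = len(x)
    rms = math.sqrt(sum(v * v for v in x) / n)             # Root Mean Square
    mav = sum(abs(v) for v in x) / n                       # Mean Absolute Value
    wl = sum(abs(x[i + 1] - x[i]) for i in range(n - 1))   # Waveform Length
    # Zero Crossings: sign changes whose amplitude step exceeds a noise threshold
    zc = sum(1 for i in range(n - 1)
             if x[i] * x[i + 1] < 0 and abs(x[i] - x[i + 1]) >= zc_thresh)
    # Slope Sign Changes: local direction reversals of the waveform
    ssc = sum(1 for i in range(1, n - 1)
              if (x[i] - x[i - 1]) * (x[i] - x[i + 1]) > 0)
    return {"RMS": rms, "MAV": mav, "WL": wl, "ZC": zc, "SSC": ssc}

window = [0.05, -0.20, 0.15, -0.10, 0.30, -0.25, 0.10, -0.05]
print(emg_features(window))
```

In practice such features would be computed per sliding window and per channel, then fed as a feature vector to a classifier such as Random Forest.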
24 pages, 2532 KB  
Article
Improved Particle Swarm Optimization Based on Fuzzy Controller Fusion of Multiple Strategies for Multi-Robot Path Planning
by Jialing Hu, Yanqi Zheng, Siwei Wang and Changjun Zhou
Big Data Cogn. Comput. 2025, 9(9), 229; https://doi.org/10.3390/bdcc9090229 - 2 Sep 2025
Abstract
Robots play a crucial role in smart cities and are ubiquitous in daily life, especially in complex environments where multiple robots must cooperate to solve problems. Researchers have found that swarm intelligence optimization algorithms perform well in robot path planning, but traditional variants struggle with difficult instances of the problem. This paper therefore extends particle swarm optimization with a fuzzy controller, a mutation factor, exponential noise, and other strategies. Based on the moving speed of particles at different stages of the algorithm, a fuzzy controller derives the individual and social learning factors of each particle. A new dynamically balanced mutation factor is designed using the leader particle and a random particle, with counters of consecutive fitness updates and non-updates controlling the proportion of elite and random individuals. Finally, exponential noise is applied to the population matrix every 50 iterations to balance the algorithm's local search and global exploration abilities. To evaluate the proposed algorithm, simulations were conducted on simple scenarios, complex scenarios, and random maps composed of different numbers of static and dynamic obstacles, and the algorithm was compared with eight others. The planned paths show a smaller average path deviation error, a shorter average distance to untraveled targets, fewer robot movement steps, and shorter path lengths, outperforming the other eight algorithms. This superiority in multi-robot cooperative path planning makes the method practical in many fields, such as logistics and distribution and industrial automation. Full article
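Two of the ideas above can be sketched in simplified form on a benchmark objective rather than a path-planning map: a speed-dependent rule stands in for the fuzzy controller that sets the learning factors, and exponential noise perturbs the swarm every 50 iterations. All parameter values (inertia weight, thresholds, noise rate) are illustrative assumptions, not the paper's settings.

```python
import random

def pso_sphere(dim=5, n_particles=20, iters=200, seed=1):
    """Minimal PSO on the sphere function f(x) = sum(x_i^2)."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        for i in range(n_particles):
            speed = sum(abs(v) for v in vel[i])
            # Stand-in for the fuzzy controller: fast particles exploit
            # (higher social factor c2), slow particles explore (higher c1).
            c1, c2 = (1.0, 2.0) if speed > 1.0 else (2.0, 1.0)
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
        if (t + 1) % 50 == 0:
            # Exponential-noise mutation of positions every 50 iterations;
            # elite memory survives in pbest/gbest, so progress is kept.
            for i in range(n_particles):
                pos[i] = [x + rng.choice([-1, 1]) * rng.expovariate(5.0)
                          for x in pos[i]]
    return gbest, f(gbest)

best, val = pso_sphere()
print(val)  # best objective value found
```

The periodic mutation re-injects diversity without discarding the personal and global bests, which is one simple way to trade off local exploitation against global exploration.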