Search Results (3,039)

Search Parameters:
Keywords = state machine approach

15 pages, 1603 KB  
Article
EEG-Powered UAV Control via Attention Mechanisms
by Jingming Gong, He Liu, Liangyu Zhao, Taiyo Maeda and Jianting Cao
Appl. Sci. 2025, 15(19), 10714; https://doi.org/10.3390/app151910714 - 4 Oct 2025
Abstract
This paper explores the development and implementation of a brain–computer interface (BCI) system that utilizes electroencephalogram (EEG) signals for real-time monitoring of attention levels to control unmanned aerial vehicles (UAVs). We propose an innovative approach that combines spectral power analysis and machine learning classification techniques to translate cognitive states into precise UAV command signals. This method overcomes the limitations of traditional threshold-based approaches by adapting to individual differences and improving classification accuracy. Through comprehensive testing with 20 participants in both controlled laboratory environments and real-world scenarios, our system achieved an 85% accuracy rate in distinguishing between high and low attention states and successfully mapped these cognitive states to vertical UAV movements. Experimental results demonstrate that our machine learning-based classification method significantly enhances system robustness and adaptability in noisy environments. This research not only advances UAV operability through neural interfaces but also broadens the practical applications of BCI technology in aviation. Our findings contribute to the expanding field of neurotechnology and underscore the potential for neural signal processing and machine learning integration to revolutionize human–machine interaction in industries where dynamic relationships between cognitive states and automated systems are beneficial. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
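A threshold-style baseline for the attention signal described in this abstract can be sketched from spectral band powers alone; the paper replaces exactly this kind of fixed rule with a trained classifier. The sampling rate, band edges, and beta/alpha "attention index" below are illustrative assumptions, not the authors' pipeline:

```python
# Band power via a naive DFT on a one-second, single-channel EEG window.
# Assumptions (not from the paper): 128 Hz sampling, alpha = 8-12 Hz,
# beta = 13-30 Hz, and a beta/alpha ratio as the attention index.
import math

FS = 128   # sampling rate (Hz); with N == FS, DFT bin k sits at k Hz
N = 128    # one-second analysis window

def band_power(x, lo_hz, hi_hz):
    """Sum of squared DFT magnitudes over the bins in [lo_hz, hi_hz]."""
    total = 0.0
    for k in range(lo_hz, hi_hz + 1):
        re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        total += re * re + im * im
    return total

def attention_index(x):
    return band_power(x, 13, 30) / (band_power(x, 8, 12) + 1e-9)

# Synthetic windows: "focused" is beta-dominant, "relaxed" alpha-dominant.
focused = [math.sin(2 * math.pi * 20 * n / FS) for n in range(N)]
relaxed = [math.sin(2 * math.pi * 10 * n / FS) for n in range(N)]

print(attention_index(focused) > 1.0, attention_index(relaxed) < 1.0)
```

A fixed cutoff on this index is the threshold approach the abstract says fails to adapt to individual differences; training a per-user classifier on the same band-power features is the adaptation step.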
18 pages, 1641 KB  
Article
Using Non-Lipschitz Signum-Based Functions for Distributed Optimization and Machine Learning: Trade-Off Between Convergence Rate and Optimality Gap
by Mohammadreza Doostmohammadian, Amir Ahmad Ghods, Alireza Aghasi, Zulfiya R. Gabidullina and Hamid R. Rabiee
Math. Comput. Appl. 2025, 30(5), 108; https://doi.org/10.3390/mca30050108 - 4 Oct 2025
Abstract
In recent years, the prevalence of large-scale datasets and the demand for sophisticated learning models have necessitated the development of efficient distributed machine learning (ML) solutions. Convergence speed is a critical factor influencing the practicality and effectiveness of these distributed frameworks. Recently, non-Lipschitz continuous optimization algorithms have been proposed to improve the slow convergence rate of the existing linear solutions. The use of signum-based functions was previously considered in consensus and control literature to reach fast convergence in the prescribed time and also to provide robust algorithms to noisy/outlier data. However, as shown in this work, these algorithms lead to an optimality gap and steady-state residual of the objective function in discrete-time setup. This motivates us to investigate the distributed optimization and ML algorithms in terms of trade-off between convergence rate and optimality gap. In this direction, we specifically consider the distributed regression problem and check its convergence rate by applying both linear and non-Lipschitz signum-based functions. We check our distributed regression approach by extensive simulations. Our results show that although adopting signum-based functions may give faster convergence, it results in large optimality gaps. The findings presented in this paper may contribute to and advance the ongoing discourse of similar distributed algorithms, e.g., for distributed constrained optimization and distributed estimation. Full article
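The trade-off this abstract describes — fast, distance-independent progress far from the optimum but a steady-state residual near it — already shows up on a one-dimensional toy objective. A minimal sketch under assumed conditions (quadratic f(x) = (x − 3)², fixed step size; nothing here reproduces the paper's distributed setup):

```python
# Compare a linear (Lipschitz) gradient step with a signum-based step on
# f(x) = (x - 3)^2. The sign update moves at a constant rate regardless of
# distance, then oscillates in a band of width ~alpha around the optimum.

def sign(v):
    return (v > 0) - (v < 0)

def grad(x):                      # f'(x) for f(x) = (x - 3)^2
    return 2.0 * (x - 3.0)

def run(update, steps=200, x0=10.05):
    x = x0
    for _ in range(steps):
        x = update(x)
    return x

alpha = 0.1
x_lin = run(lambda x: x - alpha * grad(x))        # linear update
x_sgn = run(lambda x: x - alpha * sign(grad(x)))  # signum update

gap_lin = abs(x_lin - 3.0)   # vanishes geometrically
gap_sgn = abs(x_sgn - 3.0)   # stalls near alpha/2: the optimality gap
print(gap_lin < 1e-6, 0.02 < gap_sgn < 0.12)
```

Tracking |x − 3| per step instead of only the endpoint also shows the signum run reaching the residual band in roughly distance/α steps, which is the prescribed-time convergence side of the trade-off.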
87 pages, 2494 KB  
Systematic Review
A Systematic Review of Models for Fire Spread in Wildfires by Spotting
by Edna Cardoso, Domingos Xavier Viegas and António Gameiro Lopes
Fire 2025, 8(10), 392; https://doi.org/10.3390/fire8100392 - 3 Oct 2025
Abstract
Fire spotting (FS), the process by which firebrands are lofted, transported, and ignite new fires ahead of the main flame front, plays a critical role in escalating extreme wildfire events. This systematic literature review (SLR) analyzes peer-reviewed articles and book chapters published in English from 2000 to 2023 to assess the evolution of FS models, identify prevailing methodologies, and highlight existing gaps. Following a PRISMA-guided approach, 102 studies were selected from Scopus, Web of Science, and Google Scholar, with searches conducted up to December 2023. The results indicate a marked increase in scientific interest after 2010. Thematic and bibliometric analyses reveal a dominant research focus on integrating the FS model within existing and new fire spread models, as well as empirical research and individual FS phases, particularly firebrand transport and ignition. However, generation and ignition FS phases, physics-based FS models (encompassing all FS phases), and integrated operational models remain underexplored. Modeling strategies have advanced from empirical and semi-empirical approaches to machine learning and physical-mechanistic simulations. Despite advancements, most models still struggle to replicate the stochastic and nonlinear nature of spotting. Geographically, research is concentrated in the United States, Australia, and parts of Europe, with notable gaps in representation across the Global South. This review underscores the need for interdisciplinary, data-driven, and regionally inclusive approaches to improve the predictive accuracy and operational applicability of FS models under future climate scenarios. Full article
34 pages, 3263 KB  
Systematic Review
From Network Sensors to Intelligent Systems: A Decade-Long Review of Swarm Robotics Technologies
by Fouad Chaouki Refis, Nassim Ahmed Mahammedi, Chaker Abdelaziz Kerrache and Sahraoui Dhelim
Sensors 2025, 25(19), 6115; https://doi.org/10.3390/s25196115 - 3 Oct 2025
Abstract
Swarm Robotics (SR) is a relatively new field, inspired by the collective intelligence of social insects. It involves using local rules to control and coordinate large groups (swarms) of relatively simple physical robots. Important tasks that robot swarms can handle include demining, search, rescue, and cleaning up toxic spills. Over the past decade, the research effort in the field of Swarm Robotics has intensified significantly in terms of hardware, software, and systems integrated developments, yet significant challenges remain, particularly regarding standardization, scalability, and cost-effective deployment. To contextualize the state of Swarm Robotics technologies, this paper provides a systematic literature review (SLR) of Swarm Robotic technologies published from 2014 to 2024, with an emphasis on how hardware and software subsystems have co-evolved. This work provides an overview of 40 studies in peer-reviewed journals along with a well-defined and replicable systematic review protocol. The protocol describes criteria for including and excluding studies and outlines a data extraction approach. We explored trends in sensor hardware, actuation methods, communication devices, and energy systems, as well as an examination of software platforms to produce swarm behavior, covering meta-heuristic algorithms and generic middleware platforms such as ROS. Our results demonstrate how dependent hardware and software are to achieve Swarm Intelligence, the lack of uniform standards for their design, and the pragmatic limits which hinder scalability and deployment. We conclude by noting ongoing challenges and proposing future directions for developing interoperable, energy-efficient Swarm Robotics (SR) systems incorporating machine learning (ML). Full article
(This article belongs to the Special Issue Cooperative Perception and Planning for Swarm Robot Systems)
32 pages, 2020 KB  
Article
From Trends to Insights: A Text Mining Analysis of Solar Energy Forecasting (2017–2023)
by Mohammed Asloune, Gilles Notton and Cyril Voyant
Energies 2025, 18(19), 5231; https://doi.org/10.3390/en18195231 - 1 Oct 2025
Abstract
This study aims to highlight key figures and organizations in solar energy forecasting research, including the most prominent authors, journals, and countries. It also clarifies commonly used abbreviations in the field, with a focus on forecasting methods and techniques, the form and type of solar energy forecasting outputs, and the associated error metrics. Building on previous research that analyzed data up to 2017, the study updates findings to include information through 2023, incorporating metadata from 500 articles to identify key figures and organizations, along with 276 full-text articles analyzed for abbreviations. The application of text mining offers a concise yet comprehensive overview of the latest trends and insights in solar energy forecasting. The key findings of this study are threefold: First, China, followed by the United States of America and India, is the leading country in solar energy forecasting research, with shifts observed compared to the pre-2017 period. Second, numerous new abbreviations related to machine learning, particularly deep learning, have emerged in solar energy forecasting since before 2017, with Long Short-Term Memory, Convolutional Neural Networks, and Recurrent Neural Networks the most prominent. Finally, deterministic error metrics are mentioned nearly 11 times more frequently than probabilistic ones. Furthermore, perspectives on the practices and approaches of solar energy forecasting companies are also examined. Full article
(This article belongs to the Special Issue Solar Energy Utilization Toward Sustainable Urban Futures)
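One step of the abbreviation analysis this abstract describes can be sketched as a definition-pattern match followed by mention counting. The regex and the toy corpus are illustrative assumptions; the study's actual extraction pipeline is not reproduced here:

```python
# Extract abbreviations defined as in "Long Short-Term Memory (LSTM)" and
# count how often each one is mentioned across a toy corpus.
import re
from collections import Counter

PATTERN = re.compile(r"\(([A-Z]{2,})\)")   # 2+ capital letters in parentheses

corpus = [
    "Long Short-Term Memory (LSTM) networks outperform persistence models.",
    "We compare LSTM against Convolutional Neural Networks (CNN).",
    "A hybrid CNN-LSTM is tuned; Recurrent Neural Networks (RNN) follow.",
]

# Pass 1: collect every abbreviation that is explicitly defined somewhere.
defined = set()
for text in corpus:
    defined.update(PATTERN.findall(text))

# Pass 2: count all whole-word mentions of each defined abbreviation.
mention = Counter()
for abbr in defined:
    for text in corpus:
        mention[abbr] += len(re.findall(r"\b" + re.escape(abbr) + r"\b", text))

print(sorted(mention.items()))   # [('CNN', 2), ('LSTM', 3), ('RNN', 1)]
```

Run over full-text articles instead of three sentences, frequency tables like this are what surface LSTM, CNN, and RNN as the dominant post-2017 abbreviations.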
14 pages, 1037 KB  
Article
MMSE-Based Dementia Prediction: Deep vs. Traditional Models
by Yuyeon Jung, Yeji Park, Jaehyun Jo and Jinhyoung Jeong
Life 2025, 15(10), 1544; https://doi.org/10.3390/life15101544 - 1 Oct 2025
Abstract
Early and accurate diagnosis of dementia is essential to improving patient outcomes and reducing societal burden. The Mini-Mental State Examination (MMSE) is widely used to assess cognitive function, yet traditional statistical and machine learning approaches often face limitations in capturing nonlinear interactions and subtle decline patterns. This study developed a novel deep learning-based dementia prediction model using MMSE data collected from domestic clinical settings and compared its performance with traditional machine learning models. A notable strength of this work lies in its use of item-level MMSE features combined with explainable AI (SHAP analysis), enabling both high predictive accuracy and clinical interpretability—an advancement over prior approaches that primarily relied on total scores or linear modeling. Data from 164 participants, classified into cognitively normal, mild cognitive impairment (MCI), and dementia groups, were analyzed. Individual MMSE items and total scores were used as input features, and the dataset was divided into training and validation sets (8:2 split). A fully connected neural network with regularization techniques was constructed and evaluated alongside Random Forest and support vector machine (SVM) classifiers. Model performance was assessed using accuracy, F1-score, confusion matrices, and receiver operating characteristic (ROC) curves. The deep learning model achieved the highest performance (accuracy 0.90, F1-score 0.90), surpassing Random Forest (0.86) and SVM (0.82). SHAP analysis identified Q11 (immediate memory), Q12 (calculation), and Q17 (drawing shapes) as the most influential variables, aligning with clinical diagnostic practices. These findings suggest that deep learning not only enhances predictive accuracy but also offers interpretable insights aligned with clinical reasoning, underscoring its potential utility as a reliable tool for early dementia diagnosis. 
However, the study is limited by the use of data from a single clinical site with a relatively small sample size, which may restrict generalizability. Future research should validate the model using larger, multi-institutional, and multimodal datasets to strengthen clinical applicability and robustness. Full article
(This article belongs to the Section Biochemistry, Biophysics and Computational Biology)
49 pages, 517 KB  
Review
A Comprehensive Review of Data-Driven Techniques for Air Pollution Concentration Forecasting
by Jaroslaw Bernacki and Rafał Scherer
Sensors 2025, 25(19), 6044; https://doi.org/10.3390/s25196044 - 1 Oct 2025
Abstract
Air quality is crucial for public health and the environment, which makes it important to both monitor and forecast the level of pollution. Polluted air, containing harmful substances such as particulate matter, nitrogen oxides, or ozone, can lead to serious respiratory and circulatory diseases, especially in people at risk. Air quality forecasting allows for early warning of smog episodes and taking actions to reduce pollutant emissions. In this article, we review air pollutant concentration forecasting methods, analyzing both classical statistical approaches and modern techniques based on artificial intelligence, including deep models, neural networks, and machine learning, as well as advanced sensing technologies. This work aims to present the current state of research and identify the most promising directions of development in air quality modeling, which can contribute to more effective health and environmental protection. According to the reviewed literature, deep learning–based models, particularly hybrid and attention-driven architectures, emerge as the most promising approaches, while persistent challenges such as data quality, interpretability, and integration of heterogeneous sensing systems define the open issues for future research. Full article
(This article belongs to the Special Issue Smart Gas Sensor Applications in Environmental Change Monitoring)
43 pages, 1895 KB  
Article
Bi-Level Dependent-Chance Goal Programming for Paper Manufacturing Tactical Planning: A Reinforcement-Learning-Enhanced Approach
by Yassine Boutmir, Rachid Bannari, Abdelfettah Bannari, Naoufal Rouky, Othmane Benmoussa and Fayçal Fedouaki
Symmetry 2025, 17(10), 1624; https://doi.org/10.3390/sym17101624 - 1 Oct 2025
Abstract
Tactical production–distribution planning in paper manufacturing involves hierarchical decision-making under hybrid uncertainty, where aleatory randomness (demand fluctuations, machine variations) and epistemic uncertainty (expert judgments, market trends) simultaneously affect operations. Existing approaches fail to address the bi-level nature under hybrid uncertainty, treating production and distribution decisions independently or using single-paradigm uncertainty models. This research develops a bi-level dependent-chance goal programming framework based on uncertain random theory, where the upper level optimizes distribution decisions while the lower level handles production decisions. The framework exploits structural symmetries through machine interchangeability, symmetric transportation routes, and temporal symmetry, incorporating symmetry-breaking constraints to eliminate redundant solutions. A hybrid intelligent algorithm (HIA) integrates uncertain random simulation with a Reinforcement-Learning-enhanced Arithmetic Optimization Algorithm (RL-AOA) for bi-level coordination, where Q-learning enables adaptive parameter tuning. The RL component utilizes symmetric state representations to maintain solution quality across symmetric transformations. Computational experiments demonstrate HIA’s superiority over standard metaheuristics, achieving 3.2–7.8% solution quality improvement and 18.5% computational time reduction. Symmetry exploitation reduces search space by approximately 35%. The framework provides probability-based performance metrics with optimal confidence levels (0.82–0.87), offering 2.8–4.5% annual cost savings potential. Full article
24 pages, 18326 KB  
Article
A Human Intention and Motion Prediction Framework for Applications in Human-Centric Digital Twins
by Usman Asad, Azfar Khalid, Waqas Akbar Lughmani, Shummaila Rasheed and Muhammad Mahabat Khan
Biomimetics 2025, 10(10), 656; https://doi.org/10.3390/biomimetics10100656 - 1 Oct 2025
Abstract
In manufacturing settings where humans and machines collaborate, understanding and predicting human intention is crucial for enabling the seamless execution of tasks. This knowledge is the basis for creating an intelligent, symbiotic, and collaborative environment. However, current foundation models often fall short in directly anticipating complex tasks and producing contextually appropriate motion. This paper proposes a modular framework that investigates strategies for structuring task knowledge and engineering context-rich prompts to guide Vision–Language Models in understanding and predicting human intention in semi-structured environments. Our evaluation, conducted across three use cases of varying complexity, reveals a critical tradeoff between prediction accuracy and latency. We demonstrate that a Rolling Context Window strategy, which uses a history of frames and the previously predicted state, achieves a strong balance of performance and efficiency. This approach significantly outperforms single-image inputs and computationally expensive in-context learning methods. Furthermore, incorporating egocentric video views yields a substantial 10.7% performance increase in complex tasks. For short-term motion forecasting, we show that the accuracy of joint position estimates is enhanced by using historical pose, gaze data, and in-context examples. Full article
14 pages, 1349 KB  
Article
ProToxin, a Predictor of Protein Toxicity
by Yang Yang, Haohan Zhang and Mauno Vihinen
Toxins 2025, 17(10), 489; https://doi.org/10.3390/toxins17100489 - 1 Oct 2025
Abstract
Toxins are naturally poisonous small compounds, peptides and proteins that are produced in all three kingdoms of life. Venoms are animal toxins and can contain even hundreds of different compounds. Numerous approaches have been used to detect toxins, including prediction methods. We developed a novel machine learning-based predictor for detecting protein toxins from their sequences. The gradient boosting method was trained on carefully selected training data. Initially, we tested 2614 features, which were reduced to 88 after a comprehensive feature selection procedure. Out of the four tested algorithms, XGBoost was chosen to train the final predictor. Comparison to available predictors indicated that ProToxin showed significant improvement compared to state-of-the-art predictors. On a blind test dataset, the accuracy was 0.906, the Matthews correlation coefficient was 0.796, and the overall performance measure was 0.796. ProToxin is a fast and efficient method and is freely available. It can be used for small and large numbers of sequences. Full article
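The 2614 → 88 feature reduction this abstract mentions can be illustrated with a filter-style score. The scoring rule below (absolute class-mean difference over column spread) and the toy matrix are hypothetical; the paper's actual selection procedure and the XGBoost training are not reproduced:

```python
# Rank feature columns by how well they separate toxins (label 1) from
# non-toxins (label 0), then keep the top k columns.

def feature_scores(X, y):
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        pos = [v for v, lab in zip(col, y) if lab == 1]
        neg = [v for v, lab in zip(col, y) if lab == 0]
        spread = max(col) - min(col)
        scores.append(abs(sum(pos) / len(pos) - sum(neg) / len(neg))
                      / (spread + 1e-9))
    return scores

def select_top_k(X, y, k):
    scores = feature_scores(X, y)
    keep = sorted(range(len(scores)), key=lambda j: -scores[j])[:k]
    return sorted(keep)

# Toy matrix: column 1 separates the classes; columns 0 and 2 are noise-like.
X = [[0.10, 5.0, 1.0],
     [0.20, 5.1, 0.9],
     [0.15, 1.0, 1.1],
     [0.05, 0.9, 1.0]]
y = [1, 1, 0, 0]
print(select_top_k(X, y, 1))   # [1]
```

In the paper's setting the columns would be thousands of sequence-derived features and the surviving subset feeds the gradient boosting model.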
35 pages, 12393 KB  
Article
A Multi-Teacher Knowledge Distillation Framework with Aggregation Techniques for Lightweight Deep Models
by Ahmed Hamdi, Hassan N. Noura and Joseph Azar
Appl. Syst. Innov. 2025, 8(5), 146; https://doi.org/10.3390/asi8050146 - 30 Sep 2025
Abstract
Knowledge Distillation (KD) is a machine learning technique in which a compact student model learns to replicate the performance of a larger teacher model by mimicking its output predictions. Multi-Teacher Knowledge Distillation extends this paradigm by aggregating knowledge from multiple teacher models to improve generalization and robustness. However, effectively integrating outputs from diverse teachers, especially in the presence of noise or conflicting predictions, remains a key challenge. In this work, we propose a Multi-Round Parallel Multi-Teacher Distillation (MPMTD) that systematically explores and combines multiple aggregation techniques. Specifically, we investigate aggregation at different levels, including loss-based and probability-distribution-based fusion. Our framework applies different strategies across distillation rounds, enabling adaptive and synergistic knowledge transfer. Through extensive experimentation, we analyze the strengths and weaknesses of individual aggregation methods and demonstrate that strategic sequencing across rounds significantly outperforms static approaches. Notably, we introduce the Byzantine-Resilient Probability Distribution aggregation method applied for the first time in a KD context, which achieves state-of-the-art performance, with an accuracy of 99.29% and an F1-score of 99.27%. We further identify optimal configurations in terms of the number of distillation rounds and the ordering of aggregation strategies, balancing accuracy with computational efficiency. Our contributions include (i) the introduction of advanced aggregation strategies into the KD setting, (ii) a systematic evaluation of their performance, and (iii) practical recommendations for real-world deployment. These findings have significant implications for distributed learning, edge computing, and IoT environments, where efficient and resilient model compression is essential. Full article
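How a robust probability-distribution aggregator differs from plain averaging can be sketched in a few lines. A coordinate-wise median stands in here for robust fusion; the paper's MPMTD rounds and its specific Byzantine-resilient rule are not reproduced:

```python
# Aggregate teacher soft labels for distillation. A single wildly wrong
# teacher can drag a plain mean, while a coordinate-wise median (one
# standard robust aggregator) largely ignores the outlier.

def normalize(p):
    s = sum(p)
    return [v / s for v in p]

def mean_aggregate(dists):
    k = len(dists[0])
    return normalize([sum(d[i] for d in dists) / len(dists) for i in range(k)])

def median_aggregate(dists):
    k = len(dists[0])
    med = []
    for i in range(k):
        vals = sorted(d[i] for d in dists)
        m = len(vals) // 2
        med.append(vals[m] if len(vals) % 2 else 0.5 * (vals[m - 1] + vals[m]))
    return normalize(med)   # renormalize so the soft target sums to 1

# Three honest teachers agree on class 0; one faulty teacher votes class 2.
teachers = [
    [0.80, 0.10, 0.10],
    [0.70, 0.20, 0.10],
    [0.75, 0.15, 0.10],
    [0.00, 0.00, 1.00],   # outlier / "Byzantine" teacher
]
soft_mean = mean_aggregate(teachers)
soft_med = median_aggregate(teachers)
print(soft_med[0] > soft_mean[0])   # True: median keeps class 0 dominant
```

Either aggregate can then serve as the soft target in the usual KD loss; the framework's point is that which aggregator you use, and in which round, changes what the student learns.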
14 pages, 1301 KB  
Article
Balancing Accuracy and Simplicity in an Interpretable System for Sepsis Prediction Using Limited Clinical Data
by Ting-An Chang, Chun-Liang Liu and You-Cheng Liu
Appl. Sci. 2025, 15(19), 10562; https://doi.org/10.3390/app151910562 - 30 Sep 2025
Abstract
Sepsis is a life-threatening condition caused by an excessive immune response to infection, and even a one-hour delay in treatment can result in irreversible organ damage and increased mortality. This study aimed to develop an interpretable and efficient machine learning-based system for early sepsis prediction using routinely collected electronic health record (EHR) data. The research question focused on whether high predictive performance could be achieved using only a minimal set of clinical features. Data were obtained from intensive care units and general wards in the PhysioNet Computing in Cardiology Challenge 2019 dataset. Thirty-seven predefined clinical features were extracted and systematically analyzed to assess their predictive contributions. Several machine learning models were trained and evaluated using area under the receiver operating characteristic curve (ROC-AUC) and accuracy metrics. The proposed model achieved an ROC-AUC of 0.929 and an accuracy of 0.926 when using all features. Remarkably, comparable performance was maintained (ROC-AUC = 0.912, accuracy = 0.907) when only 10 carefully selected features were used. The system outperformed existing state-of-the-art approaches while relying solely on commonly available clinical parameters. An interpretable, feature-efficient sepsis prediction system was successfully developed, demonstrating strong performance with minimal data requirements. The approach is well-suited for resource-limited healthcare settings, such as rural hospitals, and has the potential to reduce diagnostic burden while enabling timely intervention to improve patient outcomes. Full article
(This article belongs to the Special Issue AI-Based Biomedical Signal and Image Processing)
21 pages, 2365 KB  
Article
BIONIB: Blockchain-Based IoT Using Novelty Index in Bridge Health Monitoring
by Divija Swetha Gadiraju, Ryan McMaster, Saeed Eftekhar Azam and Deepak Khazanchi
Appl. Sci. 2025, 15(19), 10542; https://doi.org/10.3390/app151910542 - 29 Sep 2025
Abstract
Bridge health monitoring is critical for infrastructure safety, especially with the growing deployment of IoT sensors. This work addresses the challenge of securely storing large volumes of sensor data and extracting actionable insights for timely damage detection. We propose BIONIB, a novel framework that combines an unsupervised machine learning approach called the Novelty Index (NI) with a scalable blockchain platform (EOSIO) for secure, real-time monitoring of bridges. BIONIB leverages EOSIO’s smart contracts for efficient, programmable, and secure data management across distributed sensor nodes. Experiments on real-world bridge sensor data under varying loads, climatic conditions, and health states demonstrate BIONIB’s practical effectiveness. Key findings include CPU utilization below 40% across scenarios, a twofold increase in storage efficiency, and acceptable latency degradation, which is not critical in this domain. Our comparative analysis suggests that BIONIB fills a unique niche by coupling NI-based detection with a decentralized architecture, offering real-time alerts and transparent, verifiable records across sensor nodes. Full article
(This article belongs to the Special Issue Vibration Monitoring and Control of the Built Environment)
22 pages, 1249 KB  
Systematic Review
Radiomics vs. Deep Learning in Autism Classification Using Brain MRI: A Systematic Review
by Katerina Nalentzi, Georgios S. Ioannidis, Haralabos Bougias, Sotirios Bisdas, Myrsini Balafouta, Cleo Sgouropoulou, Michail E. Klontzas, Kostas Marias and Periklis Papavasileiou
Appl. Sci. 2025, 15(19), 10551; https://doi.org/10.3390/app151910551 - 29 Sep 2025
Abstract
Autism diagnosis through magnetic resonance imaging (MRI) has advanced significantly with the application of artificial intelligence (AI). This systematic review examines three computational paradigms: radiomics-based machine learning (ML), deep learning (DL), and hybrid models combining both. Across 49 studies (2011–2025), radiomics methods relying on classical classifiers (i.e., SVM, Random Forest) achieved moderate accuracies (61–89%) and offered strong interpretability. DL models, particularly convolutional and recurrent neural networks applied to resting-state functional MRI, reached higher accuracies (up to 98.2%) but were hampered by limited transparency and generalizability. Hybrid models combining handcrafted radiomic features with learned DL representations via dual or fused architectures demonstrated promising balances of performance and interpretability but remain underexplored. A persistent limitation across all approaches is the lack of external validation and harmonization in multi-site studies, which affects robustness. Future pipelines should include standardized preprocessing, multimodal integration, and explainable AI frameworks to enhance clinical viability. This review underscores the complementary strengths of each methodological approach, with hybrid approaches appearing to be a promising middle ground of improved classification performance and enhanced interpretability. Full article
35 pages, 17848 KB  
Article
Satellite-Based Multi-Decadal Shoreline Change Detection by Integrating Deep Learning with DSAS: Eastern and Southern Coastal Regions of Peninsular Malaysia
by Saima Khurram, Amin Beiranvand Pour, Milad Bagheri, Effi Helmy Ariffin, Mohd Fadzil Akhir and Saiful Bahri Hamzah
Remote Sens. 2025, 17(19), 3334; https://doi.org/10.3390/rs17193334 - 29 Sep 2025
Abstract
Coasts are critical ecological, economic and social interfaces between terrestrial and marine systems. The current upsurge in the acquisition and availability of remote sensing datasets, such as Landsat remote sensing data series, provides new opportunities for analyzing multi-decadal coastal changes and other components of coastal risk. The emergence of machine learning-based techniques represents a new trend that can support large-scale coastal monitoring and modeling using remote sensing big data. This study presents a comprehensive multi-decadal analysis of coastal changes for the period from 1990 to 2024 using Landsat remote sensing data series along the eastern and southern coasts of Peninsular Malaysia. These coastal regions include the states of Kelantan, Terengganu, Pahang, and Johor. An innovative approach combining deep learning-based shoreline extraction with the Digital Shoreline Analysis System (DSAS) was meticulously applied to the Landsat datasets. Two semantic segmentation models, U-Net and DeepLabV3+, were evaluated for automated shoreline delineation from the Landsat imagery, with U-Net demonstrating superior boundary precision and generalizability. The DSAS framework quantified shoreline change metrics—including Net Shoreline Movement (NSM), Shoreline Change Envelope (SCE), and Linear Regression Rate (LRR)—across the states of Kelantan, Terengganu, Pahang, and Johor. 
The results reveal distinct spatial–temporal patterns: Kelantan exhibited the highest rates of shoreline change with erosion of −64.9 m/year and accretion of up to +47.6 m/year; Terengganu showed a moderated change partly due to recent coastal protection structures; Pahang displayed both significant erosion, particularly south of the Pahang River with rates of over −50 m/year, and accretion near river mouths; Johor’s coastline predominantly exhibited accretion, with NSM values of over +1900 m, linked to extensive land reclamation activities and natural sediment deposition, although local erosion was observed along the west coast. This research highlights emerging erosion hotspots and, in some regions, the impact of engineered coastal interventions, providing critical insights for sustainable coastal zone management in Malaysia’s monsoon-influenced tropical coastal environment. The integrated deep learning and DSAS approach applied to Landsat remote sensing data series provides a scalable and reproducible framework for long-term coastal monitoring and climate adaptation planning around the world. Full article
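The DSAS change metrics named in this abstract reduce to simple per-transect statistics. A minimal sketch with made-up positions (metres seaward of a baseline; not Malaysian survey data):

```python
# Linear Regression Rate (LRR): ordinary least-squares slope of shoreline
# position against survey year. NSM is last-minus-first position and SCE is
# the max-minus-min envelope. Negative LRR = erosion in m/year.

def lrr(years, positions):
    n = len(years)
    my = sum(years) / n
    mp = sum(positions) / n
    num = sum((y - my) * (p - mp) for y, p in zip(years, positions))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = [1990, 2000, 2010, 2020, 2024]
pos = [120.0, 98.0, 80.0, 55.0, 48.0]   # toy seaward distances (m)

rate = lrr(years, pos)        # ~ -2.13 m/year: steady erosion
nsm = pos[-1] - pos[0]        # Net Shoreline Movement: -72.0 m
sce = max(pos) - min(pos)     # Shoreline Change Envelope: 72.0 m
print(round(rate, 2), nsm, sce)
```

In the study these statistics are computed per transect along each state's coast, with the shoreline positions themselves coming from the U-Net segmentation of Landsat imagery rather than typed-in values.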