
Search Results (1,912)

Search Parameters:
Keywords = forward learning

18 pages, 2721 KB  
Article
Bayesian Network-Based Earth-Rock Dam Breach Probability Analysis Integrating Machine Learning
by Zongkun Li, Qing Shi, Heqiang Sun, Yingjian Zhou, Fuheng Ma, Jianyou Wang and Pieter van Gelder
Water 2025, 17(21), 3085; https://doi.org/10.3390/w17213085 - 28 Oct 2025
Abstract
Earth-rock dams are critical components of hydraulic engineering, undertaking core functions such as flood control and disaster mitigation. However, the potential occurrence of a dam breach poses a severe threat to regional socioeconomic stability and ecological security. To address the limitations of traditional Bayesian networks (BNs) in capturing the complex nonlinear coupling and dynamic mutual interactions among risk factors, BNs were integrated with machine learning techniques: based on a collected dataset of earth-rock dam breach case samples, the PC structure learning algorithm was employed to preliminarily uncover risk associations. The dataset was compiled from public databases, including the U.S. Army Corps of Engineers (USACE) and the Dam Safety Management Center of the Ministry of Water Resources of China, as well as engineering reports from provincial water conservancy departments in China and Europe. Expert knowledge was integrated to optimize the network topology, thereby correcting causal relationships inconsistent with engineering mechanisms. The results indicate that the established hybrid model achieved AUC, accuracy, and F1-score values of 0.887, 0.895, and 0.899, respectively, significantly outperforming the data-driven model G1. Forward inference identified the key drivers elevating breach risk. Conversely, backward inference revealed that overtopping was the direct failure mode with the highest probability of occurrence and the greatest contribution. The integration of data-driven approaches and domain knowledge provides theoretical and technical support for the probabilistic quantification of earth-rock dam breach risk and for prevention and control decision-making. Full article
(This article belongs to the Section Hydraulics and Hydrodynamics)
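The forward and backward inference steps described in the abstract can be illustrated with a minimal two-node discrete network; the probabilities below are invented placeholders, not values from the study.

```python
# Minimal sketch of forward and backward inference in a two-node discrete
# Bayesian network (Flood -> Breach). All probabilities are illustrative
# placeholders, not values from the paper.
p_flood = 0.1                      # P(Flood)
p_breach_given = {True: 0.4,       # P(Breach | Flood)
                  False: 0.01}     # P(Breach | no Flood)

# Forward inference: marginal probability of a breach.
p_breach = (p_breach_given[True] * p_flood
            + p_breach_given[False] * (1 - p_flood))

# Backward inference (Bayes' rule): probability flooding caused an observed breach.
p_flood_given_breach = p_breach_given[True] * p_flood / p_breach

print(round(p_breach, 4))              # 0.049
print(round(p_flood_given_breach, 4))  # 0.8163
```

The hybrid model in the paper does the same kind of reasoning over a learned, much larger network.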

15 pages, 943 KB  
Article
DeepCMS: A Feature Selection-Driven Model for Cancer Molecular Subtyping with a Case Study on Testicular Germ Cell Tumors
by Mehwish Wahid Khan, Ghufran Ahmed, Muhammad Shahzad, Abdallah Namoun, Shahid Hussain and Meshari Huwaytim Alanazi
Diagnostics 2025, 15(21), 2730; https://doi.org/10.3390/diagnostics15212730 - 28 Oct 2025
Abstract
Background/Objectives: Cancer is a chronic and heterogeneous disease, possessing molecular variation within a single type, resulting in its molecular subtypes. Cancer molecular subtyping offers biological insights into cancer variability, facilitating the development of personalized medicines. Various models have been proposed for cancer molecular subtyping, utilizing high-dimensional transcriptomic, genomic, or proteomic data. The issue of data scarcity, characterized by high feature dimensionality and a limited sample size, remains a persistent problem. The objective of this research is to propose a deep learning framework, DeepCMS, that leverages the capabilities of feed-forward neural networks, gene set enrichment analysis, and feature selection to construct a well-representative subset of the feature space, thereby producing promising results. Methods: The gene expression data were transformed into enrichment scores, resulting in over 22,000 features. From those, the top 2000 features were selected, and deep learning was applied to these features. The encouraging outcomes indicate the efficacy of the proposed framework in terms of defining a well-representative feature space and accurately classifying cancer molecular subtypes. Results: DeepCMS consistently outperformed state-of-the-art models in aggregated accuracy, sensitivity, specificity, and balanced accuracy. The aggregated metrics surpassed 0.90 for all efficiency measures on independent test datasets, showing the generalizability and robustness of our framework. Although developed using colon cancer's gene expression data, this approach may be applied to any gene expression data; a case study is also devised for illustration. Conclusions: Overall, the proposed DeepCMS framework enables the accurate and robust classification of cancer molecular subtypes using a compact and informative feature set, facilitating improved precision in oncology applications. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
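The feature-selection stage (reducing tens of thousands of enrichment scores to a compact top-k subset before the feed-forward classifier) can be sketched at toy scale; the variance criterion, shapes, and data here are illustrative assumptions rather than the paper's actual selection method.

```python
import numpy as np

# Toy sketch: keep the top-k most variable "enrichment score" features.
rng = np.random.default_rng(0)
scores = rng.normal(size=(30, 500))      # 30 samples x 500 toy enrichment scores
scores[:, :20] *= 5.0                    # inflate 20 features to make them informative

k = 50
variances = scores.var(axis=0)
top_k = np.argsort(variances)[::-1][:k]  # indices of the k highest-variance features
reduced = scores[:, top_k]               # compact feature space for the classifier

print(reduced.shape)                     # (30, 50)
```

In DeepCMS the reduced matrix would then be fed to the feed-forward network for subtype classification.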

25 pages, 1928 KB  
Article
A Methodological Comparison of Forecasting Models Using KZ Decomposition and Walk-Forward Validation
by Khawla Al-Saeedi, Diwei Zhou, Andrew Fish, Katerina Tsakiri and Antonios Marsellos
Mathematics 2025, 13(21), 3410; https://doi.org/10.3390/math13213410 - 26 Oct 2025
Abstract
The accurate forecasting of surface air temperature (T2M) is crucial for climate analysis, agricultural planning, and energy management. This study proposes a novel forecasting framework grounded in structured temporal decomposition. Using the Kolmogorov–Zurbenko (KZ) filter, all predictor variables are decomposed into three physically interpretable components: long-term, seasonal, and short-term variations, forming an expanded multi-scale feature space. A central innovation of this framework lies in training a single unified model on the decomposed feature set to predict the original target variable, thereby enabling the direct learning of scale-specific driver–response relationships. We present the first comprehensive benchmarking of this architecture, demonstrating that it consistently enhances the performance of both regularized linear models (Ridge and Lasso) and tree-based ensemble methods (Random Forest and XGBoost). Under rigorous walk-forward validation, the framework substantially outperforms conventional, non-decomposed approaches—for example, XGBoost improves the coefficient of determination (R2) from 0.80 to 0.91. Furthermore, temporal decomposition enhances interpretability by enabling Ridge and Lasso models to achieve performance levels comparable to complex ensembles. Despite these promising results, we acknowledge several limitations: the analysis is restricted to a single geographic location and time span, and short-term components remain challenging to predict due to their stochastic nature and the weaker relevance of predictors. Additionally, the framework’s effectiveness may depend on the optimal selection of KZ parameters and the availability of sufficiently long historical datasets for stable walk-forward validation. Future research could extend this approach to multiple geographic regions, longer time series, adaptive KZ tuning, and specialized short-term modeling strategies. 
Overall, the proposed framework demonstrates that temporal decomposition of predictors offers a powerful inductive bias, establishing a robust and interpretable paradigm for surface air temperature forecasting. Full article
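The KZ filter at the heart of the decomposition is an iterated centered moving average. A minimal sketch follows, with illustrative window and iteration parameters rather than the study's tuned values:

```python
import numpy as np

def kz_filter(x, m, k):
    """Kolmogorov-Zurbenko filter: k passes of a centered moving average
    of odd window length m. Endpoints use a shrunken window, one common
    convention; m and k here are illustrative, not the paper's choices."""
    x = np.asarray(x, dtype=float)
    half = m // 2
    for _ in range(k):
        x = np.array([x[max(0, i - half): i + half + 1].mean()
                      for i in range(len(x))])
    return x

t = np.arange(365, dtype=float)
series = 10 + 0.01 * t + np.sin(2 * np.pi * t / 30)   # trend + 30-day "seasonal" cycle
long_term = kz_filter(series, m=31, k=3)              # smooths the 30-day cycle away
seasonal = kz_filter(series, m=5, k=3) - long_term    # mid-frequency component
short_term = series - long_term - seasonal            # residual short-term variation
```

The three components then replace each raw predictor, giving the unified model an expanded multi-scale feature space as described above.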

18 pages, 6011 KB  
Article
From Data-Rich to Data-Scarce: Spatiotemporal Evaluation of a Hybrid Wavelet-Enhanced Deep Learning Model for Day-Ahead Wind Power Forecasting Across Greece
by Ioannis Laios, Dimitrios Zafirakis and Konstantinos Moustris
Energies 2025, 18(21), 5585; https://doi.org/10.3390/en18215585 - 24 Oct 2025
Abstract
Efficient wind power forecasting is critical for achieving large-scale integration of wind energy in modern electricity systems. However, the limited availability of rich, long-term historical records of wind power generation for many sites of interest often hampers the training of tailored forecasting models, which, in turn, introduces uncertainty concerning the anticipated operational status of similar early-life, or even prospective, wind farm projects. To that end, this study puts forward a spatiotemporal, national-level forecasting exercise as a means of addressing wind power data scarcity in Greece. It does so by developing a hybrid wavelet-enhanced deep learning model that leverages long-term historical data from a reference site located in central Greece. The model is optimized for 24-h day-ahead forecasting, using a hybrid architecture that combines discrete wavelet transform for feature extraction with deep neural networks for spatiotemporal learning. Accordingly, the model's generalization is evaluated across a number of geographically distributed sites of differing wind potential quality, each constrained to only one year of available data. The analysis compares forecasting performance between the original and target sites to assess the spatiotemporal robustness of the model without site-specific retraining. Our results demonstrate that the developed model maintains competitive accuracy across data-scarce locations for the first 12 h of the day-ahead forecasting horizon, while revealing distinct performance patterns dependent on the geographical and wind potential quality dimensions of the examined areas. Overall, this work underscores the feasibility of leveraging data-rich regions to inform forecasting in under-instrumented areas and contributes to the broader discourse on spatial generalization in renewable energy modeling and planning. Full article
(This article belongs to the Special Issue Machine Learning in Renewable Energy Resource Assessment)
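The discrete-wavelet-transform feature-extraction stage can be sketched with a single-level Haar transform; the wavelet choice, decomposition depth, and toy series below are assumptions, since the abstract does not specify them.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform: returns
    (approximation, detail) coefficients. A minimal stand-in for the
    DWT feature-extraction stage of the hybrid model."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-frequency trend content
    detail = (even - odd) / np.sqrt(2)   # high-frequency fluctuations
    return approx, detail

power = np.sin(np.linspace(0, 8 * np.pi, 96))   # toy 4-day hourly wind power series
approx, detail = haar_dwt(power)
features = np.concatenate([approx, detail])     # wavelet features for the network
print(features.shape)                           # (96,)
```

Because the Haar transform is orthonormal, the feature vector preserves the energy of the input series while separating slow and fast variations for the downstream network.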

23 pages, 16607 KB  
Article
Few-Shot Class-Incremental SAR Target Recognition with a Forward-Compatible Prototype Classifier
by Dongdong Guan, Rui Feng, Yuzhen Xie, Xiaolong Zheng, Bangjie Li and Deliang Xiang
Remote Sens. 2025, 17(21), 3518; https://doi.org/10.3390/rs17213518 - 23 Oct 2025
Abstract
In practical Synthetic Aperture Radar (SAR) applications, new-class objects can appear at any time with the rapid accumulation of large-scale, high-quantity SAR imagery, and are usually supported by only limited instances in most cooperative scenarios. Hence, powering advanced deep-learning (DL)-based SAR Automatic Target Recognition (SAR ATR) systems with the ability to continuously learn new concepts from few-shot samples without forgetting old ones is important. In this paper, we tackle the Few-Shot Class-Incremental Learning (FSCIL) problem in the SAR ATR field and propose a Forward-Compatible Prototype Classifier (FCPC) that emphasizes the model's forward compatibility with incoming targets before and after deployment. Specifically, the classifier's sensitivity to diversified cues of emerging targets is improved in advance by a Virtual-class Semantic Synthesizer (VSS), which considers the class-agnostic scattering parts of targets in SAR imagery and the semantic patterns of the DL paradigm. After deployment in dynamic environments, since novel target patterns learned from few-shot samples are highly biased and unstable, the model's representability for general patterns and its adaptability to class-discriminative ones are balanced by a Decoupled Margin Adaptation (DMA) strategy, in which only the model's high-level semantic parameters are tuned, improving the similarity of few-shot boundary samples to their class prototypes and their dissimilarity to interclass ones. For inference, a Nearest-Class-Mean (NCM) classifier is adopted, comparing the semantics of unknown targets with the prototypes of all classes under the cosine criterion. 
In experiments, the contributions of the proposed modules are verified through ablation studies, and our method achieves strong performance on three SAR ATR FSCIL datasets, i.e., SAR-AIRcraft-FSCIL, MSTAR-FSCIL, and FUSAR-FSCIL, compared with numerous benchmarks, demonstrating its superiority and effectiveness for FSCIL in SAR ATR. Full article
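The NCM inference rule with the cosine criterion is compact enough to sketch directly; the embeddings and class names below are random illustrative stand-ins, not features from the paper's model.

```python
import numpy as np

# Minimal Nearest-Class-Mean (NCM) classifier with the cosine criterion:
# an unknown embedding is assigned to the class whose prototype (mean
# embedding) it is most similar to.
rng = np.random.default_rng(1)
prototypes = {c: rng.normal(size=64) for c in ["tank", "truck", "aircraft"]}

def ncm_predict(embedding, prototypes):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(prototypes, key=lambda c: cos(embedding, prototypes[c]))

# A query embedding close to the "truck" prototype is labeled "truck".
query = prototypes["truck"] + 0.05 * rng.normal(size=64)
print(ncm_predict(query, prototypes))  # truck
```

In the FCPC pipeline, the prototypes would come from the learned feature extractor rather than random vectors, but the decision rule is the same.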

20 pages, 5690 KB  
Article
Constructing a Prognostic Model for Clear Cell Renal Cell Carcinoma Based on Glycosyltransferase Gene and Verification of Key Gene Identification
by Chong Zhou, Mingzhe Zhou, Yuzhou Luo, Ruohan Jiang, Yushu Hu, Meiqi Zhao, Xu Yan, Shan Xiao, Mengjie Xue, Mengwei Wang, Ping Jiang, Yunzhen Zhou, Xien Huang, Donglin Sun, Chunlong Zhang, Yan Jin and Nan Wu
Int. J. Mol. Sci. 2025, 26(20), 10182; https://doi.org/10.3390/ijms262010182 - 20 Oct 2025
Abstract
Clear cell renal cell carcinoma (ccRCC) is the most common and aggressive subtype of kidney cancer. This study aimed to construct a prognostic model for ccRCC based on glycosyltransferase genes, which play important roles in cellular processes such as proliferation and apoptosis. Glycosyltransferase genes were collected from four public databases and analyzed using RNA-seq data with clinical information from three ccRCC datasets. Prognostic models were constructed using eight machine learning algorithms, generating a total of 117 combinatorial algorithm models, and the StepCox[forward]+Ridge model, which achieved the highest predictive accuracy (C-index = 0.753), was selected and named the Glycosyltransferases Risk Score (GTRS) model. The GTRS effectively stratified patients into high- and low-risk groups with significantly different overall survival and maintained robust performance across the TCGA, CPTAC, and E-MTAB1980 cohorts (AUC > 0.75). High-risk patients exhibited a higher tumor mutational burden, an immunosuppressive microenvironment, and a poorer response to immunotherapy. TYMP and GCNT4 were experimentally validated as key genes, functioning as oncogenic and tumor-suppressive factors. In conclusion, the GTRS serves as a reliable prognostic tool for ccRCC and provides mechanistic insights into glycosylation-related tumor progression. Full article
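The C-index used here for model selection measures how often the predicted risk ordering agrees with the observed survival ordering over comparable patient pairs. A naive O(n²) sketch with made-up survival data:

```python
import numpy as np

def c_index(time, event, risk):
    """Concordance index: fraction of comparable patient pairs whose
    predicted risk ordering matches the observed survival ordering
    (ties in risk count as 0.5). Naive quadratic implementation."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable if patient i had an observed
            # event strictly before patient j's time.
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0   # higher risk died earlier: concordant
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

time = np.array([5.0, 8.0, 3.0, 12.0, 7.0])   # made-up survival times
event = np.array([1, 1, 1, 0, 1])             # 1 = death observed, 0 = censored
risk = np.array([0.9, 0.4, 0.8, 0.1, 0.5])    # made-up model risk scores
print(round(c_index(time, event, risk), 3))   # 0.9
```

A C-index of 0.5 is random ordering and 1.0 is perfect ranking, so the paper's 0.753 indicates substantially better-than-chance risk discrimination.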

16 pages, 4679 KB  
Article
Optimization of Litchi Fruit Detection Based on Defoliation and UAV
by Jing Wang, Mingyue Zhang, Zhenhui Zheng, Zhaoshen Yao, Boxuan Nie, Dongliang Guo, Ling Chen, Jianguang Li and Juntao Xiong
Agronomy 2025, 15(10), 2421; https://doi.org/10.3390/agronomy15102421 - 19 Oct 2025
Abstract
The use of UAVs to detect litchi in natural environments is imperative for rapid litchi yield estimation and automated harvesting systems. However, UAV-based litchi fruit detection is bottlenecked by complex canopy architecture and leaf occlusion. This study proposed a collaborative optimization strategy integrating agronomic techniques with deep learning. Three leaf-thinning intensities (0, 6, and 12 compound leaves) were applied at the early fruit stage to systematically evaluate their effects on fruit growth, canopy structure, and detection performance. Results indicated that moderate defoliation (six leaves) significantly enhanced canopy openness and light penetration without adversely impacting yield or fruit quality. Subsequent UAV-based detection under the moderate versus no-defoliation treatments revealed that the YOLOv8-based model achieved significant performance gains: mean average precision (mAP) increased from 0.818 to 0.884, and the F1-score improved from 0.796 to 0.842. The study contributes a novel collaborative optimization strategy that effectively mitigates occlusion issues in fruit detection. This approach demonstrates that agronomic techniques can be strategically used to enhance AI perception, offering a significant step forward in the integration of agricultural machinery and agronomy for intelligent orchard systems. Full article
(This article belongs to the Section Precision and Digital Agriculture)

25 pages, 371 KB  
Article
Security Analysis and Designing Advanced Two-Party Lattice-Based Authenticated Key Establishment and Key Transport Protocols for Mobile Communication
by Mani Rajendran, Dharminder Chaudhary, S. A. Lakshmanan and Cheng-Chi Lee
Future Internet 2025, 17(10), 472; https://doi.org/10.3390/fi17100472 - 16 Oct 2025
Abstract
In this paper, we propose a two-party authenticated key establishment (AKE) protocol and an authenticated key transport protocol based on lattice-based cryptography, aiming to provide security against quantum attacks for secure communication. The AKE protocol enables two parties, who may share long-term public keys, to securely establish a shared session key while achieving mutual authentication; the transport protocol delivers the session key from the server. Our construction leverages the hardness of the Ring Learning With Errors (Ring-LWE) lattice problem, ensuring robustness against both quantum and classical adversaries. Unlike traditional schemes whose security depends on number-theoretic assumptions vulnerable to quantum attacks, our protocol remains secure in the post-quantum era. The proposed AKE protocol ensures forward secrecy, providing security even if the long-term key is compromised, and also guarantees key freshness and resistance against man-in-the-middle, impersonation, replay, and key mismatch attacks. The proposed key transport protocol likewise provides key freshness, anonymity, and resistance against man-in-the-middle, impersonation, replay, and key mismatch attacks. A two-party key transport protocol is a cryptographic protocol in which one party (typically a trusted key distribution center or sender) securely generates and sends a session key to another party; unlike key exchange protocols, where both parties contribute to key generation, key transport protocols rely on one party to generate the key and deliver it securely. The protocols use a minimal number of exchanged messages and reduce the number of communication rounds, helping to minimize communication overhead. Full article
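The reason Ring-LWE supports key agreement is that both parties can compute approximately equal ring elements whose difference is only small noise. The toy sketch below shows just this numerical core, with illustrative parameters; it omits reconciliation and every security-relevant detail, and it is not the paper's protocol or a secure implementation.

```python
import numpy as np

q, n = 12289, 256
rng = np.random.default_rng(7)

def ring_mul(a, b):
    """Multiply polynomials in Z_q[x]/(x^n + 1) (negacyclic convolution)."""
    full = np.convolve(a, b)
    res = full[:n].astype(np.int64)
    res[: len(full) - n] -= full[n:]       # x^n = -1 wraps with a sign flip
    return res % q

a = rng.integers(0, q, n)                  # public common polynomial
s1, e1 = rng.integers(-1, 2, n), rng.integers(-1, 2, n)   # Alice: small secrets
s2, e2 = rng.integers(-1, 2, n), rng.integers(-1, 2, n)   # Bob: small secrets

b1 = (ring_mul(a, s1) + e1) % q            # Alice's public value
b2 = (ring_mul(a, s2) + e2) % q            # Bob's public value

k1 = ring_mul(b2, s1)                      # Alice's raw key  ~ a*s1*s2
k2 = ring_mul(b1, s2)                      # Bob's raw key    ~ a*s1*s2

diff = (k1 - k2 + q // 2) % q - q // 2     # centered difference = e2*s1 - e1*s2
print(int(np.abs(diff).max()) < q // 8)    # True: keys agree up to small noise
```

A real protocol adds a reconciliation step so both sides round their noisy raw keys to the identical session key, plus the authentication machinery described in the abstract.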

23 pages, 1868 KB  
Review
Multidimensional Advances in Wildfire Behavior Prediction: Parameter Construction, Model Evolution and Technique Integration
by Hai-Hui Wang, Kai-Xuan Zhang, Shamima Aktar and Ze-Peng Wu
Fire 2025, 8(10), 402; https://doi.org/10.3390/fire8100402 - 16 Oct 2025
Abstract
Forest and grassland fire behavior prediction is increasingly critical under climate change, as rising fire frequency and intensity threaten ecosystems and human societies worldwide. This paper reviews the status and future development trends of wildfire behavior modeling and prediction technologies. It provides a comprehensive overview of the evolution of models from empirical to physical and then to data-driven approaches, emphasizing the integration of multidisciplinary techniques such as machine learning and deep learning. While conventional physical models offer mechanistic insights, recent advancements in data-driven models have enabled the analysis of big data to uncover intricate nonlinear relationships. We underscore the necessity of integrating multiple models via complementary, weighted fusion and hybrid methods to bolster robustness across diverse situations. Ultimately, we advocate for the creation of intelligent forecast systems that leverage data from space, air and ground sources to provide multifaceted fire behavior predictions in regions and globally. Such systems would more effectively transform fire management from a reactive approach to a proactive strategy, thereby safeguarding global forest carbon sinks and promoting sustainable development in the years to come. By offering forward-looking insights and highlighting the importance of multidisciplinary approaches, this review serves as a valuable resource for researchers, practitioners, and policymakers, supporting informed decision-making and fostering interdisciplinary collaboration. Full article
(This article belongs to the Section Fire Science Models, Remote Sensing, and Data)

23 pages, 4965 KB  
Article
Direct Estimation of Electric Field Distribution in Circular ECT Sensors Using Graph Convolutional Networks
by Robert Banasiak, Zofia Stawska and Anna Fabijańska
Sensors 2025, 25(20), 6371; https://doi.org/10.3390/s25206371 - 15 Oct 2025
Abstract
The Electrical Capacitance Tomography (ECT) imaging pipeline relies on accurate estimation of electric field distributions to compute electrode capacitances and reconstruct permittivity maps. Traditional ECT forward model methods based on the Finite Element Method (FEM) offer high accuracy but are computationally intensive, limiting their use in real-time applications. In this proof-of-concept study, we investigate the use of Graph Convolutional Networks (GCNs) for direct, one-step prediction of electric field distributions associated with a circular ECT sensor numerical model. The network is trained on FEM-simulated data and outputs of full 2D electric field maps for all excitation patterns. To evaluate physical fidelity, we compute capacitance matrices using both GCN-predicted and FEM-based fields. Our results show strong agreement in both direct field prediction and derived quantities, demonstrating the feasibility of replacing traditional solvers with fast, learned approximators. This approach has significant implications for further real-time ECT imaging and control applications. Full article
(This article belongs to the Section Sensing and Imaging)
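A single graph convolution layer of the kind such a network stacks can be sketched in a few lines; the tiny graph, feature sizes, and random weights below stand in for the FEM mesh and learned parameters, and the symmetric normalization follows the common Kipf-Welling form, which the abstract does not specify.

```python
import numpy as np

# One graph convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
rng = np.random.default_rng(3)

A = np.array([[0, 1, 0, 0],        # adjacency of a tiny 4-node "mesh"
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A_hat = A + np.eye(4)              # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization

H = rng.normal(size=(4, 8))        # node features (e.g., permittivity, position)
W = rng.normal(size=(8, 2))        # learned weights (here random)
H_next = np.maximum(A_norm @ H @ W, 0.0)   # aggregate neighbors, project, ReLU

print(H_next.shape)                # (4, 2)
```

Stacking such layers over the sensor's mesh graph lets the trained network map material properties to field values in one forward pass instead of an FEM solve.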

25 pages, 3069 KB  
Article
DrSVision: A Machine Learning Tool for Cortical Region-Specific fNIRS Calibration Based on Cadaveric Head MRI
by Serhat Ilgaz Yöner, Mehmet Emin Aksoy, Hayrettin Can Südor, Kurtuluş İzzetoğlu, Baran Bozkurt and Alp Dinçer
Sensors 2025, 25(20), 6340; https://doi.org/10.3390/s25206340 - 14 Oct 2025
Abstract
Functional Near-Infrared Spectroscopy (fNIRS) is a non-invasive neuroimaging technique that monitors cerebral hemodynamic responses by measuring near-infrared (NIR) light absorption caused by changes in oxygenated and deoxygenated hemoglobin concentrations. While fNIRS has been widely used in cognitive and clinical neuroscience, a key challenge persists: the lack of practical tools for calibrating source-detector separation (SDS) to maximize sensitivity at depth (SAD) when monitoring specific cortical regions of interest in neuroscience and neuroimaging studies. This study presents DrSVision version 1.0, a standalone software tool developed to address this limitation. Monte Carlo (MC) simulations were performed using segmented magnetic resonance imaging (MRI) data from eight cadaveric heads to realistically model light attenuation across anatomical layers. SAD of 10–20 mm with SDS of 19–39 mm was computed. The dataset was used to train a Gaussian Process Regression (GPR)-based machine learning (ML) model that recommends the optimal SDS for achieving maximal sensitivity at targeted depths. The software operates independently of any third-party platforms and provides users with region-specific calibration outputs tailored to their experimental goals, supporting more precise application of fNIRS. Future developments aim to incorporate subject-specific calibration using anatomical data and to broaden support for diverse and personalized experimental setups. DrSVision represents a step forward in fNIRS experimentation. Full article
(This article belongs to the Special Issue Recent Innovations in Computational Imaging and Sensing)
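The GPR-based recommendation step can be sketched as fitting a posterior mean over sensitivity as a function of SDS and taking its argmax; the training pairs, kernel, and hyperparameters below are invented, not the paper's Monte Carlo results.

```python
import numpy as np

# Minimal Gaussian Process Regression (RBF kernel, noise-free posterior mean)
# mapping source-detector separation (SDS, mm) to a toy sensitivity value.
def rbf(a, b, length=5.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

sds_train = np.array([19.0, 24.0, 29.0, 34.0, 39.0])     # SDS samples (mm)
sens_train = np.array([0.10, 0.22, 0.30, 0.27, 0.18])    # made-up sensitivities

noise = 1e-4                                             # jitter for stability
K = rbf(sds_train, sds_train) + noise * np.eye(len(sds_train))
alpha = np.linalg.solve(K, sens_train)

sds_grid = np.linspace(19, 39, 201)
mean = rbf(sds_grid, sds_train) @ alpha                  # GPR posterior mean
best_sds = sds_grid[np.argmax(mean)]                     # recommended separation
print(round(float(best_sds), 1))
```

The actual model is trained on MC-simulated sensitivities across depths and head anatomies, but the recommendation logic reduces to this kind of posterior maximization.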

24 pages, 2328 KB  
Review
Large Language Model Agents for Biomedicine: A Comprehensive Review of Methods, Evaluations, Challenges, and Future Directions
by Xiaoran Xu and Ravi Sankar
Information 2025, 16(10), 894; https://doi.org/10.3390/info16100894 - 14 Oct 2025
Abstract
Large language model (LLM)-based agents are rapidly emerging as transformative tools across biomedical research and clinical applications. By integrating reasoning, planning, memory, and tool use capabilities, these agents go beyond static language models to operate autonomously or collaboratively within complex healthcare settings. This review provides a comprehensive survey of biomedical LLM agents, spanning their core system architectures, enabling methodologies, and real-world use cases such as clinical decision making, biomedical research automation, and patient simulation. We further examine emerging benchmarks designed to evaluate agent performance under dynamic, interactive, and multimodal conditions. In addition, we systematically analyze key challenges, including hallucinations, interpretability, tool reliability, data bias, and regulatory gaps, and discuss corresponding mitigation strategies. Finally, we outline future directions in areas such as continual learning, federated adaptation, robust multi-agent coordination, and human AI collaboration. This review aims to establish a foundational understanding of biomedical LLM agents and provide a forward-looking roadmap for building trustworthy, reliable, and clinically deployable intelligent systems. Full article

11 pages, 2705 KB  
Proceeding Paper
Understanding Exoplanet Habitability: A Bayesian ML Framework for Predicting Atmospheric Absorption Spectra
by Vasuda Trehan, Kevin H. Knuth and M. J. Way
Phys. Sci. Forum 2025, 12(1), 9; https://doi.org/10.3390/psf2025012009 - 13 Oct 2025
Abstract
The evolution of space technology in recent years, fueled by advancements in computing such as Artificial Intelligence (AI) and machine learning (ML), has profoundly transformed our capacity to explore the cosmos. Missions like the James Webb Space Telescope (JWST) have made information about distant objects more easily accessible, resulting in extensive amounts of valuable data. As part of this work-in-progress study, we are working to create an atmospheric absorption spectrum prediction model for exoplanets. The eventual model will be based on both collected observational spectra and synthetic spectral data generated by the ROCKE-3D general circulation model (GCM) developed by the climate modeling program at NASA’s Goddard Institute for Space Studies (GISS). In this initial study, spline curves are used to describe the bin heights of simulated atmospheric absorption spectra as a function of one of the values of the planetary parameters. Bayesian Adaptive Exploration is then employed to identify areas of the planetary parameter space for which more data are needed to improve the model. The resulting system will be used as a forward model so that planetary parameters can be inferred given a planet’s atmospheric absorption spectrum. This work is expected to contribute to a better understanding of exoplanetary properties and general exoplanet climates and habitability. Full article
34 pages, 4932 KB  
Review
Recent Progress in Liquid Microlenses and Their Arrays for Adaptive and Applied Optical Systems
by Siyu Lu, Zheyuan Cao, Jinzhong Ling, Ying Yuan, Xin Liu, Xiaorui Wang and Jin-Kun Guo
Micromachines 2025, 16(10), 1158; https://doi.org/10.3390/mi16101158 - 13 Oct 2025
Abstract
Liquid microlenses and their arrays (LMLAs) have emerged as a transformative platform in adaptive optics, offering superior reconfigurability, compactness, and fast response compared to conventional solid-state lenses. This review summarizes recent progress from an application-oriented perspective, focusing on actuation mechanisms, fabrication strategies, and functional performance. Among actuation mechanisms, electric-field-driven approaches are highlighted, including electrowetting for shape tuning and liquid crystal-based refractive-index tuning techniques. The former excels in tuning range and response speed, whereas the latter enables programmable wavefront control with lower optical aberrations but limited efficiency. Notably, double-emulsion configurations, with fast interfacial actuation and inherent structural stability, demonstrate great potential for highly integrated optical components. Fabrication methodologies—including semiconductor-derived processes, additive manufacturing, and dynamic molding—are evaluated, revealing trade-offs among scalability, structural complexity, and cost. Functionally, advances in focal length tuning, field-of-view expansion, depth-of-field extension, and aberration correction have been achieved, though strong coupling among these parameters still constrains system-level performance. Looking forward, innovations in functional materials, hybrid fabrication, and computational imaging are expected to mitigate these constraints. These developments will accelerate applications in microscopy, endoscopy, AR/VR displays, industrial inspection, and machine vision, while paving the way for intelligent photonic systems that integrate adaptive optics with machine learning for real-time control. Full article
(This article belongs to the Special Issue Micro-Nano Photonics: From Design and Fabrication to Application)
17 pages, 2309 KB  
Article
Robust Visual–Inertial Odometry via Multi-Scale Deep Feature Extraction and Flow-Consistency Filtering
by Hae Min Cho
Appl. Sci. 2025, 15(20), 10935; https://doi.org/10.3390/app152010935 - 11 Oct 2025
Abstract
We present a visual–inertial odometry (VIO) system that integrates a deep feature extraction and filtering strategy with optical flow to improve tracking robustness. While many traditional VIO methods rely on hand-crafted features, they often struggle to remain robust under challenging visual conditions, such as low texture, motion blur, or lighting variation. These methods tend to exhibit large performance variance across different environments, primarily due to the limited repeatability and adaptability of hand-crafted keypoints. In contrast, learning-based features offer richer representations and can generalize across diverse domains thanks to data-driven training. However, they often suffer from uneven spatial distribution and temporal instability, which can degrade tracking performance. To address these issues, we propose a hybrid front-end that combines a lightweight deep feature extractor with an image pyramid and grid-based keypoint sampling to enhance spatial diversity. Additionally, a forward–backward optical-flow-consistency check is applied to filter unstable keypoints. The system improves feature tracking stability by enforcing spatial and temporal consistency while maintaining real-time efficiency. Finally, the effectiveness of the proposed VIO system is validated on the EuRoC MAV benchmark, showing a 19.35% reduction in trajectory RMSE and improved consistency across multiple sequences compared with previous methods. Full article
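The forward–backward consistency check mentioned in the abstract can be sketched as follows. This is a toy NumPy illustration under stated assumptions: the flow functions, the pixel threshold, and the corrupted-track example are hypothetical, not the paper's implementation (which operates on real optical-flow fields).

```python
import numpy as np

def fb_consistency_mask(pts, forward_flow, backward_flow, thresh=1.0):
    """Keep keypoints whose forward-then-backward track returns near its start.

    pts:           (N, 2) keypoint positions in frame t
    forward_flow:  maps (N, 2) points in frame t   -> positions in frame t+1
    backward_flow: maps (N, 2) points in frame t+1 -> positions in frame t
    """
    pts_fwd = forward_flow(pts)           # track t -> t+1
    pts_back = backward_flow(pts_fwd)     # track t+1 -> t
    err = np.linalg.norm(pts_back - pts, axis=1)
    return err < thresh                   # True = temporally stable keypoint

# Toy example: a pure image translation, with one deliberately broken track.
pts = np.array([[10.0, 10.0], [50.0, 40.0], [80.0, 20.0]])
fwd = lambda p: p + np.array([2.0, 1.0])

def bwd(p):
    q = p - np.array([2.0, 1.0])
    q[2] += 5.0                           # corrupt the third track's back-flow
    return q

mask = fb_consistency_mask(pts, fwd, bwd)  # -> [True, True, False]
```

Keypoints failing the round-trip test (here, the corrupted third track) are discarded before pose estimation, which is how the check suppresses temporally unstable features.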
(This article belongs to the Special Issue Advances in Autonomous Driving: Detection and Tracking)