Search Results (13,174)

Search Parameters:
Keywords = sampling algorithm

32 pages, 7135 KB  
Article
Evolutionary Multi-Objective Prompt Learning for Synthetic Text Data Generation with Black-Box Large Language Models
by Diego Pastrián, Nicolás Hidalgo, Víctor Reyes and Erika Rosas
Appl. Sci. 2026, 16(8), 3623; https://doi.org/10.3390/app16083623 (registering DOI) - 8 Apr 2026
Abstract
High-quality training data are essential for the performance and generalization of artificial intelligence systems, particularly in dynamic environments such as adaptive stream processing for disaster response. However, constructing large and representative datasets remains costly and time-consuming, especially in domains where real data are scarce or difficult to obtain. Large Language Models (LLMs) provide powerful capabilities for synthetic text generation, yet the quality of generated data strongly depends on the design of input prompts. Prompt engineering is therefore critical, but it remains largely manual and difficult to scale, particularly in black-box settings where model internals are inaccessible. This work introduces EVOLMD-MO, a multi-objective evolutionary framework for automated prompt learning aimed at generating high-quality synthetic text datasets using black-box LLMs. The proposed approach formulates prompt optimization as a multi-objective search problem in which candidate prompts evolve through genetic operators guided by two complementary objectives: semantic fidelity to reference data and generative diversity of the produced samples. To support scalable optimization, the framework integrates a modular multi-agent architecture that decouples prompt evolution, LLM interaction, and evaluation mechanisms. The evolutionary process is implemented using the NSGA-II algorithm, enabling the discovery of diverse Pareto-optimal prompts that balance semantic preservation and diversity. Experimental evaluation using large-scale disaster-related social media data demonstrates that the proposed approach consistently improves prompt quality across generations while maintaining a stable trade-off between fidelity and diversity. Compared with a single-objective baseline, EVOLMD-MO explores a significantly broader semantic search space and produces more diverse yet semantically coherent synthetic datasets. These results indicate that multi-objective evolutionary prompt learning constitutes a promising strategy for black-box LLM-driven data generation, with potential applicability to adaptive data analytics and real-time decision-support systems in highly dynamic environments, pending broader validation across domains and models.
(This article belongs to the Special Issue Resource Management for AI-Centric Computing Systems)
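The core of the NSGA-II loop described above is non-dominated sorting of candidate prompts by their objective scores. As a minimal illustration (not the authors' implementation, and the example scores are hypothetical), the Pareto ranking step can be sketched in Python, assuming each candidate is scored on fidelity and diversity, both to be maximized:

```python
def dominates(a, b):
    """True if score vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated_sort(scores):
    """Partition candidate indices into Pareto fronts, best front first."""
    remaining = list(range(len(scores)))
    fronts = []
    while remaining:
        # a candidate joins the current front if nothing left dominates it
        front = [i for i in remaining
                 if not any(dominates(scores[j], scores[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical (fidelity, diversity) scores for four candidate prompts
scores = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4)]
```

Here `non_dominated_sort(scores)` yields `[[0, 1, 2], [3]]`: the first three prompts are mutually non-dominated, while the fourth is dominated by prompt 1. Full NSGA-II additionally breaks ties within a front by crowding distance, which this sketch omits.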

33 pages, 736 KB  
Article
Analysis of Chip Electronic Components’ Typical Yield in Taping Process Based on Virtual Metrology
by Shiqi Zhang, Lizhen Chen, Jiangcheng Fu, Chenghu Yang and Guangli Chen
Sensors 2026, 26(8), 2292; https://doi.org/10.3390/s26082292 (registering DOI) - 8 Apr 2026
Abstract
This study addresses virtual metrology (VM) for the taping process of chip electronic components, in which partial observability, unmeasured disturbances, and severe label imbalance make direct batch-wise yield prediction unstable. Rather than proposing a new standalone learning algorithm, we develop a data-centric VM framework that reformulates the task as the prediction of operating-condition-level typical yield. First, physically relevant features are retained based on process knowledge and analyzed using Pearson correlation, Spearman correlation, and mutual information. We then perform multidimensional equal-frequency binning to partition the observable feature space into locally homogeneous operating condition groups, and define the within-bin median yield as the typical yield, thereby constructing an operating condition dictionary. Based on this dictionary-based representation, low-yield-oriented sample weighting is combined with nested cross-validation and Bayesian optimization for model comparison and hyperparameter tuning. Using desensitized production data from an electronic component taping process, the results under this representation show more stable prediction than direct modeling on unbinned batch samples while also improving tail-oriented fitting relative to unweighted baselines. These findings suggest that, for partially observable manufacturing data, operating condition stratification provides a practical basis for stabilizing VM prediction, while low-yield-oriented sample weighting further improves sensitivity to the low-yield tail, supporting yield early warning and process-level decision making.
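The "typical yield" construction above reduces to two steps: cut each feature into equal-frequency bins, then take the within-bin median yield. A minimal one-feature sketch with made-up batch data (the paper bins multiple features jointly, which this does not show):

```python
import bisect
import statistics

def typical_yield(feature, yields, k):
    """Equal-frequency binning of one feature; the within-bin median
    yield serves as the 'typical yield' of each operating-condition group."""
    order = sorted(feature)
    # interior cut points at equal-frequency quantiles
    edges = [order[len(order) * i // k] for i in range(1, k)]
    bins = {}
    for x, y in zip(feature, yields):
        bins.setdefault(bisect.bisect_right(edges, x), []).append(y)
    return {b: statistics.median(ys) for b, ys in sorted(bins.items())}

# Hypothetical batches: one process feature and the observed yield
feature = [1, 2, 3, 4, 5, 6]
yields  = [0.9, 0.8, 0.7, 0.95, 0.6, 0.5]
```

With `k = 2`, `typical_yield(feature, yields, 2)` gives `{0: 0.8, 1: 0.6}`: each bin's median damps the outlying batch (`0.95`) that would destabilize direct batch-wise modeling.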

20 pages, 6792 KB  
Article
PER-TD3 Integrated with HER Mechanism: Improving Training Efficiency and Control Accuracy for PEMFC Differential Pressure Control
by Yuan Li, Baijun Lai, Jing Wang, Yan Sun, Donghai Hu and Hua Ding
World Electr. Veh. J. 2026, 17(4), 195; https://doi.org/10.3390/wevj17040195 - 8 Apr 2026
Abstract
The cathode and anode differential pressure control of a proton exchange membrane fuel cell (PEMFC) directly affects its service life and operating efficiency. Existing control methods struggle to cope with strong nonlinear perturbations, and fixed differential pressure control is prone to pressure overshoot and threshold exceedance, resulting in unstable pressure regulation. To address these problems, a reinforcement learning method based on hybrid experience replay (HP-TD3) is proposed. A CART-based algorithm is first used to classify the states of the test load, and a load-related segmented reward function is designed. In addition, a hindsight experience replay (HER) mechanism is incorporated into the Priority Experience Replay Twin Delayed Deep Deterministic Policy Gradient (PER-TD3) framework to improve sample utilization efficiency and training stability. Finally, the performance of HP-TD3 and its ability to cope with nonlinear disturbances are verified on a fuel cell control unit hardware-in-the-loop (FCU-HIL) platform. In the A test load (frequent switching and a high low-load proportion), the Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and the degradation index of fuel cell dynamic performance (Δfc) of HP-TD3 are reduced by 17.4%, 20.5%, and 13.3%, respectively, compared to P-TD3; in the B test load (high-load operation and low switching frequency), these indicators are reduced by 25.7%, 29.4%, and 15.4%, respectively.
(This article belongs to the Section Storage Systems)
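The HER mechanism mentioned above relabels stored transitions with a goal that was actually achieved, turning otherwise unrewarded episodes into useful training signal. A minimal goal-relabeling sketch under the "final" strategy; the transition format, tolerance, and pressure values are hypothetical, not the authors' code:

```python
def her_relabel(episode, tol=0.01):
    """Relabel a goal-conditioned episode with the finally achieved
    value as the goal ('final' HER strategy), recomputing the sparse
    reward: 0 within tolerance, -1 otherwise."""
    achieved_goal = episode[-1]["achieved"]
    relabeled = []
    for t in episode:
        reward = 0.0 if abs(t["achieved"] - achieved_goal) <= tol else -1.0
        relabeled.append({**t, "goal": achieved_goal, "reward": reward})
    return relabeled

# Hypothetical episode: the target differential pressure 0.5 was never reached,
# so every original reward is -1
episode = [
    {"achieved": 0.10, "goal": 0.5, "reward": -1.0},
    {"achieved": 0.25, "goal": 0.5, "reward": -1.0},
    {"achieved": 0.30, "goal": 0.5, "reward": -1.0},
]
```

After relabeling, the final transition earns reward 0.0 against the new goal 0.30, so the failed episode still teaches the agent how to reach the pressure it did achieve. In PER-TD3 these relabeled transitions would then enter the prioritized replay buffer alongside the originals.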

22 pages, 2073 KB  
Article
TVAE-GAN: A Generative Model for Providing Early Warnings to High-Risk Students in Basic Education and Its Explanation
by Chao Duan, Yiqing Wang, Wenlong Zhang, Zhongtao Yu, Yu Pei, Mingyan Zhang and Qionghao Huang
Information 2026, 17(4), 356; https://doi.org/10.3390/info17040356 - 8 Apr 2026
Abstract
The rapid development of intelligent learning guidance systems has created a favorable environment for personalized learning. By accurately predicting students’ future performance, education can be tailored and teaching strategies optimized. However, traditional prediction algorithms seldom account for highly imbalanced datasets in basic education, overlook temporal factors, and lack further interpretability of the prediction results. To address these shortcomings, we propose the Temporal Variational Autoencoder-Generative Adversarial Network (TVAE-GAN), a model aimed at providing early warnings for high-risk students in basic education, with in-depth interpretability analysis of the prediction results to suit the unique context of basic education. TVAE-GAN extracts features from real samples and introduces a Long Short-Term Memory (LSTM) network to capture dynamic features in time series, helping the model better understand temporal dependencies in the data, remember the sequential causal information of students’ online learning, and achieve better data generation performance. Using these features, the generative model generates new samples, and the discriminator model evaluates their quality, producing outputs that closely resemble real samples through training. The effectiveness of the TVAE-GAN model is validated on a collected online basic education dataset while also advancing the timing of interventions in predictions. The performance differences between the proposed method and classic resampling methods, as well as their impact in the educational field, are analyzed, highlighting that misclassification increases teacher workload and affects students’ emotions. Key influencing factors are identified using a decision-tree surrogate model, providing teachers with multidimensional references for academic assessment.

322 KB  
Proceeding Paper
GNSS Interference Along a Highway near an Aircraft Approach Lane: A 5-Month Study
by Julia I. M. Hauser, Roman Lesjak and Hamid Kavousi Ghafi
Eng. Proc. 2026, 126(1), 46; https://doi.org/10.3390/engproc2026126046 - 7 Apr 2026
Abstract
Intentional and unintentional GNSS interference can greatly affect the performance of precise timing and localization in areas such as automated driving or aviation. Indeed, reports show that jamming occurs near many European airports located close to highways or heavy-industry areas, where interfering signals are broadcast. To assess the impact of such potential risks, we investigated interference occurring on a section of highway located both near an airport and close to logistics centers as part of the Austrian Security Research Program project CATCH-IN. This section of highway is of particular interest, as the highway runs parallel to the approach path of aircraft and crosses the approach path 3.7 km before the aircraft touches down (the flight altitude is only 200 m above the ground). For this experiment, we distributed six Septentrio Mosaic x5 GNSS receivers as sensors along the highway and monitored this section for five months. We analyzed the data with AGC monitoring, C/N0 monitoring, and baseband sample monitoring to identify interference along the highway that could affect sensors along the descending flight trajectory. During the period of this experiment, we observed events that we believe could pose safety risks to aviation. In our analysis, we focused on the statistical evaluation of the temporal repetitions, in particular the times of day that see more interference and the frequencies at which more interference occurs. Additionally, we analyzed the performance of different algorithms for dealing with large datasets. The results provide new insight into potential monitoring stations near airports and raise awareness of potential risks and vulnerabilities in aviation safety as well as automated driving along highways.
(This article belongs to the Proceedings of European Navigation Conference 2025)
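C/N0 monitoring of the kind described typically compares the current carrier-to-noise density against a recent baseline and flags abrupt drops. A simplified sketch; the window length, threshold, and trace values are illustrative assumptions, not the project's detector:

```python
def flag_interference(cn0_db, window=5, drop_db=10.0):
    """Flag epochs where C/N0 falls at least drop_db below the mean of
    the preceding window -- a crude jamming indicator."""
    flags = [False] * len(cn0_db)
    for i in range(window, len(cn0_db)):
        baseline = sum(cn0_db[i - window:i]) / window
        flags[i] = baseline - cn0_db[i] >= drop_db
    return flags

# Hypothetical C/N0 trace (dB-Hz): nominal tracking, then a sudden 15 dB drop
trace = [45.0, 45.2, 44.8, 45.1, 45.0, 30.0]
```

`flag_interference(trace)` flags only the last epoch. A deployed monitor would combine this with AGC and baseband-sample checks, as the paper does, to separate jamming from ordinary signal blockage.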

19 pages, 6970 KB  
Article
Reliability Research of Natural Gas Pipeline Units Based on Mechanistic Modeling
by Huirong Huang, Chen Wu, Jie Zhong, Huishu Liu, Qian Huang, Xueyuan Long, Yuan Tian, Weichao Yu, Shangfei Song and Jing Gong
Processes 2026, 14(7), 1183; https://doi.org/10.3390/pr14071183 - 7 Apr 2026
Abstract
Due to long-term burial underground, oil and gas pipelines are susceptible to external surface corrosion influenced by time and soil conditions, which can lead to leakage and burst failures. Pipeline failure not only results in significant economic losses but also has catastrophic impacts on human safety and the environment. Therefore, modeling and analyzing the corrosion failure of these pipelines is of critical practical importance to ensure their safe operation during service. Addressing the insufficient research on correlation effects in current reliability evaluations of corroded pipelines, this paper proposes a calculation method for the failure probability of corroded oil and gas pipelines that considers the influence of two-layer correlations. Taking a specific segment of the Shaanxi–Beijing pipeline as a case study, the Monte Carlo sampling algorithm is employed to calculate the impact of two-layer correlations and the number of defects on the pipeline’s failure probability. Furthermore, a sensitivity analysis of the correlation coefficients is conducted. The results indicate that the influence of defect correlation on pipeline failure probability is significantly more pronounced than that of random variable correlation. The probabilities of pinhole leakage and burst failure decrease as the correlation coefficient between defects increases, while they increase with the number of defects. Random variable correlation exhibits no impact on pinhole leakage probability; however, the burst failure probability decreases with an increasing correlation coefficient between wall thickness and pipe diameter, but increases as the correlation between initial defect length and depth grows. Furthermore, the correlation coefficient between axial and radial defect growth rates exerts a bidirectional effect on burst failure probability: during the first 25 years of the prediction period, the failure probability increases with the correlation coefficient, whereas it subsequently decreases after approximately 25 years. These findings are applicable to the reliability evaluation of oil and gas pipelines containing multiple corrosion defects, providing valuable technical references for ensuring safe operation and the steady supply of energy resources.
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
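Monte Carlo estimation with correlated random variables can be illustrated with a two-variable Gaussian example: correlation is induced by mixing independent standard normals (the 2-D case of a Cholesky factorization). The limit state below is a generic resistance-minus-load margin with assumed distributions, not the paper's corrosion model:

```python
import math
import random

def failure_probability(n, rho, seed=0):
    """Estimate P(resistance - load < 0) when resistance ~ N(5, 1) and
    load ~ N(3, 1) share correlation rho (2-D Cholesky mixing)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        resistance = 5.0 + z1
        # load correlated with resistance through the shared z1 component
        load = 3.0 + rho * z1 + math.sqrt(1.0 - rho * rho) * z2
        if resistance - load < 0:
            failures += 1
    return failures / n
```

With `rho = 0.5` the margin is N(2, 1), so the true failure probability is Φ(−2) ≈ 0.023, and `failure_probability(20000, 0.5)` lands close to that. Raising `rho` shrinks the margin's variance (2 − 2ρ) and lowers the estimate, mirroring the qualitative effect of correlation on failure probability discussed above.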

22 pages, 1170 KB  
Article
Adverse Drug Reaction Detection on Social Media Based on Large Language Models
by Hao Li and Hongfei Lin
Information 2026, 17(4), 352; https://doi.org/10.3390/info17040352 - 7 Apr 2026
Abstract
Adverse drug reaction (ADR) detection is essential for ensuring drug safety and effective pharmacovigilance. The rapid growth of users’ medication reviews posted on social media has introduced a valuable new data source for ADR detection. However, the large scale and high noise inherent in social media text pose substantial challenges to existing detection methods. Although large language models (LLMs) exhibit strong robustness to noisy and interfering information, they are often limited by issues such as stochastic outputs and hallucinations. To address these challenges, this paper proposes two generative detection frameworks based on Chain of Thought (CoT), namely LLaMA-DetectionADR for Supervised Fine-Tuning (SFT) and DetectionADRGPT for low-resource in-context learning. LLaMA-DetectionADR automatically generates CoT reasoning sequences to construct an instruction tuning dataset, which is then used to fine-tune the LLaMA3-8B model via Quantized Low-Rank Adaptation (QLoRA). In contrast, DetectionADRGPT leverages clustering algorithms to select representative unlabeled samples and enhances in-context learning by incorporating CoT reasoning paths together with their corresponding labels. Experimental results on the Twitter and CADEC social media datasets show that LLaMA-DetectionADR achieves excellent performance, with F1 scores of 92.67% and 86.13%, respectively. Meanwhile, DetectionADRGPT obtains competitive F1 scores of 87.29% and 82.80% with only a few labeled examples, approaching the performance of fully supervised advanced models. The overall results demonstrate the effectiveness and practical value of the proposed CoT-based generative frameworks for ADR detection from social media.
(This article belongs to the Topic Generative AI and Interdisciplinary Applications)
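Cluster-based selection of representative demonstrations, as DetectionADRGPT uses for in-context learning, can be sketched with a toy one-dimensional k-means over review embeddings. The scalar embeddings and initialization scheme here are illustrative assumptions; the paper's actual clustering algorithm and feature space are not specified in this listing:

```python
def kmeans_representatives(xs, k, iters=20):
    """Run 1-D Lloyd's k-means, then return the sample nearest each
    centroid -- the 'representative' examples to place in the prompt."""
    s = sorted(xs)
    # spread initial centroids evenly across the sorted data
    cents = [s[(2 * i + 1) * len(s) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - cents[j]))].append(x)
        cents = [sum(g) / len(g) if g else cents[j]
                 for j, g in enumerate(groups)]
    return [min(xs, key=lambda x: abs(x - c)) for c in cents]

# Hypothetical scalar embeddings forming three loose clusters
xs = [0, 1, 2, 10, 11, 12, 20, 22, 27]
```

`kmeans_representatives(xs, 3)` returns `[1, 11, 22]`: one prototypical sample per cluster, so a few-shot prompt covers the diversity of the corpus instead of near-duplicate examples.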

16 pages, 1033 KB  
Article
Modified Shamir Threshold Scheme for Secure Storage of Biometric Data
by Saule Nyssanbayeva, Nursulu Kapalova and Saltanat Beisenova
Computers 2026, 15(4), 228; https://doi.org/10.3390/computers15040228 - 7 Apr 2026
Abstract
The security of biometric data is a critical challenge in modern information security due to their uniqueness and non-revocability. Compromise of biometric characteristics leads to irreversible consequences; therefore, storing or transmitting them in plaintext is unacceptable. This paper addresses the confidentiality and integrity of fingerprint data using cryptographic protection methods. Considering the specific nature of biometrics, fingerprint features are used only to generate a cryptographic secret rather than being stored directly. To protect the derived secret, a modified threshold secret-sharing scheme based on non-positional polynomial notation and the Chinese Remainder Theorem is proposed. The method generates a cryptographic secret from fingerprint minutiae described by spatial coordinates and ridge orientation. Concatenating minutiae coordinates and converting them into binary form produces a unique value deterministically linked to a specific user. Compared to the classical Shamir scheme, the modified scheme reduces the computational complexity of secret reconstruction from O(n log²n) to O(k log k), decreases data storage requirements by 30–40% through compact polynomial remainders, and increases the rate of successful secret reconstruction by 12–15% in the presence of noise in biometric samples. The results show that the proposed algorithm can be effectively applied in biometric authentication systems to protect personal data in distributed environments. Security analysis confirms resistance to major attack classes and demonstrates practical applicability in real-world systems.
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
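For context, the classical Shamir (k, n) scheme that the paper benchmarks against hides the secret in the constant term of a random degree-(k−1) polynomial over a prime field; any k shares recover it by Lagrange interpolation at zero. This sketch is the textbook baseline only, not the proposed CRT-based variant, and the field modulus and seed are arbitrary choices:

```python
import random

P = 2**61 - 1  # Mersenne prime used as the field modulus

def split(secret, k, n, seed=0):
    """Create n shares (x, p(x)); any k of them reconstruct the secret."""
    rng = random.Random(seed)
    coeffs = [secret % P] + [rng.randrange(1, P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of p at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

With `shares = split(123456789, 3, 5)`, any three of the five shares yield `123456789`, while two reveal nothing about it. The interpolation loop is the O(n log²n)-class reconstruction cost that the modified CRT scheme claims to reduce.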

17 pages, 511 KB  
Article
Homogeneity Test and Sample Size of Relative Risk Ratios for Complex Paired Data Under Dallal’s Model
by Shuman Sun and Zhiming Li
Axioms 2026, 15(4), 268; https://doi.org/10.3390/axioms15040268 - 7 Apr 2026
Abstract
In clinical research, unilateral data and bilateral data are commonly collected when paired organs or body parts of people receive treatment. Existing models are often inadequate for analyzing combined unilateral and bilateral data. Considering population heterogeneity, this paper proposes three statistical tests and sample size estimation methods for the relative risk ratio in stratified unilateral and bilateral data under Dallal’s model. We derive test statistics (i.e., likelihood ratio, Wald-type, and score statistics) and evaluate their performance in terms of type I error rates and powers. Then, sample size determination is performed using an iterative algorithm. Monte Carlo simulations demonstrate that the score test performs well across various parameter configurations. Moreover, the estimated powers for determining sample size based on the score test are closer to the actual empirical powers. Two real examples from otolaryngology and myopathy are provided to illustrate the effectiveness of the proposed methods.
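Iterative sample-size determination of this kind searches for the smallest n whose predicted power reaches the target. A generic sketch for a two-sided two-proportion comparison under the normal approximation; this stands in for the idea only, since the paper's iteration is driven by its score statistic under Dallal's model, and the effect sizes below are hypothetical:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(n, p1, p2, z_alpha=1.959963984540054):
    """Approximate power of a two-sided two-proportion z-test with
    n subjects per group (z_alpha is the 0.975 normal quantile)."""
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return norm_cdf(abs(p1 - p2) / se - z_alpha)

def required_n(p1, p2, target=0.80):
    """Iterate n upward until the predicted power reaches the target."""
    n = 2
    while power(n, p1, p2) < target:
        n += 1
    return n
```

For `p1 = 0.5` against `p2 = 0.3` at 80% power, the iteration stops at 91 per group, agreeing with the closed-form approximation ((z_{α/2} + z_β)²(p₁q₁ + p₂q₂)/Δ² ≈ 90.3). The paper's method replaces `power` with the score-test power under its stratified paired model but keeps the same search loop.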

18 pages, 10375 KB  
Article
Extended Coherent Modulation Imaging for Object Reconstruction with Single Diffraction Pattern
by Yue Wang, Yafang Zou, Ye Wu, Xinke Li, Xibao Gao, Long Jin, Weiyou Zeng, Qinglan Wang and Xi He
Photonics 2026, 13(4), 349; https://doi.org/10.3390/photonics13040349 - 7 Apr 2026
Abstract
Coherent diffraction imaging (CDI) is a fast-growing imaging technique. Among all CDI methods, coherent modulation imaging (CMI) has strong potential for dynamic imaging because of its ability to form an image from a single diffraction pattern. However, current CMI methods mostly reconstruct the exit wave distribution behind the object plane, which is seriously affected by the illumination artifact. Recently, some improved CMI methods have been developed to resolve this problem. However, many of these methods still need two diffraction patterns—one empty-sample diffraction pattern and another snapshot measurement. Recent advances in randomized probe imaging have shown that a single diffraction pattern suffices for quantitative reconstruction when the probe is pre-calibrated. Herein, we propose a modified CMI algorithm that reconstructs the pure object function from a single diffraction pattern, thereby simplifying the experimental process. Moreover, the proposed method can also work in situations where the modulation effect is weak. Both numerical simulations and optical experiments have been conducted to verify the proposed method.

14 pages, 948 KB  
Article
Urinary miRNA Analysis for Clear Cell Renal Cell Carcinoma: miR-20a as a Key Endogenous Normalizer
by Giovanni Cochetti, Giacomo Vannuccini, Matteo Mearini, Alessio Paladini, Francesca Cocci, Raffaele La Mura, Daniele Mirra, Giuseppe Giardino and Ettore Mearini
Int. J. Mol. Sci. 2026, 27(7), 3323; https://doi.org/10.3390/ijms27073323 - 7 Apr 2026
Abstract
Urinary microRNAs (miRNAs) are promising noninvasive biomarkers for cancer detection, but their clinical utility is reduced by inconsistent normalization strategies, which limit reproducibility and comparability across studies. In this study, we assessed the stability of miR-20a as an endogenous normalizer for urinary miRNA profiling in clear cell renal cell carcinoma (ccRCC) while standardizing the pre-analytical phase using a urine stabilizing solution. Ninety-nine urine samples were analyzed: 47 from healthy individuals, 30 from ccRCC patients pre-surgery, and 22 from post-operative patients. Six candidate miRNAs—miR-20a, miR-15b, miR-16, miR-15a, miR-210-3p, and miR-let-7b—were quantified via RT-qPCR. Stability analysis with RefFinder, integrating multiple algorithms (geNorm, NormFinder, BestKeeper, and the ΔCt method), identified miR-20a as the most stable among the six candidates. Raw Ct values of miR-20a were normally distributed (Shapiro–Wilk test, p > 0.05), with no significant intergroup differences (one-way ANOVA, F(2, 96) = 2.324, p = 0.103) and minimal intragroup variability (CV% 4.98–6.38). MiR-20a expression remained stable across different tumor staging, grading, and urine storage durations. These findings confirm miR-20a as a robust endogenous normalizer for urinary miRNA analyses and support the feasibility of developing reproducible urinary liquid biopsy workflows for ccRCC, even in settings where immediate sample processing is not feasible.
(This article belongs to the Special Issue Roles of Non-Coding RNAs in Cancer)
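Of the four RefFinder components, the ΔCt method is the simplest to sketch: a gene is a stable normalizer when its pairwise Ct differences against every other candidate vary little across samples. Illustrative Python with made-up Ct values and gene names, not the study's data:

```python
import statistics

def delta_ct_stability(ct):
    """Mean SD of pairwise delta-Ct per gene across samples;
    a lower score indicates a more stable candidate normalizer."""
    genes = list(ct)
    scores = {}
    for g in genes:
        sds = [statistics.pstdev([a - b for a, b in zip(ct[g], ct[h])])
               for h in genes if h != g]
        scores[g] = sum(sds) / len(sds)
    return scores

# Hypothetical Ct values across three urine samples
ct = {"miR-A": [20, 21, 22], "miR-B": [25, 26, 27], "miR-C": [30, 30, 34]}
```

Here miR-A and miR-B track each other exactly (their ΔCt has zero SD), so both score lower than miR-C, which would be rejected as a normalizer. RefFinder combines this ranking with geNorm, NormFinder, and BestKeeper scores into a geometric-mean consensus.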

15 pages, 1148 KB  
Article
Early Prediction of Well-Being Outcomes in Older Adults Using Explainable AI and Emotional Intelligence Measures
by Evgenia Kouli, Evangelos Bebetsos, Maria Michalopoulou and Filippos Filippou
Appl. Sci. 2026, 16(7), 3586; https://doi.org/10.3390/app16073586 - 7 Apr 2026
Abstract
Background: Well-being in the elderly is shaped by complex emotional and social factors. Early identification of individuals at risk for reduced well-being may support timely preventive or supportive interventions. This study examined whether emotional intelligence indicators collected at baseline can predict well-being status 5 months later using explainable machine learning models. Methods: A cohort of elderly participants aged 60 to 89 years completed emotional intelligence measures at baseline, and well-being was assessed 5 months later using the POMS questionnaire. Four machine learning algorithms, Logistic Regression (LR), Support Vector Machines (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGBoost), were developed using 5-fold stratified cross-validation. Model performance was evaluated through accuracy, precision, recall, F1-score, ROC AUC, and normalized confusion matrices. SHapley Additive exPlanations (SHAP) were applied to interpret the contribution and directionality of each predictor. Results: XGBoost achieved the highest predictive performance (accuracy = 0.789; F1 = 0.778) and demonstrated balanced classification across well-being categories. SVM also performed robustly (accuracy = 0.760), while LR showed reduced sensitivity for detecting those with poorer well-being. SHAP analysis identified self-control, emotionality, sociability, self-motivation, and well-being components as the most influential predictors. Lower emotionality, higher sociability, and higher self-control scores were linked to a greater probability of favorable well-being outcomes. Conclusions: The findings demonstrate the feasibility of using explainable machine learning models to predict 5-month well-being status within this sample of older adults using emotional intelligence indicators. XGBoost provided the strongest and most balanced performance, while SHAP analysis clarified how specific emotional intelligence dimensions influenced predictions. These findings suggest that interpretable machine learning approaches may support future efforts toward early recognition of older adults who may be at risk for reduced well-being and guide personalized intervention strategies.

15 pages, 1808 KB  
Article
Investigation of the Prevalence of Associated Genetic Mutations (Co-Mutations) in Patients with Actionable Driver Mutations in Lung Cancer: A Retrospective Study
by Abed Agbarya, Walid Shalata, Edmond Sabo, Leonard Saiegh, Yuval Shaham, Haitam Nasrallah, Kamel Mhameed, Salam Mazareb, Mohammad Sheikh-Ahmad and Dan Levy Faber
Diagnostics 2026, 16(7), 1106; https://doi.org/10.3390/diagnostics16071106 - 7 Apr 2026
Abstract
Background/Objectives: Lung cancer remains the leading cause of cancer-related mortality globally. Approximately 45% of these tumors harbor oncogenic mutations that drive carcinogenesis and are amenable to targeted therapies. Other predictive biomarkers—e.g., PD-L1, TMB, and MSI—play a crucial role in patients’ management. This study aims to investigate the existence of mutation clusters (co-mutations) and evaluate the correlation of these clusters with various clinical and laboratory parameters. Methods: A retrospective study was conducted utilizing pathological samples from lung cancer patients harboring mutations in EGFR, KRAS, ALK, BRAF, MET, HER2, ROS1, NTRK, and NRG1. Data were collected from the Institute of Pathology at Carmel Medical Center between the years 2022 and 2024. Patients were stratified using a Two-Step Cluster Analysis algorithm based on actionable mutations and co-mutations. Heatmaps and dendrograms were generated to assess the correlation between these genomic clusters, clinical metrics, and predictive biomarkers. Results: The study cohort included 129 patients with actionable mutations. Five distinct clusters were identified: Clusters 1, 2, and 3 exhibited a high expression of STK11 and TP53 co-mutations alongside KRAS drivers (n = 38, n = 12, and n = 23, respectively). Clusters 4 and 5 demonstrated high expression of ALK alterations and tumor suppressor gene mutations (n = 31 and n = 25, respectively). Cluster comparisons demonstrated statistically significant differences between clusters regarding age, gender, PD-L1 expression, and tumor mutational burden. No significant associations were found regarding ethnicity or microsatellite instability status. Conclusions: By constructing clusters based on the aggregate of genomic alterations in patients with actionable mutations, it is possible to predict associations with distinct demographic and clinical characteristics. Future research should apply this analytical approach to larger cohorts to further characterize these subgroups and investigate potential correlations with therapeutic efficacy.
(This article belongs to the Special Issue Advancements and Innovations in the Diagnosis of Lung Cancer)
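The stratification step described in the abstract can be sketched in code. This is only a minimal illustration: the study used a Two-Step Cluster Analysis algorithm (as implemented in statistical packages such as SPSS), which is approximated here with k-means on a synthetic binary mutation matrix; the gene list, mutation rates, and all data below are illustrative, not the study's actual data.

```python
# Hypothetical sketch: clustering patients by binary co-mutation profiles.
# k-means stands in for the Two-Step Cluster Analysis used in the study.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
genes = ["EGFR", "KRAS", "ALK", "BRAF", "MET", "HER2", "ROS1", "NTRK",
         "NRG1", "TP53", "STK11"]  # drivers plus co-mutated suppressors

# Synthetic 129-patient mutation matrix (1 = mutation present), matching
# the cohort size reported in the abstract.
X = (rng.random((129, len(genes))) < 0.3).astype(float)

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
labels = km.labels_               # cluster assignment per patient
sizes = np.bincount(labels)      # cluster sizes (cf. n = 38, 12, 23, 31, 25)
```

Each row is one patient's mutation profile; clusters group patients with similar co-mutation patterns, which can then be cross-tabulated against clinical variables such as age, PD-L1, or TMB.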
15 pages, 3734 KB  
Article
An SVM-Based High-Precision Reconstruction Algorithm for High-Power Laser Beam Spots with Large Divergence Angles
by Wenrong Mo, Bin Li, Jianxin Wang, Cai Wen, Youlin Wang and Awais Tabassum
Optics 2026, 7(2), 26; https://doi.org/10.3390/opt7020026 - 7 Apr 2026
Abstract
Lasers are a key enabling technology across numerous engineering and scientific fields, especially in high-energy laser systems for defense, materials processing, and fusion research, where precise characterization of high-power, large-divergence-angle laser spots is critical. However, the inherent properties of such lasers—large spot area and strong intensity contrast—pose real obstacles to existing methods, which often suffer from low accuracy and inefficiency. In this paper, a flat-field correction technique is proposed for the CCD to reduce distortions produced by the non-uniform response of the sensor in spot measurements. A spot recognition algorithm based on support vector machines is then developed, which can effectively and accurately locate and identify laser spots with limited training samples and computational resources, achieving a classification accuracy above 98.11%. Additionally, an efficient correction approach is proposed to assess spot intensity and shape with high accuracy even at large tilt angles. Experimental results show that the proposed approach measures high-power, large-divergence-angle laser spots precisely and efficiently, remarkably improving both measurement precision and operational efficiency.
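The SVM classification idea can be illustrated with a toy sketch, assuming simple per-region intensity features. The feature choice (mean intensity and local contrast) and all numbers here are placeholder assumptions, not the paper's actual pipeline or its reported 98.11% figure.

```python
# Illustrative SVM sketch: separating laser-spot regions from background
# using hypothetical 2-D intensity features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic features: [mean intensity, local contrast] per image patch
spot = rng.normal([0.8, 0.6], 0.05, size=(100, 2))  # spot patches
bg = rng.normal([0.2, 0.1], 0.05, size=(100, 2))    # background patches
X = np.vstack([spot, bg])
y = np.array([1] * 100 + [0] * 100)

clf = SVC(kernel="rbf").fit(X, y)
acc = clf.score(X, y)  # training accuracy on the synthetic patches
```

An RBF-kernel SVM is a natural fit when, as the abstract notes, only limited training samples and computational resources are available, since it learns a nonlinear decision boundary from few examples.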
23 pages, 18571 KB  
Article
Data-Driven Modeling and Response Prediction of Cut-Out Type Piezoelectric Beams
by Mingli Bian, Wenan Jiang and Qinsheng Bi
Micromachines 2026, 17(4), 450; https://doi.org/10.3390/mi17040450 - 6 Apr 2026
Abstract
To address the insufficient accuracy of theoretical models for cut-out type piezoelectric beams with limiters under contact-impact nonlinearity, this study uses a backpropagation (BP) neural network to develop a data-driven modeling approach based on experimental data from a subset of distance parameters. The approach aims to accurately predict the output voltage and displacement responses of the energy harvester. For different combinations of the limiter gap distance d and installation distance a, amplitude–frequency response data were first systematically collected through experiments, along with time–voltage response data for different load resistances. From these data, a training sample set was constructed, and a multi-layer BP neural network prediction model was established with frequency or time as the input and voltage and displacement responses as the outputs. Validation against experimental data demonstrated that the BP neural network can accurately extrapolate the amplitude–frequency response curves of voltage and displacement under various distance parameter combinations, as well as accurately predict transient voltage outputs under different load conditions.
(This article belongs to the Special Issue MEMS/NEMS Devices and Applications, 4th Edition)
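The input-to-response mapping described in the abstract can be sketched as a small multi-layer network. This is a minimal stand-in, not the authors' model: the frequency range, the resonance-shaped voltage curve, and the network size below are all invented for illustration.

```python
# Minimal BP-network sketch: a small MLP regressor mapping excitation
# frequency to output voltage, trained on synthetic resonance-like data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
freq = np.linspace(5.0, 25.0, 200).reshape(-1, 1)  # Hz (illustrative)
x = (freq - 15.0) / 10.0  # normalize input for stable training

# Synthetic amplitude-frequency curve with a peak near 15 Hz
volt = np.exp(-((freq - 15.0) ** 2) / 8.0).ravel()

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(x, volt)
pred = mlp.predict(x)
mse = float(np.mean((pred - volt) ** 2))  # fit quality on training data
```

In the study, such a network would be trained separately (or with extra inputs) for each (d, a) parameter combination, with measured voltage and displacement as the targets.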