Search Results (1,291)

Search Parameters:
Keywords = widely linear processing

12 pages, 687 KB  
Article
Collateral Status Evaluation Using CT Angiography and Perfusion Source Images in Acute Stroke Patients
by Heitor C. B. R. Alves, Bruna G. Dutra, Vivian Gagliardi, Rubens J. Gagliardi, Felipe T. Pacheco, Antonio C. M. Maia and Antônio J. da Rocha
Brain Sci. 2025, 15(10), 1092; https://doi.org/10.3390/brainsci15101092 - 9 Oct 2025
Abstract
Background/Objectives: Single-phase CT angiography (sCTA) is widely used to assess collateral circulation in acute ischemic stroke, but its static nature can lead to an underestimation of collateral flow. Our study aimed to develop and validate a direct, qualitative dynamic CTA (dCTA) collateral score based on CTP source images, without the need for post-processing software, to provide a more accurate prognostic tool. Methods: We retrospectively analyzed 112 patients with anterior circulation ischemic stroke from a prospective registry who underwent non-contrast CT, sCTA, and CTP within 8 h of onset. Collateral circulation was graded using a 4-point sCTA score and our novel 4-point dCTA score, which incorporates temporal filling patterns. We used linear regression to compare the association of both scores with CTP-derived core/hypoperfusion volumes, infarct growth, and final infarct volume. Results: The dCTA method frequently reclassified patients with poor collaterals on sCTA to good collaterals on dCTA (n = 23), while the reverse was rare (n = 5). A better collateral score was significantly associated with smaller core volume for both sCTA and dCTA, but the dCTA score demonstrated a superior model fit (R2 = 0.36 vs. 0.32). Similar superior correlations for dCTA were observed for hypoperfusion, infarct growth, and final infarct volumes. Critically, only the dCTA score significantly modified the association between core volume and time since stroke onset (p for interaction = 0.04). Conclusions: A collateral score derived from CTP source images (dCTA) offers a more reliable prediction of infarct lesion sizes and progression than conventional sCTA. By incorporating temporal resolution without requiring extra software, dCTA provides a robust correlation with stroke temporal evolution and represents a readily implementable tool to enhance patient selection in acute stroke. Full article
(This article belongs to the Special Issue Stroke: Epidemiology, Diagnosis, Etiology, Treatment, and Prevention)
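
A minimal sketch of the kind of fit comparison described above: simple linear regressions of core volume on each 4-point collateral score, compared by R². The data below are synthetic stand-ins; the variable names and values are illustrative, not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 112                                    # cohort size reported in the abstract
score_sCTA = rng.integers(0, 4, n)         # 4-point collateral scores (synthetic)
score_dCTA = rng.integers(0, 4, n)
core_volume = 60 - 12 * score_dCTA + rng.normal(0, 15, n)  # synthetic outcome, mL

for name, score in [("sCTA", score_sCTA), ("dCTA", score_dCTA)]:
    X = score.reshape(-1, 1).astype(float)
    r2 = LinearRegression().fit(X, core_volume).score(X, core_volume)
    print(f"{name}: R^2 = {r2:.2f}")       # the paper reports 0.32 vs. 0.36
```
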
14 pages, 4733 KB  
Article
Microstructural Stability and Densification Behavior of Cantor-Type High-Entropy Alloy Processed by Spark Plasma Sintering
by Marcin Madej, Beata Leszczyńska-Madej, Anna Kopeć-Surzyn, Paweł Nieroda and Stanislav Rusz
Materials 2025, 18(19), 4625; https://doi.org/10.3390/ma18194625 - 7 Oct 2025
Abstract
High-entropy alloys (HEAs) of the Cantor type (CoCrFeMnNi) are widely recognized as model systems for studying the relationships between composition, microstructure, and functional performance. In this study, atomized Cantor alloy powders were consolidated using spark plasma sintering (SPS) under systematically varied process parameters (temperature and dwell time). The densification behavior, microstructural evolution, and mechanical response were investigated using Archimedes’ density measurements, Vickers hardness testing, compression tests, scanning electron microscopy, and EDS mapping. The results reveal a non-linear relationship between sintering temperature and densification, with maximum relative densities obtained at 1050 °C and 1100 °C for short dwell times. Despite the ultrafast nature of SPS, grain growth was observed, particularly at elevated temperatures and extended dwell times, challenging the assumption that SPS inherently limits grain coarsening. All sintered samples retained a single-phase FCC structure with homogeneous elemental distribution, and no phase segregation or secondary precipitates were detected. Compression testing showed that samples sintered at 1060 °C and 1100 °C exhibited the highest strength, demonstrating the strong interplay between sintering kinetics and grain cohesion. Full article
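
Relative density, the densification metric quoted above, follows from Archimedes' principle. A minimal sketch, assuming a water temperature near 22 °C and an approximate theoretical density of 8.0 g/cm3 for equiatomic CoCrFeMnNi (both assumptions, as are the sample masses):

```python
RHO_WATER = 0.9978           # g/cm^3 near 22 degC (assumed)
RHO_THEORETICAL = 8.0        # g/cm^3, approximate for CoCrFeMnNi (assumed)

def archimedes_density(mass_air_g, mass_water_g, rho_water=RHO_WATER):
    """Density from dry mass and apparent mass suspended in water."""
    return mass_air_g / (mass_air_g - mass_water_g) * rho_water

rho = archimedes_density(12.40, 10.85)    # illustrative masses
print(f"density = {rho:.2f} g/cm^3, relative = {rho / RHO_THEORETICAL:.1%}")
```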

15 pages, 1974 KB  
Article
A Flexible Electrochemical Sensor Based on Porous Ceria Hollow Microspheres Nanozyme for Sensitive Detection of H2O2
by Jie Huang, Xuanda He, Shuang Zou, Keying Ling, Hongying Zhu, Qijia Jiang, Yuxuan Zhang, Zijian Feng, Penghui Wang, Xiaofei Duan, Haiyang Liao, Zheng Yuan, Yiwu Liu and Jinghua Tan
Biosensors 2025, 15(10), 664; https://doi.org/10.3390/bios15100664 - 2 Oct 2025
Abstract
The development of cost-effective and highly sensitive hydrogen peroxide (H2O2) biosensors with robust stability is critical due to the pivotal role of H2O2 in biological processes and its broad utility across various applications. In this work, porous ceria hollow microspheres (CeO2-phm) were synthesized via a solvothermal method and employed in the construction of an electrochemical biosensor for H2O2 detection. The resulting CeO2-phm featured a uniform pore size centered at 3.4 nm and a high specific surface area of 168.6 m2/g. These structural attributes contribute to an increased number of active catalytic sites and promote efficient electrolyte penetration and charge transport, thereby enhancing electrochemical sensing performance. When integrated into screen-printed carbon electrodes, the resulting CeO2-phm/cMWCNTs/SPCE biosensor exhibited a wide linear detection range from 0.5 to 450 μM, a low detection limit of 0.017 μM, and high sensitivities of 2070.9 and 2161.6 μA·mM−1·cm−2—surpassing the performance of many previously reported H2O2 sensors. The biosensor also possesses excellent anti-interference performance, repeatability, reproducibility, and stability, and its effectiveness was further validated through successful application in real sample analysis. Hence, solvothermally synthesized CeO2-phm holds great promise as a sensing material for the quantitative determination of H2O2. Full article
(This article belongs to the Special Issue Advances in Nanozyme-Based Biosensors)
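
A minimal sketch of the calibration arithmetic behind figures like those quoted above, assuming the usual conventions (sensitivity = calibration slope normalized by electrode area, LOD = 3σ/slope); the electrode area, blank noise, and calibration points are all assumptions, not the paper's data.

```python
import numpy as np

conc_uM = np.array([0.5, 10, 50, 100, 200, 450])   # H2O2 standards, uM (assumed)
current_uA = 0.146 * conc_uM                       # idealized amperometric response

slope, _ = np.polyfit(conc_uM, current_uA, 1)      # uA per uM
area_cm2 = 0.0707                                  # assumed working-electrode area
sensitivity = slope * 1000 / area_cm2              # uA per mM per cm^2
lod_uM = 3 * 8e-4 / slope                          # assumed blank sigma of 0.8 nA
print(f"sensitivity ~ {sensitivity:.0f} uA/mM/cm^2, LOD ~ {lod_uM:.3f} uM")
```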

26 pages, 3841 KB  
Article
Comparison of Regression, Classification, Percentile Method and Dual-Range Averaging Method for Crop Canopy Height Estimation from UAV-Based LiDAR Point Cloud Data
by Pai Du, Jinfei Wang and Bo Shan
Drones 2025, 9(10), 683; https://doi.org/10.3390/drones9100683 - 1 Oct 2025
Abstract
Crop canopy height is a key structural indicator that is strongly associated with crop development, biomass accumulation, and crop health. To overcome the limitations of time-consuming and labor-intensive traditional field measurements, Unmanned Aerial Vehicle (UAV)-based Light Detection and Ranging (LiDAR) offers an efficient alternative by capturing three-dimensional point cloud data (PCD). In this study, UAV-LiDAR data were acquired using a DJI Matrice 600 Pro equipped with a 16-channel LiDAR system. Multiple canopy height estimation approaches were evaluated across three crop types: corn, soybean, and winter wheat. Specifically, this study assessed machine learning regression modeling, ground point classification techniques, a percentile-based method, and a newly proposed Dual-Range Averaging (DRA) method to identify the most effective approach while ensuring practicality and reproducibility. The best-performing method for corn was Support Vector Regression (SVR) with a linear kernel (R2 = 0.95, RMSE = 0.137 m). For soybean, the DRA method yielded the highest accuracy (R2 = 0.93, RMSE = 0.032 m). For winter wheat, the PointCNN deep learning model demonstrated the best performance (R2 = 0.93, RMSE = 0.046 m). These results highlight the effectiveness of integrating UAV-LiDAR data with optimized processing methods for accurate and widely applicable crop height estimation in support of precision agriculture practices. Full article
(This article belongs to the Special Issue UAV Agricultural Management: Recent Advances and Future Prospects)
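
A minimal sketch of the percentile idea: canopy height taken as the spread between a high canopy percentile and a low ground percentile of a plot's point-cloud z-values. The percentile choices and the synthetic plot are illustrative, not the paper's settings.

```python
import numpy as np

def canopy_height_percentile(z, ground_pct=2, canopy_pct=98):
    """Estimate canopy height from the z-values of one plot's point cloud."""
    ground = np.percentile(z, ground_pct)   # proxy for the ground surface
    canopy = np.percentile(z, canopy_pct)   # proxy for the top of the canopy
    return canopy - ground

z = np.random.default_rng(2).uniform(0.0, 2.6, 5000)  # synthetic corn plot, m
print(f"estimated height ~ {canopy_height_percentile(z):.2f} m")
```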

20 pages, 14676 KB  
Article
Optimal and Model Predictive Control of Single Phase Natural Circulation in a Rectangular Closed Loop
by Aitazaz Hassan, Guilherme Ozorio Cassol, Syed Abuzar Bacha and Stevan Dubljevic
Sustainability 2025, 17(19), 8807; https://doi.org/10.3390/su17198807 - 1 Oct 2025
Abstract
Pipeline systems are essential across various industries for transporting fluids over a wide range of distances. A notable application is natural circulation through thermo-syphoning, driven by temperature-induced density variations that generate fluid flow in closed loops. This passive mechanism is widely employed in sectors such as process engineering, oil and gas, geothermal energy, solar water heating, and fertilizers. Natural Circulation Loops eliminate the need for mechanical pumps. While this passive mechanism reduces energy consumption and maintenance costs, maintaining stability and efficiency under varying operating conditions remains a challenge. This study investigates thermo-syphoning in a rectangular closed-loop system and develops optimal control strategies, namely a Linear Quadratic Regulator (LQR) and Model Predictive Control (MPC), to ensure stable and efficient heat removal while explicitly addressing physical constraints. The results demonstrate that MPC improves system stability and, through optimized control actions, reduces the initial energy requirement by nearly one-third. Compared to the LQR and unconstrained MPC, MPC with active constraints effectively manages input limitations, ensuring safer and more practical operation. With its predictive capability and adaptability, the proposed MPC framework offers a robust, scalable solution for real-time industrial applications, supporting the development of sustainable and adaptive natural circulation pipeline systems. Full article
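
A minimal sketch of the LQR half of the comparison on a generic two-state linearized model; the A, B, Q, R values are placeholders, not the paper's thermo-syphon model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # assumed linearized loop dynamics
B = np.array([[0.0], [1.0]])               # heater input channel (assumed)
Q = np.diag([10.0, 1.0])                   # state weights
R = np.array([[0.1]])                      # input weight

P = solve_continuous_are(A, B, Q, R)       # solve the Riccati equation
K = np.linalg.inv(R) @ B.T @ P             # optimal state feedback: u = -K x
print("LQR gain K =", K)
```

MPC would replace the fixed gain K with a finite-horizon optimization re-solved at every step, which is where input constraints can be enforced explicitly.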

34 pages, 4605 KB  
Article
Forehead and In-Ear EEG Acquisition and Processing: Biomarker Analysis and Memory-Efficient Deep Learning Algorithm for Sleep Staging with Optimized Feature Dimensionality
by Roberto De Fazio, Şule Esma Yalçınkaya, Ilaria Cascella, Carolina Del-Valle-Soto, Massimo De Vittorio and Paolo Visconti
Sensors 2025, 25(19), 6021; https://doi.org/10.3390/s25196021 - 1 Oct 2025
Abstract
Advancements in electroencephalography (EEG) technology and feature extraction methods have paved the way for wearable, non-invasive systems that enable continuous sleep monitoring outside clinical environments. This study presents the development and evaluation of an EEG-based acquisition system for sleep staging, which can be adapted for wearable applications. The system utilizes a custom experimental setup with the ADS1299EEG-FE-PDK evaluation board to acquire EEG signals from the forehead and in-ear regions under various conditions, including visual and auditory stimuli. Afterward, the acquired signals were processed to extract a wide range of features in time, frequency, and non-linear domains, selected based on their physiological relevance to sleep stages and disorders. The feature set was reduced using the Minimum Redundancy Maximum Relevance (mRMR) algorithm and Principal Component Analysis (PCA), resulting in a compact and informative subset of principal components. Experiments were conducted on the Bitbrain Open Access Sleep (BOAS) dataset to validate the selected features and assess their robustness across subjects. The feature set extracted from a single EEG frontal derivation (F4-F3) was then used to train and test a two-step deep learning model that combines Long Short-Term Memory (LSTM) and dense layers for 5-class sleep stage classification, utilizing attention and augmentation mechanisms to mitigate the natural imbalance of the feature set. The results—overall accuracies of 93.5% and 94.7% using the reduced feature sets (94% and 98% cumulative explained variance, respectively) and 97.9% using the complete feature set—demonstrate the feasibility of obtaining a reliable classification using a single EEG derivation, mainly for unobtrusive, home-based sleep monitoring systems. Full article
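
A minimal sketch of the feature-reduction stage, with a univariate relevance filter standing in for mRMR, followed by PCA truncated at 94% cumulative explained variance; the feature matrix is synthetic.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 60))            # epochs x EEG features (synthetic)
y = rng.integers(0, 5, 1000)               # 5 sleep-stage labels (synthetic)

X_sel = SelectKBest(f_classif, k=30).fit_transform(X, y)  # relevance filter
pca = PCA(n_components=0.94).fit(X_sel)    # keep 94% cumulative variance
print(f"{pca.n_components_} components retain "
      f"{pca.explained_variance_ratio_.sum():.0%} of the variance")
```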

22 pages, 3031 KB  
Article
Study of the Effect of Accelerated Ageing on the Properties of Selected Hyperelastic Materials
by Marcin Konarzewski and Jakub Henryk Kotkowski
Appl. Sci. 2025, 15(19), 10620; https://doi.org/10.3390/app151910620 - 30 Sep 2025
Abstract
Hyperelastic materials, which include various types of rubber, are widely used in industry (such as the automotive industry). Their main disadvantage is the loss of their original properties over time due to environmental factors (called ageing). The ageing process is long-lasting, which is why so-called accelerated ageing is used when studying the effect of ageing on material properties. Accelerated ageing is realized with a higher intensity of the ageing agent, e.g., by irradiating specimens with UV radiation or by holding them at elevated temperatures. In the literature, there is a lack of parameters for constitutive models that take into account the effect of ageing on material properties. This paper presents the process of determining the parameters for a selected constitutive model using two commonly used rubbers in industry: chloroprene (CR) and ethylene-propylene-diene (EPDM). Before determining the material parameters, the samples were subjected to accelerated ageing at 100 °C for periods of 7, 21, and 35 days. Stress–strain curves were then determined from a tensile test and the parameters of the constitutive model were determined using the non-linear least squares method. Finally, numerical validation of the obtained values was also carried out. Full article
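
A minimal sketch of the fitting step using a two-parameter Mooney–Rivlin form, chosen here only for illustration since the abstract does not name the constitutive model; the stress–strain data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def mooney_rivlin_uniaxial(stretch, c10, c01):
    """Nominal uniaxial stress of an incompressible Mooney-Rivlin solid."""
    return 2.0 * (stretch - stretch**-2) * (c10 + c01 / stretch)

stretch = np.linspace(1.0, 2.0, 50)
stress = mooney_rivlin_uniaxial(stretch, 0.4, 0.1)        # MPa, synthetic "data"
stress += np.random.default_rng(4).normal(0, 0.005, 50)   # measurement noise

(c10, c01), _ = curve_fit(mooney_rivlin_uniaxial, stretch, stress, p0=(0.3, 0.05))
print(f"C10 = {c10:.3f} MPa, C01 = {c01:.3f} MPa")
```

Refitting this to the curves measured after 7, 21, and 35 days of ageing would give the parameter drift the study is after.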

30 pages, 769 KB  
Article
Mathematical Generalization of Kolmogorov-Arnold Networks (KAN) and Their Variants
by Fray L. Becerra-Suarez, Ana G. Borrero-Ramírez, Edwin Valencia-Castillo and Manuel G. Forero
Mathematics 2025, 13(19), 3128; https://doi.org/10.3390/math13193128 - 30 Sep 2025
Abstract
Neural networks have become a fundamental tool for solving complex problems, from image processing and speech recognition to time series prediction and large-scale data classification. However, traditional neural architectures suffer from interpretability problems due to their opaque representations and lack of explicit interaction between linear and nonlinear transformations. To address these limitations, Kolmogorov–Arnold Networks (KAN) have emerged as a mathematically grounded approach capable of efficiently representing complex nonlinear functions. Based on the principles established by Kolmogorov and Arnold, KAN offer an alternative to traditional architectures, mitigating issues such as overfitting and lack of interpretability. Despite their solid theoretical basis, practical implementations of KAN face challenges, such as optimal function selection and computational efficiency. This paper provides a systematic review that goes beyond previous surveys by consolidating the diverse structural variants of KAN (e.g., Wavelet-KAN, Rational-KAN, MonoKAN, Physics-KAN, Linear Spline KAN, and Orthogonal Polynomial KAN) into a unified framework. In addition, we emphasize their mathematical foundations, compare their advantages and limitations, and discuss their applicability across domains. From this review, three main conclusions can be drawn: (i) spline-based KAN remain the most widely used due to their stability and simplicity, (ii) rational and wavelet-based variants provide greater expressivity but introduce numerical challenges, and (iii) emerging approaches such as Physics-KAN and automatic basis selection open promising directions for scalability and interpretability. These insights provide a benchmark for future research and practical implementations of KAN. Full article
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)
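
For context, the Kolmogorov–Arnold superposition theorem underlying KAN states that any continuous multivariate function on a bounded domain decomposes into univariate functions and addition:

```latex
f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \psi_{q,p}(x_p) \right)
```

The variants surveyed above differ chiefly in the basis (splines, wavelets, rational or orthogonal polynomials) used to parameterize the learned univariate functions ψ_{q,p} and Φ_q.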

37 pages, 905 KB  
Review
Application of Fuzzy Logic Techniques in Solar Energy Systems: A Review
by Siviwe Maqekeni, KeChrist Obileke, Odilo Ndiweni and Patrick Mukumba
Appl. Syst. Innov. 2025, 8(5), 144; https://doi.org/10.3390/asi8050144 - 30 Sep 2025
Abstract
Fuzzy logic has been applied to a wide range of problems, including process control, object recognition, image and signal processing, prediction, classification, decision-making, optimization, and time series analysis, all of which arise in solar energy systems. Experts in renewable energy favor fuzzy logic techniques because they can illustrate risk factors and describe solar energy data using linguistic variables, supporting the decision-making process. In solar energy systems, the primary beneficiaries of fuzzy logic techniques are solar energy policy makers, who apply them to decision-making models, the ranking of criteria or weights, and the assessment of potential locations for solar energy plants, depending on the case. In real-world scenarios, fuzzy logic also allows easy and efficient controller configuration in non-linear control systems such as solar panels. This study reviews the role and contribution of fuzzy logic in solar energy based on its applications. The findings reveal that fuzzy logic is applied to identify and detect faults in solar energy systems, to optimize energy output, and to select locations for solar energy plants. In addition, fuzzy models (prediction), hybrid models (performance simulation), and multi-criteria decision-making (MCDM) are components of fuzzy logic techniques that, as the review indicates, offer solutions to the challenges of solar energy systems. Importantly, integrating fuzzy logic with neural networks is recommended for the efficient and effective performance of solar energy systems. Full article
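
A minimal sketch of the basic fuzzy building block: a triangular membership function mapping a crisp measurement to a degree of membership in [0, 1]. The irradiance breakpoints are assumptions.

```python
def tri_membership(x, a, b, c):
    """Triangle rising from a to a peak at b, then falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which 650 W/m^2 counts as "medium" irradiance (breakpoints assumed)
print(tri_membership(650.0, 300.0, 600.0, 900.0))
```

A fuzzy controller combines many such memberships through if-then rules and defuzzifies the result into a crisp actuation.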

26 pages, 1076 KB  
Article
NL-COMM: Enabling High-Performing Next-Generation Networks via Advanced Non-Linear Processing
by Chathura Jayawardena, George Ntavazlis Katsaros and Konstantinos Nikitopoulos
Future Internet 2025, 17(10), 447; https://doi.org/10.3390/fi17100447 - 30 Sep 2025
Abstract
Future wireless networks are expected to deliver enhanced spectral efficiency while being energy efficient. MIMO and other non-orthogonal transmission schemes, such as non-orthogonal multiple access (NOMA), offer substantial theoretical spectral efficiency gains. However, these gains have yet to translate into practical deployments, largely due to limitations in current signal processing methods. Linear transceiver processing, though widely adopted, fails to fully exploit non-orthogonal transmissions, forcing massive MIMO systems to use a disproportionately large number of RF chains for relatively few streams, increasing power consumption. Non-linear processing can unlock the full potential of non-orthogonal schemes but is hindered by high computational complexity and integration challenges. Moreover, existing message-passing receivers for NOMA depend on specially designed sparse signals, limiting resource allocation flexibility and efficiency. This work presents NL-COMM, an efficient non-linear processing framework that translates the theoretical gains of non-orthogonal transmissions into practical benefits for both the uplink and downlink. NL-COMM delivers over 200% spectral efficiency gains, enables 50% reductions in antennas and RF chains (and thus base station power consumption), and increases concurrently supported users by 450%. In distributed MIMO deployments, the antenna reduction halves fronthaul bandwidth requirements, mitigating a key system bottleneck. Furthermore, NL-COMM offers the flexibility to unlock new NOMA schemes. Finally, we present both hardware and software architectures for NL-COMM that support massively parallel execution, demonstrating how advanced non-linear processing can be realized in practice to meet the demands of next-generation networks. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)
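
A toy contrast between linear and non-linear detection on a 2x2 BPSK channel: zero-forcing versus exhaustive maximum likelihood. NL-COMM's detector is far more sophisticated, so this only illustrates the principle that non-linear search can recover what linear inversion loses.

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
H = rng.normal(size=(2, 2))                 # channel matrix
x = np.array([1.0, -1.0])                   # transmitted BPSK symbols
y = H @ x + rng.normal(0, 0.3, 2)           # noisy observation

x_zf = np.sign(np.linalg.pinv(H) @ y)       # linear: invert the channel, slice
x_ml = min(itertools.product([-1.0, 1.0], repeat=2),
           key=lambda s: np.linalg.norm(y - H @ np.array(s)))  # non-linear search
print("ZF:", x_zf, " ML:", np.array(x_ml))
```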

24 pages, 3701 KB  
Article
Optimization of Genomic Breeding Value Estimation Model for Abdominal Fat Traits Based on Machine Learning
by Hengcong Chen, Dachang Dou, Min Lu, Xintong Liu, Cheng Chang, Fuyang Zhang, Shengwei Yang, Zhiping Cao, Peng Luan, Yumao Li and Hui Zhang
Animals 2025, 15(19), 2843; https://doi.org/10.3390/ani15192843 - 29 Sep 2025
Abstract
Abdominal fat is a key indicator of chicken meat quality. Excessive deposition not only reduces meat quality but also decreases feed conversion efficiency, making the breeding of low-abdominal-fat strains economically important. Genomic selection (GS) uses information from genome-wide association studies (GWASs) and high-throughput sequencing data. It estimates genomic breeding values (GEBVs) from genotypes, which enables early and precise selection. Given that abdominal fat is a polygenic trait controlled by numerous small-effect loci, this study combined population genetic analyses with machine learning (ML)-based feature selection. Relevant single-nucleotide polymorphisms (SNPs) were first identified using a combined GWAS and linkage disequilibrium (LD) approach, followed by a two-stage feature selection process—Lasso for dimensionality reduction and recursive feature elimination (RFE) for refinement—to generate the model input set. We evaluated multiple machine learning models for predicting GEBVs. The results showed that linear models and certain nonlinear models achieved higher accuracy and were well suited as base learners for ensemble methods. Building on these findings, we developed a Dynamic Adaptive Weighted Stacking Ensemble Learning Framework (DAWSELF), which applies dynamic weighting and voting to heterogeneous base learners and integrates them layer by layer, with Ridge serving as the meta-learner. In three independent validation populations, DAWSELF consistently outperformed individual models and conventional stacking frameworks in prediction accuracy. This work establishes an efficient GEBV prediction framework for complex traits such as chicken abdominal fat and provides a reusable SNP feature selection strategy, offering practical value for enhancing the precision of poultry breeding and improving product quality. Full article
(This article belongs to the Section Animal Genetics and Genomics)
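
A minimal sketch of two ideas named above, Lasso screening of SNPs and a stacked ensemble with a Ridge meta-learner, using plain scikit-learn stacking on synthetic genotypes; DAWSELF's dynamic weighting and voting are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(6)
X = rng.integers(0, 3, (300, 500)).astype(float)   # SNP genotypes coded 0/1/2
y = X[:, :20] @ rng.normal(0, 0.3, 20) + rng.normal(0, 1.0, 300)  # polygenic trait

keep = np.abs(Lasso(alpha=0.01, max_iter=5000).fit(X, y).coef_) > 0  # screening
stack = StackingRegressor(
    estimators=[("ridge", Ridge()), ("svr", SVR()),
                ("rf", RandomForestRegressor(n_estimators=100, random_state=0))],
    final_estimator=Ridge(),                       # Ridge meta-learner, as named
).fit(X[:, keep], y)
print("SNPs kept:", int(keep.sum()),
      " in-sample R^2:", round(stack.score(X[:, keep], y), 2))
```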

20 pages, 2894 KB  
Article
Statistical Learning-Assisted Evolutionary Algorithm for Digital Twin-Driven Job Shop Scheduling with Discrete Operation Sequence Flexibility
by Yan Jia, Weiyao Cheng, Leilei Meng and Chaoyong Zhang
Symmetry 2025, 17(10), 1614; https://doi.org/10.3390/sym17101614 - 29 Sep 2025
Abstract
With the rapid development of Industry 5.0, smart manufacturing has become a key focus in production systems. Hence, achieving efficient planning and scheduling on the shop floor is important, especially in job shop environments, which are widely encountered in manufacturing. However, traditional job shop scheduling problems (JSP) assume fixed operation sequences, whereas in modern production, some operations exhibit sequence flexibility, referred to as sequence-free operations. To bridge this gap, this paper studies the JSP with discrete operation sequence flexibility (JSPDS), aiming to minimize the makespan. To effectively solve the JSPDS, a mixed-integer linear programming model is formulated to solve small-scale instances, verifying multiple optimal solutions. To enhance solution quality for larger instances, a digital twin (DT)–enhanced initialization method is proposed, which captures expert knowledge from a high-fidelity virtual workshop to generate a high-quality initial population. In addition, a statistical learning-assisted local search method is developed, employing six tailored search operators and Thompson sampling to adaptively select promising operators during the evolutionary algorithm (EA) process. Extensive experiments demonstrate that the proposed DT-statistical learning EA (DT-SLEA) significantly improves scheduling performance compared with state-of-the-art algorithms, highlighting the effectiveness of integrating digital twin and statistical learning techniques for shop scheduling problems. Specifically, in the Wilcoxon test, pairwise comparisons with the other algorithms show that DT-SLEA has p-values below 0.05. Meanwhile, the proposed framework provides guidance on utilizing symmetry to improve optimization in complex manufacturing systems. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Operations Research)
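
A minimal sketch of Thompson sampling over a pool of search operators, with Beta posteriors tracking each operator's improvement rate; the six operators and their success probabilities are stand-ins for the paper's tailored operators.

```python
import random

n_ops = 6
wins = [1] * n_ops      # Beta(1, 1) priors over improvement probability
losses = [1] * n_ops

def pick_operator():
    """Sample each posterior once and play the operator with the best draw."""
    draws = [random.betavariate(wins[i], losses[i]) for i in range(n_ops)]
    return max(range(n_ops), key=lambda i: draws[i])

for _ in range(100):                               # one run, schematically
    op = pick_operator()
    improved = random.random() < 0.2 + 0.05 * op   # stand-in for applying op
    wins[op] += improved                           # update the Beta posterior
    losses[op] += not improved
print("posterior means:", [round(w / (w + l), 2) for w, l in zip(wins, losses)])
```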

15 pages, 2039 KB  
Article
Optimising Multimodal Image Registration Techniques: A Comprehensive Study of Non-Rigid and Affine Methods for PET/CT Integration
by Babar Ali, Mansour M. Alqahtani, Essam M. Alkhybari, Ali H. D. Alshehri, Mohammad Sayed and Tamoor Ali
Diagnostics 2025, 15(19), 2484; https://doi.org/10.3390/diagnostics15192484 - 28 Sep 2025
Abstract
Background/Objective: Multimodal image registration plays a critical role in modern medical imaging, enabling the integration of complementary modalities such as positron emission tomography (PET) and computed tomography (CT). This study compares the performance of three widely used image registration techniques—Demons Image Registration with Modality Transformation, Free-Form Deformation using the Medical Image Registration Toolbox (MIRT), and MATLAB Intensity-Based Registration—in terms of improving PET/CT image alignment. Methods: A total of 100 matched PET/CT image slices from a clinical scanner were analysed. Preprocessing techniques, including histogram equalisation and contrast enhancement (via imadjust and adapthisteq), were applied to minimise intensity discrepancies. Each registration method was evaluated under varying parameter conditions with regard to sigma fluid (range 4–8), histogram bins (100 to 256), and interpolation methods (linear and cubic). Performance was assessed using quantitative metrics: root mean square error (RMSE), mean squared error (MSE), mean absolute error (MAE), the Pearson correlation coefficient (PCC), and standard deviation (STD). Results: Demons registration achieved optimal performance at a sigma fluid value of 6, with an RMSE of 0.1529, and demonstrated superior computational efficiency. The MIRT showed better adaptability to complex anatomical deformations, with an RMSE of 0.1725. MATLAB Intensity-Based Registration, when combined with contrast enhancement, yielded the highest accuracy (RMSE = 0.1317 at alpha = 6). Preprocessing improved registration accuracy, reducing the RMSE by up to 16%. Conclusions: Each registration technique has distinct advantages: the Demons algorithm is ideal for time-sensitive tasks, the MIRT is suited to precision-driven applications, and MATLAB-based methods offer flexible processing for large datasets. This study provides a foundational framework for optimising PET/CT image registration in both research and clinical environments. Full article
(This article belongs to the Special Issue Diagnostics in Oncology Research)
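
A minimal sketch of the four agreement metrics named above (RMSE, MSE, MAE, PCC) computed between a reference slice and a registered slice; the arrays here are synthetic, not PET/CT data.

```python
import numpy as np

rng = np.random.default_rng(7)
fixed = rng.random((128, 128))                    # reference slice (synthetic)
moving = fixed + rng.normal(0, 0.05, (128, 128))  # "registered" slice (synthetic)

err = moving - fixed
mse = np.mean(err**2)
pcc = np.corrcoef(fixed.ravel(), moving.ravel())[0, 1]
print(f"RMSE={np.sqrt(mse):.4f}  MSE={mse:.4f}  "
      f"MAE={np.abs(err).mean():.4f}  PCC={pcc:.4f}")
```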

14 pages, 10382 KB  
Article
A Low-Power, Wide-DR PPG Readout IC with VCO-Based Quantizer Embedded in Photodiode Driver Circuits
by Haejun Noh, Woojin Kim, Yongkwon Kim, Seok-Tae Koh and Hyuntak Jeon
Electronics 2025, 14(19), 3834; https://doi.org/10.3390/electronics14193834 - 27 Sep 2025
Abstract
This work presents a low-power photoplethysmography (PPG) readout integrated circuit (IC) that achieves a wide dynamic range (DR) through the direct integration of a voltage-controlled oscillator (VCO)-based quantizer into the photodiode driver. Conventional PPG readout circuits rely on either transimpedance amplifier (TIA) or light-to-digital converter (LDC) topologies, both of which require auxiliary DC suppression loops. These additional loops not only raise power consumption but also limit the achievable DR. The proposed design eliminates the need for such circuits by embedding a linear regulator with a mirroring scale calibrator and a time-domain quantizer. The quantizer provides first-order noise shaping, enabling accurate extraction of the AC PPG signal while the regulator directly handles the large DC current component. Post-layout simulations show that the proposed readout achieves a signal-to-noise-and-distortion ratio (SNDR) of 40.0 dB at 10 µA DC current while consuming only 0.80 µW from a 2.5 V supply. The circuit demonstrates excellent stability across process–voltage–temperature (PVT) corners and maintains high accuracy over a wide DC current range. These features, combined with a compact silicon area of 0.725 mm2 using TSMC 250 nm bipolar–CMOS–DMOS (BCD) process, make the proposed IC an attractive candidate for next-generation wearable and biomedical sensing platforms. Full article
(This article belongs to the Special Issue CMOS Integrated Circuits Design)
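
A rough behavioral sketch (an assumption-laden model, not the circuit) of why a VCO-based quantizer first-order noise-shapes: the input steers the oscillation frequency, phase integrates that frequency, and counting whole cycles per clock quantizes the integral, so quantization error carries over between samples.

```python
import numpy as np

fs, f_center, kvco = 1e5, 2e4, 1e4      # sample rate, rest frequency, VCO gain (assumed)
t = np.arange(2000) / fs
vin = 0.5 * np.sin(2 * np.pi * 50 * t)  # slow "PPG-like" test tone (synthetic)

phase = np.cumsum(2 * np.pi * (f_center + kvco * vin) / fs)   # integrate frequency
counts = np.diff(np.floor(phase / (2 * np.pi)), prepend=0.0)  # cycles per sample
print("mean count per sample ~ f_center/fs:", round(float(counts.mean()), 3))
```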

16 pages, 1595 KB  
Article
Real-Time FTIR-ATR Spectroscopy for Monitoring Ethanolysis: Spectral Evaluation, Regression Modelling, and Molecular Insight
by Jakub Husar, Lubomir Sanek and Jiri Pecha
Int. J. Mol. Sci. 2025, 26(19), 9381; https://doi.org/10.3390/ijms26199381 - 25 Sep 2025
Abstract
As the demand for biodiesel continues to rise, there is a pressing need for efficient and continuous monitoring of the transesterification reaction at the industrial level. However, there is a lack of straightforward online monitoring methods capable of accurately following the course of ethanolysis under various reaction conditions. In this work, simple linear regression (SLR) and multiple linear regression (MLR) models were developed to assess Fourier transform infrared spectroscopy (FTIR) data from a continuous flow cell, enabling real-time ethanolysis monitoring without sample pretreatment. Gas chromatography (GC) was utilised as the reference method to accurately characterise the reaction mixture’s composition during ethanolysis. Extensive correlation analysis was performed to identify spectral regions where the reaction system’s state changes are observable. The identified regions were subsequently used to develop the linear regression models. This novel approach gave the simple linear regression model performance comparable to that of a complex partial least squares (PLS) regression model (RMSEP = 2.11). The developed online monitoring system was validated over a wide range of reaction conditions (40–60 °C; 0.25–1.0% w/w NaOH); it effectively identifies dynamic changes in the ethanolysis process and confirms that the ester content threshold set by EU regulation is reached directly in the production process. Full article
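
A minimal sketch of the SLR-versus-MLR comparison scored by RMSEP, using synthetic absorbance bands correlated with a reference ester content; the band choices and data are illustrative, not the paper's spectra.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
ester = rng.uniform(0, 100, 200)                    # reference ester content, % (GC)
bands = np.column_stack(                            # three correlated bands
    [0.01 * ester + rng.normal(0, 0.05, 200) for _ in range(3)])

Xtr, Xte, ytr, yte = train_test_split(bands, ester, random_state=0)
for name, cols in [("SLR", [0]), ("MLR", [0, 1, 2])]:
    m = LinearRegression().fit(Xtr[:, cols], ytr)
    rmsep = np.sqrt(np.mean((m.predict(Xte[:, cols]) - yte) ** 2))
    print(f"{name}: RMSEP = {rmsep:.2f}")
```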
