Search Results (1,376)

Search Parameters:
Keywords = binning

16 pages, 399 KB  
Article
Breast Immunology Network: Toward a Multidisciplinary and Integrated Model for Breast Cancer Care in Italy
by Andrea Botticelli, Ovidio Brignoli, Francesco Caruso, Giuseppe Curigliano, Vincenzo Di Lauro, Carla Masini, Mario Taffurelli and Giuseppe Viale
Cancers 2025, 17(18), 3089; https://doi.org/10.3390/cancers17183089 - 22 Sep 2025
Viewed by 153
Abstract
Background: Breast cancer is the most common female cancer in Italy. Despite better survival rates, significant disparities in access to diagnosis, treatment, and follow-up persist across regions. We propose an integrated, multidisciplinary care model—the Breast Immunology Network (BIN)—to address these challenges. Methods: The model was developed through a two-phase expert consultation with key opinion leaders and stakeholders, aligned with national and European oncology guidelines. No new patient data were collected; this is a qualitative analysis based on expert consensus and existing literature. The proposed model integrates a Hub-and-Spoke cancer network structure with fully functioning multidisciplinary teams (MDTs), standardized care pathways (PDTA), and digital tools to ensure continuity of care. Results: Experts identified critical gaps in Italy’s breast cancer care: limited access to specialized centers, inconsistent adherence to screening programs, and delays in treatment initiation. The proposed BIN model aims to bridge these gaps by enhancing collaboration across all care levels, incorporating immunotherapy where appropriate, and defining key performance indicators (KPIs) for continuous quality evaluation. For example, quantitative targets include achieving ≥65% nationwide mammography screening adherence and ensuring ≥90% of patients are treated in certified Breast Units. Conclusions: The Breast Immunology Network offers a strategic framework to improve equity, quality, and timeliness of breast cancer care in Italy. Importantly, unlike existing Hub–Spoke or CCCN models, the BIN formalizes governance tools, harmonized eligibility criteria, and a national registry for immunotherapy. By uniting Breast Units and community services under shared governance, and by integrating innovations such as immunotherapy and telemedicine, the BIN model could significantly improve clinical outcomes and ensure more equitable care for all patients. 
Its implementation may serve as a reference model for other health systems seeking to optimize oncology pathways through multidisciplinary integration and advanced treatments. Full article
(This article belongs to the Section Cancer Immunology and Immunotherapy)

13 pages, 5006 KB  
Article
Enhancing Heart Rate Detection in Vehicular Settings Using FMCW Radar and SCR-Guided Signal Processing
by Ashwini Kanakapura Sriranga, Qian Lu and Stewart Birrell
Sensors 2025, 25(18), 5885; https://doi.org/10.3390/s25185885 - 20 Sep 2025
Viewed by 255
Abstract
This paper presents an optimised signal processing framework for contactless physiological monitoring using Frequency Modulated Continuous Wave (FMCW) radar within automotive environments. This research focuses on enhancing heart rate (HR) and heart rate variability (HRV) detection from radar signals by integrating radar placement optimisation and advanced phase-based processing techniques. Optimal radar placement was evaluated through Signal-to-Clutter Ratio (SCR) analysis, conducted with multiple human participants in both laboratory and dynamic driving simulator experimental conditions, to determine the optimal in-vehicle location for signal acquisition. An effective processing pipeline was developed, incorporating background subtraction, range bin selection, bandpass filtering, and phase unwrapping. These techniques facilitated the reliable extraction of inter-beat intervals and heartbeat peaks from the phase signal without the need for contact-based sensors. The framework was evaluated using a Walabot FMCW radar module against ground truth HR signals, demonstrating consistent and repeatable results under baseline and mild motion conditions. In subsequent work, this framework was extended with deep learning methods, where radar-derived HR and HRV were benchmarked against research-grade ECG and achieved over 90% accuracy, further reinforcing the robustness and reliability of the approach. Together, these findings confirm that carefully guided radar positioning and robust signal processing can enable accurate and practical in-cabin physiological monitoring, offering a scalable solution for integration in future intelligent vehicle and driver monitoring systems. Full article
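The phase-processing chain described above (background subtraction, range bin selection, band-pass filtering, phase unwrapping, peak extraction) can be sketched in a few lines. This is an illustrative reconstruction on synthetic data, not the authors' pipeline; the band limits, sampling rate, and the FFT-mask filter are assumptions.

```python
import numpy as np

def estimate_hr(phase, fs, band=(0.8, 2.0)):
    """Estimate heart rate (bpm) from a radar phase signal: unwrap the
    phase, remove the mean (crude background subtraction), band-pass via
    an FFT mask, and pick the dominant spectral peak in the cardiac band."""
    x = np.unwrap(phase)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spec = np.abs(np.fft.rfft(x))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak = freqs[mask][np.argmax(spec[mask])]
    return 60.0 * peak  # Hz -> beats per minute

# Synthetic example: 1.2 Hz cardiac motion sampled at 50 Hz for 30 s.
fs, hr_hz = 50.0, 1.2
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
phase = 0.5 * np.sin(2 * np.pi * hr_hz * t) + 0.05 * rng.standard_normal(t.size)
print(estimate_hr(phase, fs))  # ~72 bpm
```

In a real radar pipeline the phase series would come from the range bin with the highest signal-to-clutter ratio rather than from a simulated sinusoid.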

15 pages, 2376 KB  
Article
Dry Iron Ore Fluidization, Flowability, and Handling: Supporting Dry Processing of Iron Ores and Guiding Industrial Designing
by Benito Barbabela e Silva, Anderson de Araújo Soares, Monica Guimarães Vieira, Rogério Ruiz, Arthur Pinto Chaves and Maurício Guimarães Bergerman
Minerals 2025, 15(9), 998; https://doi.org/10.3390/min15090998 - 19 Sep 2025
Viewed by 359
Abstract
Renewed interest in dry processing has arisen due to challenges in water management. In dry iron ore beneficiation, flowability of bulk solids is a key concern, leading to issues like plugging and clogging in bins and chutes. Handling also faces challenges from environmental regulations, particularly regarding dust emissions. Enclosed conveyor technologies, such as air-assisted conveyors that use fluidization, offer effective solutions, though the maximum particle size that can be conveyed is a limitation that needs consideration. This paper examines how the size and chemical composition of bulk iron ore materials affect their handling behavior. By employing Geldart’s and Jenike’s methods, this document provides technical parameters and recommendations, which are lacking in the current literature, for the design of dry processing plants. Findings indicate that ultrafines can have a flow function as low as 2.05, indicating cohesive behavior even when dried, while fluidization tests support these characteristics. In contrast, coarser fractions behave as free-flowing materials. Samples with a top size of 0.5 mm fall between sand-like and spoutable groups in Geldart’s classification. Denser materials did not fluidize, while less dense ones did. Thus, air slides should avoid handling materials at this threshold and focus on finer materials. This paper offers guidance for designing dry processing plants to address handling bottlenecks. Full article
(This article belongs to the Section Mineral Processing and Extractive Metallurgy)
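For context, Jenike's flow function ffc (major consolidation stress divided by unconfined yield strength) is conventionally mapped to flowability classes. A minimal classifier using the standard class boundaries, which places the study's ffc = 2.05 ultrafines in the cohesive class (boundaries from the common Jenike convention, not taken from this paper):

```python
def jenike_class(ffc):
    """Classify bulk-solid flowability by Jenike's flow function ffc
    using the conventional class boundaries."""
    if ffc < 1:
        return "not flowing"
    if ffc < 2:
        return "very cohesive"
    if ffc < 4:
        return "cohesive"
    if ffc < 10:
        return "easy-flowing"
    return "free-flowing"

print(jenike_class(2.05))  # the study's ultrafines -> "cohesive"
print(jenike_class(12.0))  # a coarse fraction      -> "free-flowing"
```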

14 pages, 3062 KB  
Article
Self-Supervised Monocular Depth Estimation Based on Differential Attention
by Ming Zhou, Hancheng Yu, Zhongchen Li and Yupu Zhang
Algorithms 2025, 18(9), 590; https://doi.org/10.3390/a18090590 - 19 Sep 2025
Viewed by 178
Abstract
Depth estimation algorithms are widely applied in various fields, including 3D reconstruction, autonomous driving, and industrial robotics. Monocular self-supervised algorithms for depth prediction offer a cost-effective alternative to acquiring depth through hardware devices such as LiDAR. However, current depth prediction networks, predominantly based on conventional encoder–decoder architectures, often encounter two critical limitations: insufficient feature fusion mechanisms during the upsampling phase and constrained receptive fields. These limitations result in the loss of high-frequency details in the predicted depth maps. To overcome these issues, we introduce differential attention operators to enhance global feature representation and refine locally upsampled features within the depth decoder. Furthermore, we equip the decoder with a deformable bin-structured prediction head; this lightweight design enables per-pixel dynamic aggregation of local depth distributions via adaptive receptive field modulation and deformable sampling, enhancing the decoder’s fine-grained detail processing by capturing local geometry and holistic structures. Experimental results on the KITTI and Make3D datasets demonstrate that our proposed method produces more accurate depth maps with finer details compared to existing approaches. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (3rd Edition))

11 pages, 301 KB  
Article
Thermodynamics of Observations
by Arno Keppens and Jean-Christopher Lambert
Entropy 2025, 27(9), 968; https://doi.org/10.3390/e27090968 - 17 Sep 2025
Viewed by 192
Abstract
This work demonstrates that the four laws of classical thermodynamics apply to the statistics of symmetric observation distributions, and provides examples of how this can be exploited in uncertainty assessments. First, an expression for the partition function Z is derived. In contrast with general classical thermodynamics, however, this can be performed without the need for variational calculus, while Z also equals the number of observations N directly. Apart from the partition function Z = N as a scaling factor, three state variables m, n, and ϵ fully statistically characterize the observation distribution, corresponding to its expectation value, degrees of freedom, and random error, respectively. Each term in the first law of thermodynamics is then shown to be a variation on δm² = δ(nϵ)² for both canonical (constant n and ϵ) and macro-canonical (constant ϵ) observation ensembles, while micro-canonical ensembles correspond to a single observation result bin having δm² = 0. This view enables the improved fitting and combining of observation distributions, capturing both measurand variability and measurement precision. Full article
(This article belongs to the Section Multidisciplinary Applications)

26 pages, 4906 KB  
Article
Real-Time Sequential Adaptive Bin Packing Based on Second-Order Dual Pointer Adversarial Network: A Symmetry-Driven Approach for Balanced Container Loading
by Zibao Zhou, Enliang Wang and Xuejian Zhao
Symmetry 2025, 17(9), 1554; https://doi.org/10.3390/sym17091554 - 17 Sep 2025
Viewed by 309
Abstract
Modern logistics operations require real-time adaptive solutions for three-dimensional bin packing that maintain spatial symmetry and load balance. This paper introduces a time-series-based online 3D packing problem with dual unknown sequences, where containers and items arrive dynamically. The challenge lies in achieving symmetric distribution for stability and optimal space utilization. We propose the Second-Order Dual Pointer Adversarial Network (So-DPAN), a deep reinforcement learning architecture that leverages symmetry principles to decompose spatiotemporal optimization into sequence matching and spatial arrangement sub-problems. The dual pointer mechanism enables efficient item-container pairing, while the second-order structure captures temporal dependencies by maintaining symmetric packing patterns. Our approach considers geometric symmetry for spatial arrangement and temporal symmetry for sequence matching. The Actor-Critic framework uses symmetry-based reward functions to guide learning toward balanced configurations. Experiments demonstrate that So-DPAN outperforms DQN, DDPG, and traditional heuristics in solution quality and efficiency while maintaining superior symmetry metrics in center-of-gravity positioning and load distribution. The algorithm exploits inherent symmetries in packing structure, advancing theoretical understanding through symmetry-aware optimization while providing a deployable framework for Industry 4.0 smart logistics. Full article
(This article belongs to the Section Mathematics)
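As a point of reference for the traditional heuristics this paper benchmarks against, a one-dimensional first-fit-decreasing baseline is sketched below; the paper's setting is three-dimensional and online, so this is illustrative only.

```python
def first_fit_decreasing(items, capacity):
    """1-D first-fit-decreasing: sort items largest first, place each
    into the first open bin with room, opening a new bin when none fits."""
    bins = []      # remaining capacity per open bin
    packing = []   # items placed in each bin
    for size in sorted(items, reverse=True):
        for i, free in enumerate(bins):
            if size <= free:
                bins[i] -= size
                packing[i].append(size)
                break
        else:
            bins.append(capacity - size)
            packing.append([size])
    return packing

result = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(result)  # [[8, 2], [4, 4, 1, 1]] -> two bins
```

Online variants such as the one studied here cannot sort first, which is precisely why learned policies can beat such offline heuristics on streaming inputs.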

19 pages, 3707 KB  
Article
An NMR Metabolomics Analysis Pipeline for Human Neutrophil Samples with Limited Source Material
by Grace Filbertine, Genna A. Abdullah, Lucy Gill, Rudi Grosman, Marie M. Phelan, Direkrit Chiewchengchol, Nattiya Hirankarn and Helen L. Wright
Metabolites 2025, 15(9), 612; https://doi.org/10.3390/metabo15090612 - 15 Sep 2025
Viewed by 355
Abstract
Background/Objectives: Untargeted ¹H NMR metabolomics is a robust and reproducible approach used to study metabolism in biological samples, providing unprecedented insight into altered cellular processes associated with human diseases. Metabolomics is increasingly used alongside other techniques to detect instantaneous alterations in cellular function, for example, the role of neutrophils in the inflammatory response. However, in some clinical settings, blood samples may be limited, restricting the amount of cellular material available for a metabolomic analysis. In this study, we wanted to establish an optimal 1D ¹H NMR metabolomic pipeline for use with human neutrophil samples with low amounts of input material. Methods: We compared the effect of different neutrophil isolation protocols on metabolite profiles. We also compared the effect of the absolute cell counts (100,000 to 5,000,000) on the identities of metabolites that were detected with an increasing number of scans (NS) from 256 to 2048. Results/Conclusions: The variance in the neutrophil profile was equivalent between the isolation methods, and the choice of isolation method did not significantly alter the metabolite profile. The minimum number of cells required for the detection of neutrophil metabolites was 400,000 at an NS of 256 for the spectra acquired with a cryoprobe (700 MHz). Increasing the NS to 2048 increased metabolite detection at the very lowest cell counts (<400,000 neutrophils); however, this was associated with a significant increase in the analysis time, which would be rate-limiting for large studies. The application of a correlation-reliability-score-filtering method to the spectral bins preserved the essential discriminatory features of the PLS-DA models whilst improving the dataset robustness and analytical precision. Full article
(This article belongs to the Special Issue NMR-Based Metabolomics in Biomedicine and Food Science)
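Spectral binning of the kind referenced above — integrating intensity over fixed ppm windows before multivariate analysis — can be sketched as follows; the 0.04 ppm width and the toy spectrum are assumptions, not the study's parameters.

```python
def bin_spectrum(ppm, intensity, width=0.04):
    """Uniform binning of a 1D NMR spectrum: sum intensity within
    consecutive ppm windows of fixed width (a common preprocessing
    step before multivariate analysis such as PLS-DA)."""
    lo, hi = min(ppm), max(ppm)
    n_bins = int((hi - lo) / width) + 1
    bins = [0.0] * n_bins
    for p, y in zip(ppm, intensity):
        bins[min(int((p - lo) / width), n_bins - 1)] += y
    return bins

# Toy spectrum: two peaks at 1.33 and 3.05 ppm on a 0.01 ppm grid.
ppm = [i * 0.01 for i in range(400)]   # 0.00 .. 3.99 ppm
intensity = [0.0] * 400
intensity[133] = 5.0
intensity[305] = 3.0
binned = bin_spectrum(ppm, intensity)
print(sum(binned), max(binned))        # total intensity is preserved
```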

26 pages, 1213 KB  
Article
A Hybrid Symmetry Strategy Improved Binary Planet Optimization Algorithm with Theoretical Interpretability for the 0-1 Knapsack Problem
by Yang Yang
Symmetry 2025, 17(9), 1538; https://doi.org/10.3390/sym17091538 - 15 Sep 2025
Viewed by 190
Abstract
The Planet Optimization Algorithm (POA) is a meta-heuristic inspired by celestial mechanics, drawing on Newtonian gravitational principles to simulate planetary dynamics in optimization search spaces. While the POA performs strongly in continuous domains, it is not directly applicable to binary search spaces; we therefore propose an Improved Binary Planet Optimization Algorithm (IBPOA) tailored to the classical 0-1 knapsack problem (0-1 KP). Building upon the POA, the IBPOA introduces a novel improved transfer function (ITF) and a greedy repair operator (GRO). Unlike general binarization methods, the ITF integrates theoretical foundations from branch-and-bound (B&B) and reduction algorithms, reducing the search space while guaranteeing optimal solutions. This improvement is strengthened further through the incorporation of the GRO, which significantly improves the searching capability. Extensive computational experiments on large-scale instances demonstrate the IBPOA’s effectiveness for the 0-1 KP, showing superior performance in convergence rate, population diversity, and exploration–exploitation balance. The results from 30 independent runs confirm that the IBPOA consistently obtains the optimal solutions across all 15 benchmark instances, spanning three categories. Wilcoxon’s rank-sum tests against seven state-of-the-art algorithms reveal that the IBPOA significantly outperforms all competitors (p<0.05), though it is occasionally matched in solution quality by the binary reptile search algorithm (BinRSA). Crucially, the IBPOA achieves solutions 4.16 times faster than the BinRSA on average, establishing an optimal balance between solution quality and computational efficiency. Full article
(This article belongs to the Special Issue Symmetry in Intelligent Algorithms)
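A greedy repair operator for the 0-1 KP commonly takes the following form: drop selected items with the worst profit/weight ratio until the solution is feasible, then greedily re-add any item that still fits. This is a generic sketch, not necessarily the paper's exact GRO.

```python
def greedy_repair(x, profits, weights, capacity):
    """Repair a binary knapsack solution: remove selected items with the
    lowest profit/weight ratio until feasible, then greedily add any
    remaining item that still fits (highest ratio first)."""
    order = sorted(range(len(x)), key=lambda i: profits[i] / weights[i])
    x = list(x)
    load = sum(w for w, s in zip(weights, x) if s)
    for i in order:                      # worst ratio first
        if load <= capacity:
            break
        if x[i]:
            x[i], load = 0, load - weights[i]
    for i in reversed(order):            # best ratio first
        if not x[i] and load + weights[i] <= capacity:
            x[i], load = 1, load + weights[i]
    return x

profits, weights, cap = [10, 7, 4, 6], [5, 4, 3, 2], 7
print(greedy_repair([1, 1, 1, 1], profits, weights, cap))  # [1, 0, 0, 1]
```

Repair operators like this let a metaheuristic search freely over infeasible bitstrings while still evaluating only feasible solutions.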

27 pages, 11645 KB  
Article
Structural Design and Parameter Optimization of In-Row Deep Fertilizer Application Device for Maize
by Shengxian Wu, Zihao Dou, Shulong Fei, Feng Shi, Xinbo Zhang, Ze Liu and Dongyan Huang
Agriculture 2025, 15(18), 1934; https://doi.org/10.3390/agriculture15181934 - 12 Sep 2025
Viewed by 337
Abstract
To enhance the stability and consistency of topdressing depth during maize fertilization, an inter-row deep fertilizer application unit was designed. Through analysis of the coherence between subsurface pressure and topdressing depth stability obtained from stability performance tests, structural optimizations were implemented on the deep application unit. This resulted in an integrated vibration damping device incorporating a magnetorheological damper (MR damper fertilizer application unit). The MR damper fertilizer application unit was validated through simulation testing. Using an orthogonal experimental design approach, soil bin tests were conducted to identify the preferred parameter ensemble for this unit. Subsequent field trials under these optimized parameters enabled comparative performance evaluation of both fertilizer application units under actual operating conditions. The simulation results indicated that the MR damper fertilizer application unit achieved reductions in the standard deviation of the gauge wheel’s force on the ground by 39.6%, 41.0%, and 44.6% at three distinct operational speeds, respectively. The soil bin tests identified the optimal operational parameters as follows: MR damper current of 0.6 A, vibration damping system spring stiffness of 8 N/mm, and a working speed of 7.2 km/h. Field testing results indicated that, when utilizing the optimal parameters, the MR damper fertilizer application unit achieved a 6.9% improvement in the rate of qualified topdressing depth and a 3.8% reduction in the depth variation coefficient compared to the conventional deep fertilizer application unit. Compared to traditional fertilizer applicators, this study effectively addresses issues of poor fertilization depth uniformity and low qualification rates caused by severe gauge wheel bouncing due to uneven terrain during field operations. Full article
(This article belongs to the Section Agricultural Technology)

17 pages, 749 KB  
Article
Probing the Cosmic Distance Duality Relation via Non-Parametric Reconstruction for High Redshifts
by Felipe Avila, Fernanda Oliveira, Camila Franco, Maria Lopes, Rodrigo Holanda, Rafael C. Nunes and Armando Bernui
Universe 2025, 11(9), 307; https://doi.org/10.3390/universe11090307 - 9 Sep 2025
Viewed by 392
Abstract
We test the validity of the cosmic distance duality relation (CDDR) by combining angular diameter distance and luminosity distance measurements from recent cosmological observations. For the angular diameter distance, we use data from transverse baryon acoustic oscillations and galaxy clusters. On the other hand, the luminosity distance is obtained from Type Ia supernovae in the Pantheon+ sample and from quasar catalogs. To reduce the large dispersion in quasar luminosity distances, we apply a selection criterion based on their deviation from the ΛCDM model and implement a binning procedure to suppress statistical noise. We reconstruct the CDDR using Gaussian Processes, a non-parametric supervised machine learning method. Our results show no significant deviation from the CDDR within the 2σ confidence level across the redshift range explored, supporting its validity even at high redshifts. Full article
(This article belongs to the Special Issue Universe: Feature Papers 2024—'Cosmology')
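A binning procedure of the kind described — suppressing statistical noise by combining measurements per redshift bin — is often implemented as an inverse-variance weighted mean. A sketch under that assumption (the authors' exact scheme may differ; the numbers below are illustrative, not from the paper):

```python
def binned_weighted_mean(z, d, sigma, edges):
    """Inverse-variance weighted mean of distance measurements d with
    uncertainties sigma, grouped into redshift bins given by edges.
    Returns (mean, error) per bin; (None, None) for empty bins."""
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        w_sum = m_sum = 0.0
        for zi, di, si in zip(z, d, sigma):
            if lo <= zi < hi:
                w = 1.0 / si ** 2
                w_sum += w
                m_sum += w * di
        out.append((m_sum / w_sum, w_sum ** -0.5) if w_sum else (None, None))
    return out

z = [0.1, 0.15, 0.4, 0.45]
d = [450.0, 470.0, 2100.0, 2140.0]     # illustrative distances (Mpc)
sigma = [10.0, 20.0, 50.0, 50.0]
print(binned_weighted_mean(z, d, sigma, edges=[0.0, 0.3, 0.6]))
```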

19 pages, 7987 KB  
Article
A Local Thresholding Algorithm for Image Segmentation by Using Gradient Orientation Histogram
by Lijie Dong, Kailong Zhang, Mingyue He, Shenxin Zhong and Congjie Ou
Appl. Sci. 2025, 15(17), 9808; https://doi.org/10.3390/app15179808 - 7 Sep 2025
Viewed by 571
Abstract
This paper proposes a new local thresholding method to further explore the relationship between gradients and image patterns. In most studies, the image gradient histogram is simply divided into K bins that have the same intervals in angular space. Such empirical approaches may not fully capture the correlation information between pixels. In this paper, a variance-based idea is applied to the gradient orientation histogram. It clusters pixels into subsets with different angular intervals. Analyzing these subsets, each sharing similar common patterns, helps achieve the optimal thresholds for image segmentation. For the result assessments, the proposed algorithm is compared with other 1-D and 2-D histogram-based thresholding methods, as well as hybrid local–global thresholding methods. It is shown that the proposed algorithm can effectively recognize the common features of the images that belong to the same category, and maintain stable performance as the number of thresholds increases. Furthermore, the processing time of the present algorithm is competitive with those of other algorithms, which shows its potential for real-time applications. Full article
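The conventional K-bin equal-interval gradient orientation histogram that this paper improves upon can be computed as below; central-difference gradients and gradient-magnitude weighting are common choices assumed here, not details taken from the paper.

```python
import math

def orientation_histogram(img, k=8):
    """Equal-interval gradient-orientation histogram: central-difference
    gradients, orientations quantized into k equal angular bins over
    [0, 2*pi), each pixel weighted by its gradient magnitude."""
    h, w = len(img), len(img[0])
    hist = [0.0] * k
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % (2 * math.pi)
            hist[min(int(ang / (2 * math.pi / k)), k - 1)] += mag
    return hist

# Vertical step edge: all gradient energy falls into the 0-radian bin.
img = [[0, 0, 1, 1]] * 4
print(orientation_histogram(img, k=8))
```

The paper's contribution replaces the fixed equal intervals with variance-driven, data-dependent angular clusters.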

17 pages, 4874 KB  
Article
Investigating the Relationship Between Topographic Variables and Wildfire Burn Severity
by Linh Nguyen Van and Giha Lee
Geographies 2025, 5(3), 47; https://doi.org/10.3390/geographies5030047 - 3 Sep 2025
Viewed by 728
Abstract
Wildfire behavior and post-fire effects are strongly modulated by terrain, yet the relative influence of individual topographic factors on burn severity remains incompletely quantified at landscape scales. The Composite Burn Index (CBI) provides a field-calibrated measure of severity, but large-area analyses have been hampered by limited plot density and cumbersome data extraction workflows. In this study, we paired 6150 CBI plots from 234 U.S. wildfire events (1994–2017) with the 30 m SRTM DEM, extracting mean elevation, slope, and compass aspect within a 90 m buffer around each plot to minimize geolocation noise. Topographic variables were grouped into ecologically meaningful classes—six elevation belts (≤500 m to >2500 m), six slope bins (≤5° to >25°), and eight aspect octants—and their relationships with CBI were evaluated using Tukey HSD post hoc comparisons. Our findings show that all three factors exerted highly significant influences on severity (p < 0.001): mean CBI peaked in the 1500–2000 m belt (0.42 higher than lowlands), rose almost monotonically with steepness to slopes > 20° (0.37 higher than <5°), and was greatest on east- and northwest-facing slopes (0.19 higher than south-facing aspects). Further analysis revealed that burn severity emerges from strongly context-dependent synergies among elevation, slope, and aspect, rather than from simple additive effects. By demonstrating a rapid, reproducible workflow for terrain-aware severity assessment entirely within Google Earth Engine (GEE), the study provides both methodological guidance and actionable insights for fuel-management planning, risk mapping, and post-fire restoration prioritization. Full article
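Grouping continuous terrain variables into classes like those used here (elevation belts, slope bins, aspect octants) reduces to threshold lookups. A sketch using the stated class edges; the helper names are illustrative, not from the study.

```python
import bisect

ELEV_EDGES = [500, 1000, 1500, 2000, 2500]   # six belts: <=500 ... >2500 m
SLOPE_EDGES = [5, 10, 15, 20, 25]            # six bins:  <=5  ... >25 deg

def classify(value, edges):
    """Index of the class a value falls into (0 .. len(edges))."""
    return bisect.bisect_left(edges, value)

def aspect_octant(deg):
    """Map a compass aspect (0-360 deg) to one of eight octants,
    0 = N, 1 = NE, ..., 7 = NW, with a 22.5-deg half-window around N."""
    return int(((deg + 22.5) % 360) // 45)

print(classify(1700, ELEV_EDGES))   # 1500-2000 m belt -> class 3
print(classify(22, SLOPE_EDGES))    # 20-25 deg bin    -> class 4
print(aspect_octant(95))            # east-facing      -> octant 2
```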

20 pages, 1369 KB  
Article
Bin-3-Way-PARAFAC-PLS: A 3-Way Partial Least Squares for Binary Response
by Elisa Frutos-Bernal, Laura Vicente-González and Ana Elizabeth Sipols
Axioms 2025, 14(9), 678; https://doi.org/10.3390/axioms14090678 - 3 Sep 2025
Viewed by 397
Abstract
In various research domains, researchers frequently encounter multiple datasets pertaining to the same subjects, with one dataset providing explanatory variables for the others. To address this structure, we introduce the Binary 3-way PARAFAC Partial Least Squares (Bin-3-Way-PARAFAC-PLS), a novel multiway regression method. This method is specifically engineered for scenarios involving a three-way real-valued explanatory data array and a matrix of binary response data. We detail the algorithm’s implementation and illustrate its practical application. Furthermore, we describe biplot representations to aid in result interpretation. The accompanying software necessary for implementing the method is also provided. Finally, the proposed method’s utility in real-world problem-solving is demonstrated through its application to a psychological dataset. Full article
(This article belongs to the Special Issue Probability, Statistics and Estimations, 2nd Edition)

16 pages, 5482 KB  
Article
Non-Precipitation Echo Identification in X-Band Dual-Polarization Weather Radar
by Zihang Zhao, Hao Wen, Lei Wu, Ruiyi Li, Ting Zhuang and Yang Zhang
Remote Sens. 2025, 17(17), 3023; https://doi.org/10.3390/rs17173023 - 31 Aug 2025
Viewed by 673
Abstract
This study proposes a novel quality control method combining fuzzy logic and threshold discrimination for processing X-band dual-polarization radar data from Beijing. The method effectively eliminates non-precipitation echoes, including electromagnetic interference, clear-air echoes, and ground clutter through five key steps: (1) Identifying electromagnetic interference using continuity of reflectivity across adjacent elevation angles, radial mean correlation coefficient, and differential reflectivity; (2) Preserving precipitation data in ground clutter-mixed regions by jointly utilizing the difference in reflectivity before and after clutter suppression by the signal processor, and characteristic value proportions; (3) Developing a fuzzy logic algorithm with six parameters (e.g., reflectivity texture, depolarization ratio) for ground clutter and clear-air echoes removal; (4) Filtering echoes with missing dual-polarization variables using cross-elevation mean reflectivity, mean correlation coefficient, and valid range bin proportion; (5) Removing residual noise via radial/azimuthal reflectivity continuity analysis. Validation with 635 PPI scans demonstrates high identification accuracy across echo types: 93.5% for electromagnetic interference, 98.4% for ground clutter, 97.7% for clear-air echoes, and 98.2% for precipitation echoes. Full article

18 pages, 6001 KB  
Article
A Graph Contrastive Learning Method for Enhancing Genome Recovery in Complex Microbial Communities
by Guo Wei and Yan Liu
Entropy 2025, 27(9), 921; https://doi.org/10.3390/e27090921 - 31 Aug 2025
Viewed by 624
Abstract
Accurate genome binning is essential for resolving microbial community structure and functional potential from metagenomic data. However, existing approaches—primarily reliant on tetranucleotide frequency (TNF) and abundance profiles—often perform sub-optimally in the face of complex community compositions, low-abundance taxa, and long-read sequencing datasets. To address these limitations, we present MBGCCA, a novel metagenomic binning framework that synergistically integrates graph neural networks (GNNs), contrastive learning, and information-theoretic regularization to enhance binning accuracy, robustness, and biological coherence. MBGCCA operates in two stages: (1) multimodal information integration, where TNF and abundance profiles are fused via a deep neural network trained using a multi-view contrastive loss, and (2) self-supervised graph representation learning, which leverages assembly graph topology to refine contig embeddings. The contrastive learning objective follows the InfoMax principle by maximizing mutual information across augmented views and modalities, encouraging the model to extract globally consistent and high-information representations. By aligning perturbed graph views while preserving topological structure, MBGCCA effectively captures both global genomic characteristics and local contig relationships. Comprehensive evaluations using both synthetic and real-world datasets—including wastewater and soil microbiomes—demonstrate that MBGCCA consistently outperforms state-of-the-art binning methods, particularly in challenging scenarios marked by sparse data and high community complexity. These results highlight the value of entropy-aware, topology-preserving learning for advancing metagenomic genome reconstruction. Full article
(This article belongs to the Special Issue Network-Based Machine Learning Approaches in Bioinformatics)
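The tetranucleotide frequency (TNF) profiles mentioned above are normalized 4-mer counts along a contig. A minimal sketch follows; real binners additionally merge reverse-complement k-mers, which is omitted here.

```python
from itertools import product

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]   # 256 tetramers

def tnf_profile(contig):
    """Tetranucleotide frequency (TNF) vector: normalized counts of
    every 4-mer in the contig; windows containing non-ACGT characters
    are skipped."""
    counts = dict.fromkeys(KMERS, 0)
    total = 0
    for i in range(len(contig) - 3):
        kmer = contig[i:i + 4]
        if kmer in counts:
            counts[kmer] += 1
            total += 1
    return [counts[k] / total for k in KMERS] if total else [0.0] * 256

profile = tnf_profile("ACGTACGTACGT")
print(sum(profile))   # frequencies sum to 1.0
```

Binners such as MBGCCA combine vectors like this with per-sample abundance profiles before clustering contigs.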
