Search Results (487)

Search Parameters:
Keywords = binarization

20 pages, 4937 KB  
Article
Feature Extractor for Damage Localization on Composite-Overwrapped Pressure Vessel Based on Signal Similarity Using Ultrasonic Guided Waves
by Houssam El Moutaouakil, Jan Heimann, Daniel Lozano, Vittorio Memmolo and Andreas Schütze
Appl. Sci. 2025, 15(17), 9288; https://doi.org/10.3390/app15179288 - 24 Aug 2025
Abstract
Hydrogen is one of the future green energy sources that could resolve issues related to fossil fuels. Its widespread use can be enabled by composite-overwrapped pressure vessels for storage, which offer advantages due to their low weight and improved mechanical performance. However, the safe storage of hydrogen requires continuous monitoring. Combining ultrasonic guided waves with interpretable machine learning provides a powerful tool for structural health monitoring. In this study, we developed a feature extraction approach based on a similarity method that enables interpretability in the proposed machine learning model for damage detection and localization in pressure vessels. Furthermore, a systematic optimization was performed to explore and tune the model's parameters. The resulting model provides accurate damage localization, detecting and localizing damage on hydrogen pressure vessels with an average localization error of 2 cm and a classification accuracy of 96.5% when using quantized classification. In contrast, binarized classification yields a higher accuracy of 99.5%, but with a larger localization error of 6 cm.
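The paper's exact feature extractor is not reproduced here, but its core idea — scoring how much a measured guided-wave signal deviates from a damage-free baseline — can be sketched as a correlation-based similarity. All signals and parameters below are synthetic illustrations, not the authors' data:

```python
import numpy as np

def similarity_feature(baseline: np.ndarray, signal: np.ndarray) -> float:
    """Correlation-based similarity between a baseline and a current
    guided-wave signal; 1.0 means identical, lower values suggest damage."""
    b = (baseline - baseline.mean()) / baseline.std()
    s = (signal - signal.mean()) / signal.std()
    return float(np.dot(b, s) / len(b))

# A damaged propagation path distorts the waveform, lowering the score.
t = np.linspace(0.0, 1.0, 500)
baseline = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)   # healthy response
damaged = baseline + 0.3 * np.sin(2 * np.pi * 80 * t) * np.exp(-5 * t)

print(round(similarity_feature(baseline, baseline), 3))  # 1.0
print(similarity_feature(baseline, damaged) < 1.0)       # True
```

One such score per transducer pair yields a compact, interpretable feature vector that a classifier can map to a damage location.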

10 pages, 343 KB  
Article
Distant Resolved Spectroscopic Binaries: Orbital Parallaxes Contradict Trigonometric Parallaxes
by Oleg Y. Malkov and Arseniy M. Sachkov
Galaxies 2025, 13(4), 96; https://doi.org/10.3390/galaxies13040096 - 21 Aug 2025
Abstract
Resolved spectroscopic binaries (RSBs) are the only way, besides trigonometric parallax, to determine dynamical, hypothesis-free distances to stars of the Galaxy. Analyzing the most comprehensive up-to-date data on RSBs, we found that trigonometric parallaxes of all distant (d > 0.5 kpc) binaries overestimate the distance by 10–50%. Such objects appear as single stars in Gaia and Hipparcos data, but their binarity can be detected or suspected by comparing trigonometric parallaxes across different data releases from these space missions.
(This article belongs to the Special Issue Stellar Spectroscopy, Molecular Astronomy and Atomic Astronomy)

23 pages, 3801 KB  
Article
Multi-Variable Evaluation via Position Binarization-Based Sparrow Search
by Jiwei Hua, Xin Gu, Debing Sun, Jinqi Zhu and Shuqin Wang
Electronics 2025, 14(16), 3312; https://doi.org/10.3390/electronics14163312 - 20 Aug 2025
Abstract
The Sparrow Search Algorithm (SSA), a metaheuristic renowned for rapid convergence, good stability, and high search accuracy in continuous optimization, faces inherent limitations when applied to discrete multi-variable combinatorial optimization problems such as feature selection. To enable effective multi-variable evaluation and discrete feature subset selection with SSA, a novel binary variant, the Position Binarization-based Sparrow Search Algorithm (BSSA), is proposed. BSSA employs a sigmoid transformation function to convert the continuous position vectors generated by the standard SSA into binary solutions representing feature inclusion or exclusion. Recognizing that the inherent exploitation bias of SSA and the complexity of high-dimensional feature spaces can lead to premature convergence and suboptimal solutions, we further enhance BSSA by introducing stochastic zero-mean Gaussian noise into the sigmoid transformation. This perturbation diversifies the search population, improves exploration, and bolsters the algorithm's robustness against stagnation in local optima during multi-variable evaluation. The fitness of each candidate feature subset (solution) is evaluated using the classification accuracy of a Support Vector Machine (SVM) classifier. BSSA is compared with four high-performance optimization algorithms on 12 diverse benchmark datasets from the UCI repository, using multiple performance metrics. Experimental results demonstrate that BSSA achieves superior classification accuracy, computational efficiency, and feature selection, significantly advancing multi-variable evaluation for feature selection tasks.
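The binarization step described above — a sigmoid transfer function whose argument is perturbed by zero-mean Gaussian noise — can be sketched as follows. This is a minimal illustration of the mechanism, not the authors' implementation; the noise scale `sigma` and the threshold sampling are assumed details:

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize_position(position: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Map a continuous SSA position vector to a 0/1 feature mask.

    Zero-mean Gaussian noise perturbs the sigmoid argument, diversifying
    the population and reducing premature convergence.
    """
    noisy = position + rng.normal(0.0, sigma, size=position.shape)
    prob = 1.0 / (1.0 + np.exp(-noisy))        # sigmoid transfer function
    return (rng.random(position.shape) < prob).astype(int)  # 1 = keep feature

position = np.array([-4.0, -0.5, 0.0, 0.5, 4.0])
mask = binarize_position(position)
print(mask)  # a 0/1 mask; strongly positive coordinates are likely selected
```

Each mask is then scored by the fitness function (here, SVM accuracy on the selected feature subset) to drive the search.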

27 pages, 7145 KB  
Article
A Benchmark Study of Classical and U-Net ResNet34 Methods for Binarization of Balinese Palm Leaf Manuscripts
by Imam Yuadi, Khoirun Nisa’, Nisak Ummi Nazikhah, Yunus Abdul Halim, A. Taufiq Asyhari and Chih-Chien Hu
Heritage 2025, 8(8), 337; https://doi.org/10.3390/heritage8080337 - 18 Aug 2025
Abstract
Ancient documents that have undergone physical and visual degradation pose significant challenges for digital recognition and the preservation of information. This research evaluates the effectiveness of ten classic binarization methods, including Otsu, Niblack, Sauvola, and ISODATA, as well as other adaptive methods, in comparison with a U-Net ResNet34 model trained on 256 × 256 image blocks for extracting textual content and separating it from the degraded regions and background of palm leaf manuscripts. We focused on two significant collections: Lontar Terumbalan, with 19 images of Balinese manuscripts from the National Library of Indonesia collection, and AMADI Lontarset, with 100 images from ICFHR 2016. Results show that the deep learning approach outperforms classical methods across the overall evaluation metrics. The U-Net ResNet34 model reached the highest Dice score of 0.986, accuracy of 0.983, SSIM of 0.938, RMSE of 0.143, and PSNR of 17.059. Among the classical methods, ISODATA achieved the best results, with a Dice score of 0.957 and accuracy of 0.933, but still fell short of the deep learning model on most evaluation metrics.
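Otsu's method, the best-known of the classic global techniques benchmarked here, picks the gray-level threshold that maximizes the between-class variance of the resulting foreground/background split. A compact NumPy sketch (the synthetic "manuscript" below is an illustration, not the paper's data):

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the 0-255 threshold maximizing between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 (dark) probability
    mu = np.cumsum(p * np.arange(256))      # cumulative mean intensity
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

# Synthetic manuscript patch: dark ink (~40) on a bright background (~200).
img = np.full((64, 64), 200, dtype=np.uint8)
img[20:40, 10:50] = 40
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)   # 1 = background, 0 = ink
print(t)  # a threshold lying between the two intensity modes
```

Niblack and Sauvola differ by computing a separate threshold per pixel from local mean and standard deviation, which is why they cope better with uneven staining on palm leaves.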

30 pages, 2890 KB  
Article
A Transfer Function-Based Binary Version of Improved Pied Kingfisher Optimizer for Solving the Uncapacitated Facility Location Problem
by Ayşe Beşkirli
Biomimetics 2025, 10(8), 526; https://doi.org/10.3390/biomimetics10080526 - 12 Aug 2025
Abstract
In this study, the pied kingfisher optimizer (PKO) algorithm is adapted to the uncapacitated facility location problem (UFLP) and its performance is evaluated. The PKO algorithm is binarized with fourteen different transfer functions (TFs), and each variant is tested on fifteen different Cap problems. In addition, performance is improved by adding a Levy flight strategy to BinPKO; the improved method is named BinIPKO. The experimental results show that the TF1 transfer function for BinIPKO performs very well on all problems in terms of both best and mean solution values. The TF2 transfer function performed efficiently on most Cap problems, ranking second only to TF1. Although the other transfer functions provided competitive solutions on some Cap problems, they lagged behind TF1 and TF2 in overall performance. The performance of BinIPKO was also compared with the well-known PSO and GWO algorithms, as well as the recently proposed APO and EEFO algorithms, and BinIPKO was found to perform well overall. These results indicate that the IPKO algorithm, especially when used with the TF1 transfer function, provides an effective alternative for the UFLP.
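The Levy flight strategy credited with improving BinPKO is a random walk whose step lengths follow a heavy-tailed distribution, so the search mixes many short moves with occasional long jumps out of local optima. The abstract does not specify the implementation; a common choice, sketched here as an assumption, is Mantegna's algorithm:

```python
import math
import numpy as np

rng = np.random.default_rng(7)

def levy_step(dim: int, beta: float = 1.5) -> np.ndarray:
    """Draw a Levy-distributed step via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

steps = np.abs(np.array([levy_step(1)[0] for _ in range(2000)]))
ratio = steps.max() / np.median(steps)
print(ratio > 10)  # True: heavy tail — rare jumps dwarf the typical step
```

In a binary metaheuristic, such a step perturbs the continuous position before the transfer function maps it back to 0/1 facility-opening decisions.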
(This article belongs to the Special Issue Exploration of Bio-Inspired Computing: 2nd Edition)

26 pages, 1345 KB  
Article
Fractional Chebyshev Transformation for Improved Binarization in the Energy Valley Optimizer for Feature Selection
by Islam S. Fathi, Ahmed R. El-Saeed, Gaber Hassan and Mohammed Aly
Fractal Fract. 2025, 9(8), 521; https://doi.org/10.3390/fractalfract9080521 - 11 Aug 2025
Abstract
Feature selection (FS) is a critical preprocessing step in data mining and machine learning, aiming to enhance model performance by eliminating redundant features and reducing dimensionality. The Energy Valley Optimizer (EVO), inspired by particle physics concepts of stability and decay, offers a novel metaheuristic approach. This study introduces an enhanced binary version of EVO, termed Improved Binarization in the Energy Valley Optimizer with Fractional Chebyshev Transformation (IBEVO-FC), specifically designed for feature selection challenges. IBEVO-FC incorporates several key advancements over the original EVO. First, it employs a novel fractional Chebyshev transformation function to map the continuous search space of EVO to the binary domain required for feature selection, leveraging the unique properties of fractional orthogonal polynomials for improved binarization. Second, the Laplace crossover method is integrated into the initialization phase to improve population diversity and local search capability. Third, a random replacement strategy is applied to enhance exploitation and mitigate premature convergence. The efficacy of IBEVO-FC is rigorously evaluated on 26 benchmark datasets from the UCI Repository and compared against 7 contemporary wrapper-based feature selection algorithms. Statistical analysis confirms the competitive performance of the proposed method in terms of classification accuracy and feature subset size.

17 pages, 4404 KB  
Proceeding Paper
Surface Roughness and Fractal Analysis of TiO2 Thin Films by DC Sputtering
by Helena Cristina Vasconcelos, Telmo Eleutério and Maria Meirelles
Eng. Proc. 2025, 105(1), 2; https://doi.org/10.3390/engproc2025105002 - 4 Aug 2025
Abstract
This study examines the effect of oxygen concentration and sputtering power on the surface morphology of TiO2 thin films deposited by DC reactive magnetron sputtering. Surface roughness parameters were obtained from SEM images using MountainsMap® software (v10.2), while fractal dimensions and texture descriptors were extracted via Python-based image processing. The fractal dimension was calculated using the box-counting method applied to binarized images at multiple threshold levels, and texture analysis employed Gray-Level Co-occurrence Matrix (GLCM) statistics to capture local anisotropies and spatial heterogeneity. Four samples were analyzed, prepared with oxygen concentrations of 50% and 75% and sputtering powers of 500 W and 1000 W. The results show that films deposited at higher oxygen levels and sputtering powers exhibited increased roughness, higher fractal dimensions, and stronger GLCM contrast, indicating more complex and heterogeneous surface structures. Conversely, films produced at lower oxygen and power settings showed smoother, more isotropic surfaces with lower complexity. This integrated analysis framework links deposition parameters with morphological characteristics, enhancing the understanding of surface evolution and enabling better control of TiO2 thin film properties.
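The box-counting estimate used here works by covering the binarized image with boxes of decreasing side length and fitting the slope of log(occupied boxes) against log(1/box size). A self-contained sketch (the filled-square test image is an illustration with a known answer, not the paper's SEM data):

```python
import numpy as np

def box_counting_dimension(binary: np.ndarray) -> float:
    """Estimate the fractal dimension of a binary image by box counting:
    the slope of log(box count) vs. log(1 / box size)."""
    n = binary.shape[0]
    sizes, counts = [], []
    size = n // 2
    while size >= 1:
        m = n - (n % size)                  # trim so boxes tile evenly
        view = binary[:m, :m]
        # A box "counts" if it contains at least one foreground pixel.
        blocks = view.reshape(m // size, size, m // size, size).any(axis=(1, 3))
        sizes.append(size)
        counts.append(blocks.sum())
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

# Sanity check: a completely filled square is a 2-D set, so D ≈ 2.
img = np.ones((256, 256), dtype=bool)
print(round(box_counting_dimension(img), 2))  # 2.0
```

Running the same estimator on images binarized at several thresholds, as the study does, shows how sensitive the apparent complexity is to the chosen threshold.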

28 pages, 15616 KB  
Article
Binary Secretary Bird Optimization Algorithm for the Set Covering Problem
by Broderick Crawford, Felipe Cisternas-Caneo, Ricardo Soto, Claudio Patricio Toledo Mac-lean, José Lara Arce, Fabián Solís-Piñones, Gino Astorga and Giovanni Giachetti
Mathematics 2025, 13(15), 2482; https://doi.org/10.3390/math13152482 - 1 Aug 2025
Abstract
The Set Covering Problem (SCP) is an important combinatorial optimization problem known to be NP-complete. Various metaheuristics have been used to solve the SCP; in particular, binarization techniques have been explored to adapt metaheuristics designed for continuous optimization to its binary domain. In this work, we present a new approach to the SCP based on the Secretary Bird Optimization Algorithm (SBOA), inspired by the natural behavior of the secretary bird, known for its ability to hunt prey and evade predators. Since the SBOA was originally designed for continuous optimization and the SCP is a binary problem, this paper implements several binarization techniques to adapt the algorithm to the discrete domain: eight transfer functions and five different discretization methods. Taken together, these combinations create multiple SBOA adaptations that effectively balance exploration and exploitation, promoting an adequate distribution over the search space. Experimental results on the SCP and its Unicost SCP variant, compared against the Grey Wolf Optimizer and Particle Swarm Optimization, suggest that the binary version of SBOA is a robust algorithm capable of producing high-quality solutions at low computational cost. Given these promising results, future work will focus on complex, large-scale problems and on optimizing performance in terms of time and accuracy.

20 pages, 8446 KB  
Article
Extraction of Corrosion Damage Features of Serviced Cable Based on Three-Dimensional Point Cloud Technology
by Tong Zhu, Shoushan Cheng, Haifang He, Kun Feng and Jinran Zhu
Materials 2025, 18(15), 3611; https://doi.org/10.3390/ma18153611 - 31 Jul 2025
Abstract
The corrosion of high-strength steel wires is a key factor impacting the durability and reliability of cable-stayed bridges. In this study, the corrosion pit features on a high-strength steel wire that had been in service for 27 years were extracted and modeled using three-dimensional point cloud data obtained through 3D surface scanning. The Otsu method was applied for image binarization, and each corrosion pit was geometrically represented as an ellipse. Key pit parameters, including length, width, depth, aspect ratio, and a defect parameter, were statistically analyzed. Results of the Kolmogorov–Smirnov (K–S) test at a 95% confidence level indicated that the directional angle component (θ) did not conform to any known probability distribution. In contrast, the pit width (b) and defect parameter (Φ) followed a generalized extreme value distribution, the aspect ratio (b/a) matched a Beta distribution, and both the pit length (a) and depth (d) were best described by a Gaussian mixture model. The results provide a valuable reference for assessing the stress state, in-service performance, and predicted remaining service life of operational stay cables.
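The distribution-fitting workflow described above — fit a candidate distribution to a pit-parameter sample, then accept or reject it with a K–S test at the 95% level — can be sketched with `scipy.stats`. The sample below is synthetic, standing in for the paper's scanned pit depths:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical pit-depth sample (mm); the paper's data come from 3D scans.
sample = rng.normal(loc=0.3, scale=0.05, size=500)

# Fit a candidate distribution, then K-S test at the 95% confidence level.
params = stats.norm.fit(sample)
statistic, p_value = stats.kstest(sample, "norm", args=params)
accepted = p_value > 0.05   # fail to reject => the distribution is plausible
print(accepted)  # True for this synthetic Gaussian sample
```

Swapping `"norm"` for `stats.genextreme` or `stats.beta` reproduces the candidate comparisons the study performs for width, defect parameter, and aspect ratio. (Note that using fitted parameters in a K–S test biases the p-value upward; stricter variants such as the Lilliefors test correct for this.)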
(This article belongs to the Section Construction and Building Materials)

20 pages, 1666 KB  
Article
Looking for Signs of Unresolved Binarity in the Continuum of LAMOST Stellar Spectra
by Mikhail Prokhorov, Kefeng Tan, Nikolay Samus, Ali Luo, Dana Kovaleva, Jingkun Zhao, Yujuan Liu, Pavel Kaygorodov, Oleg Malkov, Yihan Song, Sergey Sichevskij, Lev Yungelson and Gang Zhao
Galaxies 2025, 13(4), 83; https://doi.org/10.3390/galaxies13040083 - 30 Jul 2025
Abstract
We describe an attempt to derive the binarity rate for samples of 166 A-, F-, G-, and K-type stars from LAMOST DR5 and 1000 randomly selected, presumably single stars from the Gaia DR3 catalogs. To this end, we compared the continua of the observed spectra with the continua of synthetic spectra within 3700 Å < λ < 9097 Å. The synthetic spectra were reduced to the LAMOST set of wavelengths, while the observed spectra were smoothed. Next, for every observed star we searched for the nearest synthetic spectrum using a four-parameter representation: Teff, log g, [Fe/H], and a range of interstellar absorption values. However, the rms deviations of the observed spectra from the synthetic ones proved insufficient to claim that any of the stars is a binary. We conclude that comparison of the intensities of pairs of spectral lines remains the best way to detect binarity.
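The nearest-spectrum search at the heart of this method reduces to template matching by rms deviation over a common wavelength grid. A toy sketch (the exponential "spectra" are placeholders for a real synthetic grid over Teff, log g, [Fe/H], and absorption):

```python
import numpy as np

def nearest_template(observed: np.ndarray, templates: np.ndarray) -> int:
    """Index of the synthetic spectrum with the smallest rms deviation."""
    rms = np.sqrt(np.mean((templates - observed) ** 2, axis=1))
    return int(np.argmin(rms))

wavelengths = np.linspace(3700, 9097, 200)          # Å, LAMOST-like grid
# Stand-in continua parameterized by a single scale; a real grid would
# span Teff, log g, [Fe/H], and interstellar absorption.
templates = np.stack([np.exp(-wavelengths / s) for s in (4000, 6000, 8000)])
observed = templates[1] + np.random.default_rng(0).normal(0, 1e-3, 200)

print(nearest_template(observed, templates))  # 1: the middle template
```

The study's negative result says, in effect, that the residual rms after this best match is not large enough for unresolved companions to stand out above noise.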
(This article belongs to the Special Issue Stellar Spectroscopy, Molecular Astronomy and Atomic Astronomy)

14 pages, 2178 KB  
Article
State-of-the-Art Document Image Binarization Using a Decision Tree Ensemble Trained on Classic Local Binarization Algorithms and Image Statistics
by Nicolae Tarbă, Costin-Anton Boiangiu and Mihai-Lucian Voncilă
Appl. Sci. 2025, 15(15), 8374; https://doi.org/10.3390/app15158374 - 28 Jul 2025
Abstract
Image binarization algorithms reduce the original color space to only two values, black and white, and are an important preprocessing step in many computer vision applications. Binarization is typically performed with a threshold value, classifying pixels into two categories: lower or higher than the threshold. Global thresholding uses a single threshold for the entire image, whereas local thresholding uses different values for different pixels. Although slower and more complex than global thresholding, local thresholding can better classify pixels in noisy areas of an image by considering not only a pixel's value but also its surrounding neighborhood. This study introduces a local thresholding method that uses the results of several local thresholding algorithms and other image statistics to train a decision tree ensemble. Through cross-validation, we demonstrate that the model is robust and performs well on new data. We compare the results with state-of-the-art solutions and show significant improvements in the average F-measure across all DIBCO datasets, obtaining 95.8% against the previous high score of 93.1%. The proposed solution significantly outperformed the previous state-of-the-art algorithms on the DIBCO 2019 dataset, obtaining an F-measure of 95.8%, whereas the previous high score was 73.8%.
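The pipeline shape — build a per-pixel feature vector, then train a tree ensemble to emit the black/white label — can be sketched as below. This is a simplified stand-in: the paper's features are the outputs of classic local binarization algorithms plus image statistics, whereas here hypothetical local statistics (intensity, local mean, deviation from it) are substituted, and the "document" is synthetic:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "document": dark ink strokes on noisy bright paper.
img = rng.normal(200, 15, size=(64, 64))
truth = np.zeros((64, 64), dtype=int)
truth[10:20, 5:60] = 1                                  # 1 = ink
img[truth == 1] = rng.normal(60, 15, size=(truth == 1).sum())

# Per-pixel features: intensity, local mean, deviation from local mean.
local_mean = uniform_filter(img, size=9)
X = np.stack([img.ravel(), local_mean.ravel(),
              (img - local_mean).ravel()], axis=1)
y = truth.ravel()

clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
accuracy = (clf.predict(X) == y).mean()
print(accuracy > 0.95)  # True: ink and paper are well separated here
```

The ensemble effectively learns, pixel by pixel, when to trust which constituent thresholding signal — the source of the method's gain over any single algorithm.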
(This article belongs to the Special Issue Statistical Signal Processing: Theory, Methods and Applications)

19 pages, 1602 KB  
Article
From Classic to Cutting-Edge: A Near-Perfect Global Thresholding Approach with Machine Learning
by Nicolae Tarbă, Costin-Anton Boiangiu and Mihai-Lucian Voncilă
Appl. Sci. 2025, 15(14), 8096; https://doi.org/10.3390/app15148096 - 21 Jul 2025
Abstract
Image binarization, which transforms the color space of an original image into black and white, is an important process in many computer vision applications. Global thresholding is a quick and reliable way to achieve binarization, but it is inherently limited by image noise and uneven lighting. This paper introduces a global thresholding method that uses the results of classical global thresholding algorithms and other global image features to train a regression model via machine learning. We show through nested cross-validation that the model can predict the best possible global threshold with an average F-measure of 90.86% and a confidence of 0.79%. We apply our approach to a popular computer vision problem, document image binarization, and compare popular metrics against the best values achievable through global thresholding and against the values obtained by the algorithms used to train our model. Our results show a significant improvement over these classical global thresholding algorithms, achieving near-perfect scores on all computed metrics. We also compared our results with state-of-the-art binarization algorithms and outperformed them on certain datasets. The global threshold obtained through our method closely approximates the ideal global threshold and could be used in a mixed local-global approach for better results.

26 pages, 7178 KB  
Article
Super-Resolution Reconstruction of Formation MicroScanner Images Based on the SRGAN Algorithm
by Changqiang Ma, Xinghua Qi, Liangyu Chen, Yonggui Li, Jianwei Fu and Zejun Liu
Processes 2025, 13(7), 2284; https://doi.org/10.3390/pr13072284 - 17 Jul 2025
Abstract
Formation MicroScanner Image (FMI) technology is a key method for identifying fractured reservoirs and optimizing oil and gas exploration, but its inherently limited resolution severely constrains the fine characterization of geological features. This study applies a Super-Resolution Generative Adversarial Network (SRGAN) to the super-resolution reconstruction of FMI logging images to address this bottleneck. From FMI logging images of glutenite collected from a well in Xinjiang, a training set of 24,275 images was constructed, and preprocessing strategies such as grayscale conversion and binarization were employed to optimize input features. Leveraging SRGAN's generator-discriminator adversarial mechanism and perceptual loss function, high-quality mapping from low-resolution to high-resolution FMI logging images was achieved. The study yields significant results: in RGB image reconstruction, SRGAN achieved a Peak Signal-to-Noise Ratio (PSNR) of 41.39 dB, surpassing the best traditional method (bicubic interpolation) by 61.6%, and a Structural Similarity Index (SSIM) of 0.992, a 34.1% improvement; in grayscale image processing, SRGAN effectively eliminated edge blurring, with PSNR (40.15 dB) and SSIM (0.990) exceeding the second-best method (bilinear interpolation) by 36.6% and 9.9%, respectively. These results confirm that SRGAN can significantly restore edge contours and structural details in FMI logging images, with performance far exceeding traditional interpolation methods. The study systematically verifies SRGAN's capability to enhance FMI resolution and provides a high-precision data foundation for reservoir parameter inversion and geological modeling, with significant application value for the intelligent exploration of complex hydrocarbon reservoirs.
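PSNR, the headline metric above, is a log-scale measure of mean squared error relative to the maximum pixel value: higher is better, and values above ~40 dB indicate near-imperceptible differences. A minimal sketch with synthetic images (not the paper's FMI data):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray,
         peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

rng = np.random.default_rng(1)
reference = rng.integers(0, 256, size=(32, 32))
noisy = np.clip(reference + rng.normal(0, 2, size=reference.shape), 0, 255)

value = psnr(reference, noisy)
print(value > 35.0)  # True: small pixel errors yield a high PSNR
```

SSIM, the companion metric, instead compares local luminance, contrast, and structure windows, which is why the two numbers can rank methods differently.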

23 pages, 23418 KB  
Article
Effects of Aggregate-to-Binder Ratio on Mechanical Performance of Engineered Geopolymer Composites with Recycled Rubber Aggregates
by Yiwei Li, Shuzhuo Zhi, Ran Chai, Zhiying Zhou, Jiarui He, Zizhao Yao, Zhan Yang, Genquan Zhong and Yongchang Guo
Buildings 2025, 15(14), 2496; https://doi.org/10.3390/buildings15142496 - 16 Jul 2025
Abstract
This study investigates the development of a fully rubberized fine-aggregate engineered geopolymer composite (R-EGC) by replacing quartz sand with waste rubber particles (RPs). The influence of the rubber aggregate-to-binder mass ratio (A/B) on the performance of the R-EGC was systematically examined from both macroscopic and microscopic perspectives. Quantitative analysis of crack width and number was conducted using binarized image-processing techniques to elucidate crack propagation patterns. Moreover, scanning electron microscopy (SEM) and energy-dispersive spectroscopy (EDS) were employed to analyze the interfacial transition zone (ITZ) between the rubber aggregates and the geopolymer matrix under varying A/B ratios, aiming to explore the underlying failure mechanisms of the R-EGC. The results indicated that the flowability of the R-EGC decreased gradually with increasing A/B ratio: the flowability of R-0.1 was 73.5%, outperforming R-0.2 and R-0.3 (66% and 65%, respectively). R-0.1 achieved the highest compressive strength of 35.3 MPa (compared to 31.2 MPa and 28.4 MPa for R-0.2 and R-0.3, respectively). R-0.3 demonstrated the most effective crack-control capability, with a tensile strength of 3.96 MPa (increases of 11.9% and 3.7% over R-0.1 and R-0.2, respectively) and the smallest crack width of 104 μm (reductions of 20.6% and 43.5% relative to R-0.1 and R-0.2, respectively). R-0.2 exhibited the best ductility, with an ultimate tensile strain of 8.33%. Microstructural tests revealed ITZ widths for R-0.1, R-0.2, and R-0.3 of 2.47 μm, 4.53 μm, and 1.09 μm, respectively. An appropriate increase in ITZ width was found to be beneficial for tensile ductility but compromised the crack-control ability of the R-EGC, thereby reducing its durability. Overall, this study clarifies the fundamental influence of the A/B ratio on the mechanical performance of the R-EGC and provides valuable insights for future research in this field.
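The binarized crack counting and width measurement described above can be sketched with connected-component labeling: each connected region of crack pixels is one crack, and its extent perpendicular to propagation gives a width in pixels. The crack map below is synthetic, and "width" is simplified to the maximum vertical extent per component:

```python
import numpy as np
from scipy import ndimage

# Synthetic binarized crack map: 1 = crack pixel, 0 = intact surface.
binary = np.zeros((40, 40), dtype=int)
binary[5, 2:38] = 1          # one long, thin horizontal crack
binary[20:22, 10:30] = 1     # a second, wider crack

labels, n_cracks = ndimage.label(binary)
widths = [
    int((labels == i).sum(axis=0).max())   # max vertical extent, in pixels
    for i in range(1, n_cracks + 1)
]
print(n_cracks, widths)  # 2 [1, 2]
```

Multiplying pixel widths by the image scale (μm per pixel) yields physical crack widths such as the 104 μm figure reported for R-0.3.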
(This article belongs to the Special Issue Next-Gen Cementitious Composites for Sustainable Construction)

18 pages, 1760 KB  
Article
Integrating 68Ga-PSMA-11 PET/CT with Clinical Risk Factors for Enhanced Prostate Cancer Progression Prediction
by Joanna M. Wybranska, Lorenz Pieper, Christian Wybranski, Philipp Genseke, Jan Wuestemann, Julian Varghese, Michael C. Kreissl and Jakub Mitura
Cancers 2025, 17(14), 2285; https://doi.org/10.3390/cancers17142285 - 9 Jul 2025
Abstract
Background/Objectives: This study evaluates whether combining 68Ga-PSMA-11 PET/CT-derived imaging biomarkers with clinical risk factors improves the prediction of early biochemical recurrence (eBCR) or clinical progression in patients with high-risk prostate cancer (PCa) after primary treatment, using machine learning (ML) models. Methods: We analyzed data from 93 high-risk PCa patients who underwent 68Ga-PSMA-11 PET/CT and received primary treatment at a single center. Two predictive models were developed: a logistic regression (LR) model and an ML-derived probabilistic graphical model (PGM) based on a naïve Bayes framework. Both models were compared against each other and against the CAPRA risk score. The models' input variables were selected based on statistical analysis and domain expertise, including a literature review and expert input. A decision tree was derived from the PGM to translate its probabilistic reasoning into a transparent classifier. Results: The five key input variables were the binarized CAPRA score, maximal intraprostatic PSMA uptake intensity (SUVmax), presence of bone metastases, nodal involvement at the common iliac bifurcation, and seminal vesicle infiltration. The PGM achieved superior predictive performance, with a balanced accuracy of 0.73, sensitivity of 0.60, and specificity of 0.86, substantially outperforming both the LR model (balanced accuracy: 0.50, sensitivity: 0.00, specificity: 1.00) and CAPRA (balanced accuracy: 0.59, sensitivity: 0.20, specificity: 0.99). The decision tree provided an explainable classifier with CAPRA as the primary branch node, followed by SUVmax and specific PET-detected tumor sites. Conclusions: Integrating 68Ga-PSMA-11 imaging biomarkers with clinical parameters such as CAPRA significantly improves models that predict progression in patients with high-risk PCa undergoing primary treatment. The PGM offers superior balanced accuracy and enables risk stratification that may guide personalized treatment decisions.
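Balanced accuracy, the headline metric in this comparison, is the mean of sensitivity and specificity, which keeps a model from looking good simply by predicting the majority class (note the LR model's 0.50 despite perfect specificity). A minimal sketch with hypothetical labels, not the study's patient data:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred) -> float:
    """Mean of sensitivity (recall on positives) and specificity
    (recall on negatives) for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sensitivity = float(np.mean(y_pred[y_true == 1] == 1))
    specificity = float(np.mean(y_pred[y_true == 0] == 0))
    return (sensitivity + specificity) / 2

# Hypothetical progression labels (1 = progression) and predictions.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
ba = balanced_accuracy(y_true, y_pred)
print(round(ba, 2))  # 0.7 = (sensitivity 0.6 + specificity 0.8) / 2
```

A classifier that always predicts "no progression" would score specificity 1.0 but sensitivity 0.0, for a balanced accuracy of exactly 0.5.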
