
Search Results (257)

Search Parameters:
Keywords = set pair analysis model

13 pages, 1408 KiB  
Article
Cooper Pairs in 2D Trapped Atoms Interacting Through Finite-Range Potentials
by Erick Manuel Pineda-Ríos and Rosario Paredes
Atoms 2025, 13(1), 4; https://doi.org/10.3390/atoms13010004 - 7 Jan 2025
Viewed by 335
Abstract
This work deals with the key constituent behind the existence of superfluid states in ultracold fermionic gases confined in a harmonic trap in 2D, namely, the formation of Cooper pairs in the presence of a Fermi sea in inhomogeneous confinement. For a set of finite-range models representing particle–particle interaction, we first ascertain the simultaneity of the emergence of bound states and the divergence of the s-wave scattering length in 2D as a function of the interaction potential parameters in free space. Then, through the analysis of two particles interacting in 2D harmonic confinement, we evaluate the energy shift with respect to the discrete harmonic oscillator levels for both repulsive and attractive cases. All of these results are the basis for determining the energy gaps of Cooper pairs arising from two particles interacting in the presence of a Fermi sea consisting of particles immersed in a 2D harmonic trap. Full article
(This article belongs to the Special Issue Quantum Technologies with Cold Atoms)

18 pages, 7539 KiB  
Article
MPB-UNet: Multi-Parallel Blocks UNet for MRI Automated Brain Tumor Segmentation
by Fatma Chahbar, Medjeded Merati and Saïd Mahmoudi
Electronics 2025, 14(1), 40; https://doi.org/10.3390/electronics14010040 - 26 Dec 2024
Viewed by 458
Abstract
Brain tumor segmentation in Magnetic Resonance Imaging (MRI) is crucial for accurate diagnosis and treatment planning in neuro-oncology. This paper introduces a novel multi-parallel blocks UNet (MPB-UNet) architecture for automated brain tumor segmentation. Our approach enhances the standard UNet model by incorporating multiple parallel processing paths, inspired by the human visual system’s multi-scale processing capabilities. We integrate Atrous Spatial Pyramid Pooling (ASPP) to effectively capture multi-scale contextual information. We evaluated our proposed architecture using the publicly available Low-Grade Glioma (LGG) Segmentation Dataset. This comprehensive collection comprises 3929 axial slices of FLAIR MRI sequences from 110 patients, each slice paired with a corresponding segmentation mask. Our model demonstrated superior performance on this dataset compared with existing state-of-the-art methods, highlighting its effectiveness in accurate tumor delineation. We provide a comprehensive analysis of the model’s performance, including visual results and comparisons with other architectures. This work contributes to advancing automated brain tumor segmentation techniques, potentially improving diagnostic accuracy and efficiency in clinical settings. The proposed multi-parallel blocks UNet shows promise for integration into clinical workflows and opens avenues for future studies in medical image analysis. Our model achieves strong performance across multiple metrics: 99.86% accuracy, 99.86% precision, 99.86% sensitivity, 99.86% specificity, 99.80% Dice Similarity Coefficient (DSC), and 92.17% Average Intersection over Union (IoU). Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Biomedical Data Processing)
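Since the abstract leans on ASPP for multi-scale context, a minimal PyTorch sketch of a generic ASPP block may help; the branch count and dilation rates below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions
    capture context at several receptive-field sizes at once."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):  # rates: illustrative
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding=r with dilation=r keeps the spatial size for a 3x3 kernel
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Fuse the concatenated branch outputs back to out_ch channels.
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 32, 32)
print(ASPP(64, 128)(x).shape)  # torch.Size([1, 128, 32, 32])
```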

21 pages, 4647 KiB  
Article
DeSPPNet: A Multiscale Deep Learning Model for Cardiac Segmentation
by Elizar Elizar, Rusdha Muharar and Mohd Asyraf Zulkifley
Diagnostics 2024, 14(24), 2820; https://doi.org/10.3390/diagnostics14242820 - 14 Dec 2024
Viewed by 735
Abstract
Background: Cardiac magnetic resonance imaging (MRI) plays a crucial role in monitoring disease progression and evaluating the effectiveness of treatment interventions. Cardiac MRI allows medical practitioners to assess cardiac function accurately by providing comprehensive and quantitative information about cardiac structure and function, making it an indispensable tool for monitoring disease and treatment response. Deep learning-based segmentation enables the precise delineation of cardiac structures, including the myocardium, right ventricle, and left ventricle. The accurate segmentation of these structures helps in the diagnosis of heart failure, the assessment of cardiac functional response to therapies, and the understanding of heart function after treatment. Objectives: The objective of this study is to develop a multiscale deep learning model to segment cardiac organs from MRI data. Good segmentation performance is difficult to achieve due to the complex nature of the cardiac structure, which includes a variety of chambers, arteries, and tissues. Furthermore, the human heart is constantly beating, leading to motion artifacts that reduce image clarity and consistency. As a result, a multiscale method is explored to overcome the various challenges in segmenting cardiac MRI images. Methods: This paper proposes DeSPPNet, a multiscale deep learning network. Its foundation follows an encoder–decoder architecture that utilizes the Spatial Pyramid Pooling (SPP) layer to improve the performance of cardiac semantic segmentation. The SPP layer pools features from densely connected convolutional layers at different scales or sizes, which are then combined to retain a set of spatial information. By processing features at different spatial resolutions, the multiscale densely connected layer, in the form of the Pyramid Pooling Dense Module (PPDM), helps the network capture both local and global context, preserving finer details of the cardiac structure while also capturing the broader context required to accurately segment larger cardiac structures. The PPDM is incorporated into the deeper layers of the encoder section of the network to allow it to recognize complex semantic features. Results: An analysis of multiple PPDM placement scenarios and structural variations revealed that the 3-path PPDM, positioned at encoder layer 5, yielded optimal segmentation performance, achieving Dice, intersection over union (IoU), and accuracy scores of 0.859, 0.800, and 0.993, respectively. Conclusions: Different PPDM configurations produce different effects on the network; a shallower placement, such as encoder layer 4, retains more spatial data and needs more parallel paths to gather the optimal set of multiscale features, whereas deeper layers contain more informative features at a lower spatial resolution, which reduces the number of parallel paths required to provide optimal multiscale context. Full article
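The core idea of the SPP layer, pooling the same feature map at several grid sizes and combining the results, can be sketched in PyTorch as below. This is a generic illustration under assumed pool sizes; the paper's PPDM wires such pooling into densely connected encoder layers, which is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPBlock(nn.Module):
    """Pool the input at several output grid sizes, upsample each result
    back to the input resolution, and concatenate along channels."""
    def __init__(self, pool_sizes=(1, 2, 4)):  # pool sizes are illustrative
        super().__init__()
        self.pools = [nn.AdaptiveAvgPool2d(s) for s in pool_sizes]

    def forward(self, x):
        h, w = x.shape[-2:]
        scales = [
            F.interpolate(p(x), size=(h, w), mode="bilinear", align_corners=False)
            for p in self.pools
        ]
        # Keep the local features and append the multi-scale context maps.
        return torch.cat([x, *scales], dim=1)

x = torch.randn(1, 32, 16, 16)
print(SPPBlock()(x).shape)  # torch.Size([1, 128, 16, 16]): 32 * (1 + 3) channels
```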

19 pages, 46312 KiB  
Article
Persistent Homology Analysis of AI-Generated Fractal Patterns: A Mathematical Framework for Evaluating Geometric Authenticity
by Minhyeok Lee and Soyeon Lee
Fractal Fract. 2024, 8(12), 731; https://doi.org/10.3390/fractalfract8120731 - 13 Dec 2024
Viewed by 685
Abstract
We present a mathematical framework for analyzing fractal patterns in AI-generated images using persistent homology. Given a text-to-image mapping $M: T \to I$, we demonstrate that the persistent homology groups $H_k(t)$ of sublevel set filtrations $\{f^{-1}((-\infty, t])\}_{t \in \mathbb{R}}$ characterize multi-scale geometric structures, where $f: M(p) \to \mathbb{R}$ is the grayscale intensity function of a generated image. The primary challenge lies in quantifying self-similarity across scales, which we address by analyzing birth–death pairs $(b_i, d_i)$ in the persistence diagram $PD(M(p))$. Our contribution extends beyond applying the stability theorem to AI-generated fractals; we establish how the self-similarity inherent in fractal patterns manifests in the persistence diagrams of generated images. We validate our approach using the Stable Diffusion 3.5 model for four fractal categories: ferns, trees, spirals, and crystals. An analysis of guidance scale effects $\gamma \in [4.0, 8.0]$ reveals monotonic relationships between model parameters and topological features. Stability testing confirms robustness under noise perturbations $\eta \leq 0.2$, with feature count variations $\Delta\mu_f < 0.5$. Our framework provides a foundation for enhancing generative models and evaluating their geometric fidelity in fractal pattern synthesis. Full article
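A sublevel-set persistence diagram of a grayscale image is commonly computed via a cubical complex. A minimal sketch using the gudhi library follows; the library choice and the synthetic image are assumptions, as the abstract does not name any software.

```python
import numpy as np
import gudhi

# Synthetic grayscale "image" standing in for f, the intensity of M(p).
rng = np.random.default_rng(0)
img = rng.random((64, 64))

# Sublevel-set filtration of the cubical complex built on pixel intensities.
cc = gudhi.CubicalComplex(top_dimensional_cells=img)
diagram = cc.persistence()  # list of (dimension, (birth, death)) pairs

# Birth-death pairs (b_i, d_i) for H_1 (loops), finite deaths only.
h1 = [(b, d) for dim, (b, d) in diagram if dim == 1 and d != float("inf")]
print(f"{len(h1)} finite H1 features; max persistence = "
      f"{max(d - b for b, d in h1):.3f}")
```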

13 pages, 5489 KiB  
Article
CT-Free Attenuation Correction in Paediatric Long Axial Field-of-View Positron Emission Tomography Using Synthetic CT from Emission Data
by Maria Elkjær Montgomery, Flemming Littrup Andersen, René Mathiasen, Lise Borgwardt, Kim Francis Andersen and Claes Nøhr Ladefoged
Diagnostics 2024, 14(24), 2788; https://doi.org/10.3390/diagnostics14242788 - 12 Dec 2024
Viewed by 694
Abstract
Background/Objectives: Paediatric PET/CT imaging is crucial in oncology but poses significant radiation risks due to children’s higher radiosensitivity and longer post-exposure life expectancy. This study aims to minimize radiation exposure by generating synthetic CT (sCT) images from emission PET data, eliminating the need for attenuation correction (AC) CT scans in paediatric patients. Methods: We utilized a cohort of 128 paediatric patients, resulting in 195 paired PET and CT images. Data were acquired using Siemens Biograph Vision 600 and Long Axial Field-of-View (LAFOV) Siemens Vision Quadra PET/CT scanners. A 3D parameter-transferred conditional GAN (PT-cGAN) architecture, pre-trained on adult data, was adapted and trained on the paediatric cohort. The model’s performance was evaluated qualitatively by a nuclear medicine specialist and quantitatively by comparing sCT-derived PET (sPET) with standard PET images. Results: The model demonstrated high qualitative and quantitative performance. Visual inspection found either no differences (19/23) or only minor, clinically insignificant differences (4/23) in image quality between PET and sPET. Quantitative analysis revealed a mean SUV relative difference of −2.6 ± 5.8% across organs, with high agreement in lesion overlap (Dice coefficient of 0.92 ± 0.08). The model also performed robustly in low-count settings, maintaining performance with reduced acquisition times. Conclusions: The proposed method effectively reduces radiation exposure in paediatric PET/CT imaging by eliminating the need for AC CT scans. It maintains high diagnostic accuracy and minimises motion-induced artifacts, making it a valuable alternative for clinical application. Further testing in clinical settings is warranted to confirm these findings and enhance patient safety. Full article
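The Dice coefficient used above for lesion overlap is a simple set-overlap measure; a minimal numpy sketch with toy masks (illustrative, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy lesion masks standing in for PET vs. synthetic-CT-corrected PET.
pet = np.zeros((8, 8), dtype=bool); pet[2:6, 2:6] = True
spet = np.zeros((8, 8), dtype=bool); spet[3:6, 2:6] = True
print(round(dice_coefficient(pet, spet), 3))  # 0.857
```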

27 pages, 42580 KiB  
Article
Deep Rolling Process Modeling Using Finite Element Analysis in Residual Stress Measurement on Rail Head UIC860 Surface
by Siwasit Pitjamit, Wasawat Nakkiew, Pinmanee Insua, Adirek Baisukhan and Pattarawadee Poolperm
Appl. Sci. 2024, 14(23), 11222; https://doi.org/10.3390/app142311222 - 2 Dec 2024
Viewed by 747
Abstract
This study investigates the effects of the deep rolling parameters pressure, speed, and offset on the residual stress distribution and material deformation in UIC 860 Grade 900A railway rails. Deep rolling was modeled to simulate the process and predict the residual stress profile in railway rails, and the FEM simulation results were then rigorously compared with experimental data to optimize the deep rolling parameters for improved residual stress distribution. Using both experimental methods and finite element analysis via ANSYS 2023 R1, the study varied the deep rolling parameters. The experimental deep rolling pressure was set at 150 bar, the speed at 1800 mm/min, and the offset at 0.1 mm, while the FEA simulations used a corresponding pressure of 157 bar and speed of 1796.52 mm/min. These parameter settings were chosen to induce significant surface compressive stresses that could enhance the material’s mechanical performance. The experimental results showed an average compressive residual stress of 498.9 MPa, closely aligning with the FEA-predicted value of 502.5 MPa. A paired t-test revealed no statistically significant difference between the two results, with a T-value of −0.22 and a p-value of 0.833, validating the reliability of the FEA model. The consistent deformation observed in both the experimental and FEA results, especially with a 0.1 mm offset, confirmed that the rolling parameters were effective in producing a uniform stress distribution, albeit with a slightly extended processing time due to the small offset. Overall, the findings confirm that optimizing the deep rolling parameters of pressure, speed, and offset leads to favorable residual stress distributions and improved material properties. The results indicate that FEA is a reliable tool for predicting the outcomes of deep rolling, and this study provides a strong foundation for further refinement of the process to enhance performance in practical applications, such as railway rail treatments. Full article
(This article belongs to the Section Applied Industrial Technologies)
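The paired t-test reported above (T = −0.22, p = 0.833) compares matched experimental and FEA residual-stress values; a scipy sketch of that check follows. The measurement vectors are hypothetical placeholders, since the abstract reports only the two means (498.9 vs. 502.5 MPa).

```python
import numpy as np
from scipy import stats

# Hypothetical paired compressive residual-stress samples (MPa);
# only the means (~499 experimental vs. ~502 FEA) echo the abstract.
experimental = np.array([486.0, 510.2, 495.7, 503.4, 499.2])
fea_predicted = np.array([487.1, 508.9, 497.0, 502.2, 500.5])

# Paired t-test on the per-specimen differences.
t_stat, p_value = stats.ttest_rel(experimental, fea_predicted)
print(f"T = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value means the paired differences are statistically
# indistinguishable from zero, i.e., the FEA model matches the measurements.
```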

15 pages, 836 KiB  
Article
Determinants of Neonatal Mortality at a Referral Paediatric Hospital in Angola: A Case–Control Study Using Theoretical Frameworks
by Israel C. Avelino, Joaquim Van-Dúnem and Luís Varandas
Int. J. Environ. Res. Public Health 2024, 21(12), 1609; https://doi.org/10.3390/ijerph21121609 - 30 Nov 2024
Viewed by 858
Abstract
Neonatal mortality rates in developing countries are influenced by a complex array of factors. Despite advancements in healthcare, Angola has one of the highest neonatal mortality rates in sub-Saharan Africa, with significant contributors including premature birth, intrapartum events, tetanus, and sepsis. This study, utilizing key theoretical frameworks such as intersectionality, social determinants of health (SDOH), and ecosocial theory, aimed to identify the primary causes and contributing factors of neonatal mortality among infants admitted to the Neonatology Service at DBPH in Luanda from May 2022 to June 2023. A retrospective matched case–control design was employed, pairing each neonatal death with two surviving neonates based on age and sex. The analysis included 318 newborns, of whom 106 experienced hospital deaths. A stepwise binary logistic regression model was used to examine associations between variables and neonatal mortality. Variables with p < 0.25 in bivariate analysis were included in the multivariate model. Significant factors associated with neonatal mortality included the following: a low Apgar score at 1 min (<7) (OR 2.172; 95% CI: 1.436–4.731); maternal age under 20 years (OR 3.746; 95% CI: 2.172–6.459); home delivery (OR 1.769; 95% CI: 1.034–3.027); and duration of illness before admission ≥ 3 days (OR 2.600; 95% CI: 1.317–5.200). Addressing these issues requires urgent interventions, including improving Apgar score management through enhanced training for healthcare professionals, supporting young mothers with intensified maternal education, ensuring deliveries occur in appropriate healthcare settings, and improving universal health coverage and referral systems. These measures could be crucial for enhancing neonatal care and reducing mortality. Full article
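The odds ratios and confidence intervals above come from a binary logistic regression; a minimal statsmodels sketch on synthetic data follows. The predictors mirror the abstract, but the data are fabricated placeholders, with coefficients seeded near the logs of the reported ORs purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 318  # cohort size from the study
# Synthetic binary predictors (illustration only, not the study data):
# low 1-min Apgar (<7), maternal age < 20, home delivery, illness >= 3 days.
X = rng.integers(0, 2, size=(n, 4)).astype(float)
# Log-odds coefficients chosen as ln(OR) of the reported estimates.
logit = -1.5 + X @ np.array([0.78, 1.32, 0.57, 0.96])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(model.params)   # exp(beta) gives the OR per predictor
conf_int = np.exp(model.conf_int())  # 95% CI on the OR scale
print(odds_ratios.round(2), conf_int.round(2), sep="\n")
```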

24 pages, 4066 KiB  
Article
Motion-Accurate Allocation of a Mechanical Transmission System Based on Meta-Action and Intuitionistic Trapezoidal Fuzzy Numbers
by Zongyi Mu, Jian Li, Xiaogang Zhang, Genbao Zhang, Jinyuan Li and Hao Wei
Actuators 2024, 13(12), 484; https://doi.org/10.3390/act13120484 - 28 Nov 2024
Viewed by 374
Abstract
The traditional motion accuracy allocation process for mechanical transmission systems has two problems: the error modeling process cannot reflect the error formation mechanism of the system, and the optimal allocation model of motion accuracy ignores both maintenance costs and the motion accuracy robustness of the system. In this paper, firstly, meta-action theory is introduced: taking the meta-action unit as the basic analysis unit, the error modeling of mechanical transmission systems is studied and the formation mechanism of the motion error is analyzed. Secondly, the comprehensive cost of a mechanical transmission system, considering the part manufacturing cost, assembly cost and maintenance cost per unit, is evaluated using the multi-criteria decision-making (MCDM) method. Thirdly, based on the motion error model, a robust model for system motion accuracy is obtained by analyzing the sensitivity of each motion pair. Then, a multi-objective optimal allocation model of motion accuracy is established. The model is solved by an intelligent algorithm to obtain the Pareto non-dominated solution set, and the optimal solution is selected by the fuzzy set method. Finally, the method described in this paper is illustrated by an engineering example. Full article
(This article belongs to the Section Control Systems)
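Extracting the Pareto non-dominated set from candidate accuracy allocations can be sketched in plain Python with a generic dominance filter; the objective pairs below are toy values, and the paper's intelligent solver and fuzzy-set selection step are not reproduced.

```python
def pareto_front(points):
    """Return the non-dominated points, assuming all objectives are
    minimized (e.g., comprehensive cost vs. robustness loss)."""
    front = []
    for p in points:
        # p is dominated if some q is at least as good in every objective
        # and differs from p.
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Toy (cost, robustness-loss) pairs for candidate allocations.
candidates = [(3.0, 0.9), (2.5, 1.2), (4.0, 0.5), (3.5, 1.1), (2.8, 1.0)]
print(pareto_front(candidates))
# [(3.0, 0.9), (2.5, 1.2), (4.0, 0.5), (2.8, 1.0)]: only (3.5, 1.1) is dominated
```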

17 pages, 5123 KiB  
Article
Development and Validation of a Novel Four Gene-Pairs Signature for Predicting Prognosis in DLBCL Patients
by Atsushi Tanabe, Jerry Ndzinu and Hiroeki Sahara
Int. J. Mol. Sci. 2024, 25(23), 12807; https://doi.org/10.3390/ijms252312807 - 28 Nov 2024
Viewed by 526
Abstract
Diffuse large B-cell lymphoma (DLBCL) is the most common subtype of non-Hodgkin’s lymphoma. Because individual clinical outcomes of DLBCL in response to standard therapy differ widely, new treatment strategies are being investigated to improve therapeutic efficacy. In this study, we identified a novel signature for the stratification of DLBCL useful for prognosis prediction and treatment selection. First, 408 prognostic gene sets were selected from approximately 2500 DLBCL samples in public databases, from which four gene-pair signatures consisting of seven prognostic genes were identified by Cox regression analysis. A risk score was then calculated based on these gene pairs, and we validated it as a prognostic predictor of DLBCL patient outcomes. This risk score demonstrated independent predictive performance even when combined with other clinical parameters and molecular subtypes. Evaluating external DLBCL cohorts, we demonstrated that the risk-scoring model based on the four gene-pair signatures leads to stable predictive performance compared with nine existing predictive models. Finally, high-risk DLBCL showed high resistance to DNA damage caused by anticancer drugs, suggesting that this characteristic is responsible for the unfavorable prognosis of high-risk DLBCL patients. These results provide a novel index for classifying the biological characteristics of DLBCL and clearly indicate the importance of genetic analyses in the treatment of DLBCL. Full article
(This article belongs to the Section Molecular Pathology, Diagnostics, and Therapeutics)
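A gene-pair signature typically scores a sample by whether the first gene of each pair is expressed above the second, which makes the score robust to normalization because it depends only on within-sample ranks. A minimal sketch of such a risk score follows; the pair names and weights are hypothetical, since the abstract does not list the seven genes.

```python
# Hypothetical gene-pair signature: (gene_a, gene_b, weight); a pair
# "fires" when gene_a is expressed more highly than gene_b in a sample.
signature = [("G1", "G2", 1.2), ("G3", "G4", -0.8),
             ("G5", "G6", 0.9), ("G7", "G2", 0.5)]

def risk_score(expr: dict) -> float:
    """expr maps gene name -> expression value for one sample."""
    return sum(w for a, b, w in signature if expr[a] > expr[b])

sample = {"G1": 5.1, "G2": 3.2, "G3": 2.0, "G4": 4.4,
          "G5": 6.0, "G6": 1.1, "G7": 2.9}
print(risk_score(sample))  # 2.1: only the G1>G2 and G5>G6 pairs fire
```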

22 pages, 13883 KiB  
Article
Applying the Improved Set Pair Analysis Method to Flood Season Staging in Tropical Island Rivers: A Case Study of the Hainan Island Rivers in China
by Puwei Wu, Gang Chen, Yukai Wang and Jun Li
Water 2024, 16(23), 3418; https://doi.org/10.3390/w16233418 - 27 Nov 2024
Viewed by 525
Abstract
The seasonality of floods is a key factor affecting riparian agriculture, and flood season staging is the main means of identifying it. Set pair analysis is widely used for staging, but the set pair analysis method (SPAM) cannot account for the differences in, and volatility of, the staging indicators, nor can it provide staging schemes tailored to different scenarios. To address these problems, the improved set pair analysis method (ISPAM) is proposed. Kernel density estimation (KDE) is used to calculate the interval of each staging indicator to express its volatility. Based on interval theory, the deviation method is improved and the weights of the staging indicators are calculated to reflect the differences between indicators. The theoretical connection coefficient is calculated by combining the weights and interval indicators and fitting the empirical connection coefficient corresponding to each time period. Finally, the ISPAM is established under different confidence levels to derive staging schemes for different scenarios. Based on daily average precipitation and flow data from 1961 to 2022 in the middle Nandujiang basin and surrounding areas in tropical island regions, the staging effect of the ISPAM was verified against the SPAM, the Fisher optimal segmentation method, the improved set pair analysis method without considering differences in the indicator weights (ISPAM-WCDIIW), and the improved set pair analysis method without considering indicator fluctuations (ISPAM-WCIF). According to the silhouette coefficient evaluation, the ISPAM provided the optimal staging scheme for 100% of the years in the test set (2011–2022) compared with the SPAM and ISPAM-WCIF, and for more than 83% of the years (2011, 2013–2015, and 2017–2022) compared with the Fisher optimal segmentation method. Although the ISPAM-WCDIIW, like the ISPAM, can provide optimal staging schemes, it could not provide an exact staging scheme for more than 55% of the scenarios (namely scenarios (0.7, 0.6), (0.8, 0.6), (0.8, 0.9), (0.95, 0.6), and (0.95, 0.8)). The results show that the ISPAM is more reasonable and credible than the SPAM, the Fisher optimal segmentation method, the ISPAM-WCDIIW, and the ISPAM-WCIF, providing a reference for flood season staging research. Full article
(This article belongs to the Section Hydrology)
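The SPAM family builds on the classical set pair connection degree mu = a + b·i + c·j over identical, discrepant, and contrary comparisons, and the abstract pairs this with KDE-derived intervals to express indicator volatility. A sketch combining the two ideas follows; it is a generic illustration, not the authors' ISPAM, and the coefficient i, the counts, and the synthetic precipitation record are all assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def connection_degree(identical, discrepant, contrary, i=0.5, j=-1.0):
    """Classical SPA: mu = a + b*i + c*j, where a, b, c are the fractions
    of identical, discrepant, and contrary comparisons (a + b + c = 1).
    Conventionally j = -1 and i lies in [-1, 1]; i=0.5 is illustrative."""
    n = identical + discrepant + contrary
    a, b, c = identical / n, discrepant / n, contrary / n
    return a + b * i + c * j

# KDE-based interval for a staging indicator (e.g., daily precipitation),
# expressing its volatility as a central credible range.
rng = np.random.default_rng(3)
precip = rng.gamma(shape=2.0, scale=8.0, size=500)  # synthetic record
kde = gaussian_kde(precip)
grid = np.linspace(precip.min(), precip.max(), 1000)
cdf = np.cumsum(kde(grid)); cdf /= cdf[-1]  # approximate CDF on the grid
low, high = grid[np.searchsorted(cdf, 0.1)], grid[np.searchsorted(cdf, 0.9)]
print(f"mu = {connection_degree(60, 25, 15):.3f}; "
      f"80% indicator interval = [{low:.1f}, {high:.1f}]")
```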

728 KiB  
Proceeding Paper
Exploring Sleep Apnea Risk Factors with Contrast Set Mining: Findings from the Sleep Heart Health Study
by Nhung H. Hoang and Zilu Liang
Eng. Proc. 2024, 82(1), 84; https://doi.org/10.3390/ecsa-11-20462 - 26 Nov 2024
Viewed by 82
Abstract
Sleep apnea is a common sleep disorder with potentially serious health consequences. Identifying risk factors for sleep apnea is crucial for early detection and effective management. Traditionally, this has been achieved through statistical methods such as Pearson’s and Spearman’s correlation analysis, which examine relationships between individual variables and sleep apnea. However, these methods often miss complex, nonlinear patterns and interactions among multiple factors. In this study, we applied contrast set mining to identify patterns in attribute–value pair combinations (contrast sets) in the Sleep Heart Health Study database that differentiate between groups with varying levels of sleep apnea severity. Our findings reveal that males and individuals aged 60 to 80 exhibit a higher risk of sleep apnea, with a confidence exceeding 75%. Moreover, male patients diagnosed with second-degree obesity, defined as a body mass index (BMI) between 35 and 39.9 kg/m2, show an elevated risk of severe apnea, with a lift of over 2.23, support over 16%, and confidence around 80%. In contrast, female patients with a BMI within the normal range (18–25 kg/m2) demonstrate a lower risk of sleep apnea, with a lift of 2.36, support of 17%, and confidence exceeding 90%. Contrast set mining helps uncover meaningful rules within subgroups that traditional methods might overlook. Future research will focus on developing sleep apnea screening models based on these identified contrast set rules. Full article
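Support, confidence, and lift, the metrics quoted for each contrast set above, are the standard rule-mining measures; a small sketch on toy records follows (illustrative only, the SHHS data are not reproduced here).

```python
# Toy records: (sex, bmi_class, severe_apnea) standing in for SHHS rows.
records = [
    ("M", "obese2", True), ("M", "obese2", True), ("M", "normal", False),
    ("F", "normal", False), ("F", "normal", False), ("M", "obese2", False),
    ("F", "obese2", True), ("M", "normal", True), ("F", "normal", False),
    ("M", "obese2", True),
]

def rule_metrics(records, antecedent, consequent):
    """antecedent and consequent are predicates over a record."""
    n = len(records)
    matched = [r for r in records if antecedent(r)]
    both = [r for r in matched if consequent(r)]
    support = len(both) / n                       # P(antecedent and consequent)
    confidence = len(both) / len(matched)         # P(consequent | antecedent)
    base_rate = sum(consequent(r) for r in records) / n
    lift = confidence / base_rate                 # enrichment over the base rate
    return support, confidence, lift

s, c, l = rule_metrics(
    records,
    antecedent=lambda r: r[0] == "M" and r[1] == "obese2",
    consequent=lambda r: r[2],
)
print(f"support={s:.2f} confidence={c:.2f} lift={l:.2f}")
# support=0.30 confidence=0.75 lift=1.50
```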

21 pages, 8349 KiB  
Article
Quality Evaluation of Effective Abrasive Grains Micro-Edge Honing Based on Trapezoidal Fuzzy Analytic Hierarchy Process and Set Pair Analysis
by Jie Su, Yuan Liang, Yue Yu, Fuwei Wang, Jiancong Zhou, Lin Liu and Yang Gao
Appl. Sci. 2024, 14(23), 10939; https://doi.org/10.3390/app142310939 - 25 Nov 2024
Viewed by 513
Abstract
Studying the factors affecting machining accuracy, surface quality, and machining efficiency in the power honing process system, analyzing the basic relationships between the various errors and machining quality, and developing methods for evaluating honing quality all have important engineering application value for improving the machining quality and transmission performance of hardened gears. Firstly, this paper establishes an effective abrasive grains micro-edge honing quality evaluation model, proposes a method based on the Trapezoidal Fuzzy Analytic Hierarchy Process (Tra-FAHP) and Set Pair Analysis (SPA) to comprehensively evaluate the quality of the honing process, and obtains the influence weight of each factor on honing quality. Secondly, the paper analyzes the influence of three abrasive grain sizes on helix error, tooth pitch error, tooth profile error, surface roughness, and honing efficiency. Finally, the correctness of the established comprehensive evaluation model of honing quality was verified with the threshold method and the obtained weights. The research results show that the model can correctly evaluate the quality of hardened gear honing and can be applied to studying the influence of abrasive grain micro-edge honing on machining characteristics. Full article
(This article belongs to the Section Surface Sciences and Technology)
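In a Tra-FAHP, each judgment is a trapezoidal fuzzy number (a, b, c, d) that is defuzzified before the weights are normalized; one common defuzzification is the mean of the four corners. A minimal sketch of that step follows; it is a generic fragment under assumed judgments, not the authors' full model.

```python
def defuzzify(tfn):
    """Defuzzify a trapezoidal fuzzy number (a, b, c, d) with a <= b <= c <= d
    by averaging its corners (one common graded-mean simplification)."""
    a, b, c, d = tfn
    return (a + b + c + d) / 4.0

# Hypothetical fuzzy importance judgments for four honing-quality factors.
judgments = {
    "helix_error":   (2.0, 3.0, 4.0, 5.0),
    "pitch_error":   (1.0, 2.0, 2.5, 3.5),
    "profile_error": (3.0, 4.0, 5.0, 6.0),
    "roughness":     (1.0, 1.5, 2.0, 3.0),
}

crisp = {k: defuzzify(v) for k, v in judgments.items()}
total = sum(crisp.values())
weights = {k: round(v / total, 3) for k, v in crisp.items()}
print(weights)  # normalized influence weights summing to ~1
```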

21 pages, 4548 KiB  
Article
Evaluating Google Maps’ Eco-Routes: A Metaheuristic-Driven Microsimulation Approach
by Aleksandar Jovanovic, Slavica Gavric and Aleksandar Stevanovic
Geographies 2024, 4(4), 732-752; https://doi.org/10.3390/geographies4040040 - 24 Nov 2024
Viewed by 730
Abstract
Eco-routing, as a key strategy for mitigating urban pollution, is gaining prominence due to the fact that minimizing travel time alone does not necessarily result in the lowest fuel consumption. This research focuses on the challenge of selecting environmentally friendly routes within an urban street network. Employing microsimulation modelling and a computer-generated mirror of a small traffic network, the study integrates real-world traffic patterns to enhance accuracy. The route selection process is informed by fuel consumption and emissions data from trajectory parameters obtained during simulation, utilizing the Comprehensive Modal Emission Model (CMEM) for emission estimation. A comprehensive analysis of specific origin–destination pairs was conducted to assess the methodology, with all vehicles adhering to routes recommended by Google Maps. The findings reveal a noteworthy disparity between microsimulation results and Google Maps recommendations for eco-friendly routes within the University of Pittsburgh Campus street network. This incongruence underscores the necessity for further investigations to validate the accuracy of Google Maps’ eco-route suggestions in urban settings. As urban areas increasingly grapple with pollution challenges, such research becomes pivotal for refining and optimizing eco-routing strategies to effectively contribute to sustainable urban mobility. Full article
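Once each link carries an estimated fuel cost rather than a travel time, eco-routing reduces to a shortest-path search; a networkx sketch on a toy graph follows. The edge costs are hypothetical, and per-link CMEM-derived fuel estimates from the microsimulation would replace them.

```python
import networkx as nx

# Toy street network: each edge weighted by travel time (min) and by an
# assumed CMEM-style fuel estimate (liters) for the same link.
G = nx.DiGraph()
G.add_edge("A", "B", time=4.0, fuel=0.42)
G.add_edge("B", "D", time=5.0, fuel=0.55)
G.add_edge("A", "C", time=6.0, fuel=0.31)
G.add_edge("C", "D", time=5.5, fuel=0.33)

fastest = nx.shortest_path(G, "A", "D", weight="time")
greenest = nx.shortest_path(G, "A", "D", weight="fuel")
print(fastest, greenest)  # ['A', 'B', 'D'] ['A', 'C', 'D']
# The fastest route (9.0 min, 0.97 L) is not the lowest-fuel route
# (11.5 min, 0.64 L): exactly the kind of disparity the study probes.
```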

20 pages, 3523 KiB  
Article
Optimization of Ecological Dispatch and Hydrodynamic Improvements in Tidal River Channels Using SWMM Modeling: A Case Study of the Longjin Yangqi Area in Kurama Mountain
by Wentao Zhou and Weihong Liao
Water 2024, 16(22), 3336; https://doi.org/10.3390/w16223336 - 20 Nov 2024
Viewed by 564
Abstract
Being tidal-sensitive, the river channel in the Longjin Yangqi area of Cangshan, Fuzhou City, is further challenged by rapid urbanization, making remediation efforts crucial. This study first analyzes the hydrodynamic characteristics of the area and then proposes an ecological dispatch solution, evaluating its effectiveness with the Storm Water Management Model (SWMM). The chief tasks cover simulating rainfall runoff, optimizing sluice gate activities, reorganizing pump management, and reshaping river morphology to bolster flood control and water quality. The ecological dispatch strategies suggested widening the river channel and deepening the riverbed, thereby increasing the flood duration, lowering water levels, and making flood occurrences less frequent. Optimizing sluice gate settings improved the efficiency of water flow regulation and reduced scour and siltation problems. Adjustments to pumping operations at various times, based on live-data analysis, enhanced water flow and the self-purification capacity of the water body. The SWMM was applied directly in this tidal river for urban water resource management, with data from over 100,000 points processed in simulations. Model parameters were adjusted where needed to improve the model’s capability and its applicability to future urban settings. As a whole, this study presents a plan for sustainable water resource management, paired with environmental conditions, for the benefit of over 500,000 urban residents in the Longjin Yangqi area. Full article
(This article belongs to the Section Hydraulics and Hydrodynamics)
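Driving a SWMM model programmatically, for instance to test sluice-gate and pump schedules, is commonly done through the pyswmm package; this is an assumption, as the abstract does not name its tooling, and both the input file and the node ID below are hypothetical.

```python
from pyswmm import Simulation, Nodes

# Hypothetical SWMM input file for the studied tidal river network.
with Simulation("longjin_yangqi.inp") as sim:
    outfall = Nodes(sim)["tidal_outfall"]  # hypothetical node ID
    peak_depth = 0.0
    for _ in sim:  # step through the rainfall-runoff simulation
        peak_depth = max(peak_depth, outfall.depth)
    print(f"peak water depth at outfall: {peak_depth:.2f}")
```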

24 pages, 3492 KiB  
Article
Syntactic–Semantic Detection of Clone-Caused Vulnerabilities in the IoT Devices
by Maxim Kalinin and Nikita Gribkov
Sensors 2024, 24(22), 7251; https://doi.org/10.3390/s24227251 - 13 Nov 2024
Viewed by 736
Abstract
This paper addresses the problem of IoT security caused by code cloning when developing a massive variety of different smart devices. A clone detection method is proposed to identify clone-caused vulnerabilities in IoT software. A hybrid solution combines syntactic and semantic analyses of the code. Based on the recovered code, an attributed abstract syntax tree is constructed for each code fragment. All nodes of the commonly used abstract syntax tree are proposed to be weighted with semantic attribute vectors. Each attributed tree is then encoded as a semantic vector using a Deep Graph Neural Network. Two graph networks are combined into a Siamese neural model, allowing training to generate semantic vectors and compare vector pairs within each training epoch. Semantic analysis is also applied to clones with low similarity metric values. This allows one to correct the similarity decision in the case of incorrect matching of functions at the syntactic level. To automate the search for clones, the BinDiff algorithm is added in the first stage to accurately select clone candidates. This has a positive impact on the ability to apply the proposed method to large sets of binary code. In an experimental study, the developed method—compared to BinDiff, Gemini, and Asteria tools—has demonstrated the highest efficiency. Full article
(This article belongs to the Special Issue AI-Based Security and Privacy for IoT Applications)
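The Siamese arrangement described above applies one shared encoder to both code fragments and trains on the similarity of the resulting semantic vectors. A PyTorch sketch follows, with the paper's Deep Graph Neural Network over attributed ASTs replaced by a plain MLP over fixed-size feature vectors for brevity; the dimensions and loss are illustrative.

```python
import torch
import torch.nn as nn

class SiameseCodeMatcher(nn.Module):
    """A shared encoder maps two code-fragment feature vectors to embeddings;
    cosine similarity of the embeddings scores the clone-candidate pair."""
    def __init__(self, in_dim=128, emb_dim=64):
        super().__init__()
        # Stand-in for the paper's graph network over attributed ASTs.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim)
        )

    def forward(self, frag_a, frag_b):
        za, zb = self.encoder(frag_a), self.encoder(frag_b)
        return nn.functional.cosine_similarity(za, zb, dim=-1)

model = SiameseCodeMatcher()
a, b = torch.randn(4, 128), torch.randn(4, 128)  # toy fragment features
sim = model(a, b)                        # in [-1, 1], higher = more similar
loss = nn.MSELoss()(sim, torch.ones(4))  # e.g., push known clone pairs to 1
loss.backward()
print(sim.detach())
```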
