Appl. Sci., Volume 14, Issue 1 (January-1 2024) – 471 articles

Cover Story: Silver nanoparticles have long been known for their antibacterial properties. Recently, an increasing number of studies have confirmed that they have antifungal properties as well. Given this growing body of work, this review summarizes most of the research conducted so far in the field and presents the activity of silver nanoparticles against fungal pathogens of humans and plants, the green synthesis of silver nanoparticles, and their mechanism of action. Their combined activity with antifungal drugs and toxicity assessments are also presented. The review describes the antifungal activity of silver nanoparticles against pathogens such as F. oxysporum, F. graminearum, T. asahii, B. cinerea, P. concavum, and Pestalotia sp., as well as many species of the genus Candida.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
19 pages, 9912 KiB  
Article
Research on VVC Intra-Frame Bit Allocation Scheme Based on Significance Detection
by Xuesong Jin, Huiyuan Sun and Yuhang Zhang
Appl. Sci. 2024, 14(1), 471; https://doi.org/10.3390/app14010471 - 4 Jan 2024
Viewed by 2625
Abstract
This research builds on the intra-frame rate control algorithm of the Versatile Video Coding (VVC) standard, in which the bit allocation process tends to over-allocate bitrate to the last coding tree units (CTUs) while the front CTUs are not effectively compressed. Fusing a Canny-based edge detection algorithm, a color contrast-based saliency detection algorithm, a Sum of Absolute Transformed Differences (SATD)-based CTU coding complexity measure, and a Partial Least Squares (PLS) regression model, this paper proposes a CTU-level bit allocation improvement scheme for intra-mode rate control of the VVC standard. First, natural images are selected to produce a lightweight dataset. Second, different metrics are utilized to obtain the significance and complexity values of each coding unit, the relatively important coding units in the whole frame are selected and adjusted with different weights, and the optimal adjustment factors are added to the dataset. Finally, the PLS regression model is used to obtain regression equations that refine the weights for adjusting the bit allocation. The proposed bit allocation scheme improves the average rate control accuracy by 0.453%, Y-PSNR by 0.05 dB, BD-rate savings by 0.33%, and BD-PSNR by 0.03 dB compared to the VVC standard rate control algorithm. Full article
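As a rough illustration of the weight-refinement step described above, the following sketch fits a Partial Least Squares regression that maps per-CTU saliency and complexity metrics to a bit-allocation weight adjustment. The feature names, the toy data, and the two-component setting are illustrative assumptions, not the authors' dataset or implementation.

```python
# Hypothetical sketch: PLS regression from per-CTU descriptors to a weight adjustment.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# Per-CTU descriptors: [edge density (Canny), color-contrast saliency, SATD complexity]
X = rng.random((200, 3))
# Target: weight-adjustment factor assumed to have been determined offline for each CTU
y = 1.0 + 0.5 * X[:, 1] + 0.3 * X[:, 2] + 0.05 * rng.standard_normal(200)

pls = PLSRegression(n_components=2)
pls.fit(X, y)

# Refine the default bit-allocation weight of a new CTU from its descriptors
new_ctu = np.array([[0.4, 0.8, 0.6]])
weight_adjustment = pls.predict(new_ctu)[0, 0]
print(f"predicted weight adjustment: {weight_adjustment:.3f}")
```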

15 pages, 3096 KiB  
Article
Prediction Models for Mechanical Properties of Cement-Bound Aggregate with Waste Rubber
by Matija Zvonarić, Mirta Benšić, Ivana Barišić and Tihomir Dokšanović
Appl. Sci. 2024, 14(1), 470; https://doi.org/10.3390/app14010470 - 4 Jan 2024
Cited by 2 | Viewed by 1462
Abstract
The high stiffness of cement-bound aggregate (CBA) is recognized as its main drawback. The stiffness is described by the modulus of elasticity, which is difficult to determine precisely in CBA. Incorporating rubber in these mixtures reduces their stiffness, but mathematical models of the influence of rubber on the mechanical characteristics have not previously been defined. The scope of this research was to define a prediction model for the compressive strength (fc), dynamic modulus of elasticity (Edyn) and static modulus of elasticity (Est) based on the measured ultrasonic pulse velocity as a non-destructive test method. The difference between these two moduli lies in the measurement method. Within this research, the cement and waste rubber content were varied, and the mechanical properties were determined for three curing periods. The Edyn was measured using the ultrasonic pulse velocity (UPV), while the Est was determined using three-dimensional digital image correlation (3D DIC). The influence of the amount of cement and rubber and the curing period on the UPV was determined. The development of prediction models for estimating the fc and Est of CBA modified with waste rubber based on the non-destructive test results is highlighted as the most significant contribution of this work. The curing period was statistically significant for the prediction of the Est, which points to the development of CBA elastic properties through different stages during the cement-hydration process. By contrast, the curing period was not statistically significant when estimating the fc, resulting in a simplified, practical and usable prediction model. Full article

20 pages, 4486 KiB  
Article
Effect of Coal Particle Breakage on Gas Desorption Rate during Coal and Gas Outburst
by Qiang Cheng, Gun Huang, Zhiqiang Li, Jie Zheng and Qinming Liang
Appl. Sci. 2024, 14(1), 469; https://doi.org/10.3390/app14010469 - 4 Jan 2024
Cited by 4 | Viewed by 1264
Abstract
The gas contained in coal plays a crucial role in triggering coal and gas outbursts. During an outburst, a large quantity of gas originally absorbed by coal is released from pulverized coal. The role this part of the gas plays in the process of coal and gas outbursts has not been clearly elucidated yet. Therefore, investigating the changes in gas desorption rate from coal particles of different sizes could provide some meaningful insights into the outburst process and improve our understanding of the outburst mechanism. First, combining the diffusivity of coal of different particle sizes and the distribution function of broken coal, we present a gas desorption model for fragmented gas-bearing coal that can quantify gas desorption from coal particles within a certain range of size. Second, the gas desorption rate ratio is defined as the ratio of the gas desorption rate from coal being crushed to that from coal before breaking. The desorption rate ratio is mainly determined by the desorption index (γ) and the granularity distribution index (α). Within the limit range of coal particle sizes, the ratio of effective diffusion coefficient for coal particles with different sizes is directly proportional to the reciprocal of the ratio of particle sizes. Under uniform particle size conditions before and after fragmentation, the gas desorption rate ratio is the square root of the reciprocal of the effective diffusion coefficient. The gas desorption model quantitatively elucidates the accelerated desorption of adsorbed gas in coal during the continuous fragmentation process of coal during an outburst. Full article
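Read literally, the two scaling relations quoted in the abstract can be restated in symbols as follows; this is a hedged reading only, with subscripts 1 and 2 denoting the coal before and after crushing, \(d\) the particle size, \(D_e\) the effective diffusion coefficient, and \(\eta\) the gas desorption rate ratio defined above:

\[
\frac{D_{e,1}}{D_{e,2}} \propto \frac{d_2}{d_1},
\qquad
\eta = \sqrt{\frac{D_{e,2}}{D_{e,1}}} \quad \text{(uniform particle size before and after fragmentation)}.
\]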

20 pages, 1500 KiB  
Article
A New Method for 2D-Adapted Wavelet Construction: An Application in Mass-Type Anomalies Localization in Mammographic Images
by Damian Valdés-Santiago, Angela M. León-Mecías, Marta Lourdes Baguer Díaz-Romañach, Antoni Jaume-i-Capó, Manuel González-Hidalgo and Jose Maria Buades Rubio
Appl. Sci. 2024, 14(1), 468; https://doi.org/10.3390/app14010468 - 4 Jan 2024
Viewed by 1415
Abstract
This contribution presents a wavelet-based algorithm to detect patterns in images. A two-dimensional extension of the DST-II is introduced to construct adapted wavelets using the equation of the tensor product corresponding to the diagonal coefficients in the 2D discrete wavelet transform. A 1D filter was then estimated that meets finite energy conditions, vanishing moments, orthogonality, and four new detection conditions. These conditions allow the filter, when the 2D transform is performed, to detect the pattern by selecting the diagonal coefficients whose normalized similarity measure, as defined by Guido, is greater than 0.7, with α=0.1. The positions of these coefficients are used to estimate the position of the pattern in the original image. This strategy has been used successfully to detect artificial patterns and localize mass-like abnormalities in digital mammography images. In the case of the latter, high sensitivity and positive predictive value in detection were achieved but not high specificity or negative predictive value, contrary to what occurred in the 1D strategy. This means that the proposed detection algorithm presents a high number of false negatives, which can be explained by the complexity of detection in these types of images. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health and Well-Being)

13 pages, 4345 KiB  
Article
Design Optimization of Underground Mining Vehicles Based on Regenerative Braking Energy Recovery
by Pengcheng Liu, Jian Hao, Hui Hu, Xuekun Luan and Bingqian Meng
Appl. Sci. 2024, 14(1), 467; https://doi.org/10.3390/app14010467 - 4 Jan 2024
Viewed by 1893
Abstract
This article addresses the issue of energy waste resulting from the frequent braking of underground mine cars and proposes an optimization design to mitigate it. The proposed solution involves the installation of a regenerative braking device within the mine cars to capture and reuse the energy wasted during braking. This implementation improves the endurance capabilities of the underground mine cars. The article begins by analyzing the working characteristics of underground mine cars and proposing a design optimization method based on regenerative braking energy. Subsequently, a regenerative braking device specifically designed for underground mine cars is introduced. Finally, through physical modeling, a comparison is made between the energy consumption of the underground mine cars before and after the installation of the energy recovery system, allowing for an estimation of the actual benefits of energy recovery. The results demonstrate that the regenerative braking system successfully recovers approximately 60% of the braking energy during operation, resulting in an improvement of around 20% in the endurance capabilities of the underground mine cars. This significant enhancement contributes to the improved energy utilization efficiency of coal mine electric cars, reducing system energy consumption and lowering CO₂ emissions. Full article

14 pages, 21379 KiB  
Article
A 3D-0D Computational Model of the Left Ventricle for Investigating Blood Flow Patterns for Cases of Systolic Anterior Motion and after Anterior Mitral Leaflet Splitting
by Yousef Alharbi
Appl. Sci. 2024, 14(1), 466; https://doi.org/10.3390/app14010466 - 4 Jan 2024
Cited by 1 | Viewed by 1544
Abstract
Valvular heart conditions significantly contribute to the occurrence of cardiovascular disease, affecting around 2–3 million people in the United States. The anatomical characteristics of cardiac muscles and valves can significantly influence blood flow patterns inside the ventricles. Understanding the interaction between the mitral valve and left ventricle structures enables using fluid–structure interaction simulations as a precise and user-friendly approach to investigating outcomes that cannot be captured using experimental approaches. This study aims to develop a 3D-0D computational model to simulate the consequences of extending the anterior mitral leaflet towards the left ventricle in the presence of the thickness of the left ventricular septum and the mitral valve device. The simulations presented in this paper successfully showcased the ability of the model to replicate occlusion occurring at the left ventricular outflow tract and illustrated the impact of this blockage on the flow pattern and pressure gradient. Furthermore, these simulations conducted following anterior mitral leaflet splitting can emphasize the significance of this technique in reducing the obstruction at the left ventricle outflow tract. The computational model presented in this study, combining 3D and 0D elements, provides significant insights into the flow patterns occurring in the left ventricle before and after anterior leaflet splitting. Thus, expanding this model can help explore other cardiac phenomena and investigate potential post-procedural complications. Full article
(This article belongs to the Section Biomedical Engineering)

17 pages, 4569 KiB  
Article
Research on Retinal Vessel Segmentation Algorithm Based on a Modified U-Shaped Network
by Xialan He, Ting Wang and Wankou Yang
Appl. Sci. 2024, 14(1), 465; https://doi.org/10.3390/app14010465 - 4 Jan 2024
Cited by 2 | Viewed by 2052
Abstract
Due to the limitations of traditional retinal blood vessel segmentation algorithms in feature extraction, vessel breakage often occurs at the end. To address this issue, a retinal vessel segmentation algorithm based on a modified U-shaped network is proposed in this paper. This algorithm can extract multi-scale vascular features and perform segmentation in an end-to-end manner. First, in order to improve the low contrast of the original image, pre-processing methods are employed. Second, a multi-scale residual convolution module is employed to extract image features of different granularities, while residual learning improves feature utilization efficiency and reduces information loss. In addition, a selective kernel unit is incorporated into the skip connections to obtain multi-scale features with varying receptive field sizes achieved through soft attention. Subsequently, to further extract vascular features and improve processing speed, a residual attention module is constructed at the decoder stage. Finally, a weighted joint loss function is implemented to address the imbalance between positive and negative samples. The experimental results on the DRIVE, STARE, and CHASE_DB1 datasets demonstrate that MU-Net exhibits better sensitivity and a higher Matthews correlation coefficient (DRIVE: 0.8197, 0.8051; STARE: 0.8264, 0.7987; CHASE_DB1: 0.8313, 0.7960) compared to several state-of-the-art methods. Full article
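For a concrete picture of the loss mentioned in the last step, the sketch below shows one common way to build a weighted joint loss for foreground/background imbalance in vessel segmentation, combining a class-weighted binary cross-entropy with a soft Dice term. The exact formulation used in the paper is not reproduced here; the weights and the Dice component are illustrative assumptions.

```python
# Hedged sketch of a weighted joint loss for sparse vessel (positive) pixels.
import torch
import torch.nn.functional as F

def weighted_joint_loss(logits, target, pos_weight=10.0, dice_weight=0.5, eps=1e-6):
    """logits, target: tensors of shape (N, 1, H, W); target in {0, 1}."""
    # Weighted BCE: up-weights the rare vessel pixels against the abundant background.
    bce = F.binary_cross_entropy_with_logits(
        logits, target, pos_weight=torch.tensor(pos_weight, device=logits.device))
    # Soft Dice term: directly penalizes poor overlap with the vessel mask.
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
    return bce + dice_weight * dice

# Example: random batch standing in for network output and ground-truth masks.
logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.9).float()
print(weighted_joint_loss(logits, target).item())
```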

19 pages, 872 KiB  
Article
Quantum Key Distribution with Post-Processing Driven by Physical Unclonable Functions
by Georgios M. Nikolopoulos and Marc Fischlin
Appl. Sci. 2024, 14(1), 464; https://doi.org/10.3390/app14010464 - 4 Jan 2024
Viewed by 1696
Abstract
Quantum key distribution protocols allow two honest distant parties to establish a common truly random secret key in the presence of powerful adversaries, provided that the two users share a short secret key beforehand. This pre-shared secret key is used mainly for authentication purposes in the post-processing of classical data that have been obtained during the quantum communication stage, and it prevents a man-in-the-middle attack. The necessity of a pre-shared key is usually considered to be the main drawback of quantum key distribution protocols, and it becomes even stronger for large networks involving more than two users. Here, we discuss the conditions under which physical unclonable functions can be integrated in currently available quantum key distribution systems in order to facilitate the generation and the distribution of the necessary pre-shared key with the smallest possible cost in the security of the systems. Moreover, the integration of physical unclonable functions in quantum key distribution networks allows for real-time authentication of the devices that are connected to the network. Full article
(This article belongs to the Special Issue Advances in Quantum-Enabled Cybersecurity)

25 pages, 3835 KiB  
Article
Shared eHMI: Bridging Human–Machine Understanding in Autonomous Wheelchair Navigation
by Xiaochen Zhang, Ziyang Song, Qianbo Huang, Ziyi Pan, Wujing Li, Ruining Gong and Bi Zhao
Appl. Sci. 2024, 14(1), 463; https://doi.org/10.3390/app14010463 - 4 Jan 2024
Viewed by 1973
Abstract
As automated driving system (ADS) technology is adopted in wheelchairs, clarity on the vehicle’s imminent path becomes essential for both users and pedestrians. For users, understanding the imminent path helps mitigate anxiety and facilitates real-time adjustments. For pedestrians, this insight aids in predicting their next move when near the wheelchair. This study introduces an on-ground projection-based shared eHMI approach for autonomous wheelchairs. By integrating real and virtual elements to visualize imminent motion intentions on the ground, the approach quickly clarifies wheelchair behaviors for all parties, promoting proactive measures to reduce collision risks and ensure smooth wheelchair driving. To explore the practical application of the shared eHMI, a user interface was designed and incorporated into an autonomous wheelchair simulation platform. An observation-based pilot study was conducted with both experienced wheelchair users and pedestrians using structured questionnaires to assess the usability, user experience, and social acceptance of this interaction. The results indicate that the proposed shared eHMI offers a clearer display of motion intentions and greater appeal, emphasizing its potential contribution to the field. Future work should focus on improving visibility, practicality, safety, and trust in autonomous wheelchair interactions. Full article
(This article belongs to the Special Issue New Insights into Human-Computer Interaction)

13 pages, 878 KiB  
Article
Determination of the Probabilistic Properties of the Critical Fracture Energy of Concrete Integrating Scale Effect Aspects
by Mariane Rodrigues Rita, Pierre Rossi, Eduardo de Moraes Rego Fairbairn and Fernando Luiz Bastos Ribeiro
Appl. Sci. 2024, 14(1), 462; https://doi.org/10.3390/app14010462 - 4 Jan 2024
Cited by 1 | Viewed by 1016
Abstract
This paper presents an extension of the validation domain of a previously validated three-dimensional probabilistic semi-explicit cracking numerical model, which was initially validated for a specific concrete mix design. This model is implemented in a finite element code. The primary objective of this study is to propose a function that enables the estimation of the critical fracture energy parameter utilized in the model and validate its effectiveness for various concrete mix designs. The model focuses on macrocrack propagation and introduces significant aspects such as employing volume elements for simulating macrocrack propagation and incorporating two key factors in governing its behavior. Firstly, macrocrack initiation is linked to the uniaxial tensile strength (ft). Secondly, macrocrack propagation is influenced by a post-cracking dissipation energy in tension. This energy is taken equal to the mode I critical fracture energy (GIC) based on the linear elastic fracture mechanics theory. Importantly, both ft and GIC are probabilistic properties influenced by the volume of concrete under consideration. Consequently, in the numerical model, they are dependent on the volume of the finite elements employed. To achieve this objective, numerical simulations of fracture mechanical tests are conducted on a large double cantilever beam specimen. Through these simulations, we validate the proposed function, which is a crucial step towards expanding the model’s applicability to all concrete mix designs. Full article
(This article belongs to the Special Issue Advanced Finite Element Method and Its Applications)

13 pages, 3675 KiB  
Article
A Modified Protocol for Staining of Undecalcified Bone Samples Using Toluidine Blue—A Histological Study in Rabbit Models
by Stefan Peev, Ivaylo Parushev and Ralitsa Yotsova
Appl. Sci. 2024, 14(1), 461; https://doi.org/10.3390/app14010461 - 4 Jan 2024
Viewed by 2953
Abstract
Undecalcified bone histology is a valuable diagnostic method for studying bone microarchitecture and provides information on bone formation, resorption, and turnover. It has various clinical and research applications. Toluidine blue has been widely adopted as a staining technique for hard-tissue specimens. It provides a clear identification of bone structural and cellular features and the distinctions between them. Furthermore, the method allows for an excellent definition of the cement lines that mark the fields of bone remodeling. Some of the suggested and currently used processing and staining protocols are too complex and time-consuming, which necessitates their modification and/or optimization. This research aims to develop a simplified protocol for staining plastic-embedded undecalcified bone specimens with toluidine blue. The samples were obtained from the tibial bones of rabbits, and experiments with and without pre-etching were conducted. Our results demonstrated that the optimal visualization of the bone microstructure and its cellular components was achieved in the samples without acid pre-etching and dehydration after staining. Full article

18 pages, 1153 KiB  
Article
Finsformer: A Novel Approach to Detecting Financial Attacks Using Transformer and Cluster-Attention
by Hao An, Ruotong Ma, Yuhan Yan, Tailai Chen, Yuchen Zhao, Pan Li, Jifeng Li, Xinyue Wang, Dongchen Fan and Chunli Lv
Appl. Sci. 2024, 14(1), 460; https://doi.org/10.3390/app14010460 - 4 Jan 2024
Cited by 3 | Viewed by 1884
Abstract
This paper aims to address the increasingly severe security threats in financial systems by proposing a novel financial attack detection model, Finsformer. This model integrates the advanced Transformer architecture with the innovative cluster-attention mechanism, dedicated to enhancing the accuracy of financial attack behavior detection to counter complex and varied attack strategies. A key innovation of the Finsformer model lies in its effective capture of key information and patterns within financial transaction data. Comparative experiments with traditional deep learning models such as RNN, LSTM, Transformer, and BERT have demonstrated that Finsformer excels in key metrics such as precision, recall, and accuracy, achieving scores of 0.97, 0.94, and 0.95, respectively. Moreover, ablation studies on different feature extractors further confirm the effectiveness of the Transformer feature extractor in processing complex financial data. Additionally, it was found that the model’s performance heavily depends on the quality and scale of data and may face challenges in computational resources and efficiency in practical applications. Future research will focus on optimizing the Finsformer model, including enhancing computational efficiency, expanding application scenarios, and exploring its application on larger and more diversified datasets. Full article

18 pages, 728 KiB  
Article
TodBR: Target-Oriented Dialog with Bidirectional Reasoning on Knowledge Graph
by Zongfeng Qu, Zhitong Yang, Bo Wang and Qinghua Hu
Appl. Sci. 2024, 14(1), 459; https://doi.org/10.3390/app14010459 - 4 Jan 2024
Cited by 1 | Viewed by 1201
Abstract
Target-oriented dialog explores how a dialog agent connects two topics cooperatively and coherently, which aims to generate a “bridging” utterance connecting the new topic to the previous conversation turn. The central focus of this task entails multi-hop reasoning on a knowledge graph (KG) to achieve the desired target. However, current target-oriented dialog approaches suffer from inefficiencies in reasoning and the inability to locate pertinent key information without bidirectional reasoning. To address these limitations, we present a bidirectional reasoning model for target-oriented dialog implemented on a commonsense knowledge graph. Furthermore, we introduce an automated technique for constructing dialog subgraphs, which aids in acquiring multi-hop reasoning capabilities. Our experiments demonstrate that our proposed method attains superior performance in reaching the target while providing more coherent responses. Full article

16 pages, 11369 KiB  
Article
Effect of Drying–Wetting Cycle and Vibration on Strength Properties of Granite Residual Soil
by Jiarun Tang and Dongxia Chen
Appl. Sci. 2024, 14(1), 458; https://doi.org/10.3390/app14010458 - 4 Jan 2024
Cited by 1 | Viewed by 1047
Abstract
Granite residual soil (GRS) exhibits favorable engineering properties in its natural state. However, a hot and rainy climate, combined with vibrations generated during mechanical construction, can cause a notable decrease in its strength. In this study, the evolution of the stress–strain curves, the strength parameters (cohesion c and internal friction angle φ), and the unconfined compression strength (UCS) under drying and wetting (DW) cycles and vibration was investigated by means of direct shear and UCS tests. Furthermore, modified formulas for calculating shear strength and UCS under DW cycles and vibration were proposed, and their accuracy was verified. The results are as follows: The stress–strain curve of shear strength exhibits strain-hardening characteristics, and the shear compressibility of the sample increases with the number of DW cycles and vibration time. However, the stress–strain curve of UCS shows strain-softening properties, and the peak strength shifts forward with the number of DW cycles and vibrations. With the increase in the number of DW cycles and the vibration time, c shows a non-linear degradation, with a maximum degradation of 58.6%. φ fluctuates and increases due to the densification effect of DW cycles, but the influence of vibration on φ decreases with the increase in the number of DW cycles. UCS rapidly decreases and gradually stabilizes after DW cycles and vibration, with a maximum degradation of 81.1%. This study can serve as a reference for the stability analysis of GRS pits subjected to long-term influences of hot and rainy climates and mechanical vibration, providing valuable insights for future research. Full article
(This article belongs to the Section Civil Engineering)

21 pages, 5480 KiB  
Article
Parametric Investigation of Parallel Deposition Passes on the Microstructure and Mechanical Properties of 7075 Aluminum Alloy Processed with Additive Friction Stir Deposition
by L. P. Cahalan, M. B. Williams, L. N. Brewer, M. M. McDonnell, M. R. Kelly, A. D. Lalonde, P. G. Allison and J. B. Jordon
Appl. Sci. 2024, 14(1), 457; https://doi.org/10.3390/app14010457 - 4 Jan 2024
Cited by 3 | Viewed by 1646
Abstract
Large-scale metal additive manufacturing (AM) provides a unique solution to rapidly develop prototype components with net-shape or near-net shape geometries. Specifically, additive friction stir deposition (AFSD) is a solid-state method for large-scale metal AM that produces near-net shape depositions capable of high deposition rates. As AFSD is utilized for a broader range of applications, there is a need to understand deposition strategies for larger and more complex geometries. In particular, components with larger surface areas will require overlapping deposition passes within a single layer. In this study, the AFSD process was used to create depositions utilizing multiple passes with a varying deposition path overlap width. The effects of overlapping parallel pass depositions on the mechanical and microstructural properties of aluminum alloy 7075 were examined. The grain size and microstructural features of the deposited material were analyzed to evaluate material mixing and plastic flow in the observed overlap regions. Additionally, hardness and tensile experiments were conducted to observe the relationship between the overlap width and as-deposited material behavior. In this study, an ideal overlap width was found that produced acceptable as-deposited material properties. Full article
(This article belongs to the Special Issue Alloys: Evolution of Microstructure and Texture)

15 pages, 3344 KiB  
Article
Genetic Multi-Objective Optimization of Sensor Placement for SHM of Composite Structures
by Tomasz Rogala, Mateusz Ścieszka, Andrzej Katunin and Sandris Ručevskis
Appl. Sci. 2024, 14(1), 456; https://doi.org/10.3390/app14010456 - 4 Jan 2024
Cited by 1 | Viewed by 1374
Abstract
Due to their high sensitivity, diagnostic systems are increasingly prone to a significant number of false alarms. In particular, in structural health monitoring (SHM), the problem of optimal sensor placement (OSP) arises from the need to balance the performance and cost of the diagnostic system. The applied approach of considering nondominated solutions allows for the adaptation of the system parameters to the user’s expectations, treating this optimization problem as multi-objective. For this purpose, the NSGA-II algorithm was selected for the determination of an optimal set of parameters in the OSP problem for the detection of delamination in composite structures. The objectives comprise the minimization of type-I and type-II errors and of the number of sensors to be placed. The advantage of the proposed approach is that it is based on experimental data from the healthy structure, whereas all cases with a presence of delamination were acquired from numerical experiments. This makes it possible to develop a customized SHM system for the arbitrary location of damage. Full article

18 pages, 9346 KiB  
Article
GNSS-Assisted Visual Dynamic Localization Method in Unknown Environments
by Jun Dai, Chunfeng Zhang, Songlin Liu, Xiangyang Hao, Zongbin Ren and Yunzhu Lv
Appl. Sci. 2024, 14(1), 455; https://doi.org/10.3390/app14010455 - 4 Jan 2024
Viewed by 1341
Abstract
Autonomous navigation and localization are the foundations of unmanned intelligent systems; therefore, continuous, stable, and reliable position services in unknown environments are especially important for autonomous navigation and localization. Aiming at the problems that GNSS cannot localize continuously in complex environments, due to weak signals, poor penetration ability, and susceptibility to interference, and that visual navigation and localization are only relative, this paper proposes a GNSS-aided visual dynamic localization method that can provide global localization services in unknown environments. Taking three frames of images and their corresponding GNSS coordinates as the constraint data, the transformation matrix between the GNSS coordinate system and the world coordinate system is obtained through Horn’s coordinate transformation, and the relative positions of the subsequent image sequences in the world coordinate system are obtained through epipolar geometry constraints, homography matrix transformations, and 2D–3D position and orientation solving, which ultimately yields the global position data of unmanned carriers in GNSS coordinate systems when GNSS is temporarily unavailable. Both the dataset validation and measured data validation showed that the GNSS initial-assisted positioning algorithm could be applied to situations where intermittent GNSS signals exist, and it can provide global positioning coordinates with high positioning accuracy in a short period of time; however, the algorithm would drift when used for a long period of time. We further compared the errors of the GNSS initial-assisted positioning and GNSS continuous-assisted positioning systems, and the results showed that the accuracy of the GNSS continuous-assisted positioning system was two to three times better than that of the GNSS initial-assisted positioning system, which proved that the GNSS continuous-assisted positioning algorithm could maintain positioning accuracy for a long time and it had good reliability and applicability in unknown environments. Full article
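As a hedged illustration of the first step described above, the sketch below estimates the similarity transform (scale, rotation, translation) between the visual world frame and a local GNSS frame from a few corresponding positions, in the spirit of Horn's absolute orientation problem, here solved with the SVD-based Umeyama formulation. The three sample points and the frame conventions are illustrative assumptions, not the authors' data.

```python
# Hedged sketch: similarity transform between the visual world frame and a GNSS frame.
import numpy as np

def similarity_transform(src, dst):
    """Find s, R, t such that dst ~ s * R @ src + t. src, dst: (N, 3) arrays."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against a reflection solution
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Camera positions of the first three frames (world frame) and their GNSS fixes (local ENU).
p_world = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0], [2.1, 0.4, 0.1]])
p_gnss = np.array([[10.0, 5.0, 2.0], [10.9, 5.6, 2.0], [11.7, 6.5, 2.1]])
s, R, t = similarity_transform(p_world, p_gnss)
print("scale:", round(s, 3), "\nR:\n", R, "\nt:", t)
```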

16 pages, 10304 KiB  
Article
BWLM: A Balanced Weight Learning Mechanism for Long-Tailed Image Recognition
by Baoyu Fan, Han Ma, Yue Liu and Xiaochen Yuan
Appl. Sci. 2024, 14(1), 454; https://doi.org/10.3390/app14010454 - 4 Jan 2024
Cited by 2 | Viewed by 1398
Abstract
With the growth of data in the real world, datasets often encounter the problem of long-tailed distribution of class sample sizes. In long-tailed image recognition, existing solutions usually adopt a class rebalancing strategy, such as reweighting based on the effective sample size of each class, which leans towards common classes in terms of higher accuracy. However, increasing the accuracy of rare classes while maintaining the accuracy of common classes is the key to solving the problem of long-tailed image recognition. This research explores a direction that balances the accuracy of both common and rare classes simultaneously. Firstly, a two-stage training is adopted, motivated by the use of transfer learning to balance features of common and rare classes. Secondly, a balanced weight function called Balanced Focal Softmax (BFS) loss is proposed, which combines balanced softmax loss focusing on common classes with balanced focal loss focusing on rare classes to achieve dual balance in long-tailed image recognition. Subsequently, a Balanced Weight Learning Mechanism (BWLM) is proposed to further exploit weight decay, where weight decay, as the weight-balancing technique for the BFS loss, encourages the model to learn smaller, more balanced weights by penalizing larger weights. Extensive experiments on five long-tailed image datasets show that transferring the weights from the first stage to the second stage can alleviate the bias of the naive models toward common classes. The proposed BWLM not only balances the weights of common and rare classes, but also greatly improves the accuracy of long-tailed image recognition and outperforms many state-of-the-art algorithms. Full article
(This article belongs to the Special Issue State-of-the-Art of Computer Vision and Pattern Recognition)
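As a hedged sketch of how a "balanced softmax + focal" composite such as the BFS loss might look, the snippet below shifts the logits by the log class priors (balanced softmax) and adds a focal term computed on the same adjusted logits. The combination coefficient, the focal exponent, and the toy class counts are assumptions; the authors' exact weighting is not given here.

```python
# Hedged sketch: composite of balanced softmax and focal terms for long-tailed data.
import torch
import torch.nn.functional as F

def bfs_like_loss(logits, target, class_counts, gamma=2.0, lam=0.5):
    """logits: (N, C); target: (N,); class_counts: (C,) training samples per class."""
    # Balanced softmax: shift logits by log class priors so head classes are not favored.
    log_prior = torch.log(class_counts.float() / class_counts.sum())
    balanced_ce = F.cross_entropy(logits + log_prior, target)
    # Focal term on the adjusted logits: down-weights easy examples, emphasizing the tail.
    log_p = F.log_softmax(logits + log_prior, dim=1)
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)
    focal = ((1.0 - log_pt.exp()) ** gamma * (-log_pt)).mean()
    return lam * balanced_ce + (1.0 - lam) * focal

# Toy long-tailed setup: 5 classes with very different sample counts.
counts = torch.tensor([5000, 1000, 200, 50, 10])
logits = torch.randn(8, 5)
target = torch.randint(0, 5, (8,))
print(bfs_like_loss(logits, target, counts).item())
```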

21 pages, 8538 KiB  
Article
Experimental Study on the Mechanical Properties of Hybrid Basalt-Polypropylene Fibre-Reinforced Gangue Concrete
by Yu Yang, Changhao Xin, Yidan Sun, Junzhen Di and Pengfei Liang
Appl. Sci. 2024, 14(1), 453; https://doi.org/10.3390/app14010453 - 4 Jan 2024
Cited by 2 | Viewed by 1202
Abstract
According to incomplete statistics, coal gangue has accumulated in China in over 2000 gangue hills covering an area exceeding 200,000 mu, with an annual growth rate surpassing 800 million tons. This accumulation not only signifies a substantial waste of resources but also poses a significant danger to the environment. Utilizing coal gangue as an aggregate in the production of coal-gangue concrete offers an effective avenue for coal-gangue recycling. However, compared with ordinary concrete, the strength and ductility of coal-gangue concrete require enhancement. Because coal-gangue concrete has higher brittleness and lower deformation resistance than ordinary concrete, and because basalt fibre (BF) is a green, high-performance fibre that exhibits excellent bonding properties with cement-based materials while polypropylene fibre (PF) is a flexible fibre with high deformability, we investigate whether adding BF and PF to coal-gangue concrete can enhance its ductility and strength. In this paper, the stress–strain curve trends of different hybrid basalt–polypropylene fibre-reinforced coal-gangue concrete (HBPRGC) specimens under uniaxial compression are studied when the matrix strengths are C20 and C30. The effects of BF and PF on the mechanical and energy conversion behaviours of coal-gangue concrete are analysed. The results show that the ductile deformation of coal-gangue concrete can be markedly enhanced at a 0.1% hybrid-fibre volume content; HBPRGC-20-0.1 and HBPRGC-30-0.1 show increases of 53.66% and 51.45% in total strain energy and 54.11% and 50% in dissipative energy, respectively, while HBPRGC-20-0.2 and HBPRGC-30-0.2 show increases of 31.95% and 30.32% in total strain energy and −3.46% and 28.71% in dissipative energy, respectively. As the hybrid-fibre volume content increases further, the elastic modulus, the total strain energy, and the dissipative energy all show a downward trend. Therefore, 0.1% appears to be the optimum hybrid-fibre volume content for enhancing the ductility and strength of coal-gangue concrete. Finally, the damage evolution and deformation trends of fibre-doped coal-gangue concrete under uniaxial loading are studied theoretically, and the constitutive model and damage evolution equation of HBPRGC are established based on Weibull theory. The model and the equation are in good agreement with the experimental results. Full article

23 pages, 1344 KiB  
Article
Optimizing Data Processing: A Comparative Study of Big Data Platforms in Edge, Fog, and Cloud Layers
by Thanda Shwe and Masayoshi Aritsugi
Appl. Sci. 2024, 14(1), 452; https://doi.org/10.3390/app14010452 - 4 Jan 2024
Cited by 3 | Viewed by 2693
Abstract
Intelligent applications in several areas increasingly rely on big data solutions to improve their efficiency, but the processing and management of big data incur high costs. Although cloud-computing-based big data management and processing offer a promising solution to provide scalable and abundant resources, the current cloud-based big data management platforms do not properly address the high latency, privacy, and bandwidth consumption challenges that arise when sending large volumes of user data to the cloud. Computing in the edge and fog layers is quickly emerging as an extension of cloud computing used to reduce latency and bandwidth consumption, resulting in some of the processing tasks being performed in edge/fog-layer devices. Although these devices are resource-constrained, recent increases in resource capacity provide the potential for collaborative big data processing. We investigated the deployment of data processing platforms based on three different computing paradigms, namely batch processing, stream processing, and function processing, by aggregating the processing power from a diverse set of nodes in the local area. Herein, we demonstrate the efficacy and viability of edge-/fog-layer big data processing across a variety of real-world applications and in comparison to the cloud-native approach in terms of performance. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Systems: New Trends and Applications)

12 pages, 3736 KiB  
Article
A Low-Cost Printed Log-Periodic Dipole Array for DVB-T2 Digital TV Applications
by Giovanni Andrea Casula, Giacomo Muntoni, Paolo Maxia and Giorgio Montisci
Appl. Sci. 2024, 14(1), 451; https://doi.org/10.3390/app14010451 - 4 Jan 2024
Viewed by 1231
Abstract
A printed log-periodic dipole array (LPDA) for DVB-T2 Digital TV applications, covering the whole DVB-T2 UHF band from Channel 21 to Channel 69 (470 MHz–860 MHz), is presented. The presented antenna offers a compact size and a lower cost compared to both wire and similar printed LPDAs, with a normalized area of only 0.26 λ² (where λ is the free-space wavelength at the central frequency) and a similar (or higher) average gain. It is composed of meandered radiating dipoles, and it is implemented on FR4, the cheapest dielectric substrate available on the market. Moreover, the antenna size has been reduced to an A4 sheet dimension (210 mm × 297 mm) to cut down the production cost. The antenna has been designed starting from Carrel’s theory and using a general-purpose 3D CAD, CST Studio Suite. The results show that the proposed antenna can be used for broadband applications (≈74% bandwidth) in the whole operating frequency band of Digital TV, with a satisfactory end-fire radiation pattern and a stable gain and radiation efficiency over the required frequency range (average values of 6.56 dB and 97%, respectively). Full article
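For orientation, the snippet below reproduces the classical Carrel-style starting geometry for an LPDA: successive dipole lengths and spacings follow a geometric progression set by a scale factor τ and a spacing factor σ. The τ/σ values and band edges are illustrative assumptions and do not correspond to the meandered, miniaturized dimensions of the antenna in the paper.

```python
# Hedged sketch of a Carrel-style LPDA starting geometry for the DVB-T2 UHF band.
c = 299792458.0
f_low, f_high = 470e6, 860e6      # band edges [Hz]
tau, sigma = 0.88, 0.16           # assumed design constants

lengths = [c / f_low / 2.0]       # longest dipole ~ half-wavelength at the lowest frequency
spacings = []
while lengths[-1] > c / f_high / 2.0:
    spacings.append(2.0 * sigma * lengths[-1])   # d_n = 2 * sigma * L_n
    lengths.append(tau * lengths[-1])            # L_{n+1} = tau * L_n

for i, length in enumerate(lengths, start=1):
    print(f"dipole {i}: length {length * 100:.1f} cm")
print(f"boom length ~ {sum(spacings) * 100:.1f} cm before meandering/miniaturization")
```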

19 pages, 2850 KiB  
Article
Hot Strip Mill Gearbox Monitoring and Diagnosis Based on Convolutional Neural Networks Using the Pseudo-Labeling Method
by Myung-Kyo Seo and Won-Young Yun
Appl. Sci. 2024, 14(1), 450; https://doi.org/10.3390/app14010450 - 4 Jan 2024
Cited by 2 | Viewed by 1258
Abstract
The steel industry is a typical process-manufacturing industry, and the quality and cost of its products can be improved by the efficient operation of equipment. This paper proposes an efficient diagnosis and monitoring method for the gearbox, which is a key piece of mechanical equipment in steel manufacturing. In particular, an equipment maintenance plan for stable operation is essential. Therefore, equipment monitoring and diagnosis to prevent unplanned plant shutdowns are important to operate the equipment efficiently and economically. Most plant data collected on-site have no precise information about equipment malfunctions. Therefore, it is difficult to directly apply supervised learning algorithms to diagnose and monitor the equipment with the operational data collected. The purpose of this paper is to propose a pseudo-label method to enable supervised learning for equipment data without labels. Pseudo-normal (PN) and pseudo-abnormal (PA) vibration datasets are defined and labeled to apply classification analysis algorithms to unlabeled equipment data. To find an anomalous state in the equipment based on vibration data, the initial PN vibration dataset is compared with a PA vibration dataset collected over time, and the equipment is monitored for potential failure. Continuous wavelet transform (CWT) is applied to the vibration signals collected to obtain an image dataset, which is then entered into a convolutional neural network (an image classifier) to determine classification accuracy and detect equipment abnormalities. As a result of Steps 1 to 4, abnormal signals have already been detected in the dataset, and alarms and warnings have already been generated. The classification accuracy was over 0.95 at d=4, confirming quantitatively that the status of the equipment had changed significantly. In this way, a catastrophic failure can be avoided by performing a detailed equipment inspection in advance. Lastly, a catastrophic failure occurred in Step 9, and the classification accuracy ranged from 0.95 to 1.0. It was possible to prevent secondary equipment damage, such as motors connected to gearboxes, by identifying catastrophic failures promptly. This case study shows that the proposed procedure gives good results in detecting operation abnormalities of key unit equipment. In the conclusion, further promising topics are discussed. Full article
(This article belongs to the Special Issue Machine Diagnostics and Vibration Analysis)
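As a hedged illustration of the signal-to-image step described in the abstract, the sketch below turns a vibration snippet into a continuous-wavelet-transform scalogram that a CNN image classifier can consume. The sampling rate, the scales, the Morlet wavelet, and the synthetic signal are assumptions for illustration only.

```python
# Hedged sketch: CWT scalogram from a (synthetic) vibration signal.
import numpy as np
import pywt

fs = 5000                                   # sampling frequency [Hz], assumed
t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic stand-in for a gearbox vibration signal: a mesh tone plus noise.
signal = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(t.size)

scales = np.arange(1, 129)                  # 128 scales -> 128-row scalogram
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)

# |coeffs| is a (128, len(signal)) array; normalize it to [0, 1] so it can be saved
# (or resized) as an image before being fed to the CNN classifier.
scalogram = np.abs(coeffs)
scalogram = (scalogram - scalogram.min()) / (scalogram.max() - scalogram.min())
print(scalogram.shape)
```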

11 pages, 3037 KiB  
Article
Pantograph–Catenary Interaction Prediction Model Based on SCSA-RBF Network
by Mengzhen Wu, Xianghong Xu, Haochen Zhang, Rui Zhou and Jianshan Wang
Appl. Sci. 2024, 14(1), 449; https://doi.org/10.3390/app14010449 - 4 Jan 2024
Viewed by 1094
Abstract
As a traditional numerical simulation method for pantograph–catenary interaction research, the pantograph–catenary finite element model cannot be applied to the real-time monitoring of pantograph–catenary contact force, and the computational cost required for the multi-parameter joint optimization of the pantograph–catenary system with the finite element model is very high. In this paper, based on the selective crow search algorithm–radial basis function (SCSA-RBF) network, the time-domain signal of the panhead acceleration, which can be obtained in real time through non-contact test technology, is taken as the boundary condition to directly solve the pantograph dynamic equation, and a data-physics coupling model that can quickly predict the pantograph–catenary interaction is proposed. The prediction model is trained and verified using the dataset generated through the finite element model. Furthermore, the prediction model is applied to the multi-parameter joint optimization of six pantograph dynamic parameters and of nine pantograph dynamic parameters considering nonlinear panhead stiffness, and optimization suggestions under various speeds and filtering frequencies are given. Full article

13 pages, 2160 KiB  
Article
Contact Force Surrogate Model and Its Application in Pantograph–Catenary Parameter Optimization
by Rui Zhou and Xianghong Xu
Appl. Sci. 2024, 14(1), 448; https://doi.org/10.3390/app14010448 - 4 Jan 2024
Viewed by 1181
Abstract
The significant increase in the speed of high-speed trains has made the optimization of pantograph–catenary parameters aimed at improving current collection quality become one of the key issues that urgently need to be addressed. In this paper, a method and solutions are proposed for optimizing multiple pantograph–catenary parameters, taking into account the speed levels and engineering feasibility, for pantograph–catenary systems that contain dozens of parameters and exhibit strong nonlinear coupling characteristics. Firstly, a surrogate model capable of accurately predicting the standard deviation of contact force based on speed and 14 pantograph–catenary parameters was constructed by using the pantograph–catenary finite element model and feedforward neural network. Secondly, sensitivity analysis and rating of the pantograph–catenary parameters under different speeds were conducted using the variance-based method and the surrogate model. Finally, by combining the sensitivity analysis results and the Selective Crow Search Algorithm, joint optimization of 10 combinations of the pantograph–catenary parameters across the entire speed range was performed, providing efficient pantograph–catenary parameter optimization solutions for various engineering conditions. Full article
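As a hedged illustration of the variance-based sensitivity step, the sketch below computes first-order and total Sobol indices with the SALib package over a toy stand-in for the trained surrogate. The parameter names, bounds, and the stand-in function are assumptions, not the paper's fourteen pantograph–catenary parameters or its neural-network surrogate.

```python
# Hedged sketch: Sobol (variance-based) sensitivity indices over a toy surrogate.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["panhead_mass", "panhead_stiffness", "frame_damping"],  # illustrative
    "bounds": [[5.0, 10.0], [5000.0, 12000.0], [50.0, 200.0]],
}

# Quasi-random Saltelli sample of the parameter space.
X = saltelli.sample(problem, 256)

# Stand-in for the surrogate: predicted standard deviation of contact force.
def surrogate(x):
    return 0.002 * x[:, 1] - 0.5 * x[:, 0] + 0.01 * x[:, 2] + 0.0001 * x[:, 0] * x[:, 1]

Y = surrogate(X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order {s1:.3f}, total {st:.3f}")
```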

15 pages, 4365 KiB  
Article
Method of Improving the Management of Cancer Risk Groups by Coupling a Features-Attention Mechanism to a Deep Neural Network
by Darian M. Onchis, Flavia Costi, Codruta Istin, Ciprian Cosmin Secasan and Gabriel V. Cozma
Appl. Sci. 2024, 14(1), 447; https://doi.org/10.3390/app14010447 - 4 Jan 2024
Cited by 1 | Viewed by 1320
Abstract
(1) Background: Lung cancers are the most common cancers worldwide, and prostate cancers are among the second in terms of the frequency of cancers diagnosed in men. Automatic ranking of the risk groups of such diseases is highly in demand, but the clinical practice has shown us that, for a sensitive screening of the clinical parameters using an artificial intelligence system, a customarily defined deep neural network classifier is not sufficient given the usually small size of medical datasets. (2) Methods: In this paper, we propose a new management method of cancer risk groups based on a supervised neural network model that is further enhanced by using a features attention mechanism in order to boost its level of accuracy. For the analysis of each clinical parameter, we used local interpretable model-agnostic explanations, which is a post hoc model-agnostic technique that outlines feature importance. After that, we applied the feature-attention mechanism in order to obtain a higher weight after training. We tested the method on two datasets, one for binary-class in cases of thoracic cancer and one for multi-class classification in cases of urological cancer, to demonstrate the wide availability and versatility of the method. (3) Results: The accuracy levels of the models trained in this way reached values of more than 80% for both clinical tasks. (4) Conclusions: Our experiments demonstrate that, by using explainability results as feedback signals in conjunction with the attention mechanism, we were able to increase the accuracy of the base model by more than 20% on small medical datasets, reaching a critical threshold for providing recommendations based on the collected clinical parameters. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health and Well-Being)

28 pages, 14624 KiB  
Article
Modeling the Spatial Distribution of Population Based on Random Forest and Parameter Optimization Methods: A Case Study of Sichuan, China
by Yunzhou Chen, Shumin Wang, Ziying Gu and Fan Yang
Appl. Sci. 2024, 14(1), 446; https://doi.org/10.3390/app14010446 - 3 Jan 2024
Cited by 2 | Viewed by 2042
Abstract
Spatial population distribution data is the discretization of demographic data into spatial grids, which has vital reference significance for disaster emergency response, disaster assessment, emergency rescue resource allocation, and post-disaster reconstruction. The random forest (RF) model, as a prominent method for modeling the spatial distribution of population, has been studied by many scholars, both domestically and abroad. Specifically, research has focused on aspects such as multi-source data fusion, feature selection, and data accuracy evaluation within the modeling process. However, discussions about parameter optimization methods during the modeling process and the impact of different optimization methods on modeling accuracy are relatively limited. In light of the above circumstances, this paper employs the RF model to conduct research on population spatialization with multi-source spatial information data. The study primarily explores the differences in model parameter optimization achieved through random search algorithms, grid search algorithms, genetic algorithms, simulated annealing algorithms, Bayesian optimization based on Gaussian process algorithms, and Bayesian optimization based on gradient boosting regression tree algorithms. Additionally, the study investigates the influence of different optimization algorithms on the accuracy of population spatialization modeling. Subsequently, the model with the highest accuracy is selected as the prediction model for population spatialization. Based on this model, a spatial population distribution dataset of Sichuan Province at a 1 km resolution is generated. Finally, the population dataset created in this paper is compared and validated with open datasets such as GPW, LandScan, and WorldPop. Experimental results indicate that the spatial population distribution dataset produced by the Bayesian optimization-based random forest model proposed in this paper exhibits a higher fitting accuracy with real data. The Coefficient of Determination (R²) is 0.6628, the Mean Absolute Error (MAE) is 12,459, and the Root Mean Squared Error (RMSE) is 25,037. Compared to publicly available international datasets, the dataset generated in this paper more accurately represents the spatial distribution of the population. Full article
(This article belongs to the Section Earth Sciences)
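As a concrete illustration of the kind of hyper-parameter search the paper compares, the sketch below tunes a RandomForestRegressor on synthetic data with scikit-learn's grid and random search. The feature count, search space, and scoring choice are assumptions, and the Bayesian variants the paper favors would swap in, e.g., skopt.BayesSearchCV over the same space.

```python
# Sketch only: comparing two search strategies for a random-forest regressor.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Stand-in for gridded covariates (night lights, land use, ...) and census counts.
X, y = make_regression(n_samples=2000, n_features=12, noise=10.0, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 2, 4],
}

searches = {
    "grid search": GridSearchCV(RandomForestRegressor(random_state=0), param_grid,
                                cv=3, scoring="neg_mean_absolute_error", n_jobs=-1),
    "random search": RandomizedSearchCV(RandomForestRegressor(random_state=0), param_grid,
                                        n_iter=10, cv=3, scoring="neg_mean_absolute_error",
                                        random_state=0, n_jobs=-1),
}

for name, search in searches.items():
    search.fit(X_tr, y_tr)
    pred = search.best_estimator_.predict(X_te)
    print(f"{name}: best params = {search.best_params_}")
    print(f"  R2 = {r2_score(y_te, pred):.4f}, "
          f"MAE = {mean_absolute_error(y_te, pred):.1f}, "
          f"RMSE = {np.sqrt(mean_squared_error(y_te, pred)):.1f}")
```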
13 pages, 2975 KiB  
Article
Thermoforming Simulation of Woven Carbon Fiber Fabric/Polyurethane Composite Materials
by Shun-Fa Hwang, Yi-Chen Tsai, Cho-Liang Tsai, Chih-Hsian Wang and Hsien-Kuang Liu
Appl. Sci. 2024, 14(1), 445; https://doi.org/10.3390/app14010445 - 3 Jan 2024
Cited by 2 | Viewed by 1471
Abstract
A finite element simulation was used in this work to analyze the thermoforming of woven carbon fiber fabric/polyurethane thermoplastic composite sheets. In the simulation, which may be classified as a discrete method, the woven carbon fiber fabric was treated as undulated fill yarns crossing over undulated warp yarns, the resin was modeled separately, and the two were then combined to represent the composite sheet. To verify the simulation, bias extension tests were carried out at three constant temperatures, after which the composite was thermoformed into a U-shaped structure and a small piece of luggage. The bias extension tests confirmed the finite element simulation and the material properties of the fiber and resin. Comparison with the thermoformed products shows that the simulation can predict the deformed profile and fiber included angles in good agreement with the experiments. The results also indicate that the stacking sequences [(0°/90°)]4 and [(+45°/−45°)]4 give quite different product profiles and fiber included angles. Full article
(This article belongs to the Special Issue Advanced Finite Element Method and Its Applications)
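For readers unfamiliar with how bias extension tests are post-processed, the sketch below evaluates the commonly used pin-jointed-net (pure trellis shear, inextensible yarns) estimate of the shear angle in the specimen's central zone. The specimen dimensions are assumed for illustration, and the paper's own data reduction may differ.

```python
# Sketch only: kinematic shear-angle estimate for a bias-extension specimen.
import numpy as np

def bias_extension_shear_angle(L, W, d):
    """Shear angle (rad) in the pure-shear zone of a bias-extension specimen of
    length L and width W under crosshead displacement d, assuming an ideal
    pin-jointed net; valid while the arccos argument stays <= 1."""
    D = L - W
    return np.pi / 2 - 2 * np.arccos((D + d) / (np.sqrt(2) * D))

L, W = 0.20, 0.10                 # 200 mm x 100 mm specimen (assumed)
for d in (0.005, 0.010, 0.020):   # crosshead displacement in metres
    gamma = bias_extension_shear_angle(L, W, d)
    print(f"d = {d * 1000:.0f} mm -> shear angle ≈ {np.degrees(gamma):.1f}°")
```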
23 pages, 43088 KiB  
Article
Physical and Mechanical Properties and Damage Mechanism of Sandstone at High Temperatures
by Yadong Zheng, Lianying Zhang, Peng Wu, Xiaoqian Guo, Ming Li and Fuqiang Zhu
Appl. Sci. 2024, 14(1), 444; https://doi.org/10.3390/app14010444 - 3 Jan 2024
Cited by 2 | Viewed by 1701
Abstract
The physical and mechanical properties of rocks change significantly after exposure to high temperatures, which poses safety hazards for underground projects such as underground coal gasification. To investigate the effect of temperature on the macroscopic and microscopic properties of rocks, this paper takes sandstone as the research object and reports uniaxial compression tests on sandstone specimens at different temperatures (20–1000 °C) and different heating rates (5–30 °C/min). An acoustic emission (AE) test system was used to monitor the acoustic emission characteristics of the damage process, and the microstructural changes after high-temperature exposure were analyzed with a scanning electron microscope (SEM). The results show that the effect of temperature on sandstone falls into three stages. Stage I (20–500 °C) is the strengthening stage: water evaporates, primary fissures close, and the sandstone becomes denser; the compressive strength and elastic modulus increase, the macroscopic damage mode is dominated by shear failure, and the fracture micromorphology is mainly brittle. Stage II (500–600 °C) is the transition stage: 500 °C is the threshold temperature for the compressive strength and elastic modulus, the damage mode changes from shear to cleavage, and the sandstone undergoes a brittle–ductile transition in this temperature interval. Stage III (600–1000 °C) is the physicochemical deterioration stage: changes in the physical and chemical properties cause the compressive strength and elastic modulus to keep declining, the macroscopic damage mode is dominated by cleavage, and the fracture micromorphology becomes more ductile. The effect of different heating rates on the mechanical properties of sandstone was also studied, and it was found that the mechanical properties deteriorate further at higher heating rates. Full article
22 pages, 8403 KiB  
Article
Seismic Upgrade of an Existing Reinforced Concrete Building Using Steel Plate Shear Walls (SPSW)
by Niki Balkamou and George Papagiannopoulos
Appl. Sci. 2024, 14(1), 443; https://doi.org/10.3390/app14010443 - 3 Jan 2024
Cited by 1 | Viewed by 1866
Abstract
Steel Plate Shear Walls (SPSW) provide significant lateral load capacity and can be utilized in the seismic retrofit and upgrade of existing reinforced concrete (r/c) buildings. In this study, the application of SPSW to retrofit an r/c building designed according to older seismic provisions is presented. Three different options for modeling the SPSW are utilized, i.e., equivalent braces, finite elements, and membrane elements, with the aim not only of appropriately simulating the actual behavior of the SPSW but also of achieving the desired seismic behavior of the retrofitted building. Specific seismic response indices, including plastic hinge formations, are derived by non-linear time-history analyses in order to assess the seismic behavior of the retrofitted r/c building. Inspecting the results of the non-linear analyses for the different SPSW modeling options leads to the conclusion that the model with membrane elements performs best, implying that membrane elements are the recommended way to model SPSW for the seismic retrofit and upgrade of existing r/c buildings. Full article
(This article belongs to the Special Issue Seismic Assessment and Design of Structures: Volume 2)
20 pages, 3764 KiB  
Article
Potential of Radioactive Isotopes Production in DEMO for Commercial Use
by Pavel Pereslavtsev, Christian Bachmann, Joelle Elbez-Uzan and Jin Hun Park
Appl. Sci. 2024, 14(1), 442; https://doi.org/10.3390/app14010442 - 3 Jan 2024
Viewed by 2109
Abstract
Nuclear radiation is widely used for medical imaging and treatment; worldwide, almost 40 million treatments are performed per year. Radiation sources are also applied in other commercial fields, e.g., weld inspection and steelmaking processes, consumer products, the food industry, and agriculture. The large number of neutrons generated in a fusion reactor such as DEMO could potentially contribute to the production of the required radioactive isotopes, and the associated commercial value of these isotopes could help offset the capital investment and operating costs of a large fusion plant. The potential for producing various radioactive isotopes from material samples arranged inside a DEMO equatorial port plug was studied; in this location, the samples are exposed to an intense neutron flux suitable for a high isotope production rate. For this purpose, the full 3D geometry of one DEMO toroidal sector with an irradiation chamber in the equatorial port plug was modeled with the MCNP code to perform neutron transport simulations, and subsequent activation calculations provide detailed information on the quality and composition of the produced radioactive isotopes. The technical feasibility and the commercial potential of producing various isotopes in the DEMO port are reported. Full article
(This article belongs to the Special Issue Advances in Fusion Engineering and Design Volume II)
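The build-up of a single product isotope under irradiation follows the textbook activation balance dN/dt = R − λN, so its activity approaches saturation as A(t) = R(1 − e^(−λt)). The short sketch below evaluates this relation for an assumed constant production rate and a Mo-99-like half-life, purely as an order-of-magnitude illustration and not as a result of the paper's MCNP or activation calculations.

```python
# Sketch only: activity build-up of one isotope under a constant production rate.
import numpy as np

def activity(R, half_life_s, t_s):
    """A(t) = R * (1 - exp(-lambda * t)) for constant production rate R (atoms/s)."""
    lam = np.log(2) / half_life_s
    return R * (1.0 - np.exp(-lam * t_s))

R = 1.0e12                    # assumed production rate, atoms/s
half_life = 66.0 * 3600.0     # ~66 h half-life (Mo-99-like), assumed
for days in (1, 7, 30):
    t = days * 86400.0
    A = activity(R, half_life, t)
    print(f"after {days:2d} d: activity ≈ {A:.2e} Bq ({A / R:.0%} of saturation)")
```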