Technologies, Volume 12, Issue 2 (February 2024) – 15 articles

Cover Story: Atrial fibrillation (AF) has an increasing prevalence and a strong association with major adverse cardiovascular events (MACE). In recent years, there has been growing interest in identifying new predictors of MACE in AF patients, an approach associated with a reduction in MACE risk. Artificial intelligence and machine learning techniques offer a promising avenue for more effective prediction of AF progression. Incorporating machine learning algorithms into the clinical management of people at high risk of AF and those with AF offers potential benefits, such as personalised risk assessment, data-driven decision support, and improved patient care. This study shows that the application of machine learning is highly effective in predicting MACE in patients with newly diagnosed AF.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
22 pages, 21332 KiB  
Communication
ARSIP: Automated Robotic System for Industrial Painting
by Hossam A. Gabbar and Muhammad Idrees
Technologies 2024, 12(2), 27; https://doi.org/10.3390/technologies12020027 - 19 Feb 2024
Cited by 2 | Viewed by 4799
Abstract
This manuscript addresses the critical need for precise paint application to ensure product durability and aesthetics. While manual work carries risks, robotic systems promise accuracy, yet programming diverse product trajectories remains a challenge. This study aims to develop an autonomous system capable of generating paint trajectories based on object geometries for user-defined spraying processes. By emphasizing energy efficiency, process time, and coating thickness on complex surfaces, a hybrid optimization technique enhances overall efficiency. Extensive hardware and software development results in a robust robotic system leveraging the Robot Operating System (ROS). Integrating a low-cost 3D scanner, calibrator, and trajectory optimizer creates an autonomous painting system. Hardware components, including sensors, motors, and actuators, are seamlessly integrated with a Python and ROS-based software framework, enabling the desired automation. A web-based GUI, powered by JavaScript, allows user control over two robots, facilitating trajectory dispatch, 3D scanning, and optimization. Specific nodes manage calibration, validation, process settings, and real-time video feeds. The use of open-source software and an ROS ecosystem makes it a good choice for industrial-scale implementation. The results indicate that the proposed system can achieve the desired automation, contingent upon surface geometries, spraying processes, and robot dynamics. Full article
(This article belongs to the Section Assistive Technologies)
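As a concrete illustration of the trajectory-generation step described in the abstract, the following Python sketch builds a simple raster (zig-zag) spray path over a flat rectangular panel. The panel size, spray pitch, and standoff distance are illustrative assumptions, not values from the paper, and the paper's geometry-aware optimizer and ROS integration are not reproduced.

```python
# Minimal sketch of raster spray-path generation over a flat rectangular panel.
# Surface size, spray pitch, and standoff distance are illustrative values,
# not parameters from the ARSIP paper.

def raster_spray_path(width, height, pitch, standoff):
    """Return a zig-zag list of (x, y, z) waypoints covering a width x height panel."""
    waypoints = []
    y = 0.0
    direction = 1  # +1 sweeps left->right, -1 sweeps right->left
    while y <= height:
        xs = (0.0, width) if direction == 1 else (width, 0.0)
        for x in xs:
            waypoints.append((x, y, standoff))
        y += pitch
        direction *= -1
    return waypoints

if __name__ == "__main__":
    path = raster_spray_path(width=1.2, height=0.8, pitch=0.1, standoff=0.25)
    print(f"{len(path)} waypoints, first 3: {path[:3]}")
```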

22 pages, 4811 KiB  
Article
A National Innovation System Concept-Based Analysis of Autonomous Vehicles’ Potential in Reaching Zero-Emission Fleets
by Nalina Hamsaiyni Venkatesh and Laurencas Raslavičius
Technologies 2024, 12(2), 26; https://doi.org/10.3390/technologies12020026 - 8 Feb 2024
Viewed by 2632
Abstract
Change management for technology adoption in the transportation sector is often used to address long-term challenges characterized by complexity, uncertainty, and ambiguity. Especially when technology is still evolving, an analysis of these challenges can help explore different alternative future pathways. Therefore, the analysis of development trajectories, correlations between key system variables, and the rate of change within the entire road transportation system can guide action toward sustainability. By adopting the National Innovation System concept, we evaluated the potential of autonomous vehicles to help reach a zero-emission fleet. A case-specific analysis was conducted to evaluate industry capacities, the performance of R&D organizations, the main objectives of future market-oriented reforms in the power sector, policy implications, and other aspects to gain insightful perspectives. Environmental insights for transportation sector scenarios in 2021, 2030, and 2050 were explored and analyzed using the COPERT v5.5.1 software program. This study offers a new perspective for road transport decarbonization research and adds new insights into the correlation between NIS dynamics and the achievement of sustainability goals. By 2050, the passenger car (PC) segment is expected to achieve 100% carbon neutrality and the heavy-duty vehicle (HDV) segment ~85%. Finally, four broad conclusions emerged from the analysis. Full article
(This article belongs to the Section Environmental Technology)
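To make the scenario arithmetic behind such fleet projections tangible, here is a toy Python calculation of tailpipe CO2 for a partially zero-emission fleet. The fleet size, annual mileage, emission factor, and zero-emission shares are made-up placeholders and are not COPERT v5.5.1 inputs or results from the study.

```python
# Toy scenario arithmetic for fleet tailpipe CO2 (kt/year).
# Fleet size, annual mileage, and emission factor are illustrative placeholders,
# not COPERT v5.5.1 inputs or results from the paper.

def fleet_co2_kt(n_vehicles, km_per_year, g_co2_per_km, zero_emission_share):
    """Tailpipe CO2 in kilotonnes for a fleet with a given zero-emission share."""
    conventional = n_vehicles * (1.0 - zero_emission_share)
    grams = conventional * km_per_year * g_co2_per_km
    return grams / 1e9  # g -> kt (1 kt = 1e9 g)

for year, share in [(2021, 0.01), (2030, 0.30), (2050, 1.00)]:
    print(year, round(fleet_co2_kt(1_200_000, 12_000, 160, share), 1), "kt CO2")
```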

16 pages, 4403 KiB  
Communication
Exploiting PlanetScope Imagery for Volcanic Deposits Mapping
by Maddalena Dozzo, Gaetana Ganci, Federico Lucchi and Simona Scollo
Technologies 2024, 12(2), 25; https://doi.org/10.3390/technologies12020025 - 8 Feb 2024
Viewed by 2118
Abstract
During explosive eruptions, tephra fallout represents one of the main volcanic hazards and can be extremely dangerous for air traffic, infrastructures, and human health. Here, we present a new technique aimed at identifying the area covered by tephra after an explosive event, based on processing PlanetScope imagery. We estimate the mean reflectance values of the visible (RGB) and near infrared (NIR) bands, analyzing pre- and post-eruptive data in specific areas and introducing a new index, which we call the ‘Tephra Fallout Index (TFI)’. We use the Google Earth Engine computing platform and define a threshold for the TFI of different eruptive events to distinguish the areas affected by the tephra fallout and quantify the surface coverage density. We apply our technique to the eruptive events occurring in 2021 at Mt. Etna (Italy), which mainly involved the eastern flank of the volcano, sometimes two or three times within a day, making field surveys difficult. Whenever possible, we compare our results with field data and find an optimal match. This work could have important implications for the identification and quantification of short-term volcanic hazard assessments in near real-time during a volcanic eruption, but also for the mapping of other hazardous events worldwide. Full article
(This article belongs to the Special Issue Image and Signal Processing)
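A minimal sketch of the pre-/post-event band-mean comparison described above is given below. The abstract does not state the TFI formula, so the index and threshold used here are hypothetical placeholders, and the random arrays stand in for PlanetScope RGB+NIR reflectance.

```python
import numpy as np

# Hypothetical placeholder for a tephra-cover index from pre-/post-event band means.
# This is NOT the published TFI formula (the abstract does not give it); the bands,
# index definition, and threshold here are illustrative only.

def band_means(image):
    """Mean reflectance per band for an (H, W, 4) array ordered R, G, B, NIR."""
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def tephra_index(pre_means, post_means):
    """Relative darkening of visible+NIR reflectance after the event."""
    return float((pre_means - post_means).sum() / pre_means.sum())

rng = np.random.default_rng(0)
pre = rng.uniform(0.25, 0.45, size=(64, 64, 4))       # brighter pre-event surface
post = pre * rng.uniform(0.55, 0.75, size=pre.shape)  # darker, ash-covered surface
idx = tephra_index(band_means(pre), band_means(post))
print("index:", round(idx, 3), "tephra-covered" if idx > 0.2 else "uncovered")
```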

10 pages, 3004 KiB  
Communication
High Affinity of Nanoparticles and Matrices Based on Acid-Base Interaction for Nanoparticle-Filled Membrane
by Tsutomu Makino, Keisuke Tabata, Takaaki Saito, Yosimasa Matsuo and Akito Masuhara
Technologies 2024, 12(2), 24; https://doi.org/10.3390/technologies12020024 - 7 Feb 2024
Viewed by 2077
Abstract
The introduction of nanoparticles into the polymer matrix is a useful technique for creating highly functional composite membranes. Our research focuses on the development of nanoparticle-filled proton exchange membranes (PEMs). PEMs play a crucial role in efficiently controlling the electrical energy conversion process by facilitating the movement of specific ions. This is achieved by creating functionalized nanoparticles with polymer coatings on their surfaces, which are then combined with resins to create proton-conducting membranes. In this study, we prepared PEMs by coating the surfaces of silica nanoparticles with acidic polymers and integrating them into a basic matrix. This process resulted in the formation of a direct bond between the nanoparticles and the matrix, leading to composite membranes with a high dispersion and densely packed nanoparticles. This fabrication technique significantly improved mechanical strength and retention stability, resulting in high-performance membranes. Moreover, the proton conductivity of these membranes showed a remarkable enhancement of more than two orders of magnitude compared to the pristine basic matrix, reaching 4.2 × 10−4 S/cm at 80 °C and 95% relative humidity. Full article
(This article belongs to the Special Issue Smart Systems (SmaSys2023))

16 pages, 334 KiB  
Article
Multistage Malware Detection Method for Backup Systems
by Pavel Novak, Vaclav Oujezsky, Patrik Kaura, Tomas Horvath and Martin Holik
Technologies 2024, 12(2), 23; https://doi.org/10.3390/technologies12020023 - 5 Feb 2024
Viewed by 2522
Abstract
This paper proposes an innovative solution to the challenge of detecting latent malware in backup systems. The proposed detection system uses a multifaceted approach that combines similarity analysis with machine learning algorithms to improve malware detection. The results demonstrate the potential of advanced similarity search techniques, powered by the Faiss model, to strengthen malware discovery within system backups and network traffic; implementing these techniques leads to more resilient cybersecurity practices, protecting essential systems from malicious threats hidden within backup archives and network data. The integration of AI methods improves the system’s efficiency and speed, making the proposed system more practical for real-world cybersecurity. This paper’s contribution is a novel and comprehensive solution designed to detect latent malware in backups, preventing the backup of compromised systems. The system comprises multiple analytical components, including a system file change detector, an agent to monitor network traffic, and a firewall, all integrated into a central decision-making unit. The current progress of the research and future steps are discussed, highlighting the contributions of this project and potential enhancements to improve cybersecurity practices. Full article
(This article belongs to the Section Information and Communication Technologies)
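The following minimal sketch shows Faiss-based similarity search of file feature vectors against a set of known-malware signatures, the kind of lookup the abstract refers to. The 64-dimensional random vectors and the distance threshold are illustrative assumptions; the paper's actual feature extraction and decision logic are not reproduced.

```python
import numpy as np
import faiss  # pip install faiss-cpu

# Minimal sketch of similarity search against known-malware feature vectors.
# The 64-dimensional random vectors stand in for whatever file/traffic features
# the paper's pipeline actually extracts; the distance threshold is illustrative.

d = 64
rng = np.random.default_rng(42)
known_malware = rng.random((10_000, d), dtype=np.float32)  # reference signatures
backup_files = rng.random((5, d), dtype=np.float32)        # vectors from a backup set

index = faiss.IndexFlatL2(d)       # exact (squared) L2 search
index.add(known_malware)
distances, neighbours = index.search(backup_files, 1)

THRESHOLD = 2.0  # illustrative; would be tuned on labelled data
for i, (dist, nn) in enumerate(zip(distances[:, 0], neighbours[:, 0])):
    verdict = "suspicious" if dist < THRESHOLD else "clean"
    print(f"file {i}: nearest signature {nn}, squared L2 distance {dist:.2f} -> {verdict}")
```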

16 pages, 1505 KiB  
Article
Angle Calculus-Based Thrust Force Determination on the Blades of a 10 kW Wind Turbine
by José Rafael Dorrego-Portela, Adriana Eneida Ponce-Martínez, Eduardo Pérez-Chaltell, Jaime Peña-Antonio, Carlos Alberto Mateos-Mendoza, José Billerman Robles-Ocampo, Perla Yazmin Sevilla-Camacho, Marcos Aviles and Juvenal Rodríguez-Reséndiz
Technologies 2024, 12(2), 22; https://doi.org/10.3390/technologies12020022 - 5 Feb 2024
Cited by 1 | Viewed by 2065
Abstract
In this article, the behavior of the thrust force on the blades of a 10 kW wind turbine was obtained by considering the characteristic wind speed of the Isthmus of Tehuantepec. Analyzing mechanical forces is essential to efficiently and safely design the different elements that make up the wind turbine because the thrust forces are related to the location point and the blade rotation. For this reason, the thrust force generated in each of the three blades of a low-power wind turbine was analyzed. The angular position (θ) of each blade varied from 0° to 120°, the blades were segmented (r), and different wind speeds were tested, such as cutting, design, average, and maximum. The results demonstrate that the thrust force increases proportionally with increasing wind speed and height, but it behaves differently at each blade segment and angular position. This method determines the angular position and the exact blade segment where the smallest and largest thrust forces occur. Blade 1, positioned at an angular position of 90°, is the blade most affected by the thrust force on P15. When the blade rotates 180°, the thrust force decreases by 9.09 N; this represents a 66.74% decrease. In addition, this study allows designers to determine the blade deflection caused by the thrust force. This information can be used to avoid collision with the tower. The thrust forces caused blade deflections of 10% to 13% of the rotor radius used in this study. These results guarantee the operation of the tested generator under its working conditions. Full article
(This article belongs to the Collection Electrical Technologies)
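For orientation, the sketch below estimates segment-wise thrust on one blade with the textbook relation F = 0.5·ρ·A·Ct·v² and a power-law wind-shear profile, so that the dependence on height and angular position is visible. This is not the paper's angle-calculus formulation; the air density, thrust coefficient, geometry, and shear exponent are illustrative assumptions.

```python
import math

# Generic segment-wise thrust estimate for one blade, F_i = 0.5 * rho * A_i * Ct * v(h_i)^2,
# with a power-law wind-shear profile. This is a textbook-style sketch, not the
# angle-calculus formulation of the paper; rho, Ct, geometry, and exponent are illustrative.

RHO, CT, ALPHA = 1.225, 0.8, 0.14           # air density, thrust coefficient, shear exponent
HUB_HEIGHT, BLADE_LENGTH, CHORD = 18.0, 3.0, 0.25
V_REF = 8.0                                  # wind speed at hub height (m/s)

def thrust_per_segment(theta_deg, n_segments=15):
    """Thrust (N) on each radial segment for a blade at angular position theta."""
    forces = []
    for i in range(n_segments):
        r = (i + 0.5) / n_segments * BLADE_LENGTH               # segment mid-radius
        h = HUB_HEIGHT + r * math.cos(math.radians(theta_deg))  # segment height above ground
        v = V_REF * (h / HUB_HEIGHT) ** ALPHA                   # sheared wind speed
        area = CHORD * BLADE_LENGTH / n_segments
        forces.append(0.5 * RHO * area * CT * v ** 2)
    return forces

for theta in (0, 90, 180):
    print(theta, "deg ->", round(sum(thrust_per_segment(theta)), 1), "N total")
```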

26 pages, 6974 KiB  
Review
Energy Efficiency in Additive Manufacturing: Condensed Review
by Ismail Fidan, Vivekanand Naikwadi, Suhas Alkunte, Roshan Mishra and Khalid Tantawi
Technologies 2024, 12(2), 21; https://doi.org/10.3390/technologies12020021 - 5 Feb 2024
Cited by 7 | Viewed by 4274
Abstract
Today, the use of additive manufacturing (AM) is growing in almost every aspect of daily life. A high number of sectors are adapting and implementing this revolutionary production technology in their domain to increase production volumes, reduce the cost of production, fabricate lightweight and complex parts in a short period of time, and respond to the manufacturing needs of customers. AM technologies consume energy to complete the production tasks of each part. Therefore, it is imperative to understand their energy efficiency in order to use these advancing technologies economically and properly. This paper provides a holistic review of this important concept from the perspectives of process, materials science, industry, and initiatives. The goal of this research study is to collect and present the latest knowledge blocks related to the energy consumption of AM technologies from a number of recent technical resources. Overall, these resources comprise surveys, observations, experiments, case studies, content analyses, and archival research studies. The study highlights the current trends and technologies associated with energy efficiency and their influence on the AM community. Full article
(This article belongs to the Collection Review Papers Collection for Advanced Technologies)
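A common way to compare the energy footprint of AM builds discussed in such reviews is the specific energy consumption, SEC = E/m. The short sketch below computes it for made-up power, build-time, and part-mass values; none of these figures come from the review.

```python
# Specific energy consumption (SEC) of a build, SEC = E / m, a metric commonly used
# when comparing AM processes. Power draw, build time, and part mass below are
# illustrative placeholders, not figures from the review.

def specific_energy_kj_per_g(avg_power_w, build_time_h, part_mass_g):
    """Energy per unit mass of deposited material, in kJ/g."""
    energy_kj = avg_power_w * build_time_h * 3600 / 1000
    return energy_kj / part_mass_g

print(round(specific_energy_kj_per_g(avg_power_w=120, build_time_h=8.5, part_mass_g=95), 1), "kJ/g")
```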

24 pages, 8024 KiB  
Article
Parametric Metamodeling Based on Optimal Transport Applied to Uncertainty Evaluation
by Sergio Torregrosa, David Muñoz, Vincent Herbert and Francisco Chinesta
Technologies 2024, 12(2), 20; https://doi.org/10.3390/technologies12020020 - 2 Feb 2024
Viewed by 1779
Abstract
When training a parametric surrogate to represent a real-world complex system in real time, there is a common assumption that the values of the parameters defining the system are known with absolute confidence. Consequently, during the training process, our focus is directed exclusively towards optimizing the accuracy of the surrogate’s output. However, real physics is characterized by increased complexity and unpredictability. Notably, a certain degree of uncertainty may exist in determining the system’s parameters. Therefore, in this paper, we account for the propagation of these uncertainties through the surrogate using a standard Monte Carlo methodology. Subsequently, we propose a novel regression technique based on optimal transport to infer the impact of the uncertainty of the surrogate’s input on its output precision in real time. The OT-based regression allows for the inference of fields emulating physical reality more accurately than classical regression techniques, including advanced ones. Full article
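The Monte Carlo propagation step described above can be sketched in a few lines: sample the uncertain parameters, push each sample through the surrogate, and summarize the output distribution. The quadratic "surrogate" and the Gaussian parameter uncertainties below are placeholders; the paper's OT-based regression of output fields is not reproduced.

```python
import numpy as np

# Minimal Monte Carlo propagation of input uncertainty through a surrogate.
# The quadratic "surrogate" and the Gaussian parameter distribution are placeholders;
# the paper's OT-based regression of output fields is not reproduced here.

def surrogate(p):
    """Stand-in parametric surrogate: maps a parameter vector to a scalar output."""
    return 1.0 + 2.0 * p[..., 0] + 0.5 * p[..., 1] ** 2

rng = np.random.default_rng(1)
nominal = np.array([0.8, 1.5])
sigma = np.array([0.05, 0.10])               # assumed parameter uncertainty
samples = rng.normal(nominal, sigma, size=(20_000, 2))

outputs = surrogate(samples)
print("nominal output:", round(float(surrogate(nominal)), 3))
print("mean +/- std under uncertainty:", round(outputs.mean(), 3), "+/-", round(outputs.std(), 3))
```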

15 pages, 3607 KiB  
Article
An Optimum Load Forecasting Strategy (OLFS) for Smart Grids Based on Artificial Intelligence
by Asmaa Hamdy Rabie, Ahmed I. Saleh, Said H. Abd Elkhalik and Ali E. Takieldeen
Technologies 2024, 12(2), 19; https://doi.org/10.3390/technologies12020019 - 1 Feb 2024
Cited by 4 | Viewed by 2013
Abstract
Recently, the application of Artificial Intelligence (AI) in many areas of life has made it possible to raise the efficiency of systems and convert them into smart ones, especially in the field of energy. Integrating AI with power systems allows electrical grids to be smart enough to predict the future load, which is known as Intelligent Load Forecasting (ILF). Hence, suitable decisions for power system planning and operation procedures can be taken accordingly. Moreover, ILF can play a vital role in electrical demand response, which guarantees a reliable transitioning of power systems. This paper introduces an Optimum Load Forecasting Strategy (OLFS) for predicting future load in smart electrical grids based on AI techniques. The proposed OLFS consists of two sequential phases, which are: Data Preprocessing Phase (DPP) and Load Forecasting Phase (LFP). In the former phase, an input electrical load dataset is prepared before the actual forecasting takes place through two essential tasks, namely feature selection and outlier rejection. Feature selection is carried out using Advanced Leopard Seal Optimization (ALSO) as a new nature-inspired optimization technique, while outlier rejection is accomplished through the Interquartile Range (IQR) as a measure of statistical dispersion. On the other hand, actual load forecasting takes place in LFP using a new predictor called the Weighted K-Nearest Neighbor (WKNN) algorithm. The proposed OLFS has been tested through extensive experiments. Results have shown that OLFS outperforms recent load forecasting techniques, as it achieves the highest prediction accuracy with the lowest root mean square error. Full article
(This article belongs to the Collection Electrical Technologies)
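Two of the generic building blocks named in the abstract, IQR-based outlier rejection and a distance-weighted KNN regressor, can be sketched with scikit-learn as follows. The synthetic load series and features are placeholders, and the ALSO feature-selection step is not reproduced.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Two generic building blocks named in the abstract: IQR-based outlier rejection and
# a distance-weighted KNN regressor (scikit-learn). The synthetic "load" series and
# features are placeholders; the ALSO feature-selection step is not reproduced.

rng = np.random.default_rng(7)
hours = np.arange(24 * 60)                                   # 60 days of hourly samples
load = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
load[::97] += 60                                             # inject a few spikes to reject

q1, q3 = np.percentile(load, [25, 75])
iqr = q3 - q1
mask = (load >= q1 - 1.5 * iqr) & (load <= q3 + 1.5 * iqr)   # IQR outlier rejection
X = np.column_stack([hours % 24, hours % (24 * 7)])[mask]    # hour-of-day, hour-of-week
y = load[mask]

split = int(0.8 * len(y))
model = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
print(f"kept {mask.sum()}/{mask.size} samples, test RMSE = {rmse:.2f}")
```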

32 pages, 13657 KiB  
Article
A Comprehensive Performance Analysis of a 48-Watt Transformerless DC-DC Boost Converter Using a Proportional–Integral–Derivative Controller with Special Attention to Inductor Design and Components Reliability
by Kuldeep Jayaswal, D. K. Palwalia and Josep M. Guerrero
Technologies 2024, 12(2), 18; https://doi.org/10.3390/technologies12020018 - 30 Jan 2024
Cited by 2 | Viewed by 2236
Abstract
In this research paper, a comprehensive performance analysis was carried out for a 48-watt transformerless DC-DC boost converter using a Proportional–Integral–Derivative (PID) controller through dynamic modeling. In a boost converter, the optimal design of the magnetic element plays an important role in efficient energy transfer. This research paper emphasizes the design of an inductor using the Area Product Technique (APT) to analyze factors such as area product, window area, number of turns, and wire size. Observations were made by examining its response to changes in load current, supply voltage, and load resistance at frequency levels of 100 and 500 kHz. Moreover, this paper extended its investigation by analyzing the failure rates and reliability of active and passive components in a 48-watt boost converter, providing valuable insights about failure behavior and reliability. Frequency domain analysis was conducted to assess the controller’s stability and robustness. The results conclusively underscore the benefits of incorporating the designed PID controller in terms of achieving the desired regulation and rapid response to disturbances at 100 and 500 kHz. The findings emphasize the outstanding reliability of the inductor, evident from the significantly low failure rates in comparison to other circuit components. Conversely, the research also reveals the inherent vulnerability of the switching device (MOSFET), characterized by a higher failure rate and lower reliability. The MATLAB® Simulink platform was utilized to investigate the results. Full article
(This article belongs to the Collection Electrical Technologies)
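For the inductor-sizing step, the textbook area-product relations Ap = L·Ipk·Irms/(Ku·Bmax·J) and N = L·Ipk/(Bmax·Ac) can be evaluated as below. The inductance, currents, and core data are illustrative assumptions; the paper's actual 48-watt design values are not given in the abstract.

```python
# Textbook area-product sizing relations for a boost inductor:
#   Ap = L * I_pk * I_rms / (K_u * B_max * J)   and   N = L * I_pk / (B_max * A_c).
# The inductance, currents, and core data below are illustrative assumptions; the paper's
# actual 48-watt design values are not given in the abstract.

def area_product_cm4(L, i_pk, i_rms, k_u=0.4, b_max=0.25, j=4e6):
    """Required core area product in cm^4 (L in H, currents in A, B in T, J in A/m^2)."""
    ap_m4 = L * i_pk * i_rms / (k_u * b_max * j)
    return ap_m4 * 1e8          # m^4 -> cm^4

def turns(L, i_pk, b_max, a_c_m2):
    """Number of turns for a core with effective cross-sectional area a_c_m2."""
    return L * i_pk / (b_max * a_c_m2)

L, i_pk, i_rms = 220e-6, 3.2, 2.4
print("Ap =", round(area_product_cm4(L, i_pk, i_rms), 3), "cm^4")
print("N  =", round(turns(L, i_pk, b_max=0.25, a_c_m2=50e-6), 1), "turns")
```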

37 pages, 14719 KiB  
Review
Comprehensive Study of Compression and Texture Integration for Digital Imaging and Communications in Medicine Data Analysis
by Amit Kumar Shakya and Anurag Vidyarthi
Technologies 2024, 12(2), 17; https://doi.org/10.3390/technologies12020017 - 24 Jan 2024
Cited by 8 | Viewed by 2819
Abstract
In response to the COVID-19 pandemic and its strain on healthcare resources, this study presents a comprehensive review of techniques that integrate image compression and statistical texture analysis to optimize the storage of Digital Imaging and Communications in Medicine (DICOM) files. In evaluating four predominant image compression algorithms, i.e., discrete cosine transform (DCT), discrete wavelet transform (DWT), the fractal compression algorithm (FCA), and the vector quantization algorithm (VQA), this study focuses on their ability to compress data while preserving essential texture features such as contrast, correlation, angular second moment (ASM), and inverse difference moment (IDM). A pivotal observation concerns the direction-independent Grey Level Co-occurrence Matrix (GLCM) in DICOM analysis, which reveals intriguing variations in texture characteristics between two intermediate scans. Performance-wise, the DCT, DWT, FCA, and VQA algorithms achieved minimum compression ratios (CRs) of 27.87, 37.91, 33.26, and 27.39, respectively, with maximum CRs of 34.48, 68.96, 60.60, and 38.74. This study also undertook a statistical analysis of distinct CT chest scans from COVID-19 patients, highlighting evolving texture patterns. Finally, this work underscores the potential of coupling image compression and texture feature quantification for monitoring changes related to human chest conditions, offering a promising avenue for efficient storage and diagnostic assessment of critical medical imaging. Full article
(This article belongs to the Topic Smart Healthcare: Technologies and Applications)
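The texture features named above (contrast, correlation, ASM, IDM) can be computed from a GLCM with scikit-image, as in the sketch below, alongside a simple compression ratio defined as original size over compressed size. The random 8-bit image and the compressed size are placeholders, not DICOM data, and IDM is reported via skimage's closely related "homogeneity" property.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# GLCM texture features (scikit-image) on a synthetic 8-bit image, plus a compression
# ratio as original_bytes / compressed_bytes. The image is a random placeholder, not a
# DICOM slice; IDM is reported via skimage's closely related "homogeneity" property.

rng = np.random.default_rng(3)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2], levels=256,
                    symmetric=True, normed=True)
for prop in ("contrast", "correlation", "ASM", "homogeneity"):
    print(prop, np.round(graycoprops(glcm, prop).mean(), 4))

original_bytes, compressed_bytes = image.nbytes, 520     # compressed size is illustrative
print("compression ratio:", round(original_bytes / compressed_bytes, 2))
```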

19 pages, 3509 KiB  
Article
Attention-Based Ensemble Network for Effective Breast Cancer Classification over Benchmarks
by Su Myat Thwin, Sharaf J. Malebary, Anas W. Abulfaraj and Hyun-Seok Park
Technologies 2024, 12(2), 16; https://doi.org/10.3390/technologies12020016 - 23 Jan 2024
Cited by 6 | Viewed by 2814
Abstract
Globally, breast cancer (BC) is considered a major cause of death among women. Therefore, researchers have used various machine and deep learning-based methods for its early and accurate detection using X-ray, MRI, and mammography image modalities. However, machine learning models require domain experts to select optimal features, achieve limited accuracy, and suffer from high false positive rates due to handcrafted feature extraction. Deep learning models overcome these limitations, but they require large amounts of training data and computational resources, and further improvement in model performance is needed. To address this, we employ a novel framework called the Ensemble-based Channel and Spatial Attention Network (ECS-A-Net) to automatically classify infected regions within BC images. The proposed framework consists of two phases: in the first phase, we apply different augmentation techniques to enhance the size of the input data, while the second phase includes an ensemble technique that leverages, in parallel, modified SE-ResNet50 and InceptionV3 as backbones for feature extraction, followed by Channel Attention (CA) and Spatial Attention (SA) modules in series for more dominant feature selection. To further validate the ECS-A-Net, we conducted extensive experiments against several competitive state-of-the-art (SOTA) techniques on two benchmarks, DDSM and MIAS, where the proposed model achieved 96.50% accuracy on the DDSM and 95.33% accuracy on the MIAS datasets. Additionally, the experimental results demonstrated that our network outperformed other methods on various evaluation indicators, including accuracy, sensitivity, and specificity. Full article
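A compact PyTorch sketch of the channel-then-spatial attention applied in series to a backbone feature map, as described above, is shown below. The SE-style and CBAM-style blocks, the channel count, and the random feature map are illustrative assumptions; the SE-ResNet50/InceptionV3 ensemble and training pipeline are not reproduced.

```python
import torch
import torch.nn as nn

# Compact sketch of channel attention (SE-style) followed by spatial attention
# (CBAM-style) applied in series to a feature map. Backbones (SE-ResNet50, InceptionV3),
# ensemble fusion, and training are not reproduced; the channel count is illustrative.

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # squeeze: global average pool
        return x * w[:, :, None, None]             # excite: per-channel reweighting

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

features = torch.randn(2, 512, 14, 14)             # stand-in backbone feature map
attended = SpatialAttention()(ChannelAttention(512)(features))
print(attended.shape)                               # torch.Size([2, 512, 14, 14])
```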

40 pages, 12154 KiB  
Review
A Review of Machine Learning and Deep Learning for Object Detection, Semantic Segmentation, and Human Action Recognition in Machine and Robotic Vision
by Nikoleta Manakitsa, George S. Maraslidis, Lazaros Moysis and George F. Fragulis
Technologies 2024, 12(2), 15; https://doi.org/10.3390/technologies12020015 - 23 Jan 2024
Cited by 21 | Viewed by 20092
Abstract
Machine vision, an interdisciplinary field that aims to replicate human visual perception in computers, has experienced rapid progress and significant contributions. This paper traces the origins of machine vision, from early image processing algorithms to its convergence with computer science, mathematics, and robotics, resulting in a distinct branch of artificial intelligence. The integration of machine learning techniques, particularly deep learning, has driven its growth and adoption in everyday devices. This study focuses on the objectives of computer vision systems: replicating human visual capabilities including recognition, comprehension, and interpretation. Notably, image classification, object detection, and image segmentation are crucial tasks requiring robust mathematical foundations. Despite the advancements, challenges persist, such as clarifying terminology related to artificial intelligence, machine learning, and deep learning. Precise definitions and interpretations are vital for establishing a solid research foundation. The evolution of machine vision reflects an ambitious journey to emulate human visual perception. Interdisciplinary collaboration and the integration of deep learning techniques have propelled remarkable advancements in emulating human behavior and perception. Through this research, the field of machine vision continues to shape the future of computer systems and artificial intelligence applications. Full article
(This article belongs to the Collection Review Papers Collection for Advanced Technologies)

12 pages, 3139 KiB  
Communication
Comparison of Shallow (−20 °C) and Deep Cryogenic Treatment (−196 °C) to Enhance the Properties of a Mg/2wt.%CeO2 Nanocomposite
by Shwetabh Gupta, Gururaj Parande and Manoj Gupta
Technologies 2024, 12(2), 14; https://doi.org/10.3390/technologies12020014 - 23 Jan 2024
Cited by 3 | Viewed by 1998
Abstract
Magnesium and its composites have been used in various applications owing to their high specific strength and low density. However, their application is limited to room-temperature conditions owing to the lack of research on the ability of magnesium alloys to perform in sub-zero conditions. The present study investigated, for the first time, the effects of two cryogenic temperatures (−20 °C/253 K and −196 °C/77 K) on the physical, thermal, and mechanical properties of a Mg/2wt.%CeO2 nanocomposite. The materials were synthesized using the disintegrated melt deposition method followed by hot extrusion. The results revealed that the shallow cryogenically treated samples (refrigerated at −20 °C) display a reduction in porosity, lower ignition resistance, and similar microhardness, compressive yield strength, ultimate strength, and failure strain when compared to samples deep cryogenically treated in liquid nitrogen at −196 °C. Although the deep cryogenically treated samples showed an overall edge, the extent of the improvement may not justify the added cost, as samples exposed to −20 °C display very similar mechanical properties while reducing the overall cost of the cryogenic process. The results were compared with the data available in the open literature, and the mechanisms behind the improvement of the properties were evaluated. Full article
(This article belongs to the Special Issue Advanced Processing Technologies of Innovative Materials)

14 pages, 2538 KiB  
Article
Machine Learning Approaches to Predict Major Adverse Cardiovascular Events in Atrial Fibrillation
by Pedro Moltó-Balado, Silvia Reverté-Villarroya, Victor Alonso-Barberán, Cinta Monclús-Arasa, Maria Teresa Balado-Albiol, Josep Clua-Queralt and Josep-Lluis Clua-Espuny
Technologies 2024, 12(2), 13; https://doi.org/10.3390/technologies12020013 - 23 Jan 2024
Cited by 1 | Viewed by 2899
Abstract
The increasing prevalence of atrial fibrillation (AF) and its association with Major Adverse Cardiovascular Events (MACE) presents challenges in early identification and treatment. Although existing risk factors, biomarkers, genetic variants, and imaging parameters predict MACE, emerging factors may be more decisive. Artificial intelligence and machine learning techniques (ML) offer a promising avenue for more effective AF evolution prediction. Five ML models were developed to obtain predictors of MACE in AF patients. Two-thirds of the data were used for training, employing diverse approaches and optimizing to minimize prediction errors, while the remaining third was reserved for testing and validation. AdaBoost emerged as the top-performing model (accuracy: 0.9999; recall: 1; F1 score: 0.9997). Noteworthy features influencing predictions included the Charlson Comorbidity Index (CCI), diabetes mellitus, cancer, the Wells scale, and CHA2DS2-VASc, with specific associations identified. Elevated MACE risk was observed, with a CCI score exceeding 2.67 ± 1.31 (p < 0.001), CHA2DS2-VASc score of 4.62 ± 1.02 (p < 0.001), and an intermediate-risk Wells scale classification. Overall, the AdaBoost ML offers an alternative predictive approach to facilitate the early identification of MACE risk in the assessment of patients with AF. Full article
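A minimal scikit-learn pipeline mirroring the abstract's setup, a 2/3 train and 1/3 test split with an AdaBoost classifier evaluated on accuracy, recall, and F1, is sketched below. The synthetic features stand in for the clinical predictors (CCI, CHA2DS2-VASc, Wells scale, comorbidities), so the scores will not match those reported.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, f1_score, recall_score
from sklearn.model_selection import train_test_split

# Minimal AdaBoost pipeline with the abstract's 2/3 train / 1/3 test split and its
# reported metrics. The synthetic features stand in for the clinical predictors
# (CCI, CHA2DS2-VASc, Wells scale, comorbidities); results will not match the paper's.

X, y = make_classification(n_samples=3000, n_features=12, n_informative=6,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3,
                                                    stratify=y, random_state=0)

model = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print("accuracy:", round(accuracy_score(y_test, pred), 4),
      "recall:", round(recall_score(y_test, pred), 4),
      "F1:", round(f1_score(y_test, pred), 4))
```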
