Technologies doi: 10.3390/technologies12030041
Authors: Rosa M. Woo-García José M. Pérez-Vista Adrián Sánchez-Vidal Agustín L. Herrera-May Edith Osorio-de-la-Rosa Felipe Caballero-Briones Francisco López-Huerta
Nowadays, the need to monitor different physical variables is constantly increasing, with applications ranging from humidity monitoring to disease detection in living beings, using a local network or a wireless sensor network (WSN). The Internet of Things has become a valuable approach to climate monitoring, daily parcel monitoring, early disease detection, crop plant counting, and risk assessment. Herein, an energy-autonomous wireless sensor network for monitoring environmental variables is proposed. The network’s tree topology, which involves master and slave modules, is managed by microcontrollers embedded with sensors, constituting a key part of the WSN architecture. The system’s slave modules are equipped with sensors for temperature, humidity, gas, and light detection, along with a photovoltaic cell to power the system and a WiFi module for data transmission. The receiver incorporates a user interface and the computing components necessary for efficient data handling. In an open-field configuration, the transceiver range of the proposed system reaches up to 750 m per module. The advantages of this approach are its scalability, energy efficiency, and ability to provide real-time environmental monitoring over a large area, which is particularly beneficial for applications in precision agriculture and environmental management.
Technologies doi: 10.3390/technologies12030040
Authors: Alireza Kamran-Pishhesari Amin Moniri-Morad Javad Sattarvand
Although multiview platforms have enhanced work efficiency in mining teleoperation systems, they also induce “cognitive tunneling” and depth-perception issues for operators, inadvertently focusing their attention on a restricted central view. Fully immersive virtual reality (VR) has recently attracted the attention of specialists in the mining industry as a way to address these issues. Nevertheless, developing VR teleoperation systems remains a formidable challenge, particularly in achieving a realistic 3D model of the environment. This study investigates the existing gap in fully immersive teleoperation systems within the mining industry, aiming to identify the optimal methods for their development and to ensure operators’ safety. To achieve this purpose, a literature search is employed to identify and extract information from the most relevant sources. The most advanced teleoperation systems are examined with a focus on their visualization types. Then, various 3D reconstruction techniques applicable to mining VR teleoperation are investigated, and their data acquisition methods, sensor technologies, and algorithms are analyzed. Ultimately, the study discusses challenges associated with 3D reconstruction techniques for mining teleoperation. The findings demonstrate that real-time 3D reconstruction of underground mining environments primarily involves depth-based techniques, whereas point cloud generation techniques are mostly employed for 3D reconstruction in open-pit mining operations.
Technologies doi: 10.3390/technologies12030039
Authors: Mohamed A. Afifi Mostafa I. Marei Ahmed M. I. Mohamad
As the world grapples with the energy crisis, integrating renewable energy sources into the power grid has become increasingly crucial. Microgrids have emerged as a vital solution to this challenge. However, the reliance on renewable energy sources in microgrids often leads to low inertia. Renewable energy sources interfaced with the network through interlinking converters lack the inertia of conventional synchronous generators and hence must provide frequency support through virtual inertia techniques. This paper presents a new control algorithm that utilizes the reinforcement learning (RL) agents Twin Delayed Deep Deterministic Policy Gradient (TD3) and Deep Deterministic Policy Gradient (DDPG) to support the frequency in low-inertia microgrids. The RL agents are trained using the linearized system model and then extended to the nonlinear model to reduce the computational burden. The proposed system consists of an AC–DC microgrid comprising a renewable energy source on the DC side, along with constant and resistive loads. On the AC side, a synchronous generator represents the low inertia of the grid, accompanied by dynamic and static loads. The model of the system is developed and verified using Matlab/Simulink and the Reinforcement Learning Toolbox. The system performance with the proposed AI-based methods is compared to that of conventional low-pass and high-pass filter (LPF and HPF) controllers.
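For context on the baseline controllers mentioned above, a filter-based virtual inertia scheme can be sketched as a discrete first-order low-pass filter applied to the measured frequency deviation. This is a generic illustration, not the paper's implementation; the time step, time constant, and signal values below are assumptions.

```python
def lpf_step(y_prev, x, dt, tau):
    """One forward-Euler step of a first-order low-pass filter
    dy/dt = (x - y) / tau, the building block of LPF-based inertia
    emulation."""
    return y_prev + (dt / tau) * (x - y_prev)

# Illustrative use: smooth a frequency-deviation signal (Hz) into a
# support-power command that opposes the drop in frequency.
dt, tau = 0.01, 0.5                      # assumed sample time and constant
freq_dev = [0.0, -0.05, -0.10, -0.08]    # assumed measurements
cmd, commands = 0.0, []
for df in freq_dev:
    cmd = lpf_step(cmd, -df, dt, tau)    # negative deviation -> positive support
    commands.append(cmd)
```

An HPF variant, which reacts to the rate of change of the signal rather than its level, can be obtained by subtracting the LPF output from its input.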
Technologies doi: 10.3390/technologies12030038
Authors: Enamul Karim Hamza Reza Pavel Sama Nikanfar Aref Hebri Ayon Roy Harish Ram Nambiappan Ashish Jaiswal Glenn R. Wylie Fillia Makedon
Cognitive fatigue, a state of reduced mental capacity arising from prolonged cognitive activity, poses significant challenges in various domains, from road safety to workplace productivity. Accurately detecting and mitigating cognitive fatigue is crucial for ensuring optimal performance and minimizing potential risks. This paper presents a comprehensive survey of the current landscape in cognitive fatigue detection. We systematically review various approaches, encompassing physiological, behavioral, and performance-based measures, for robust and objective fatigue detection. The paper further analyzes different challenges, including the lack of standardized ground truth and the need for context-aware fatigue assessment. This survey aims to serve as a valuable resource for researchers and practitioners seeking to understand and address the multifaceted challenge of cognitive fatigue detection.
Technologies doi: 10.3390/technologies12030037
Authors: Sandra Morelli Carla Daniele Giuseppe D’Avenio Mauro Grigioni Daniele Giansanti
The field of technology assessment in telemedicine is garnering increasing attention due to the widespread adoption of this discipline and its complex, heterogeneous system characteristics, which make its application challenging. As part of a national telemedicine project, the National Center for Innovative Technologies in Public Health at the Italian National Institute of Health (ISS) played the role of promoting and utilizing technology assessment tools within partnership projects. This study aims to outline the design, development, and application of assessment methodologies within the telemedicine project proposed by the ISS team, utilizing a specific framework developed within the project. The sub-objectives include evaluating the proposed methodology’s effectiveness and feasibility, gathering feedback for improvement, and assessing its impact on various project components. The study emphasizes the multifaceted nature of action domains and underscores the crucial role of technology assessment in telemedicine, highlighting its impact across diverse realms through iterative interaction cycles with project partners. Both the impact and the acceptance of the methodology were assessed by means of specific computer-aided web interviewing (CAWI) tools. The proposed methodology received significant acceptance, providing valuable insights for refining future frameworks. The impact assessment revealed a consistent quality-improvement trend in the project’s products, evident in methodological consolidations. The overall message encourages similar initiatives in this domain, shedding light on the intricacies of technology assessment implementation. In conclusion, the study serves as a comprehensive outcome of the national telemedicine project, witnessing the success and adaptability of the technology assessment methodology and advocating for further exploration and implementation in analogous contexts.
Technologies doi: 10.3390/technologies12030036
Authors: Reinier Jiménez Borges Luis Angel Iturralde Carrera Eduardo Julio Lopez Bastida José R. García-Martínez Roberto V. Carrillo-Serrano Juvenal Rodríguez-Reséndiz
There are numerous analytical and/or computational tools for evaluating the energetic sustainability of biomass in the sugar industry. However, the simultaneous integration of the energetic–exergetic and emergetic criteria for such evaluation is still insufficient. The objective of the present work is to propose a range of indicators to evaluate the sustainability of the use of biomass as fuel in the sugar industry. For this purpose, energy, exergy, and emergy evaluation tools were theoretically used as sustainability indicators. They were validated in five variants of different biomass and their mixtures in two studies of technologies used in Cuba for the sugar industry. As a result, the energy method showed, for all variants, an increase in efficiency of about 5% in the VU-40 technology compared to the Retal technology. There is an increase in energy efficiency when considering AHRs of 2.8% or Marabu (Dichrostachys cinerea) (5.3%) compared to the V1 variant. Through the study of the exergetic efficiency, an increase of 2% was determined in both technologies for the case of the V1 variant, and an increase in efficiency is observed in the V2 variant of 5% and the V3 variant (5.6%) over the V1 variant. The emergetic method showed superior results for the VU-40 technology over the Retal technology due to higher fuel utilization. In the case of the V1 variant, there was a 7% increase in the renewability ratio and an 11.07% increase in the sustainability index. This is because more energy is produced per unit of environmental load.
Technologies doi: 10.3390/technologies12030035
Authors: Chul-Woo Byeon
In this paper, we present a highly linear direct in-phase/quadrature (I/Q) up-conversion mixer for 5G millimeter-wave applications. To enhance the linearity of the mixer, we propose a complementary derivative superposition technique with pre-distortion. The proposed up-conversion mixer consists of a quadrature generator, LO buffer amplifiers, and an I/Q up-conversion mixer core and achieves an output third-order intercept point of 15.7 dBm and an output 1 dB compression point of 2 dBm at 27.6 GHz, while it consumes 15 mW at a supply voltage of 1 V. The conversion gain is 11.4 dB and the LO leakage and image rejection ratio are −56 dBc and 61 dB, respectively, in the measurement. The proposed I/Q up-conversion mixer is suitable for 5G cellular communication systems.
Technologies doi: 10.3390/technologies12030034
Authors: Pedro Almeida Vítor Carvalho Alberto Simões
Artificial Intelligence bots are extensively used in multiplayer First-Person Shooter (FPS) games. By using Machine Learning techniques, we can improve their performance and bring them to human skill levels. In this work, we focused on comparing and combining two Reinforcement Learning training architectures, Curriculum Learning and Behaviour Cloning, applied to an FPS developed in the Unity Engine. We created four teams of three agents each: one team for Curriculum Learning, one for Behaviour Cloning, and two for two different methods of combining Curriculum Learning and Behaviour Cloning. After completing the training, each agent was matched to battle against an agent of a different team until each pairing had five wins or ten time-outs. In the end, results showed that the agents trained with Curriculum Learning achieved better performance than the ones trained with Behaviour Cloning, with 23.67% more average victories in one case. As for the combination attempts, not only did the agents trained with both devised methods have problems during training, but they also achieved insufficient results in battle, with an average of 0 wins.
Technologies doi: 10.3390/technologies12030033
Authors: Chiara Zandonà Andrea Roberti Davide Costanzi Burçin Gül Özge Akbulut Paolo Fiorini Andrea Calanca
Transperineal prostate biopsy is the most reliable technique for detecting prostate cancer, and robot-assisted needle insertion has the potential to improve the accuracy of this procedure. Modeling the interaction between a bevel-tip needle and the tissue, considering tissue heterogeneity, needle bending, and tissue/organ deformation and movement, is a required step to enable robotic needle insertion. Although several models exist, they have never been compared experimentally. Based on this motivation, this paper proposes an experimental comparison of kinematic models of needle insertion, considering different needle insertion speeds and different degrees of tissue stiffness. The comparison considers automated insertions of needles into transparent silicone phantoms under stereo-image guidance and evaluates the accuracy of existing models in predicting needle deformation.
Technologies doi: 10.3390/technologies12030032
Authors: Barouch Giechaskiel Anastasios Melas Jacopo Franzetti Victor Valverde Michaël Clairotte Ricardo Suarez-Bertoa
Light-duty vehicle emission regulations worldwide set limits for the following gaseous pollutants: carbon monoxide (CO), nitrogen oxides (NOX), hydrocarbons (HCs), and/or non-methane hydrocarbons (NMHCs). Carbon dioxide (CO2) is indirectly limited by fleet CO2 or fuel consumption targets. Measurements are carried out at the dilution tunnel with “standard” laboratory-grade instruments following well-defined principles of operation: non-dispersive infrared (NDIR) analyzers for CO and CO2, flame ionization detectors (FIDs) for hydrocarbons, and chemiluminescence analyzers (CLAs) or non-dispersive ultraviolet detectors (NDUVs) for NOX. In the United States in 2012, and in China in 2020 with Stage 6, nitrous oxide (N2O) was also included. Brazil is phasing ammonia (NH3) into its regulation. Alternative instruments that can measure some or all of these pollutants include Fourier transform infrared (FTIR)- and laser absorption spectroscopy (LAS)-based instruments. The second category includes quantum cascade laser (QCL) spectroscopy in the mid-infrared region and laser diode spectroscopy (LDS) in the near-infrared region, such as tunable diode laser absorption spectroscopy (TDLAS). According to current regulations and technical specifications, NH3 is the only component that has to be measured at the tailpipe, to avoid losses due to its hydrophilic properties and adsorption on the transfer lines. Few studies have evaluated such instruments, in particular for pollutants that are not regulated worldwide. For this reason, we compared laboratory-grade “standard” analyzers with FTIR- and TDLAS-based instruments measuring NH3. One diesel and two gasoline vehicles, tested at different ambient temperatures and over different test cycles, produced emissions over a wide range. In general, the agreement among the instruments was very good (in most cases, within ±10%), confirming their suitability for the measurement of these pollutants.
Technologies doi: 10.3390/technologies12030031
Authors: Denisse Kim Bernardo Cánovas-Segura Manuel Campos Jose M. Juarez
In recent years, the proliferation of health data sources due to computer technologies has prompted the use of visualization techniques to tackle epidemiological challenges. However, existing reviews lack a specific focus on the spatial and temporal analysis of epidemiological data using visualization tools. This study aims to address this gap by conducting a scoping review following the PRISMA-ScR guidelines, examining the literature from 2000 to 2024 on spatial–temporal visualization techniques when applied to epidemics, across five databases: PubMed, IEEE Xplore, Scopus, Google Scholar, and ACM Digital Library until 24 January 2024. Among 1312 papers reviewed, 114 were selected, emphasizing aggregate measures, web platform tools, and geospatial data representation, particularly favoring choropleth maps and extended charts. Visualization techniques were predominantly utilized for real-time data presentation, trend analysis, and predictions. Evaluation methods, categorized into standard methodology, user experience, task efficiency, and accuracy, were observed. Although various open-access datasets were available, only a few were commonly used, mainly those related to COVID-19. This study sheds light on the current trends in visualizing epidemiological data over the past 24 years, highlighting the gaps in standardized evaluation methodologies and the limited exploration of individual epidemiological data and diseases acquired in hospitals during epidemics.
Technologies doi: 10.3390/technologies12030030
Authors: Venkatasubramanian Krishnamoorthy Ashvita Anitha John Shubrajit Bhaumik Viorel Paleu
This work investigates the stick–slip phenomenon during sliding motion between solid lubricant-impregnated epoxy polymer-coated steel bars and AISI 52100 steel balls. An acoustic sensor detected the stick–slip phenomenon during the tribo-pair interaction. The wear characteristics of the workpieces coated with different epoxy coatings were observed and scrutinized. The RMS values of the acoustic signal were correlated with the friction coefficient to develop an acoustic-sensor-based criterion for detecting the stick–slip phenomenon. As per the findings, the acoustic waveform remained closely correlated with the friction coefficient observed during the study and can be used effectively to detect the stick–slip phenomenon in steel–polymer interactions. This work will be highly beneficial in industrial and automotive applications involving significant interaction between polymer and steel surfaces.
Technologies doi: 10.3390/technologies12030029
Authors: Jianhan Chen Rohen Prinsloo Xiongwei Ni
By mounting LEDs on the surfaces of orifice baffles, a novel batch oscillatory baffled photoreactor (OBPR), together with polymer-supported Rose Bengal (Ps-RB) beads, is here used to investigate the kinetics of the photo-oxidation reaction between α-terpinene and singlet oxygen (1O2). In the mode of NMR data analysis widely used for this reaction, α-terpinene and ascaridole are treated as a reaction pair, assuming that singlet oxygen is kinetically in excess or constant. We have, for the first time, examined the validity of this method and discovered that increasing α-terpinene initially leads to an increase in ascaridole, indicating that the supply of singlet oxygen is in excess. A kinetic analysis confirms pseudo-first-order reaction kinetics, supporting this assumption. We have subsequently developed a methodology for estimating 1O2 concentrations based on the proportionality of ascaridole concentrations with respect to their maximum under these conditions. With the help of the estimated singlet oxygen data, the efficiency of 1O2 utilization and the photo efficiency of converting molecular oxygen to 1O2 are further proposed and evaluated. We have also identified conditions under which a further increase in α-terpinene causes a decrease in ascaridole, implying that 1O2 has kinetically become a limiting reagent, and the method of treating α-terpinene and ascaridole as a reaction pair in the data analysis is no longer valid under those conditions.
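The pseudo-first-order assumption discussed above can be stated explicitly. Writing A for α-terpinene, the standard rate law (a textbook kinetic identity, not an equation quoted from the paper) reads:

```latex
-\frac{d[\mathrm{A}]}{dt} = k\,[{}^{1}\mathrm{O}_2]\,[\mathrm{A}]
  \;\approx\; k_{\mathrm{obs}}\,[\mathrm{A}],
\qquad
[\mathrm{A}](t) = [\mathrm{A}]_0\, e^{-k_{\mathrm{obs}}\,t}
```

Here $k_{\mathrm{obs}} = k\,[{}^{1}\mathrm{O}_2]$ is constant only while singlet oxygen remains in excess, which is precisely the regime whose boundaries the authors identify.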
Technologies doi: 10.3390/technologies12030028
Authors: Luis Felipe Estrella-Ibarra Alejandro de León-Cuevas Saul Tovar-Arriaga
In 3D segmentation, point-based models excel but face difficulties in precise class delineation at class intersections, an inherent challenge in segmentation models. This is particularly critical in medical applications, influencing patient care and surgical planning, where accurate 3D boundary identification is essential for assisting surgery and enhancing medical training through advanced simulations. This study introduces the Nested Contrastive Boundary Learning Point Transformer (NCBL-PT), specially designed for 3D point cloud segmentation. NCBL-PT employs contrastive learning to improve boundary point representation by enhancing feature similarity within the same class. NCBL-PT incorporates a border-aware distinction within the same class points, allowing the model to distinctly learn from both points in proximity to the class intersection and from those beyond. This reduces semantic confusion among the points of different classes in the ambiguous class intersection zone, where similarity in features due to proximity could lead to incorrect associations. The model operates within subsampled point clouds at each encoder block stage of the point transformer architecture. It applies self-attention with k = 16 nearest neighbors to local neighborhoods, aligning with NCBL calculations for consistent self-attention regularization in local contexts. NCBL-PT improves 3D segmentation at class intersections, as evidenced by a 3.31% increase in Intersection over Union (IOU) for aneurysm segmentation compared to the base point transformer model.
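As an illustration of the contrastive idea described above (pulling same-class boundary features together while pushing different-class features apart), a minimal InfoNCE-style loss over feature vectors can be sketched as follows. This is a generic stand-in, not the paper's NCBL objective; the function names and temperature value are assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style loss: low when the anchor is more similar to
    same-class (positive) features than to other-class (negative) ones."""
    pos = sum(math.exp(cosine(anchor, p) / tau) for p in positives)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

In this sketch, a point near a class intersection would take positives from its own class and negatives from the adjacent class, so that minimizing the loss sharpens the separation exactly where boundary confusion arises.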
Technologies doi: 10.3390/technologies12020027
Authors: Hossam A. Gabbar Muhammad Idrees
This manuscript addresses the critical need for precise paint application to ensure product durability and aesthetics. While manual work carries risks, robotic systems promise accuracy, yet programming diverse product trajectories remains a challenge. This study aims to develop an autonomous system capable of generating paint trajectories based on object geometries for user-defined spraying processes. By emphasizing energy efficiency, process time, and coating thickness on complex surfaces, a hybrid optimization technique enhances overall efficiency. Extensive hardware and software development results in a robust robotic system leveraging the Robot Operating System (ROS). Integrating a low-cost 3D scanner, calibrator, and trajectory optimizer creates an autonomous painting system. Hardware components, including sensors, motors, and actuators, are seamlessly integrated with a Python and ROS-based software framework, enabling the desired automation. A web-based GUI, powered by JavaScript, allows user control over two robots, facilitating trajectory dispatch, 3D scanning, and optimization. Specific nodes manage calibration, validation, process settings, and real-time video feeds. The use of open-source software and an ROS ecosystem makes it a good choice for industrial-scale implementation. The results indicate that the proposed system can achieve the desired automation, contingent upon surface geometries, spraying processes, and robot dynamics.
Technologies doi: 10.3390/technologies12020026
Authors: Nalina Hamsaiyni Venkatesh Laurencas Raslavičius
Change management for technology adoption in the transportation sector is often used to address long-term challenges characterized by complexity, uncertainty, and ambiguity. Especially when technology is still evolving, an analysis of these challenges can help explore alternative future pathways. Therefore, the analysis of development trajectories, correlations between key system variables, and the rate of change within the entire road transportation system can guide action toward sustainability. By adopting the National Innovation System (NIS) concept, we evaluated the possibilities of an autonomous vehicle option to reach a zero-emission fleet. A case-specific analysis was conducted to evaluate industry capacities, the performance of R&D organizations, the main objectives of future market-oriented reforms in the power sector, policy implications, and other aspects to gain insightful perspectives. Environmental insights for transportation sector scenarios in 2021, 2030, and 2050 were explored and analyzed using the COPERT v5.5.1 software program. This study offers a new perspective for road transport decarbonization research and adds new insights to the obtained correlation between the NIS dynamics and the achievement of sustainability goals. By 2050, carbon neutrality is expected to reach 100% in the passenger car (PC) segment and ~85% in the heavy-duty vehicle (HDV) segment. Finally, four broad conclusions emerged from this research as a consequence of the analysis.
Technologies doi: 10.3390/technologies12020025
Authors: Maddalena Dozzo Gaetana Ganci Federico Lucchi Simona Scollo
During explosive eruptions, tephra fallout represents one of the main volcanic hazards and can be extremely dangerous for air traffic, infrastructures, and human health. Here, we present a new technique aimed at identifying the area covered by tephra after an explosive event, based on processing PlanetScope imagery. We estimate the mean reflectance values of the visible (RGB) and near infrared (NIR) bands, analyzing pre- and post-eruptive data in specific areas and introducing a new index, which we call the ‘Tephra Fallout Index (TFI)’. We use the Google Earth Engine computing platform and define a threshold for the TFI of different eruptive events to distinguish the areas affected by the tephra fallout and quantify the surface coverage density. We apply our technique to the eruptive events occurring in 2021 at Mt. Etna (Italy), which mainly involved the eastern flank of the volcano, sometimes two or three times within a day, making field surveys difficult. Whenever possible, we compare our results with field data and find an optimal match. This work could have important implications for the identification and quantification of short-term volcanic hazard assessments in near real-time during a volcanic eruption, but also for the mapping of other hazardous events worldwide.
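The abstract does not give the TFI formula, so the sketch below uses an illustrative definition, the relative drop in mean reflectance between pre- and post-eruptive imagery, purely to show how such an index and threshold could be applied per band. The band values and the 0.25 threshold are assumptions.

```python
def mean_reflectance(band_values):
    """Mean reflectance of one band over an area of interest."""
    return sum(band_values) / len(band_values)

def tephra_fallout_index(pre_mean, post_mean):
    """Illustrative TFI: fractional darkening of the surface after the
    event (tephra lowers visible/NIR reflectance)."""
    return (pre_mean - post_mean) / pre_mean

pre = mean_reflectance([0.32, 0.35, 0.30])    # assumed pre-eruptive pixels
post = mean_reflectance([0.18, 0.20, 0.16])   # assumed post-eruptive pixels
tfi = tephra_fallout_index(pre, post)
tephra_covered = tfi > 0.25                    # assumed threshold
```

In practice this comparison would run per band (RGB and NIR) over image collections in Google Earth Engine, with the threshold calibrated per eruptive event as the authors describe.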
Technologies doi: 10.3390/technologies12020024
Authors: Tsutomu Makino Keisuke Tabata Takaaki Saito Yosimasa Matsuo Akito Masuhara
The introduction of nanoparticles into the polymer matrix is a useful technique for creating highly functional composite membranes. Our research focuses on the development of nanoparticle-filled proton exchange membranes (PEMs). PEMs play a crucial role in efficiently controlling the electrical energy conversion process by facilitating the movement of specific ions. This is achieved by creating functionalized nanoparticles with polymer coatings on their surfaces, which are then combined with resins to create proton-conducting membranes. In this study, we prepared PEMs by coating the surfaces of silica nanoparticles with acidic polymers and integrating them into a basic matrix. This process resulted in the formation of a direct bond between the nanoparticles and the matrix, leading to composite membranes with a high dispersion and densely packed nanoparticles. This fabrication technique significantly improved mechanical strength and retention stability, resulting in high-performance membranes. Moreover, the proton conductivity of these membranes showed a remarkable enhancement of more than two orders of magnitude compared to the pristine basic matrix, reaching 4.2 × 10−4 S/cm at 80 °C and 95% relative humidity.
Technologies doi: 10.3390/technologies12020023
Authors: Pavel Novak Vaclav Oujezsky Patrik Kaura Tomas Horvath Martin Holik
This paper proposes an innovative solution to the challenge of detecting latent malware in backup systems. The proposed detection system utilizes a multifaceted approach that combines similarity analysis with machine learning algorithms to improve malware detection. The results demonstrate the potential of advanced similarity search techniques, powered by the Faiss model, to strengthen malware discovery within system backups and network traffic. Implementing these techniques leads to more resilient cybersecurity practices, protecting essential systems from malicious threats hidden within backup archives and network data. The integration of AI methods improves the system’s efficiency and speed, making the proposed system more practical for real-world cybersecurity. This paper’s contribution is a novel and comprehensive solution designed to detect latent malware in backups, preventing the backup of compromised systems. The system comprises multiple analytical components, including a system file change detector, an agent to monitor network traffic, and a firewall, all integrated into a central decision-making unit. The current progress of the research and future steps are discussed, highlighting the contributions of this project and potential enhancements to improve cybersecurity practices.
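To make the similarity-search component concrete, the sketch below performs an exact nearest-neighbor lookup of a file's feature vector against known-malware signatures using squared L2 distance, the same operation an exact Faiss flat index performs at scale. The feature vectors and the decision threshold are invented for illustration.

```python
def sq_l2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def search(query, index, k=1):
    """Return indices of the k stored vectors closest to `query`,
    mirroring what an exact (flat) similarity-index search returns."""
    ranked = sorted(range(len(index)), key=lambda i: sq_l2(query, index[i]))
    return ranked[:k]

# Hypothetical malware signatures; a real system would derive these
# from backup file contents and captured network traffic.
signatures = [[0.9, 0.8, 0.1], [0.2, 0.9, 0.7]]
sample = [0.85, 0.78, 0.15]
best = search(sample, signatures, k=1)[0]
flagged = sq_l2(sample, signatures[best]) < 0.05   # assumed threshold
```

A flagged sample would then be withheld from the archive, in line with the goal of preventing compromised systems from being backed up.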
Technologies doi: 10.3390/technologies12020022
Authors: José Rafael Dorrego-Portela Adriana Eneida Ponce-Martínez Eduardo Pérez-Chaltell Jaime Peña-Antonio Carlos Alberto Mateos-Mendoza José Billerman Robles-Ocampo Perla Yazmin Sevilla-Camacho Marcos Aviles Juvenal Rodríguez-Reséndiz
In this article, the behavior of the thrust force on the blades of a 10 kW wind turbine was obtained by considering the characteristic wind speed of the Isthmus of Tehuantepec. Analyzing mechanical forces is essential to efficiently and safely design the different elements that make up the wind turbine, because the thrust forces are related to the location point and the blade rotation. For this reason, the thrust force generated in each of the three blades of a low-power wind turbine was analyzed. The angular position (θ) of each blade was varied from 0° to 120°, the blades were segmented (r), and different wind speeds were tested, namely the cut-in, design, average, and maximum speeds. The results demonstrate that the thrust force increases proportionally with increasing wind speed and height, but it behaves differently on each blade segment and at each angular position. This method determines the angular position and the exact blade segment where the smallest and largest thrust forces occur. Blade 1, positioned at an angular position of 90°, is the blade most affected by the thrust force at segment P15. When the blade rotates 180°, the thrust force decreases by 9.09 N, which represents a 66.74% decrease. In addition, this study allows designers to know the blade deflection caused by the thrust force; this information can be used to avoid collision with the tower. The thrust forces caused blade deflections of 10% to 13% relative to the rotor radius used in this study. These results guarantee the operation of the tested generator under its working conditions.
Technologies doi: 10.3390/technologies12020021
Authors: Ismail Fidan Vivekanand Naikwadi Suhas Alkunte Roshan Mishra Khalid Tantawi
Today, the use of additive manufacturing (AM) is growing in almost every aspect of daily life. A high number of sectors are adapting and implementing this revolutionary production technology to increase production volumes, reduce production costs, fabricate lightweight and complex parts in a short period of time, and respond to the manufacturing needs of customers. AM technologies clearly consume energy to complete the production of each part. Therefore, it is imperative to understand the impact of energy efficiency in order to use these advancing technologies economically and properly. This paper provides a holistic review of this important concept from the perspectives of process, materials science, industry, and initiatives. The goal of this research study is to collect and present the latest knowledge related to the energy consumption of AM technologies from a number of recent technical resources, including surveys, observations, experiments, case studies, content analyses, and archival research studies. The study highlights the current trends and technologies associated with energy efficiency and their influence on the AM community.
]]>Technologies doi: 10.3390/technologies12020020
Authors: Sergio Torregrosa David Muñoz Vincent Herbert Francisco Chinesta
When training a parametric surrogate to represent a real-world complex system in real time, there is a common assumption that the values of the parameters defining the system are known with absolute confidence. Consequently, during the training process, our focus is directed exclusively towards optimizing the accuracy of the surrogate’s output. However, real physics is characterized by increased complexity and unpredictability. Notably, a certain degree of uncertainty may exist in determining the system’s parameters. Therefore, in this paper, we account for the propagation of these uncertainties through the surrogate using a standard Monte Carlo methodology. Subsequently, we propose a novel regression technique based on optimal transport (OT) to infer the impact of the uncertainty of the surrogate’s input on its output precision in real time. The OT-based regression allows for the inference of fields emulating physical reality more accurately than classical regression techniques, including advanced ones.
]]>Technologies doi: 10.3390/technologies12020019
Authors: Asmaa Hamdy Rabie Ahmed I. Saleh Said H. Abd Elkhalik Ali E. Takieldeen
Recently, the application of Artificial Intelligence (AI) in many areas of life has made it possible to raise the efficiency of systems and convert them into smart ones, especially in the field of energy. Integrating AI with power systems allows electrical grids to be smart enough to predict the future load, which is known as Intelligent Load Forecasting (ILF). Hence, suitable decisions for power system planning and operation procedures can be taken accordingly. Moreover, ILF can play a vital role in electrical demand response, which guarantees a reliable transitioning of power systems. This paper introduces an Optimum Load Forecasting Strategy (OLFS) for predicting future load in smart electrical grids based on AI techniques. The proposed OLFS consists of two sequential phases: a Data Preprocessing Phase (DPP) and a Load Forecasting Phase (LFP). In the former phase, an input electrical load dataset is prepared before the actual forecasting takes place through two essential tasks, namely feature selection and outlier rejection. Feature selection is carried out using Advanced Leopard Seal Optimization (ALSO) as a new nature-inspired optimization technique, while outlier rejection is accomplished through the Interquartile Range (IQR) as a measure of statistical dispersion. On the other hand, actual load forecasting takes place in LFP using a new predictor called the Weighted K-Nearest Neighbor (WKNN) algorithm. The proposed OLFS has been tested through extensive experiments. Results have shown that OLFS outperforms recent load forecasting techniques as it introduces the maximum prediction accuracy with the minimum root mean square error.
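Two of the generic building blocks named in the abstract, IQR-based outlier rejection and distance-weighted k-nearest-neighbor prediction, can be sketched in a few lines. This is an illustrative sketch only (the function names, the index-based quartile approximation, and the toy data are ours, not the authors’ OLFS implementation):

```python
# Illustrative sketch of IQR outlier rejection and a weighted k-NN (WKNN)
# predictor; not the authors' exact OLFS implementation.
def iqr_filter(values, k=1.5):
    """Drop points outside [Q1 - k*IQR, Q3 + k*IQR] (simple index-based quartiles)."""
    s = sorted(values)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

def wknn_predict(train_x, train_y, query, k=3):
    """Predict as the inverse-distance weighted mean of the k nearest targets."""
    nearest = sorted((abs(x - query), y) for x, y in zip(train_x, train_y))[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)

loads = [100, 102, 98, 101, 500, 99]   # 500 is an obvious outlier
clean = iqr_filter(loads)
pred = wknn_predict([1, 2, 3, 4], [10, 20, 30, 40], 2.5, k=2)
```

Here the filter discards the 500 reading, and the WKNN prediction for the query 2.5 interpolates between its two nearest neighbors to 25.0.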
]]>Technologies doi: 10.3390/technologies12020018
Authors: Kuldeep Jayaswal D. K. Palwalia Josep M. Guerrero
In this research paper, a comprehensive performance analysis was carried out for a 48-watt transformerless DC-DC boost converter using a Proportional–Integral–Derivative (PID) controller through dynamic modeling. In a boost converter, the optimal design of the magnetic element plays an important role in efficient energy transfer. This research paper emphasizes the design of an inductor using the Area Product Technique (APT) to analyze factors such as area product, window area, number of turns, and wire size. Observations were made by examining its response to changes in load current, supply voltage, and load resistance at frequency levels of 100 and 500 kHz. Moreover, this paper extended its investigation by analyzing the failure rates and reliability of active and passive components in a 48-watt boost converter, providing valuable insights about failure behavior and reliability. Frequency domain analysis was conducted to assess the controller’s stability and robustness. The results conclusively underscore the benefits of incorporating the designed PID controller in terms of achieving the desired regulation and rapid response to disturbances at 100 and 500 kHz. The findings emphasize the outstanding reliability of the inductor, evident from the significantly low failure rates in comparison to other circuit components. Conversely, the research also reveals the inherent vulnerability of the switching device (MOSFET), characterized by a higher failure rate and lower reliability. The MATLAB® Simulink platform was utilized to investigate the results.
]]>Technologies doi: 10.3390/technologies12020017
Authors: Amit Kumar Shakya Anurag Vidyarthi
In response to the COVID-19 pandemic and its strain on healthcare resources, this study presents a comprehensive review of various techniques that can be used to integrate image compression techniques and statistical texture analysis to optimize the storage of Digital Imaging and Communications in Medicine (DICOM) files. In evaluating four predominant image compression algorithms, i.e., discrete cosine transform (DCT), discrete wavelet transform (DWT), the fractal compression algorithm (FCA), and the vector quantization algorithm (VQA), this study focuses on their ability to compress data while preserving essential texture features such as contrast, correlation, angular second moment (ASM), and inverse difference moment (IDM). A pivotal observation concerns the direction-independent Grey Level Co-occurrence Matrix (GLCM) in DICOM analysis, which reveals intriguing variations between two intermediate scans measured with texture characteristics. Performance-wise, the DCT, DWT, FCA, and VQA algorithms achieved minimum compression ratios (CRs) of 27.87, 37.91, 33.26, and 27.39, respectively, with maximum CRs at 34.48, 68.96, 60.60, and 38.74. This study also undertook a statistical analysis of distinct CT chest scans from COVID-19 patients, highlighting evolving texture patterns. Finally, this work underscores the potential of coupling image compression and texture feature quantification for monitoring changes related to human chest conditions, offering a promising avenue for efficient storage and diagnostic assessment of critical medical imaging.
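The texture measures named above (contrast, ASM, IDM) have standard Haralick definitions over a normalized grey-level co-occurrence matrix. A minimal sketch of those formulas, applied to an illustrative 3×3 matrix (not the paper’s DICOM pipeline):

```python
# Standard Haralick texture features from a normalized GLCM P, where P[i][j]
# is the co-occurrence probability of grey levels i and j (illustrative sketch).
def glcm_features(p):
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    asm = sum(p[i][j] ** 2 for i in range(n) for j in range(n))  # angular second moment
    idm = sum(p[i][j] / (1 + (i - j) ** 2) for i in range(n) for j in range(n))
    return contrast, asm, idm

# A perfectly uniform diagonal GLCM: zero contrast, maximal homogeneity (IDM = 1).
p_diag = [[1 / 3 if i == j else 0.0 for j in range(3)] for i in range(3)]
contrast, asm, idm = glcm_features(p_diag)
```

The diagonal example shows the limiting behavior the paper relies on: identical neighboring grey levels give zero contrast and an IDM of 1, so compression artifacts that spread mass off the diagonal raise contrast and lower IDM.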
]]>Technologies doi: 10.3390/technologies12020016
Authors: Su Myat Thwin Sharaf J. Malebary Anas W. Abulfaraj Hyun-Seok Park
Globally, breast cancer (BC) is considered a major cause of death among women. Therefore, researchers have used various machine and deep learning-based methods for its early and accurate detection using X-ray, MRI, and mammography image modalities. However, machine learning models require domain experts to select optimal features, obtain limited accuracy, and have a high false positive rate due to handcrafted feature extraction. Deep learning models overcome these limitations, but they require large amounts of training data and computation resources, and further improvement in model performance is needed. To this end, we employ a novel framework called the Ensemble-based Channel and Spatial Attention Network (ECS-A-Net) to automatically classify infected regions within BC images. The proposed framework consists of two phases: in the first phase, we apply different augmentation techniques to enhance the size of the input data, while the second phase includes an ensemble technique that leverages modified SE-ResNet50 and InceptionV3 in parallel as backbones for feature extraction, followed by Channel Attention (CA) and Spatial Attention (SA) modules in series for more dominant feature selection. To further validate ECS-A-Net, we conducted extensive experiments against several competitive state-of-the-art (SOTA) techniques on two benchmarks, DDSM and MIAS, where the proposed model achieved 96.50% accuracy on the DDSM dataset and 95.33% accuracy on the MIAS dataset. Additionally, the experimental results demonstrated that our network achieved better performance than other methods across various evaluation indicators, including accuracy, sensitivity, and specificity.
]]>Technologies doi: 10.3390/technologies12020015
Authors: Nikoleta Manakitsa George S. Maraslidis Lazaros Moysis George F. Fragulis
Machine vision, an interdisciplinary field that aims to replicate human visual perception in computers, has experienced rapid progress and significant contributions. This paper traces the origins of machine vision, from early image processing algorithms to its convergence with computer science, mathematics, and robotics, resulting in a distinct branch of artificial intelligence. The integration of machine learning techniques, particularly deep learning, has driven its growth and adoption in everyday devices. This study focuses on the objectives of computer vision systems: replicating human visual capabilities including recognition, comprehension, and interpretation. Notably, image classification, object detection, and image segmentation are crucial tasks requiring robust mathematical foundations. Despite the advancements, challenges persist, such as clarifying terminology related to artificial intelligence, machine learning, and deep learning. Precise definitions and interpretations are vital for establishing a solid research foundation. The evolution of machine vision reflects an ambitious journey to emulate human visual perception. Interdisciplinary collaboration and the integration of deep learning techniques have propelled remarkable advancements in emulating human behavior and perception. Through this research, the field of machine vision continues to shape the future of computer systems and artificial intelligence applications.
]]>Technologies doi: 10.3390/technologies12020014
Authors: Shwetabh Gupta Gururaj Parande Manoj Gupta
Magnesium and its composites have been used in various applications owing to their high specific strength properties and low density. However, the application is limited to room-temperature conditions owing to the lack of research available on the ability of magnesium alloys to perform in sub-zero conditions. The present study examined, for the first time, the effects of two cryogenic temperatures (−20 °C/253 K and −196 °C/77 K) on the physical, thermal, and mechanical properties of a Mg/2wt.%CeO2 nanocomposite. The materials were synthesized using the disintegrated melt deposition method followed by hot extrusion. The results revealed that the shallow cryogenically treated (refrigerated at −20 °C) samples display a reduction in porosity, lower ignition resistance, and similar microhardness, compressive yield strength, ultimate strength, and failure strain when compared to deep cryogenically treated samples in liquid nitrogen at −196 °C. Although deep cryogenically treated samples showed an overall edge, the extent of the increase in properties may not be justified, as samples exposed at −20 °C display very similar mechanical properties, thus reducing the overall cost of the cryogenic process. The results were compared with the data available in the open literature, and the mechanisms behind the improvement of the properties were evaluated.
]]>Technologies doi: 10.3390/technologies12020013
Authors: Pedro Moltó-Balado Silvia Reverté-Villarroya Victor Alonso-Barberán Cinta Monclús-Arasa Maria Teresa Balado-Albiol Josep Clua-Queralt Josep-Lluis Clua-Espuny
The increasing prevalence of atrial fibrillation (AF) and its association with Major Adverse Cardiovascular Events (MACE) presents challenges in early identification and treatment. Although existing risk factors, biomarkers, genetic variants, and imaging parameters predict MACE, emerging factors may be more decisive. Artificial intelligence and machine learning (ML) techniques offer a promising avenue for more effective AF evolution prediction. Five ML models were developed to obtain predictors of MACE in AF patients. Two-thirds of the data were used for training, employing diverse approaches and optimizing to minimize prediction errors, while the remaining third was reserved for testing and validation. AdaBoost emerged as the top-performing model (accuracy: 0.9999; recall: 1; F1 score: 0.9997). Noteworthy features influencing predictions included the Charlson Comorbidity Index (CCI), diabetes mellitus, cancer, the Wells scale, and CHA2DS2-VASc, with specific associations identified. Elevated MACE risk was observed, with a CCI score exceeding 2.67 ± 1.31 (p < 0.001), CHA2DS2-VASc score of 4.62 ± 1.02 (p < 0.001), and an intermediate-risk Wells scale classification. Overall, the AdaBoost model offers an alternative predictive approach to facilitate the early identification of MACE risk in the assessment of patients with AF.
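AdaBoost, the top-performing model above, works by reweighting training samples so each successive weak learner focuses on previously misclassified cases. A toy sketch with one-dimensional decision stumps illustrates the mechanism (this is our minimal version for exposition, not the off-the-shelf classifier the study used):

```python
# Toy AdaBoost with 1-D decision stumps; labels are in {-1, +1}.
# Illustrative only -- the study used a standard AdaBoost implementation.
import math

def stump_fit(x, y, w):
    """Best threshold/polarity stump under sample weights w."""
    best = (float("inf"), None, None)
    for thr in sorted(set(x)):
        for pol in (1, -1):
            pred = [pol if xi >= thr else -pol for xi in x]
            err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
            if err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost_fit(x, y, rounds=5):
    n = len(x)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, thr, pol = stump_fit(x, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * math.log((1 - err) / err)   # weak learner's vote weight
        model.append((alpha, thr, pol))
        # Upweight misclassified samples, downweight correct ones, renormalize.
        w = [wi * math.exp(-alpha * yi * (pol if xi >= thr else -pol))
             for wi, xi, yi in zip(w, x, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return model

def adaboost_predict(model, xi):
    s = sum(a * (pol if xi >= thr else -pol) for a, thr, pol in model)
    return 1 if s >= 0 else -1

x = [1, 2, 3, 8, 9, 10]
y = [-1, -1, -1, 1, 1, 1]
model = adaboost_fit(x, y, rounds=3)
preds = [adaboost_predict(model, xi) for xi in x]
```

On this separable toy data the ensemble classifies every point correctly, mirroring how the boosted model combines weak rules into a strong predictor.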
]]>Technologies doi: 10.3390/technologies12010012
Authors: Daniel Mateu-Gomez Francisco José Martínez-Peral Carlos Perez-Vidal
This article addresses the problem of automating a multi-arm pick-and-place robotic system. The objective is to optimize the execution time of a task simultaneously performed by multiple robots, sharing the same workspace, and determining the order of operations to be performed. Due to its ability to address decision-making problems of all kinds, the system is modeled under the mathematical framework of the Markov Decision Process (MDP). In this particular work, the model is adjusted to a deterministic, single-agent, and fully observable system, which allows for its comparison with other resolution methods such as graph search algorithms and Planning Domain Definition Language (PDDL). The proposed approach provides three advantages: it plans the trajectory to perform the task in minimum time; it considers how to avoid collisions between robots; and it automatically generates the robot code for any robot manufacturer and any initial objects’ positions in the workspace. The result meets the objectives and is a fast and robust system that can be safely employed in a production line.
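The task-ordering sub-problem described above can be made concrete with a brute-force baseline: enumerate candidate operation orders and keep the one with minimal total transition time. This is an illustrative sketch with hypothetical travel times, not the authors’ MDP formulation (which additionally handles collisions and code generation):

```python
# Brute-force ordering baseline for a small pick-and-place task set.
# Transition times below are hypothetical, for illustration only.
from itertools import permutations

def best_order(tasks, time_between):
    """Return the task order with minimal summed transition time."""
    best_seq, best_cost = None, float("inf")
    for seq in permutations(tasks):
        cost = sum(time_between[a][b] for a, b in zip(seq, seq[1:]))
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

# Hypothetical pairwise transition times between three tasks.
t = {"A": {"B": 2, "C": 9}, "B": {"A": 2, "C": 3}, "C": {"A": 9, "B": 3}}
order, cost = best_order(["A", "B", "C"], t)
```

Brute force is exponential in the number of tasks, which is exactly why the paper turns to an MDP model and compares it against graph search and PDDL planners for larger instances.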
]]>Technologies doi: 10.3390/technologies12010011
Authors: Asma Almusayli Tanveer Zia Emad-ul-Haq Qazi
In recent years, drones have become increasingly popular tools in criminal investigations, either as a means of committing crimes or as effective tools for gathering evidence and conducting surveillance. However, the increasing use of drones has also brought about new difficulties in the field of digital forensic investigation. This paper aims to contribute to the growing body of research on digital forensic investigations of drone accidents by proposing an innovative approach based on the use of digital twin technology to investigate drone accidents. The simulation is implemented as part of the digital twin solution using Robot Operating System (ROS version 2) and simulated environments such as Gazebo and Rviz, demonstrating the potential of this technology to improve investigation accuracy and efficiency. This research work can contribute to the development of new and innovative investigation techniques.
]]>Technologies doi: 10.3390/technologies12010010
Authors: Manish Varun Yadav Chandru Kumar R Swati Varun Yadav Tanweer Ali Jaume Anguera
This article introduces a miniaturized antenna for 5G-II band millimeter-wave communication. The antenna’s performance is meticulously examined through comprehensive simulations carried out using CST Microwave Studio, employing an FR-4 substrate with dimensions measuring 12 × 14 × 1.6 mm3. The proposed design exhibits exceptional qualities, featuring an impressive impedance bandwidth of 70.4% and a remarkable return loss of −35 dB. The operational frequency range of this antenna extends from 16.2 GHz to 33.8 GHz, featuring a central frequency of 25 GHz, positioning it effectively within the 5G-II Band. The antenna consistently maintains polar patterns throughout this spectrum, which guarantees dependable and efficient performance. It showcases a substantial gain of 3.85 dBi and an impressive efficiency rating of 82.9%. Renowned for its versatility, this antenna is well suited for a diverse range of applications, including but not limited to the Ka band, Ku band, 5G-II bands, and various other purposes in microwaves.
]]>Technologies doi: 10.3390/technologies12010009
Authors: Yuehan Zhu Tomohiro Fukuda Nobuyoshi Yabuki
In contemporary society, “Indoor Generation” is becoming increasingly prevalent, and spending long periods of time indoors affects well-being. Therefore, it is essential to research biophilic indoor environments and their impact on occupants. When it comes to existing building stocks, which hold significant social, economic, and environmental value, renovation should be considered before new construction. Providing swift feedback in the early stages of renovation can help stakeholders achieve consensus. Additionally, understanding proposed plans can greatly enhance the design of indoor environments. This paper presents a real-time system for architectural designers and stakeholders that integrates mixed reality (MR), diminished reality (DR), and generative adversarial networks (GANs). The system enables the generation of interior renovation drawings based on user preferences and designer styles via GANs. The system’s seamless integration of MR, DR, and GANs provides a unique and innovative approach to interior renovation design. MR and DR technologies then transform these 2D drawings into immersive experiences that help stakeholders evaluate and understand renovation proposals. In addition, we assess the quality of GAN-generated images using full-reference image quality assessment (FR-IQA) methods. The evaluation results indicate that most images demonstrate moderate quality. Almost all objects in the GAN-generated images can be identified by their names and purposes without any ambiguity or confusion. This demonstrates the system’s effectiveness in producing viable renovation visualizations. This research emphasizes the system’s role in enhancing feedback efficiency during renovation design, enabling stakeholders to fully evaluate and understand proposed renovations.
]]>Technologies doi: 10.3390/technologies12010008
Authors: Mohammed Mahmoud
Big Data analysis is one of the most contemporary areas of development and research in the present day [...]
]]>Technologies doi: 10.3390/technologies12010007
Authors: Jaime-Rodrigo González-Rodríguez Diana-Margarita Córdova-Esparza Juan Terven Julio-Alejandro Romero-González
People with hearing disabilities often face communication barriers when interacting with hearing individuals. To address this issue, this paper proposes a bidirectional Sign Language Translation System that aims to bridge the communication gap. Deep learning models such as recurrent neural networks (RNN), bidirectional RNN (BRNN), LSTM, GRU, and Transformers are compared to find the most accurate model for sign language recognition and translation. Keypoint detection using MediaPipe is employed to track and understand sign language gestures. The system features a user-friendly graphical interface with modes for translating between Mexican Sign Language (MSL) and Spanish in both directions. Users can input signs or text and obtain corresponding translations. Performance evaluation demonstrates high accuracy, with the BRNN model achieving 98.8% accuracy. The research emphasizes the importance of hand features in sign language recognition. Future developments could focus on enhancing accessibility and expanding the system to support other sign languages. This Sign Language Translation System offers a promising solution to improve communication accessibility and foster inclusivity for individuals with hearing disabilities.
]]>Technologies doi: 10.3390/technologies12010006
Authors: Fu-Cheng Wang Hsiao-Tzu Huang
This paper proposes extended-window algorithms for model prediction and applies them to optimize hybrid power systems. We consider a hybrid power system comprising solar panels, batteries, a fuel cell, and a chemical hydrogen generation system. The proposed algorithms enable the periodic updating of prediction models and corresponding changes in system parts and power management based on the accumulated data. We first develop a hybrid power model to evaluate system responses under different conditions. We then build prediction models using five artificial intelligence algorithms. Among them, the light gradient boosting machine and extreme gradient boosting methods achieve the highest accuracies for predicting solar radiation and load responses, respectively. Therefore, we apply these two models to forecast solar and load responses. Third, we introduce extended-window algorithms and investigate the effects of window sizes and replacement costs on system performance. The results show that the optimal window size is one week, and the system cost is 13.57% lower than the cost of the system that does not use the extended-window algorithms. The proposed method also tends to make fewer component replacements when the replacement cost increases. Finally, we design experiments to demonstrate the feasibility and effectiveness of systems using extended-window model prediction.
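The extended-window idea above, periodically refitting the prediction model on all data accumulated so far, can be sketched with a trivial stand-in model. The mean-based “model” below is ours for illustration; the paper uses gradient-boosting predictors for solar radiation and load:

```python
# Sketch of extended-window model updating: at each window boundary, refit on
# the full accumulated history and forecast the next window.
# The "model" here is just the historical mean (illustrative stand-in).
def extended_window_forecasts(series, window_size):
    forecasts = []
    for start in range(window_size, len(series), window_size):
        history = series[:start]                 # all accumulated data so far
        model = sum(history) / len(history)      # "training": mean of history
        forecasts.extend([model] * min(window_size, len(series) - start))
    return forecasts

load = [10, 12, 11, 13, 20, 22, 21, 23]
preds = extended_window_forecasts(load, window_size=4)
```

The window size trades off responsiveness against retraining cost, which is the trade-off the paper tunes to arrive at its one-week optimum.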
]]>Technologies doi: 10.3390/technologies12010005
Authors: Pau Figuera Pablo García Bringas
This manuscript provides a comprehensive exploration of Probabilistic latent semantic analysis (PLSA), highlighting its strengths, drawbacks, and challenges. The PLSA, originally a tool for information retrieval, provides a probabilistic sense for a table of co-occurrences as a mixture of multinomial distributions spanned over a latent class variable and adjusted with the expectation–maximization algorithm. The distributional assumptions and the iterative nature lead to a rigid model, dividing enthusiasts and detractors. Those drawbacks have led to several reformulations: the extension of the method to normal data distributions and a non-parametric formulation obtained with the help of Non-negative matrix factorization (NMF) techniques. Furthermore, the combination of theoretical studies and programming techniques alleviates the computational problem, thus making the potential of the method explicit: its relation with the Singular value decomposition (SVD), which means that PLSA can be used to satisfactorily support other techniques, such as the construction of Fisher kernels, the probabilistic interpretation of Principal component analysis (PCA), Transfer learning (TL), and the training of neural networks, among others. We also present open questions as a practical and theoretical research window.
]]>Technologies doi: 10.3390/technologies12010004
Authors: Prabu Pachiyannan Musleh Alsulami Deafallah Alsadie Abdul Khader Jilani Saudagar Mohammed AlKhathami Ramesh Chandra Poonia
Congenital heart disease (CHD) represents a multifaceted medical condition that requires early detection and diagnosis for effective management, given its diverse presentations and subtle symptoms that manifest from birth. This research article introduces a groundbreaking healthcare application, the Machine Learning-based Congenital Heart Disease Prediction Method (ML-CHDPM), tailored to address these challenges and expedite the timely identification and classification of CHD in pregnant women. The ML-CHDPM model leverages state-of-the-art machine learning techniques to categorize CHD cases, taking into account pertinent clinical and demographic factors. Trained on a comprehensive dataset, the model captures intricate patterns and relationships, resulting in precise predictions and classifications. The evaluation of the model’s performance encompasses sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve. Remarkably, the findings underscore the ML-CHDPM’s superiority across six pivotal metrics: accuracy, precision, recall, specificity, false positive rate (FPR), and false negative rate (FNR). The method achieves an average accuracy rate of 94.28%, precision of 87.54%, recall rate of 96.25%, specificity rate of 91.74%, FPR of 8.26%, and FNR of 3.75%. These outcomes distinctly demonstrate the ML-CHDPM’s effectiveness in reliably predicting and classifying CHD cases. This research marks a significant stride toward early detection and diagnosis, harnessing advanced machine learning techniques within the realm of ECG signal processing, specifically tailored to pregnant women.
]]>Technologies doi: 10.3390/technologies12010003
Authors: Gabriel Antonesi Alexandru Rancea Tudor Cioara Ionut Anghel
Cognitive decline represents a significant public health concern due to its severe implications on memory and general health. Early detection is crucial to initiate timely interventions and improve patient outcomes. However, traditional diagnosis methods often rely on personal interpretations or biases, may not detect the early stages of cognitive decline, or involve invasive screening procedures; thus, there is a growing interest in developing non-invasive methods that also benefit from technological advances. Wearable devices and Internet of Things sensors can monitor various aspects of daily life together with health parameters and can provide valuable data regarding people’s behavior. In this paper, we propose a technical solution that can be useful for potentially supporting cognitive decline assessment in early stages, by employing advanced machine learning techniques for detecting higher activity fragmentation based on daily activity monitoring using wearable devices. Our approach also considers data coming from wellbeing assessment questionnaires that can offer other important insights about a monitored person. We use deep neural network models to capture complex, non-linear relationships in the daily activities data and graph learning for the structural wellbeing information in the questionnaire answers. The proposed solution is evaluated in a simulated environment on a large synthetic dataset, the results showing that our approach can offer an alternative as a support for early detection of cognitive decline during patient-assessment processes.
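One common way to quantify the activity fragmentation mentioned above is the active-to-rest transition probability over a binary activity trace. The abstract does not specify the paper’s exact fragmentation features, so the measure and the toy traces below are illustrative only:

```python
# Illustrative fragmentation measure: fraction of active minutes that are
# immediately followed by rest (active-to-sedentary transition probability).
def active_to_rest_probability(trace):
    active = sum(1 for a, _ in zip(trace, trace[1:]) if a == 1)
    transitions = sum(1 for a, b in zip(trace, trace[1:]) if a == 1 and b == 0)
    return transitions / active if active else 0.0

# Two traces with identical total activity but different fragmentation.
consolidated = [1, 1, 1, 1, 0, 0, 0, 0]
fragmented = [1, 0, 1, 0, 1, 0, 1, 0]
p_low = active_to_rest_probability(consolidated)
p_high = active_to_rest_probability(fragmented)
```

Both traces contain four active minutes, yet the fragmented one has a transition probability of 1.0 versus 0.25 for the consolidated one, which is the kind of signal a detector of higher activity fragmentation would pick up.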
]]>Technologies doi: 10.3390/technologies12010002
Authors: Smera Premkumar J. Anitha Daniela Danciulescu D. Jude Hemanth
Heart rate estimation from face videos is an emerging technology that offers numerous potential applications in healthcare and human–computer interaction. However, most of the existing approaches often overlook the importance of long-range spatiotemporal dependencies, which are essential for robust heart rate prediction. Additionally, they involve extensive pre-processing steps to enhance the prediction accuracy, resulting in high computational complexity. In this paper, we propose an innovative solution called LGTransPPG. This end-to-end transformer-based framework eliminates the need for pre-processing steps while achieving improved efficiency and accuracy. LGTransPPG incorporates local and global aggregation techniques to capture fine-grained facial features and contextual information. By leveraging the power of transformers, our framework can effectively model long-range dependencies and temporal dynamics, enhancing the heart rate prediction process. The proposed approach is evaluated on three publicly available datasets, demonstrating its robustness and generalizability. Furthermore, we achieved a high Pearson correlation coefficient (PCC) value of 0.88, indicating strong agreement between the predicted and actual heart rate values.
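The reported PCC of 0.88 uses the standard Pearson formula. A minimal pure-Python version, applied to illustrative heart-rate values (not the paper’s data):

```python
# Pearson correlation coefficient between predicted and actual heart rates.
# The sample values are illustrative, not from the paper.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

actual_hr = [62, 70, 75, 80, 90]
predicted_hr = [60, 72, 74, 83, 88]
r = pearson(actual_hr, predicted_hr)
```

A value near 1 indicates the predicted series tracks the actual series closely, which is the sense in which the paper’s 0.88 signals good agreement.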
]]>Technologies doi: 10.3390/technologies12010001
Authors: Igor Lebedev Anastasia Uvarova Natalia Menshutina
Information-analytical software has been developed for creating digital models of the structures of porous materials. The software allows the user to select a model that accurately reproduces the structures of porous materials—aerogels—creating a digital model from which their properties can be predicted. In addition, the software contains models for calculating various properties of aerogels based on their structure, such as pore size distribution and mechanical properties. Models have been implemented that allow the description of various processes in porous structures—hydrodynamics of multicomponent systems, heat and mass transfer processes, dissolution, sorption, and desorption. With the models implemented in this software, various digital models for different types of aerogels can be developed. As a comparison parameter, pore size distribution is chosen. Deviation of the calculated pore size distribution curves from the experimental ones does not exceed 15%, which indicates that the obtained digital model corresponds to the experimental sample. The software contains both existing models that are used for porous structure modeling and original models that were developed for the different studied aerogels and processes, such as the dissolution of active pharmaceutical ingredients and mass transport in porous media.
]]>Technologies doi: 10.3390/technologies11060185
Authors: Nikola Anđelić Sandi Baressi Šegota
The study addresses the formidable challenge of calculating atomic coordinates for carbon nanotubes (CNTs) using density functional theory (DFT), a process that can take days. To tackle this issue, the research leverages the Genetic Programming Symbolic Regression (GPSR) method on a publicly available dataset. The primary aim is to assess if the resulting Mathematical Equations (MEs) from GPSR can accurately estimate calculated atomic coordinates obtained through DFT. Given the numerous hyperparameters in GPSR, a Random Hyperparameter Value Search (RHVS) method is devised to pinpoint the optimal combination of hyperparameter values, maximizing estimation accuracy. Two distinct approaches are considered. The first involves applying GPSR to estimate calculated coordinates (uc, vc, wc) using all input variables (initial atomic coordinates u, v, w, and integers n, m specifying the chiral vector). The second approach applies GPSR to estimate each calculated atomic coordinate using integers n and m alongside the corresponding initial atomic coordinates. This results in the creation of six different dataset variations. The GPSR algorithm undergoes training via a 5-fold cross-validation process. The evaluation metrics include the coefficient of determination (R2), mean absolute error (MAE), root mean squared error (RMSE), and the depth and length of generated MEs. The findings from this approach demonstrate that GPSR can effectively estimate CNT atomic coordinates with high accuracy, as indicated by an impressive R2≈1.0. This study not only contributes to the advancement of accurate estimation techniques for atomic coordinates but also introduces a systematic approach for optimizing hyperparameters in GPSR, showcasing its potential for broader applications in materials science and computational chemistry.
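The evaluation metrics named above (R2, MAE, RMSE) have standard definitions; a minimal pure-Python sketch with illustrative values (not the study’s CNT coordinate data):

```python
# Standard regression metrics: coefficient of determination (R2),
# mean absolute error (MAE), and root mean squared error (RMSE).
import math

def regression_metrics(y_true, y_pred):
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(ss_res / n)
    return r2, mae, rmse

r2, mae, rmse = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.0, 4.2])
```

R2 approaches 1 as residuals vanish relative to the target variance, which is why the study’s reported R2≈1.0 indicates near-perfect coordinate estimation.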
]]>Technologies doi: 10.3390/technologies11060183
Authors: Siddhant Jain Joseph Geraci Harry E. Ruda
The field of computer vision has long grappled with the challenging task of image synthesis, which entails the creation of novel high-fidelity images. This task is underscored by the Generative Learning Trilemma, which posits that it is not possible for any image synthesis model to simultaneously excel at high-quality sampling, achieve mode convergence with diverse sample representation, and perform rapid sampling. In this paper, we explore the potential of Quantum Boltzmann Machines (QBMs) for image synthesis, leveraging the D-Wave 2000Q quantum annealer. We undertake a comprehensive performance assessment of QBMs in comparison to established generative models in the field: Restricted Boltzmann Machines (RBMs), Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Denoising Diffusion Probabilistic Models (DDPMs). Our evaluation is grounded in widely recognized scoring metrics, including the Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Scores. The results of our study indicate that QBMs do not significantly outperform the conventional models in terms of the three evaluative criteria. Moreover, QBMs have not demonstrated the capability to overcome the challenges outlined in the Trilemma of Generative Learning. Through our investigation, we contribute to the understanding of quantum computing’s role in generative learning and identify critical areas for future research to enhance the capabilities of image synthesis models.
]]>Technologies doi: 10.3390/technologies11060184
Authors: Victor A. Kovtunenko
Loss of electrochemical surface area in proton-exchange membranes is of great practical importance, since membrane degradation strongly affects the durability and lifetime of fuel cells. In this paper, the electrokinetic model developed by Holby and Morgan is considered. The paper describes degradation mechanisms in the membrane catalyst, represented by platinum dissolution, platinum diffusion, and platinum oxide formation. A one-dimensional model is governed by nonlinear reaction–diffusion equations posed in a cathodic catalyst layer, using Butler–Volmer relationships for the reaction rates. The governing system is endowed with initial conditions, a mixed no-flux boundary condition at the interface with the gas diffusion layer, and a perfectly absorbing condition at the membrane boundary. In cyclic voltammetry tests, a non-symmetric square waveform is applied for the electric potential difference between 0.6 and 0.9 V, held for 10 and 30 s, respectively, according to the protocol of the European Fuel Cell and Hydrogen Joint Undertaking. With mitigation strategies in mind, the impact of cycling operating conditions and model parameters on the loss rate of active area is investigated. The global behavior with respect to variation of the parameters is examined using the method of sensitivity analysis. Identifying feasible and unfeasible values helps to determine the range of test parameters employed in the model. Comprehensive results of numerical simulation tests are presented and discussed.
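The Butler–Volmer relationship referred to above ties each reaction rate to the electrode overpotential; a generic statement (symbols are the standard ones, not necessarily the paper's exact parametrization) is:

```latex
j = j_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right)
  - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right]
```

where $j_0$ is the exchange current density, $\alpha_a$ and $\alpha_c$ are the anodic and cathodic transfer coefficients, $\eta$ is the overpotential, $F$ is the Faraday constant, $R$ is the gas constant, and $T$ is the temperature.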
]]>Technologies doi: 10.3390/technologies11060182
Authors: Tassadaq Nawaz Ramasamy Srinivasaga Naidu
Cognitive radio is a promising technology that emerged as a potential solution to the spectrum shortage problem by enabling opportunistic spectrum access. In many cases, cognitive radios are required to sense a wide range of frequencies to locate spectrum white spaces; hence, wideband spectrum sensing comes into play, which is also an essential step in future wireless systems to boost throughput. Cognitive radios are intelligent devices and can therefore be used to develop modern jamming and anti-jamming solutions. To this end, our article introduces a novel AI-enabled, energy-efficient, and robust technique for wideband radio spectrum characterization. Our work considers a wideband radio spectrum made up of numerous narrowband (NB) signals, which could be normal communications or signals disrupted by a stealthy jammer. First, the receiver recovers the wideband from significantly sub-Nyquist-rate samples by exploiting a compressive sensing technique to reduce the overhead caused by the highly complex analog-to-digital conversion process. Once the wideband is recovered, each available narrowband signal is passed to a cyclostationary feature detector that computes the corresponding spectral correlation function and extracts feature vectors in the form of cycle and frequency profiles. The profiles are then concatenated and given as an input feature set to an artificial neural network, which in turn classifies each NB signal as either legitimate communication with a specific modulation or a signal disrupted by a stealthy jammer. The results show that a classification accuracy of about 0.99 is achieved. Moreover, the algorithm exhibits significantly higher performance in comparison to recently reported spectrum classification techniques. The proposed technique can be used to design anti-jamming systems for military communications.
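A minimal sketch of the cyclostationary feature idea: the estimator below computes a cyclic autocorrelation, the time-domain counterpart of the spectral correlation function used by the detector. It is illustrative only, not the authors' implementation; a tone at normalized frequency f0 produces a strong cyclic feature at cycle frequency 2·f0, which is the kind of signature the feature vectors capture.

```python
import cmath
import math

def cyclic_autocorrelation(x, alpha, tau=0):
    """Estimate R_x^alpha(tau) = (1/N) * sum_n x[n+tau] * x[n] * e^{-j 2 pi alpha n}.

    A sinusoid at normalized frequency f0 exhibits a strong cyclic feature
    at cycle frequency alpha = 2*f0; noise-like signals do not.
    """
    n_samples = len(x) - tau
    acc = 0j
    for n in range(n_samples):
        acc += x[n + tau] * x[n] * cmath.exp(-2j * math.pi * alpha * n)
    return acc / n_samples

# Example: a tone at f0 = 0.05 cycles/sample over 200 samples.
signal = [math.cos(2 * math.pi * 0.05 * n) for n in range(200)]
```

Scanning `alpha` over a grid and stacking the magnitudes yields a simple cycle-frequency profile of the kind fed to the classifier.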
]]>Technologies doi: 10.3390/technologies11060181
Authors: Michael Johanes Amirin Adli Bin Gombari Manoj Gupta
A magnesium-based multi-component alloy (MCA), Mg70Al18Zn6Ca4Y2, was successfully synthesized using the Turning-Induced Deformation (TID) method, with promising improvements in multiple properties, such as damping capability, hardness (11% to 34% increase), and strength (5% to 15% increase), over its conventional cast-and-extruded equivalent, which has already been established as a high-performance MCA exhibiting mechanical properties superior to other Mg-based materials while retaining acceptable ductility. This new TID-based MCA comes at only a slight compromise in ductility, ignition resistance, and corrosion resistance, as previously observed in other TID-based materials. In addition, the general microstructure and secondary phases of this MCA were retained even when using the TID method, with only minimal porosity (<1%) incurred during the process. Furthermore, the ignition temperature of the TID Mg70Al18Zn6Ca4Y2 remained very high at 915 °C, positioning it as a potential Mg-based material suitable for aerospace applications requiring high ignition resistance. This amounts to a successful application of TID to yet another class of Mg-based materials and opens the door to future explorations of such materials.
]]>Technologies doi: 10.3390/technologies11060180
Authors: Abbas Tamadon Arvand Baghestani Mohammad Ebrahim Bajgholi
The authors wish to make the following correction to this paper [...]
]]>Technologies doi: 10.3390/technologies11060179
Authors: Cormac D. Fay Liang Wu
We present an advanced, low-cost 3D printing system capable of fabricating intricate silicone structures using commercially available off-the-shelf materials. Our system uses a custom-designed, motorised syringe pump with a driving lead screw and precise control of material extrusion to accommodate the high viscosity of the silicone printing ink, which is composed of polydimethylsiloxane (PDMS), a diluent, and a photo-initiator (LAP). We modified an open-source desktop 3D printer to mount the syringe pump and programmed it to deposit controlled, intricate patterns in a layer-by-layer fashion. To ensure the structural integrity of the printed objects, we introduced an intra-layer curing approach that fused the deposited layers using a custom-built UV curing system. Our experiments demonstrated the successful fabrication of silicone structures at different infill percentages, with excellent resolution and mechanical properties. Our low-cost solution (costing less than USD 1000 and requiring no specialised facilities or equipment) shows great promise for practical applications in areas such as microfluidics, prosthetics, and biomedical engineering, based on our initial findings of 300 μm wide channels (with excellent scope for smaller channels where desirable) and tunable structural properties. Our work represents a significant advance in low-cost desktop 3D printing capabilities, and we anticipate that it could have a broad impact on the field by providing these capabilities to scholars without the means to purchase expensive fabrication systems.
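The relation between stepper motion and extruded volume in a lead-screw syringe pump of this kind is simple to state; the screw lead, step count, and syringe diameter below are illustrative assumptions, not the authors' hardware values.

```python
import math

def volume_per_step_uL(lead_mm, steps_per_rev, syringe_id_mm):
    """Extruded volume per motor step for a lead-screw-driven syringe pump.

    Plunger travel per step = lead / steps_per_rev;
    volume = travel * plunger cross-sectional area (1 mm^3 == 1 uL).
    """
    travel_mm = lead_mm / steps_per_rev
    area_mm2 = math.pi * (syringe_id_mm / 2) ** 2
    return travel_mm * area_mm2

# Hypothetical setup: 2 mm lead screw, 200 steps/rev, 10 mm inner-diameter syringe.
dose = volume_per_step_uL(2.0, 200, 10.0)
```

With these assumed values each full step extrudes roughly 0.79 µL, which is why fine microstepping and a stiff drive train matter for high-viscosity PDMS ink.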
]]>Technologies doi: 10.3390/technologies11060178
Authors: Giorgio Cascelli Cataldo Guaragnella Raffaele Nutricato Khalid Tijani Alberto Morea Nicolò Ricciardi Davide Oscar Nitti
Synthetic Aperture Radar (SAR) is a well-established 2D imaging technique employed as a consolidated practice in several oil spill monitoring services. In this scenario, onboard detection undoubtedly represents an interesting solution to reduce the latency of these services, also enabling the transmission of alert signals to the ground segment with a notable reduction in the required downlink bandwidth. However, the reduced computational capabilities available onboard require alternative approaches to the standard processing flows. In this work, we propose a feasibility study of oil spill detection applied directly to raw data, a solution not sufficiently addressed in the literature that has the advantage of not requiring the execution of the focusing step. The study concentrates only on detection accuracy, while computational cost analysis is outside the scope of this work. More specifically, we propose a complete framework based on a Residual Neural Network (ResNet), including a simple and automatic simulation method for generating the training data set. The final tests with real ERS data demonstrate the feasibility of the proposed approach, showing that the trained ResNet correctly detects ships with a Signal-to-Clutter Ratio (SCR) > 10.3 dB.
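The detection threshold is stated in terms of the Signal-to-Clutter Ratio; as a quick reference, the standard dB definition (not taken from the paper's processing chain) is:

```python
import math

def scr_db(signal_power, clutter_power):
    """Signal-to-Clutter Ratio in decibels: 10 * log10(P_signal / P_clutter)."""
    return 10.0 * math.log10(signal_power / clutter_power)

# An SCR of 10.3 dB corresponds to a target roughly 10.7 times stronger
# than the surrounding clutter.
ratio_at_threshold = 10 ** (10.3 / 10)
```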
]]>Technologies doi: 10.3390/technologies11060177
Authors: Javeed Shaikh-Mohammed Yousef Alharbi Abdulrahman Alqahtani
To the authors’ knowledge, currently, there is no review covering the different technologies applied to opening manual doors. Therefore, this review presents a summary of the various technologies available on the market as well as those under research and development for opening manual doors. Four subtopics—doorknob accessories, wheelchair-mounted door-opening accessories, door-opening robots, and door-opening drones—were used to group the various technologies for manually opening doors. It is evident that opening doors is a difficult process, and there are different ways to solve this problem in terms of the technology used and the cost of the end product. The search for an affordable assistive technology for opening manual doors is ongoing. This work is an attempt to provide wheelchair users and their healthcare providers with a one-stop source for door-opening technologies. At least one of these door-opening solutions could prove beneficial to the elderly and some wheelchair users for increased independence. The ideal option would depend on an individual’s needs and capabilities, and occupational therapists could assess and recommend the right solutions.
]]>Technologies doi: 10.3390/technologies11060176
Authors: Andrzej Szymon Borkowski
In the scientific community, it is difficult to find a consensus on defining BIM. Just as the acronym BIM has developed in different ways, it is also understood in different ways, and depending on that understanding, different definitions emerge. It is defined differently by organizations and standards, and differently again by academics. Many years of academic discourse on the subject have failed to produce a solution. Even though the acronym BIM has already done its work for the construction industry, it still stirs debate: there is no clear definition, as the view of BIM varies from one perspective to another. This article attempts to sort out the definitions put forward so far by important organizations and key academics. The review is based on a deep literature study that strives to be inclusive and consistent. The question remains open: do we need a single, correct definition of BIM? The aim of this article is to try to answer this question, open up a renewed discussion, and arrive at a satisfactory consensus. BIM can be identified with an activity, a product, or a system. This article breaks down the definitions of BIM, identifies six key attributes of BIM, presents the evolution of the understanding of BIM, and proposes new definitions in both a narrow and a broad approach.
]]>Technologies doi: 10.3390/technologies11060175
Authors: Che-Min Cheng Yu-Hsin Chen Sheng-Yi Lin Sheng-Der Chao Shun-Feng Tsai
This study investigated the shielding effectiveness (SE) of glass materials with conductive coatings by establishing a 3000 × 3000 × 3000 mm electromagnetic pulse (EMP)-shielded room according to the EMP shielding requirements in the US military standard MIL-STD-188-125-1. The EMP SE of conductive-coated glass samples was measured and verified under broadband EMP conditions of 10 kHz∼1 GHz. The conductive thin-film coating on the glass was made by mixing conductive materials, including In2O3, SnO2, Ta2O5, NbO, SiO2, TiO2, and Al2O3, at different ratios. The mixed solutions were then coated onto the glass targets to facilitate conductive continuity between the conductive oxides and the shielding metal structure. The glass samples had dimensions of 1000 × 600 mm, electrolytic conductivity σ = 4.0064 × 10³∼4.7438 × 10³ S/cm, 74∼77% transmittance, and 6.4∼6.8 Ω/□ film resistance. The experimental results indicated that the glass had an SE of 35∼40 dB under 1 GHz EMP, satisfying the US National Coordinating Center for Communications’ Level 3 shielding protection requirement of at least 30 dB. The glass attenuated the energy density by more than 1000 times, which is equivalent to shielding over 99.9% of the EMP energy. Accordingly, the glass materials can be used as high-transmittance conductive glass for windows of automobiles, vessels, and aircraft to protect against EMPs.
]]>Technologies doi: 10.3390/technologies11060174
Authors: Daniele Marletta Alessandro Midolo Emiliano Tramontana
The detection of photovoltaic panels from images is an important field, as it enables forecasting and planning of green energy production by assessing the level of energy autonomy of communities. Many existing approaches for detecting photovoltaic panels are based on machine learning; however, they require large annotated datasets and extensive training, and the results are not always accurate or explainable. This paper proposes an automatic approach that detects photovoltaic panels whose colours conform to a properly formed, significant range of colours extracted according to the given light-exposure conditions in the analysed images. The significant range of colours was automatically formed from an annotated dataset of images and consisted of the most frequent panel colours differing from the colours of the surrounding parts. These colours were then used to detect panels in other images by analysing panel colours and computing the pixel density under comparable levels of light. The results produced by our approach were more precise than others in the previous literature, as our tool accurately reveals the contours of panels notwithstanding their shape or the colours of surrounding objects and the environment.
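The colour-range test at the core of such an approach can be sketched as follows; the RGB ranges and the density computation here are hypothetical placeholders for illustration, not the authors' learned values.

```python
def in_colour_range(pixel, ranges):
    """True if an (r, g, b) pixel falls inside any of the significant colour ranges."""
    return any(all(lo <= c <= hi for c, (lo, hi) in zip(pixel, rng))
               for rng in ranges)

def panel_pixel_density(pixels, ranges):
    """Fraction of pixels matching the significant colour ranges."""
    hits = sum(1 for p in pixels if in_colour_range(p, ranges))
    return hits / len(pixels)

# Hypothetical "dark blue panel" range for one light-exposure condition:
# each entry gives (low, high) bounds for the R, G, and B channels.
PANEL_RANGES = [((0, 60), (0, 80), (60, 160))]
```

A region whose matching-pixel density exceeds a threshold would be treated as a candidate panel, with separate ranges formed per light-exposure condition.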
]]>Technologies doi: 10.3390/technologies11060173
Authors: Sadeeshvara Silva Thotabaddadurage
The discovery of the transient-surge-withstanding capability of electrochemical double-layer capacitors (EDLCs) led to the development of a unique, commercially beneficial circuit topology known as the supercapacitor transient suppressor (STS). Despite its low component count, the new design includes a transient-absorbing magnetic core, which takes the form of a coupled inductor placed between the AC-mains- and load-side varistors. With an introduction to the structural features of metal oxide varistors (MOVs), gas tubes, thyristors, and EDLCs, this research presents a frequency (s)-domain analysis of an STS circuit to accurately model the surge propagation through its coupled inductor. Transient energy distribution trends among STS components are estimated in this paper, with an emphasis on the peak energies absorbed and dissipated by the various inductive, capacitive, and resistive circuit elements. Moreover, this study presents STS transient-mode test waveforms validated by a standard lightning surge simulator, with supporting simulation plots based on LTspice numerical techniques. Both experimental and simulation results are consistent with the analytical findings, showing that 90% of the peak transient propagates through the primary coil, whereas only 10% is coupled into the secondary coil of the coupled inductor. In addition, it is shown that the two STS MOVs dissipate over 50% of the transient energy for a standard 6 kV/3 kA combination surge, while the magnetic core absorbs over 20% of the energy. All test procedures conducted during this research adhere to the IEEE C62.41/IEC 61000-4-5 standards.
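The coupled inductor at the heart of the topology obeys the standard mutual-inductance relations, which in the s-domain (assuming zero initial conditions) read:

```latex
\begin{aligned}
V_1(s) &= s L_1\, I_1(s) + s M\, I_2(s),\\
V_2(s) &= s M\, I_1(s) + s L_2\, I_2(s),
\qquad M = k\sqrt{L_1 L_2},
\end{aligned}
```

where $L_1$, $L_2$ are the self-inductances, $M$ the mutual inductance, and $k$ the coupling coefficient; the primary/secondary energy split reported above follows from how the surge divides across these coupled branches.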
]]>Technologies doi: 10.3390/technologies11060172
Authors: Francesca Fiani Samuele Russo Christian Napoli
This work presents a preliminary study that proposes virtual interfaces for remote psychotherapy and psychology practice. The study aimed to verify the efficacy of such approaches in obtaining results comparable to in-presence psychotherapy, where the therapist is physically present in the room. In particular, we implemented several joint machine-learning techniques for distance detection, camera calibration, and eye tracking, assembled to create a full virtual environment for the execution of a psychological protocol for a self-induced mindfulness meditative state. Notably, such a protocol is also applicable to the desensitization phase of EMDR therapy. This preliminary study has shown that, compared to a simple control task such as filling in a questionnaire, the application of the mindfulness protocol in a fully virtual setting greatly improves concentration and lowers stress for the subjects on which it was tested, thereby demonstrating the efficacy of a remote approach compared to an in-presence one. This opens up the possibility of deepening the study to create a fully working interface applicable in various on-field psychotherapy settings where the presence of the therapist cannot always be guaranteed.
]]>Technologies doi: 10.3390/technologies11060171
Authors: Luis A. Avila-Sánchez Carlos Sánchez-López Rocío Ochoa-Montiel Fredy Montalvo-Galicia Luis A. Sánchez-Gaspariano Carlos Hernández-Mejía Hugo G. González-Hernández
Advances in the development of collision-free path planning algorithms are needed not only to solve mazes with robotic systems, but also for modern product transportation, green logistics systems, and the planning of merchandise deliveries inside or outside a factory. The challenge grows as the structural complexity of the task increases. This paper deals with the development of a novel methodology for solving mazes with a mobile robot using image processing techniques and graph theory. The novelty is that the mobile robot can find the shortest path from a start point to an end point in irregular mazes with abundant irregular obstacles, a situation that is not far from reality. Maze information is acquired from an image and, depending on the size of the mobile robot, a grid of nodes with the same dimensions as the maze is built. Another contribution of this paper is that the size of the maze can be scaled from 1 m × 1 m to 66 m × 66 m, maintaining the essence of the proposed collision-free path planning methodology. Afterwards, graph theory is used to find the shortest path within the grid of nodes, reduced by the elimination of those nodes absorbed by the irregular obstacles. To prevent the mobile robot from travelling through nodes very close to obstacles and borders, which would result in a collision, the image of each obstacle and border is dilated, taking into account the size of the mobile robot. The methodology was validated in two case studies with a mobile robot in different mazes. We emphasize that the maze solution is found in a single computational step, from the maze image as input to the generation of the Path vector. Experimental results show the usefulness of the proposed methodology, which can be used in applications such as intelligent traffic control, military systems, agriculture, and so on.
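A minimal sketch of the grid-and-graph step: build a node grid, drop nodes absorbed by (dilated) obstacles, and search the remaining graph for the shortest path. This is an illustrative reduction of the methodology using breadth-first search on a 4-connected grid, not the authors' code; the dilation radius here is a single cell, standing in for the robot-size margin.

```python
from collections import deque

def dilate(obstacles, width, height):
    """Grow each obstacle cell into its 8-neighbourhood (robot-size safety margin)."""
    grown = set()
    for (x, y) in obstacles:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    grown.add((nx, ny))
    return grown

def shortest_path(width, height, blocked, start, goal):
    """Breadth-first search over the free grid; returns the node list or None."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) not in blocked and (nx, ny) not in prev):
                prev[(nx, ny)] = node
                queue.append((nx, ny))
    return None
```

In the full methodology the `blocked` set would come from the processed maze image after obstacle dilation, and the returned node list corresponds to the Path vector.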
]]>Technologies doi: 10.3390/technologies11060170
Authors: Rodrigo Antunes Martim Lima Aguiar Pedro Dinis Gaspar
This study presents an innovative pedagogical approach aimed at enhancing the teaching of robotics within the broader context of STEM (science, technology, engineering, and mathematics) education across diverse academic levels. The integration of mobile robotics kits into a dynamic STEM-focused curriculum offers students an immersive and hands-on learning experience, fostering programming skills, advanced problem-solving, critical thinking, and spatial awareness. The motivation behind this research lies in improving the effectiveness of robotics education by addressing existing gaps in current strategies. It aims to better prepare students for this rapidly evolving field’s dynamic challenges and opportunities. To achieve this, detailed protocols were formulated that not only facilitate student learning but also cater to teacher training and involvement. These protocols encompass code documentation and examples, providing tangible representations of the practical outcomes of the course. In addition to the presented curriculum, this paper introduces the developed methodology that strategically leverages 3D-printing technology. The primary focus of this approach is to create captivating add-ons and establish a versatile workspace, actively promoting heightened engagement and facilitating the acquisition of knowledge among students. The research involves the development of tailored laboratory protocols suited to various academic levels, employing a systematic methodology aimed at deepening students’ comprehension of STEM concepts. Furthermore, an adaptable infrastructure for laboratory protocols and in-class testing was developed. The efficacy of this teaching/learning methodology is evaluated through student surveys, ensuring its continuous improvement. These protocols are to be integrated into both the robotics courses and teacher-training initiatives. This study aims to contribute to the field by using a dynamic STEM-driven approach based on mobile robotics. 
It outlines a strategic vision for better preparing students and educators in the ever-evolving landscape of robotics education demanded by Industry 4.0 technologies.
]]>Technologies doi: 10.3390/technologies11060169
Authors: Cristóbal Araya Francisco J. Peña Ariel Norambuena Bastián Castorene Patricio Vargas
We studied the performance of a quantum magnetic Stirling cycle whose working substance is composed of two entangled antiferromagnetic qubits (coupling J) under the influence of an external magnetic field (Bz) and a uniaxial anisotropy field (K) along the total spin in the y-direction. The efficiency and work were calculated as functions of Bz for different values of the anisotropy constant K, given the hot and cold reservoir temperatures. The anisotropy is shown to extend the region of the external magnetic field in which the Stirling cycle is more efficient compared to the ideal case.
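For context, the work output of such a cycle follows from the equilibrium free energy $F = -k_B T \ln Z$: in a quantum Stirling cycle the two isothermal strokes contribute the logarithmic terms below (a standard textbook form; the endpoint labels A–D and the notation are assumptions for illustration, with $Z$ the two-qubit partition function evaluated at the corresponding field values):

```latex
W = k_B T_h \ln\frac{Z_B}{Z_A} + k_B T_c \ln\frac{Z_D}{Z_C},
\qquad \eta = \frac{W}{Q_{\mathrm{in}}},
```

where $Q_{\mathrm{in}}$ is the total heat absorbed from the hot side, including the regenerative (isochoric-like) strokes.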
]]>Technologies doi: 10.3390/technologies11060168
Authors: Bismark Kweku Asiedu Asante Hiroki Imamura
We propose a novel obstacle avoidance strategy implemented in a wearable assistive device, which serves as an electronic travel aid (ETA) designed to enhance the safety of visually impaired persons (VIPs) during navigation to their desired destinations. The method is grounded in the assumption that objects in close proximity to VIPs pose potential obstacles and hazards, and that objects farther away appear smaller in the camera’s field of view. To adapt this method for accurate obstacle selection, we employ an adaptable grid generated based on the apparent size of objects. These objects are detected using a custom lightweight YOLOv5 model. The grid helps select and prioritize the most immediate and dangerous obstacle within the user’s proximity. We also incorporate an audio feedback mechanism with an innovative neural perception system to alert the user. Experimental results demonstrate that our proposed system can detect obstacles within a range of 20 m and effectively prioritize obstacles within 2 m of the user. The system achieves an accuracy rate of 95% for both obstacle detection and the prioritization of critical obstacles. Moreover, the ETA device provides real-time alerts, with a response time of just 5 s, preventing collisions with nearby objects.
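The apparent-size assumption can be sketched with a pinhole-camera distance estimate feeding a simple 2 m priority rule; the focal length, object heights, and detection tuples below are hypothetical values for illustration, not the device's calibration or the YOLOv5 output format.

```python
def estimate_distance_m(real_height_m, pixel_height, focal_length_px):
    """Pinhole-camera estimate: farther objects appear smaller, so
    distance = f * H_real / h_pixels."""
    return focal_length_px * real_height_m / pixel_height

def prioritize(detections, focal_length_px=500.0, danger_radius_m=2.0):
    """Return (distance, label) pairs within the danger radius, nearest first.

    Each detection is (label, assumed_real_height_m, bounding_box_height_px)."""
    with_dist = [(estimate_distance_m(h, px, focal_length_px), label)
                 for label, h, px in detections]
    return sorted(d for d in with_dist if d[0] <= danger_radius_m)

# Hypothetical detections from one frame.
detections = [("person", 1.7, 425), ("car", 1.5, 50), ("pole", 3.0, 1500)]
alerts = prioritize(detections)
```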
]]>Technologies doi: 10.3390/technologies11060167
Authors: Mehdi Imani Hamid Reza Arabnia
This paper explores the application of various machine learning techniques for predicting customer churn in the telecommunications sector. We utilized a publicly accessible dataset and implemented several models, including Artificial Neural Networks, Decision Trees, Support Vector Machines, Random Forests, Logistic Regression, and gradient boosting techniques (XGBoost, LightGBM, and CatBoost). To mitigate the challenges posed by imbalanced datasets, we adopted different data sampling strategies, namely SMOTE, SMOTE combined with Tomek Links, and SMOTE combined with Edited Nearest Neighbors. Moreover, hyperparameter tuning was employed to enhance model performance. Our evaluation employed standard metrics, such as Precision, Recall, F1-score, and the Receiver Operating Characteristic Area Under Curve (ROC AUC). In terms of the F1-score metric, CatBoost demonstrates superior performance compared to other machine learning models, achieving an outstanding 93% following the application of Optuna hyperparameter optimization. In the context of the ROC AUC metric, both XGBoost and CatBoost exhibit exceptional performance, recording remarkable scores of 91%. This achievement for XGBoost is attained after implementing a combination of SMOTE with Tomek Links, while CatBoost reaches this level of performance after the application of Optuna hyperparameter optimization.
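The SMOTE step used in the sampling strategies interpolates synthetic minority samples between a point and one of its minority-class neighbours; a bare-bones sketch (illustrative only, not the imbalanced-learn implementation, and without the Tomek Links / ENN cleaning stages):

```python
import math
import random

def smote_like(minority, n_new, k=1, seed=0):
    """Generate n_new synthetic samples via x_new = x + lam * (neighbour - x),
    where `neighbour` is drawn from the k nearest minority-class points."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        others = sorted((p for p in minority if p is not x),
                        key=lambda p: math.dist(p, x))[:k]
        neighbour = rng.choice(others)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + lam * (b - a) for a, b in zip(x, neighbour)))
    return synthetic
```

Each synthetic point lies on the segment between a minority sample and a neighbour, which is what rebalances the classes without simply duplicating rows.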
]]>Technologies doi: 10.3390/technologies11060166
Authors: José Pereira Reinaldo Souza António Moreira Ana Moita
The current review offers a critical survey of published studies concerning the simultaneous use of PCMs and nanofluids for solar thermal energy storage and conversion processes. The main thermophysical properties of PCMs and nanofluids are also discussed in detail. On the one hand, the properties of these nanofluids are analyzed, together with those of general nanofluid types, such as thermal conductivity and latent heat capacity. On the other hand, PCMs have specific characteristics such as, for instance, the phase-change duration and the phase-change temperature. Moreover, the main improvement techniques enabling PCMs and nanofluids to be used in solar thermal applications are described in detail, including the inclusion of highly thermally conductive nanoparticles and other nanostructures in nano-enhanced PCMs, and PCMs with extended surfaces, among others. Regarding those improvement techniques, it was found, for instance, that nanofluids can enhance the thermal conductivity of their base fluids by up to 100%. In addition, it was also reported that the simultaneous use of PCMs and nanofluids enhances the overall, thermal, and electrical efficiencies of solar thermal energy storage systems and photovoltaic systems with nano-enhanced PCMs. Finally, the main limitations and guidelines are summarized for future research in the technological and research fields of nanofluids and PCMs.
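As an illustration of the thermal-conductivity enhancement discussed above, the classical Maxwell effective-medium model for a dilute suspension of spherical nanoparticles is often used as a first estimate (a general model, not a result from the review; the fluid and particle values below are assumed for the example):

```python
def maxwell_k_eff(k_f, k_p, phi):
    """Maxwell model for the effective thermal conductivity of a nanofluid.

    k_f: base-fluid conductivity (W/m-K), k_p: particle conductivity (W/m-K),
    phi: particle volume fraction (valid for dilute suspensions, phi << 1).
    """
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

# Water (~0.613 W/m-K) with 3 vol% of a hypothetical high-conductivity particle.
k_nf = maxwell_k_eff(0.613, 400.0, 0.03)
```

Even this dilute-limit model predicts a ~9% enhancement at 3 vol%; the much larger enhancements reported in the literature involve effects beyond its assumptions.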
]]>Technologies doi: 10.3390/technologies11060165
Authors: Jonathan Haase Peter B. Walker Olivia Berardi Waldemar Karwowski
This paper discusses the “Get Real Get Better” (GRGB) approach to implementing agile program management in the U.S. Navy, supported by advanced data analytics and artificial intelligence (AI). GRGB was designed as a set of foundational principles to advance Navy culture and support its core values. This article identifies the need for a more informed and efficient approach to program management by highlighting the benefits of implementing comprehensive data analytics that leverage recent advances in cloud computing and machine learning. The Jupiter enclave within Advana, implemented by the U.S. Navy, is also discussed. The presented approach represents a practical framework that cultivates a “Get Real Get Better” mindset for implementing agile program management in the U.S. Navy.
]]>Technologies doi: 10.3390/technologies11060164
Authors: Nurgul Nalgozhina Abdul Razaque Uskenbayeva Raissa Joon Yoo
Robotic process automation (RPA) is a popular process automation technology that leverages software to perform the role of humans when interacting with graphical user interfaces. RPA’s scope is limited, and various requirements must be met for it to be applied efficiently. Business process management (BPM), on the other hand, is a well-established area of research that may provide favorable conditions for RPA to thrive. We provide an efficient technique for merging RPA with BPM (RPABPM) to synchronize the technologies for efficient automated business processes. The problem formulation process is carried out to cut management-related expenditures. The proposed RPABPM strategy includes five stages (design, modeling, execution, monitoring, and optimization) for optimal business automation and energy savings. Effective business process management is demonstrated by employing an end-to-end process. Furthermore, findings have been obtained from three empirical investigations performed to assess the practicality and precision of the proposed RPABPM approach. The first study confirms the practicality and precision of the approach employed to evaluate the acceptance, possibility, significance, and integration of RPA with BPM. The second study verifies the method’s high-quality characteristics. The third study assesses the approach’s effectiveness in analyzing and identifying the business processes that are best suited for RPA. The proposed RPABPM is validated on a six-axis ABB IRB140 industrial robot supported by a Windows CE-based FlexPendant (teach pendant). An IRC5 controller is used to run RobotWare 5.13.10371 with a pre-installed .NET Compact Framework 3.5. Finally, the proposed method is compared with state-of-the-art methods from an efficiency and power consumption perspective.
]]>Technologies doi: 10.3390/technologies11060163
Authors: Muhammad Talha Maria Kyrarini Ehsan Ali Buriro
In recent years, the use of wearable systems in healthcare has gained much attention, as they can be easily worn by the subject and provide a continuous source of the data required for the tracking and diagnosis of many kinds of abnormalities or diseases in the human body. Wearable systems can help improve a patient’s quality of life while reducing the overall cost of caring for individuals, including the elderly. In this survey paper, recent research on the development of intelligent wearable systems for the diagnosis of peripheral neuropathy is discussed. The paper provides detailed information about recent techniques based on different wearable sensors for the diagnosis of peripheral neuropathy, including experimental protocols, biomarkers, and other specifications and parameters, such as the type of signals and data processing methods, the locations of sensors, the scales and tests used, and the scope of each study. It also highlights the challenges that remain in making wearable devices more effective for the diagnosis of peripheral neuropathy in clinical settings.
]]>Technologies doi: 10.3390/technologies11060162
Authors: Rugved Kore Dorukalp Durmus
Solid-state lighting (SSL) devices are ubiquitous in several markets, including architectural, automotive, healthcare, heritage conservation, and entertainment lighting. Fine control of the LED light output is crucial for applications where spectral precision is required, but dimming LEDs can cause a nonlinear response in their output, shifting the chromaticity. The nonlinear response of multi-color LEDs can be corrected by curve-fitting the measured data to the input dimming controls. In this study, the spectral output of an RGB LED projector was corrected using polynomial curve fitting. The accuracy of four different measurement methods was compared in order to find the optimal correction approach in terms of the time and effort needed to perform the measurements. The results suggest that curve fitting of very-high-resolution dimming steps (n = 125) significantly decreased the chromaticity shifts between the measured (actual) and corrected spectra. The effect size between approaches indicates that curve fitting at high resolution (n = 23) performs as well as at very high resolution (n = 125). The curve-fitting correction can be used as an alternative approach or in addition to existing methods, such as closed-loop correction. The curve-fitting method can be applied to any tunable multi-color LED lighting system to correct the nonlinear dimming response.
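The polynomial correction rests on ordinary least-squares curve fitting; a generic fit via the normal equations is sketched below (illustrative, not the authors' code). Fed with measured output versus dimming level, it recovers the coefficients of the nonlinear dimming response, which can then be inverted to choose corrected control values.

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations.

    Returns coefficients c where y ~= sum(c[i] * x**i)."""
    n = degree + 1
    # Normal equations A c = b with A[i][j] = sum x^(i+j), b[i] = sum y * x^i.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back-substitution.
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs
```

The normal-equations route is adequate for the low degrees and small sample counts (n = 23 or 125 dimming steps) involved here; higher degrees would call for a QR-based solver.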
]]>Technologies doi: 10.3390/technologies11060161
Authors: Abdullah M. Alnajim Shabana Habib Muhammad Islam Su Myat Thwin Faisal Alotaibi
The Industrial Internet of Things (IIoT) ecosystem faces increased risks and vulnerabilities due to the adoption of Industry 4.0 standards. The integration of data from various sources and the convergence of several systems have heightened the need for robust security measures beyond fundamental connection encryption. However, it is difficult to provide adequate security because of the IIoT ecosystem’s distributed hardware and software. To protect industrial equipment and ensure the secure operation of IIoT systems, the crucial vulnerabilities, linked threats, and hazards must be identified along with the most effective countermeasures. To address these concerns, this paper presents a thorough analysis of attacks that target IIoT systems. It also offers a comprehensive analysis of the countermeasures that have been advanced in the most recent research. This article examines several kinds of attacks and their possible consequences to understand the security landscape in the IIoT area. Additionally, we aim to encourage the development of effective defenses that will lessen the hazards detected and secure the privacy, availability, and reliability of IIoT systems. Notably, we examine the issues and solutions related to IIoT security using the most recent findings from research and the literature on this subject. This study organizes and evaluates recent research to provide significant insight into the present security situation in IIoT systems. Ultimately, we provide outlines for future research and projects in this field.
Technologies doi: 10.3390/technologies11060160
Authors: Kilian Brunner Stephen Dominiak Martin Ostertag
Broadband powerline communication is a technology developed mainly with consumer applications and bulk data transmission in mind. Typical use cases include file download, streaming, or last-mile internet access for residential buildings. Applications gaining momentum are smart metering and grid automation, where response time requirements are relatively moderate compared to industrial (real-time) control. This work investigates the extent to which G.hn technology, with existing, commercial off-the-shelf components, can be used for real-time control applications. Maximum packet rate and latency statistics are investigated for different G.hn profiles and MAC algorithms. An elevator control system serves as an example application to define the latency and throughput requirements. The results show that G.hn is a feasible technology candidate for industrial IoT-type applications if certain boundary conditions can be ensured.
Technologies doi: 10.3390/technologies11060159
Authors: Adina Giurgiuman Marian Gliga Adrian Bojita Sergiu Andreica Calin Munteanu Vasile Topa Claudia Constantinescu Claudia Pacurar
The evaluation of human exposure to electric and magnetic fields represents a subject of great scientific and public interest due to the biological effects of electromagnetic fields (EMFs) on the human body and the risks caused by them to living organisms. In this context, this article proposes a software program designed by the authors for the evaluation of human exposure to electric and magnetic fields at low frequencies (EMF software program), an application that can also be accessed from a mobile phone. The analytical model on which the EMF program is based is synthetically presented, and the application is then described. The first example implemented in the EMF program is taken from the existing literature on this subject, thus confirming the correctness and calculation precision of the program. Next, a case study is proposed for an overhead transmission line of 400 kV from the Cluj-Napoca area, Romania, for which the electric and magnetic fields are first measured experimentally and then computed using the EMF program. The validation of the EMF software program is performed by comparing the obtained results with those measured experimentally and with those obtained with a commercial software program.
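For orientation, the order of magnitude of such fields can be sketched with the textbook long-conductor formula. This single-conductor approximation is far simpler than the paper's analytical model (which handles a full three-phase geometry), and the current and distance below are assumed illustrative values, not measurements from the case study.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def b_field_long_wire(current_a, distance_m):
    # Magnetic flux density of an infinitely long straight conductor
    # (Ampere's law): B = mu0 * I / (2 * pi * r), in tesla.
    return MU0 * current_a / (2 * math.pi * distance_m)

# Illustrative values (assumed): one phase conductor carrying 1000 A,
# evaluated 10 m below the line.
b = b_field_long_wire(1000, 10)
print(f"{b * 1e6:.1f} uT")  # -> 20.0 uT
```

Even this crude estimate lands in the tens-of-microtesla range discussed in low-frequency exposure guidelines, which is why accurate per-phase models like the one in the paper matter.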
Technologies doi: 10.3390/technologies11060158
Authors: Jun-Seong Kim Kun-Woo Kim Seong-Won Yang Joong-Wha Chung Seong-Yong Moon
Vein blood sampling, which involves drawing blood from a vein for blood type discrimination, confirmation of various physiological indicators, disease diagnosis, and other purposes, is the most commonly used blood sampling method. An important aspect of vein blood sampling is finding the exact location of the vein before inserting the syringe to draw blood. This is influenced by the patient’s obesity and skin and blood vessel conditions, as well as by the experience of the clinical technologist, nurse, or resident who performs the sampling. Frequent practice is required to perform blood sampling techniques effectively. However, practice rooms and laboratories impose many limitations, so clinical training is restricted to a limited environment and a limited set of models. As a result, many medical educational institutions can offer only fragmentary clinical practice with little opportunity to rehearse blood sampling skills, and therefore do not provide enough experience to prepare students for actual practice. In this paper, we propose a virtual-reality-based vein blood sampling simulator that allows blood sampling techniques to be practiced without such limitations. Wearing a head-mounted display (HMD), the trainee can manipulate a 3D model related to vein blood sampling in a virtual space using the HMD controller and a haptic device, and can practice vein blood sampling through interaction with a 3D patient model. In addition, the effectiveness of the simulator was verified with dental students, and the results confirmed the potential of the proposed vein blood sampling simulator.
Technologies doi: 10.3390/technologies11060157
Authors: Mohomad Aqeel Abdhul Rahuman Nipun Shantha Kahatapitiya Viraj Niroshan Amarakoon Udaya Wijenayake Bhagya Nathali Silva Mansik Jeon Jeehyun Kim Naresh Kumar Ravichandran Ruchire Eranga Wijesinghe
Bio-mechatronics is an interdisciplinary scientific field that emphasizes the integration of biology and mechatronics to discover innovative solutions for numerous biomedical applications. The broad application spectrum of bio-mechatronics consists of minimally invasive surgeries, rehabilitation, the development of prosthetics, and soft wearables that provide engineering solutions for the human body. Fiber-optic sensors, which are essential for position detection and control, monitoring measurements, compliance control, and various feedback applications, have recently become an indispensable part of bio-mechatronics systems. As a result, significant advancements have been introduced in the design and development of fiber-optic sensors in the past decade. This review discusses recent technological advancements in fiber-optic sensors that have been potentially adapted for numerous bio-mechatronic applications. It also encompasses fundamental principles, different types of fiber-optic sensors based on recent development strategies, and characterizations of fiber Bragg gratings, optical fiber force myography, polymer optical fibers, optical tactile sensors, and Fabry–Perot interferometric applications. Hence, robust knowledge can be obtained regarding the technological enhancements in fiber-optic sensors for bio-mechatronics-based interdisciplinary developments. This review thus provides insights into their potential to revolutionize biomedical and bio-mechatronic applications, ultimately contributing to improved patient outcomes and healthcare innovation.
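The fiber Bragg grating sensing principle mentioned above can be illustrated with the standard Bragg condition and its first-order strain response. The effective refractive index, grating period, and photo-elastic coefficient below are typical textbook values for silica fiber, not figures from the review.

```python
def bragg_wavelength(n_eff, grating_period_nm):
    # Fundamental FBG condition: lambda_B = 2 * n_eff * Lambda.
    return 2 * n_eff * grating_period_nm

def strain_shift_nm(lambda_b_nm, strain, p_e=0.22):
    # First-order strain response: d_lambda = lambda_B * (1 - p_e) * strain,
    # where p_e is the effective photo-elastic coefficient (~0.22 is an
    # assumed textbook value for silica fiber).
    return lambda_b_nm * (1 - p_e) * strain

lam = bragg_wavelength(1.447, 535.5)  # ~1550 nm, typical telecom-band FBG
shift = strain_shift_nm(lam, 1e-6)    # wavelength shift per microstrain
print(round(lam, 1), round(shift * 1000, 2), "pm/ustrain")  # -> 1549.7 1.21 pm/ustrain
```

Tracking this picometer-scale wavelength shift with an interrogator is what lets FBGs report strain or force in the feedback applications the review surveys.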
Technologies doi: 10.3390/technologies11060156
Authors: Bayan Kaidar Gaukhar Smagulova Aigerim Imash Aruzhan Keneshbekova Akram Ilyanov Zulkhair Mansurov
This study investigates the synthesis and application of composite electrospun fibers incorporating coal tar pitch (CTP) and various nanomaterial additives, with a specific focus on their potential for eco-bio-applications. The research underscores the environmentally viable aspects of CTP following a thermal treatment process that eliminates volatile components and sulfur, rendering it amenable to fiber electrospinning and subsequent carbonization. Composite fibers were fabricated by integrating CTP with nanomaterials, including nickel oxide (NiO), titanium dioxide (TiO2), activated carbon (AC), and magnetite (Fe3O4). The C/NiO composite fibers exhibit notable acetone sensing capabilities, specifically displaying a rapid response time of 40.6 s to 100 ppm acetone at 220 °C. The C/TiO2 composite fibers exhibit a distinct “beads-on-a-string” structure and demonstrate a high efficiency of 96.13% in methylene blue decomposition, highlighting their potential for environmental remediation applications. Additionally, the C/AC composite fibers demonstrate effective adsorption properties, efficiently removing manganese (II) ions from aqueous solutions with an 88.62% efficiency, thereby suggesting their utility in water purification applications. This research employs an interdisciplinary approach by combining diverse methods, approaches, and materials, including the utilization of agricultural waste materials such as rice husks, to create composite materials with multifaceted applications. Beyond the immediate utility of the composite fibers, this study emphasizes the significance of deploying environmentally responsible materials and technologies to address pressing eco-bio-challenges.
Technologies doi: 10.3390/technologies11060155
Authors: Mithun Kanchan Mohith Santhya Ritesh Bhat Nithesh Naik
Piezoelectric actuators find extensive application in delivering precision motion in the micrometer to nanometer range. Their broad range of motion, rapid response, high stiffness, and large actuation force make piezoelectric actuators suitable for precision positioning applications. However, the inherent nonlinearity in piezoelectric actuators under dynamic working conditions severely affects the accuracy of the generated motion. The nonlinearity in piezoelectric actuators arises from hysteresis, creep, and vibration, which degrade the actuator’s performance. Thus, there is a need for appropriate modeling and control approaches for piezoelectric actuators, which can model the nonlinearity phenomenon and provide adequate compensation to achieve higher motion accuracy. The present review covers different methods adopted for overcoming the nonlinearity issues in piezoelectric actuators. This review highlights the charge-based and voltage-based control methods that drive the piezoelectric actuators. The survey also includes different modeling approaches for the creep and hysteresis phenomena of piezoelectric actuators. In addition, the present review also highlights different control strategies and their applications in various types of piezoelectric actuators. An attempt is also made to compare the different modeling and control approaches for piezoelectric actuators and to highlight future prospects.
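To make the hysteresis modeling concrete, the sketch below integrates a Bouc-Wen operator, one widely used phenomenological hysteresis model for piezoelectric actuators. The coefficients are illustrative, not fitted to any actuator, and the model is only one of several families (Preisach, Prandtl-Ishlinskii) that the review covers.

```python
import math

def bouc_wen_response(voltages, d=1.0, alpha=0.6, beta=2.0, gamma=1.0):
    # Bouc-Wen hysteresis operator, integrated with forward Euler:
    #   h_dot = alpha*u_dot - beta*|u_dot|*h - gamma*u_dot*|h|
    #   x     = d*u - h        (displacement = linear part minus hysteresis)
    # Coefficients here are illustrative, not identified from measurements.
    h, xs = 0.0, []
    prev = voltages[0]
    for u in voltages:
        du = u - prev
        h += alpha * du - beta * abs(du) * h - gamma * du * abs(h)
        xs.append(d * u - h)
        prev = u
    return xs

# Drive with one smooth 0 -> 1 -> 0 voltage cycle and compare the up-sweep
# and down-sweep displacement at the same voltage: a nonzero difference is
# the width of the hysteresis loop that a compensator must cancel.
t = [i / 500 for i in range(501)]
u = [0.5 * (1 - math.cos(2 * math.pi * ti)) for ti in t]
x = bouc_wen_response(u)
gap = abs(x[125] - x[375])  # both samples are at u = 0.5
print(gap > 0.01)  # -> True
```

Inversion-based compensation feeds the inverse of such a fitted operator in front of the actuator, which is the core idea behind many of the feedforward schemes the review compares.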
Technologies doi: 10.3390/technologies11060154
Authors: Lesia Mochurad Pavlo Horun
Using existing software technologies for imputing missing genetic data (GD), such as Beagle, HPImpute, Impute, MACH, AlphaPlantImpute, MissForest, and LinkImputeR, has its advantages and disadvantages. The wide range of input parameters and their nonlinear dependence on the target results require a lot of time and effort to find optimal values in each specific case. Thus, optimizing resources for GD imputation and improving its quality is an important current issue for the quality analysis of digitized deoxyribonucleic acid (DNA) samples. This work provides a critical analysis of existing methods and approaches for obtaining high-quality imputed GD. We observed that most of them do not investigate the problem of time and resource costs, which play a significant role in a mass approach. It is also worth noting that the considered articles are often characterized by high development complexity and, at times, unclear (or missing) descriptions of the input parameters for the methods, algorithms, or models under consideration. As a result, two algorithms were developed in this work. The first aims to optimize imputation time, allowing for near-real-time solutions, while the second aims to improve imputation accuracy by selecting the best results at each iteration. The first algorithm improves imputation speed by 47% (for small files) to 87% (for medium and larger files), depending on the available resources. The second algorithm improves accuracy by about 0.1%. This, in turn, encourages continued research on the latest version of the Beagle software, particularly on the selection of optimal input parameters and possibly other models with similar or higher imputation accuracy.
Technologies doi: 10.3390/technologies11060153
Authors: Miklas Scholz
Activated carbon has many potential applications in both the liquid and gas phases. How activated carbon can help practitioners in industry is explained. This practical teaching article introduces the first part of the special issue on Recent Advances in Applied Activated Carbon Research by providing a handbook explaining the basic applications, technologies, processes, methods and material characteristics to readers from different backgrounds. The aim is to improve the knowledge and understanding of the subject of activated carbon for non-adsorption experts such as professionals in industry. Therefore, it is written in a comprehensible manner and dispenses with detailed explanations of complex processes and many background references. This handbook does not claim to be complete and concentrates only on the areas that are of practical relevance for most activated carbon applications. Activated carbon and its activation and reactivation are initially explained. Adsorption and relevant processes are outlined. The mechanical, chemical and adsorption properties of activated carbon are explained. The heart of the handbook outlines key application technologies. Other carbonaceous adsorbents are only introduced briefly. The content of the second part of the special issue is highlighted at the end.
Technologies doi: 10.3390/technologies11060152
Authors: Yana Suchikova Sergii Kovachov Ihor Bohdanov Artem L. Kozlovskiy Maxim V. Zdorovets Anatoli I. Popov
This article presents an enhanced method for synthesizing β-SiC on a silicon substrate, utilizing porous silicon as a buffer layer, followed by thermal carbide formation. This approach ensured strong adhesion of the SiC film to the substrate, facilitating the creation of a hybrid hetero-structure of SiC/por-Si/mono-Si. The surface morphology of the SiC film revealed islands measuring 2–6 μm in diameter, with detected micropores that were 70–80 nm in size. An XRD analysis confirmed the presence of spectra from crystalline silicon and crystalline silicon carbide in cubic symmetry. The observed shift in spectra to the low-frequency zone indicated the formation of nanostructures, correlating with our SEM analysis results. These research outcomes present prospects for the further utilization and optimization of β-SiC synthesis technology for electronic device development.
Technologies doi: 10.3390/technologies11060151
Authors: Shuwen Yu William P. Marnane Geraldine B. Boylan Gordon Lightbody
A deep learning classifier is proposed for grading hypoxic-ischemic encephalopathy (HIE) in neonates. Rather than using handcrafted features, this architecture can be fed with raw EEG. Fully convolutional layers were adopted in both the feature extraction and classification blocks, which makes this architecture simpler and deeper, yet with fewer parameters. Here, two large (335 h and 338 h, respectively) multi-center neonatal continuous EEG datasets were used for training and testing. The model was trained based on weak labels and channel independence. A majority vote method was used for the post-processing of the classifier results (across time and channels) to increase the robustness of the prediction. A dimension reduction tool, UMAP, was used to visualize the model classification effect. The proposed system achieved an accuracy of 86.09% (95% confidence interval: 82.41–89.78%), an MCC of 0.7691, and an AUC of 86.23% on the large unseen test set. Two convolutional neural network architectures which utilized time-frequency distribution features were selected as the baseline as they had been developed or tested on the same datasets. A relative improvement of 23.65% in test accuracy was obtained as compared with the best baseline. In addition, if only one channel was available, the test accuracy was only reduced by 2.63–5.91% compared with making decisions based on the eight channels.
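The majority-vote post-processing step across channels and time can be sketched in a few lines. The per-channel grade values and the tie-break rule (toward the more severe grade) are assumptions for illustration, since the abstract does not specify them.

```python
from collections import Counter

def majority_vote(grades):
    # Majority vote over per-channel / per-epoch HIE grade predictions.
    # Ties are resolved toward the more severe (higher) grade, an assumed
    # tie-break rule not stated in the abstract.
    counts = Counter(grades)
    best = max(counts.values())
    return max(g for g, c in counts.items() if c == best)

# 8 channels x 3 consecutive epochs of per-segment grades (illustrative values).
predictions = [
    [1, 1, 2], [1, 1, 1], [2, 1, 1], [1, 1, 1],
    [1, 2, 2], [1, 1, 1], [3, 1, 1], [1, 1, 1],
]
flat = [g for channel in predictions for g in channel]
print(majority_vote(flat))  # -> 1 (most frequent grade across channels and time)
```

Pooling decisions this way is what makes the per-channel classifier robust to a noisy electrode or a transient artifact in a single epoch.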
Technologies doi: 10.3390/technologies11050150
Authors: Elyor Berdimurodov Omar Dagdag Khasan Berdimuradov Wan Mohd Norsani Wan Nik Ilyos Eliboev Mansur Ashirov Sherzod Niyozkulov Muslum Demir Chinmurot Yodgorov Nizomiddin Aliev
Green electrospinning harnesses the potential of renewable biomaterials to craft biodegradable nanofiber structures, expanding their utility across a spectrum of applications. In this comprehensive review, we summarize the production, characterization, and application of electrospun cellulose, collagen, gelatin, and other biopolymer nanofibers in tissue engineering, drug delivery, biosensing, environmental remediation, agriculture, and synthetic biology. In the realm of tissue engineering, nanofibers emerge as key players, adept at mimicking the intricacies of the extracellular matrix. These fibers serve as scaffolds and vascular grafts, showcasing their potential to regenerate and repair tissues. Moreover, they facilitate controlled drug and gene delivery, ensuring sustained therapeutic levels essential for optimized wound healing and cancer treatment. Biosensing platforms, another prominent arena, leverage nanofibers by immobilizing enzymes and antibodies onto their surfaces. This enables precise glucose monitoring, pathogen detection, and immunodiagnostics. In the environmental sector, these fibers prove invaluable, purifying water through efficient adsorption and filtration, while also serving as potent air filtration agents against pollutants and pathogens. Agricultural applications see the deployment of nanofibers in controlled-release fertilizers and pesticides, enhancing crop management, and extending antimicrobial food packaging coatings to prolong shelf life. In the realm of synthetic biology, these fibers play a pivotal role by encapsulating cells and facilitating bacteria-mediated prodrug activation strategies. Across this multifaceted landscape, nanofibers offer tunable topographies and surface functionalities that tightly regulate cellular behavior and molecular interactions.
Importantly, their biodegradable nature aligns with sustainability goals, positioning them as promising alternatives to synthetic polymer-based technologies. As research and development continue to refine and expand the capabilities of green electrospun nanofibers, their versatility promises to advance numerous applications in the realms of biomedicine and biotechnology, contributing to a more sustainable and environmentally conscious future.
Technologies doi: 10.3390/technologies11050149
Authors: Sofia Paschou Georgios Papaioannou
This paper contributes to the field of museum and visitor experience in terms of atmosphere by discussing the “museum digital atmosphere” or MDA, a notion introduced here and examined across museums in Greece. Research on museum atmospherics has tended to focus on physical museum spaces and exhibits. By “atmosphere”, we mean the emotional state arising from the public’s response, which adds to the overall museum experience. The MDA is therefore studied as the specific emotional state caused by the use of digital applications and technologies. The stimulus–organism–response or SOR model is used to define the MDA, so as to confirm and reinforce the concept. To that end, a qualitative methodological approach is used; we conduct semi-structured interviews and evaluate findings via content analysis. The sample consists of 17 specialists and professionals from the field, namely museologists, museographers, museum managers, and digital application developers working in Greek museums. Ultimately, this research uses the SOR model to reveal the effect of digital tools on the digital atmosphere in Greek museums. It also enriches the SOR model with additional concepts and emotions taken from real-life situations, adding new categories of variables. This research provides the initial data and knowledge regarding the concept of the MDA, along with its importance.
Technologies doi: 10.3390/technologies11050148
Authors: Tianyi Zhang Yuan Ke
In this article, we introduce an innovative hybrid quantum search algorithm, the Robust Non-oracle Quantum Search (RNQS), which is specifically designed to efficiently identify the minimum value within a large set of random numbers. Distinct from Grover’s algorithm, the proposed RNQS algorithm circumvents the need for an oracle function that describes the true solution state, a feature often impractical for data science applications. Building on existing non-oracular quantum search algorithms, RNQS enhances robustness while substantially reducing running time. The superior properties of RNQS have been demonstrated through careful analysis and extensive empirical experiments. Our findings underscore the potential of the RNQS algorithm as an effective and efficient solution to combinatorial optimization problems in the realm of quantum computing.
Technologies doi: 10.3390/technologies11050147
Authors: Hsin-Tsung Lin Wei-Han Pan Pi-Chung Wang
Packet classification based on rules of packet header fields is the key technology for enabling software-defined networking (SDN). Ternary content addressable memory (TCAM) is a widely used hardware technology for packet classification; however, commercially available TCAM chips have only limited storage. As the number of supported header fields in SDN increases, the number of supported rules in a TCAM chip is reduced. In this work, we present a novel scheme to enable packet classification using TCAM with entries that are narrower than rules by storing the most representative field of a ruleset in TCAM. Because not all rules can be distinguished by a single field, our scheme employs a TCAM-based multimatch packet classification technique to ensure correctness. We further develop approaches to reduce redundant TCAM accesses for multimatch packet classification. Although our scheme requires additional TCAM accesses, it supports packet classification upon long rules with narrow TCAM entries, and drastically reduces the required TCAM storage. Our experimental results show that our scheme requires a moderate number of additional TCAM accesses and consumes much less storage compared to the basic TCAM-based packet classification. Thus, it can provide the required scalability for long rules required by potential applications of SDN.
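The core idea, storing only one representative field in the TCAM and verifying the candidate rules in software, can be sketched with a TCAM emulated as a priority-ordered list. The choice of the port as the representative field and the rules themselves are illustrative, and the paper's techniques for reducing redundant TCAM accesses are omitted here.

```python
# Priority-ordered rules: (src, dst, port, action); '*' is a wildcard.
RULES = [
    ("10.0.0.1", "*",        "80", "permit"),
    ("*",        "10.0.0.9", "80", "deny"),
    ("*",        "*",        "22", "permit"),
    ("*",        "*",        "*",  "deny"),
]

def field_matches(pattern, value):
    return pattern == "*" or pattern == value

# Emulated narrow TCAM: only the most representative field (here the port,
# an illustrative choice) is stored instead of the full 3-field rule.
TCAM = [rule[2] for rule in RULES]

def classify(src, dst, port):
    # Multimatch lookup: the narrow TCAM may report several candidate rules,
    # so each candidate is verified against the full rule to guarantee
    # correctness; the highest-priority full match wins.
    for idx, entry in enumerate(TCAM):
        if field_matches(entry, port):
            rule = RULES[idx]
            if all(field_matches(p, v) for p, v in zip(rule[:3], (src, dst, port))):
                return rule[3]
    return "deny"  # default action

print(classify("10.0.0.5", "10.0.0.9", "80"))  # -> deny   (rule 2 after rule 1 fails verification)
print(classify("10.0.0.1", "10.0.0.2", "80"))  # -> permit (rule 1 verifies on the first candidate)
```

The first lookup illustrates the cost model: the narrow entry for rule 1 matches on the port alone, but full verification rejects it, so a second TCAM access is needed, exactly the kind of redundant access the paper's refinements aim to cut.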
Technologies doi: 10.3390/technologies11050146
Authors: Nataliya Kildeeva Nikita Sazhnev Maria Drozdova Vasilina Zakharova Evgeniya Svidchenko Nikolay Surin Elena Markvicheva
Silk fibroin (SF) holds promise for the preparation of matrices for tissue engineering and regenerative medicine or for the development of drug delivery systems. Regenerated fibroin from Bombyx mori cocoons is water-soluble and can be processed into scaffolds of various forms, such as fibrous matrices, using the electrospinning method. In the current study, we investigated the correlation between the concentration of aqueous fibroin solutions and their properties, in order to obtain electrospun mats for tissue engineering. Two methods were used to prevent solubility in fibroin-based matrices: the conversion of fibroin to the β-conformation via treatment with an ethanol solution, and chemical cross-linking with genipin (Gp). The interaction of Gp with SF led to the appearance of a characteristic blue color but did not lead to the gelation of solutions. To speed up the cross-linking reaction with Gp, we propose using chitosan-containing systems and modifying fibrous materials via treatment with a solution of Gp in 80% ethanol. It was shown that the composition of fibroin with chitosan contributes to an improved water resistance, reduces defects in the material, and leads to a decrease in the diameter of the fibers. The electrospun fiber matrices based on regenerated fibroin modified by cross-linking with genipin in water–alcohol solutions were shown to promote cell adhesion, spreading, and growth and, therefore, could hold promise for tissue engineering.
Technologies doi: 10.3390/technologies11050145
Authors: Gilyana K. Kazakova Victoria S. Presniakova Yuri M. Efremov Svetlana L. Kotova Anastasia A. Frolova Sergei V. Kostjuk Yury A. Rochev Peter S. Timashev
In the realm of scaffold-free cell therapies, there is a quest to develop organotypic three-dimensional (3D) tissue surrogates in vitro, capitalizing on the inherent ability of cells to create tissues with an efficiency and sophistication that remains unmatched by human-made devices. In this study, we explored the properties of scaffolds obtained by the electrospinning of a thermosensitive copolymer, poly(N-isopropylacrylamide-co-N-tert-butylacrylamide) (P(NIPAM-co-NtBA)), intended for use in such therapies. Two copolymers with molecular weights of 123 and 137 kDa and a content of N-tert-butylacrylamide of ca. 15 mol% were utilized to generate 3D scaffolds via electrospinning. We examined the morphology, solution viscosity, porosity, and thickness of the spun matrices as well as the mechanical properties and hydrophobic–hydrophilic characteristics of the scaffolds. Particular attention was paid to studying the influence of the thermosensitive polymer’s molecular weight and dispersity on the resultant scaffolds’ properties and the role of electrospinning parameters on the morphology and mechanical characteristics of the scaffolds. The cytotoxicity of the copolymers and the interaction of cells with the scaffolds were also studied. Our findings provide significant insight into approaches to optimizing scaffolds for specific cell cultures, thereby offering new opportunities for scaffold-free cell therapies.
Technologies doi: 10.3390/technologies11050144
Authors: Pramita Sen Praneel Bhattacharya Gargi Mukherjee Jumasri Ganguly Berochan Marik Devyani Thapliyal Sarojini Verma George D. Verros Manvendra Singh Chauhan Raj Kumar Arya
Environmental pollution poses a pressing global challenge, demanding innovative solutions for effective pollutant removal. Photocatalysts, particularly titanium dioxide (TiO2), are renowned for their catalytic prowess; however, they often require ultraviolet light for activation. Researchers have turned to doping with metals and non-metals to extend their utility into the visible spectrum. While this approach shows promise, it also presents challenges such as material stability and dopant leaching. Co-doping, involving both metals and non-metals, has emerged as a viable strategy to mitigate these limitations. In the field of adsorbents, carbon-based materials doped with nitrogen are gaining attention for their improved adsorption capabilities and CO2/N2 selectivity. Nitrogen doping enhances surface area and fosters interactions between acidic CO2 molecules and basic nitrogen functionalities. The optimal combination of an ultramicroporous surface area and specific nitrogen functional groups is key to achieving high CO2 uptake values and selectivity. The integration of photocatalysis and adsorption processes in doped materials has shown synergistic pollutant removal efficiency. Various synthesis methods, including sol–gel, co-precipitation, and hydrothermal approaches, have been employed to create hybrid units of doped photocatalysts and adsorbents. While progress has been made in enhancing the performance of doped materials at the laboratory scale, challenges persist in transitioning these technologies to large-scale industrial applications. Rigorous studies are needed to investigate the impact of doping on material structure and stability, optimize process parameters, and assess performance in real-world industrial reactors. These advancements are promising for addressing environmental pollution challenges, promoting sustainability, and paving the way for a cleaner and healthier future.
This manuscript provides a comprehensive overview of recent developments in doping strategies for photocatalysts and adsorbents, offering insights into the potential of these materials to revolutionize environmental remediation technologies.
Technologies doi: 10.3390/technologies11050143
Authors: Koji Nakano Shunsuke Tsukiyama Yasuaki Ito Takashi Yazane Junko Yano Takumi Kato Shiro Ozaki Rie Mori Ryota Katsuki
The Ising model is defined by an objective function using a quadratic formula of qubit variables. The problem of an Ising model aims to determine the qubit values of the variables that minimize the objective function, and many optimization problems can be reduced to this problem. In this paper, we focus on optimization problems related to permutations, where the goal is to find the optimal permutation out of the n! possible permutations of n elements. To represent these problems as Ising models, a commonly employed approach is to use a kernel that applies one-hot encoding to find any one of the n! permutations as the optimal solution. However, this kernel contains a large number of quadratic terms and high absolute coefficient values. The main contribution of this paper is the introduction of a novel permutation encoding technique called the dual-matrix domain wall, which significantly reduces the number of quadratic terms and the maximum absolute coefficient values in the kernel. Surprisingly, our dual-matrix domain-wall encoding reduces the quadratic term count and maximum absolute coefficient values from n³ − n² and 2n − 4 to 6n² − 12n + 4 and 2, respectively. We also demonstrate the applicability of our encoding technique to partial permutations and Quadratic Unconstrained Binary Optimization (QUBO) models. Furthermore, we discuss a family of permutation problems that can be efficiently implemented using Ising/QUBO models with our dual-matrix domain-wall encoding.
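The cubic quadratic-term count quoted for the one-hot kernel can be checked directly by enumerating the variable pairs produced by the row and column penalties of the n × n one-hot matrix (the dual-matrix domain-wall kernel itself is more involved and is not reproduced here):

```python
from itertools import combinations

def one_hot_quadratic_terms(n):
    # Count distinct quadratic terms in the one-hot permutation kernel:
    # penalties (sum_j x[i][j] - 1)^2 for every row i and
    # (sum_i x[i][j] - 1)^2 for every column j of the n x n variable matrix.
    # Expanding each squared sum yields one quadratic term per pair of
    # variables sharing a row or a column.
    pairs = set()
    for i in range(n):
        for (j, k) in combinations(range(n), 2):
            pairs.add(frozenset({(i, j), (i, k)}))  # row-penalty pairs
            pairs.add(frozenset({(j, i), (k, i)}))  # column-penalty pairs
    return len(pairs)

# Matches the n^3 - n^2 figure cited for the one-hot encoding.
for n in (3, 4, 5):
    assert one_hot_quadratic_terms(n) == n**3 - n**2
print(one_hot_quadratic_terms(5))  # -> 100
```

Since no pair of cells shares both a row and a column, the row and column contributions never collide, giving 2n · n(n−1)/2 = n³ − n² distinct quadratic terms, which is exactly the count the proposed encoding reduces to quadratic order.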
Technologies doi: 10.3390/technologies11050142
Authors: Poonam Tiwari Vishant Gahlaut Meenu Kaushik Anshuman Shastri Vivek Arya Issa Elfergani Chemseddine Zebiri Jonathan Rodriguez
An approach is presented to enhance the isolation of a two-port Multiple Input Multiple Output (MIMO) antenna using a decoupling structure and a common defected ground structure (DGS) that physically separates the antennas from each other. The antenna operates in the 24 to 40 GHz frequency range. The innovation in the presented MIMO antenna design lies in the novel integration of two arc-shaped symmetrical elements with dimensions of 35 × 35 × 1.6 mm3 placed perpendicular to each other. The benefits of employing an antenna with elements arranged perpendicularly are exemplified by the enhancement of its overall performance metrics. These elements incorporate a microstrip feed featuring a quarter-wave transformer (QWT). This concept synergizes with decoupling techniques and a defected ground structure to significantly enhance isolation in a millimeter wave (mm wave) MIMO antenna. These methods collectively achieve an impressively wide bandwidth. Efficient decoupling methodologies have been implemented, yielding a notable increase of 5 dB in isolation performance. The antenna exhibits 10 dB impedance matching over a wide bandwidth of 15 GHz (46.87%), excellent isolation of more than 28 dB, and a desirable gain of 4.6 dB. The antennas have been analyzed to improve their performance in mm wave applications by evaluating diversity parameters such as the envelope correlation coefficient (ECC) and diversity gain (DG), with achieved values of 0.0016 and 9.992 dB, respectively. The simulation is conducted using CST software. To validate the findings, experimental investigations have been conducted, affirming the accuracy of the simulations.
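The diversity metrics reported above can be computed from the S-parameters using the standard lossless-antenna formula for the envelope correlation coefficient. The S-parameter values below are illustrative, not taken from the paper, and DG = 10·sqrt(1 − ECC) is one common convention (consistent with the reported pairing of ECC = 0.0016 with DG = 9.992 dB).

```python
import math

def ecc_from_s_params(s11, s12, s21, s22):
    # Envelope correlation coefficient from S-parameters, assuming lossless
    # antennas:
    #   ECC = |S11* S12 + S21* S22|^2 /
    #         ((1 - |S11|^2 - |S21|^2) * (1 - |S12|^2 - |S22|^2))
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s12) ** 2 - abs(s22) ** 2)
    return num / den

def diversity_gain(ecc):
    # Common approximation paired with reported ECC values: DG = 10*sqrt(1 - ECC).
    return 10 * math.sqrt(1 - ecc)

# Illustrative complex S-parameters (linear scale), not from the paper:
ecc = ecc_from_s_params(0.1 + 0.05j, 0.03 - 0.01j, 0.03 - 0.01j, 0.1 + 0.05j)
print(round(ecc, 6), round(diversity_gain(ecc), 3))
```

Well-isolated ports (small S12/S21, as in the example) drive the numerator toward zero, which is why the reported 28 dB isolation yields an ECC near zero and a DG close to the 10 dB ideal.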
]]>Technologies doi: 10.3390/technologies11050141
Authors: Yue Hao Choong Manickavasagam Krishnan Manoj Gupta
Thermal management devices such as heat exchangers and heat pipes are integral to safe and efficient performance in multiple engineering applications, including lithium-ion batteries, electric vehicles, electronics, and renewable energy. However, the functional designs of these devices have until now been created around conventional manufacturing constraints, and thermal performance has plateaued as a result. While 3D printing offers the design freedom to address these limitations, there has been a notable lack of high-thermal-conductivity materials beyond aluminium alloys. Recently, the 3D printing of pure copper to sufficiently high densities has finally taken off, due to the emergence of commercial-grade printers which are now equipped with 1 kW high-power lasers or short-wavelength lasers. Although the capabilities of these new systems appear ideal for processing pure copper as a bulk material, the performance of advanced thermal management devices is strongly dependent on topology-optimised filigree structures, which can require a very different processing window. Hence, this article presents a broad overview of the state-of-the-art in various additive manufacturing technologies used to fabricate pure copper functional filigree geometries comprising thin walls, lattice structures, and porous foams, and identifies opportunities for future developments in the 3D printing of pure copper for advanced thermal management devices.
]]>Technologies doi: 10.3390/technologies11050140
Authors: Md Masum Reza Jairo Gutierrez
With the rapid expansion of the Internet of Things (IoT), the necessity for lightweight communication is also increasing due to the constrained capabilities of IoT devices. This paper presents the design of a novel lightweight protocol called the Enhanced Lightweight Security Gateway Protocol (ELSGP) based on a distributed computation model of the IoT layer. This model introduces a new type of node called a sub-server to assist edge layer servers and IoT devices with computational tasks and act as a primary gateway for dependent IoT nodes. This paper then introduces six features of ELSGP with developed algorithms that include access token distribution and validation, authentication and dynamic interoperability, attribute-based access control, traffic filtering, secure tunneling, and dynamic load distribution and balancing. Considering the variability of system requirements, ELSGP also outlines how to adopt a system-defined policy framework. For fault resiliency, this paper also presents fault mitigation mechanisms, especially Trust and Priority Impact Relation for Byzantine, Cascading, and Transient faults. A simulation study was carried out to validate the protocol’s performance. Based on the findings from the performance evaluation, further analysis of the protocol and future research directions are outlined.
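The abstract does not specify ELSGP's token format. As a hypothetical sketch of the "access token distribution and validation" feature, a sub-server could sign device claims with an HMAC that the gateway later verifies; the key name, claim fields, and encoding below are all assumptions, not part of the protocol as published.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-gateway-secret"  # hypothetical key shared by sub-server and gateway

def issue_token(device_id, attrs, ttl=3600, now=None):
    """Sub-server side: sign device claims (id, attributes, expiry)."""
    now = int(time.time()) if now is None else now
    claims = {"dev": device_id, "attrs": attrs, "exp": now + ttl}
    payload = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def validate_token(token, now=None):
    """Gateway side: verify signature and expiry; return claims or None."""
    try:
        payload, sig = token.encode().split(b".")
    except ValueError:
        return None
    expected = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig):  # constant-time comparison
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload))
    now = int(time.time()) if now is None else now
    return claims if claims["exp"] > now else None
```

An expired or tampered token validates to `None`, so a dependent IoT node's request can be rejected at the gateway without contacting the edge server.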
]]>Technologies doi: 10.3390/technologies11050139
Authors: Yingxiu Du Mingyue Hu Xiaohua Tu Chengping Miao Yang Zhang Jiayou Li
An environmentally friendly alkaline electrolyte of silicate and borate, which contained the addition of carbohydrates (lactose, starch, and dextrin), was applied to produce micro-arc oxidation (MAO) coatings on AZ31B magnesium alloy surfaces in constant current mode. The effects of the carbohydrates on the performance of the MAO coatings were investigated using a scanning electron microscope (SEM), an X-ray diffractometer (XRD), energy-dispersive spectroscopy (EDS), the salt spray test, potentiodynamic polarization curves, and electrochemical impedance spectroscopy (EIS). The results show that the carbohydrates effectively inhibited spark discharge, so the anodized growth process, surface morphology, composition, and corrosion resistance of the MAO coatings were strongly dependent on the carbohydrate concentration. This is ascribed to the surface adsorption layer formed on the surface of the magnesium alloy. When the carbohydrate concentration was 10 g/L, smooth, compact, and thick MAO coatings with excellent corrosion resistance on the magnesium alloy were obtained.
]]>Technologies doi: 10.3390/technologies11050138
Authors: Musulmon Lolaev Shraddha M. Naik Anand Paul Abdellah Chehri
The advent of Artificial Intelligence (AI) has had a broad impact on many areas of life. Building AI models and integrating them with modern technologies is a central challenge for researchers. These technologies include wearables and implants in living beings, and their use is known as human augmentation, using technology to enhance human abilities. Combining human augmentation with AI, especially after the latter's recent successes, significantly broadens its applicability. In the first section, we briefly introduce these modern applications in health care and examples of their use cases. Then, we present a computationally efficient AI-driven method to diagnose heart failure events by leveraging actual heart failure data. The classifier model is designed without conventional training methods such as gradient descent. Instead, a heuristic is used to discover the optimal parameters of a linear model. An analysis of the proposed model shows that it achieves an accuracy of 84% and an F1 score of 0.72 with only one feature. With five features for diagnosis, the accuracy achieved is 83%, and the F1 score is 0.74. Moreover, the model is flexible, allowing experts to determine which variables are more important than others when implementing diagnostic systems.
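The paper's heuristic is not described in the abstract. One minimal stand-in for gradient-free training of a linear classifier is random-search hill climbing over the weights, sketched here on toy data (the perturbation scale and iteration budget are arbitrary choices, not the paper's):

```python
import random

def train_linear_heuristic(X, y, iters=2000, seed=0):
    """Fit w, b for the rule: predict 1 if w.x + b > 0, else 0, by
    random-search hill climbing instead of gradient descent.
    (Hypothetical stand-in; the paper's heuristic is unspecified.)"""
    rng = random.Random(seed)
    d = len(X[0])

    def accuracy(w, b):
        hits = sum((1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0) == yi
                   for xi, yi in zip(X, y))
        return hits / len(y)

    best_w, best_b = [0.0] * d, 0.0
    best_acc = accuracy(best_w, best_b)
    for _ in range(iters):
        cand_w = [wj + rng.gauss(0, 0.5) for wj in best_w]
        cand_b = best_b + rng.gauss(0, 0.5)
        acc = accuracy(cand_w, cand_b)
        if acc > best_acc:  # keep strict improvements only
            best_w, best_b, best_acc = cand_w, cand_b, acc
    return best_w, best_b, best_acc

# Toy 1-D data: two well-separated clusters
X = [[0.0], [1.0], [2.0], [3.0], [10.0], [11.0], [12.0], [13.0]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w, b, acc = train_linear_heuristic(X, y)
```

Because only the final accuracy is queried, the same loop runs unchanged on low-power augmentation hardware that lacks floating-point gradient support.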
]]>Technologies doi: 10.3390/technologies11050137
Authors: Arsalan D. Badaraev Tuan-Hoang Tran Anastasia G. Drozd Evgenii V. Plotnikov Gleb E. Dubinenko Anna I. Kozelskaya Sven Rutkowski Sergei I. Tverdokhlebov
In this work, the effects of weight concentration on the properties of poly(lactide-co-glycolide) polymeric scaffolds prepared by electrospinning are investigated, using four different weight concentrations of poly(lactide-co-glycolide) for the electrospinning solutions (2, 3, 4, 5 wt.%). With increasing concentration of poly(lactide-co-glycolide) in the electrospinning solutions, their viscosity increases significantly. The average fiber diameter of the scaffolds also increases with increasing concentration. Moreover, the tensile strength and maximum elongation at break of the scaffold increase with increasing electrospinning concentration. The prepared scaffolds have hydrophobic properties and their wetting angle does not change with the concentration of the electrospinning solution. All poly(lactide-co-glycolide) scaffolds are non-toxic toward fibroblasts of the cell line 3T3-L1, with the highest numbers of cells observed on the surface of scaffolds prepared from the 2-, 3- and 4-wt.% electrospinning solutions. The results of the analysis of mechanical and biological properties indicate that the poly(lactide-co-glycolide) scaffolds prepared from the 4 wt.% electrospinning solution have optimal properties for future applications in skin tissue engineering. This is because the scaffolds prepared from the 2 wt.% and 3 wt.% electrospinning solutions exhibit poor mechanical properties, while those prepared from the 5 wt.% solution have the lowest porosity values, which might be the cause of their poorest biological performance.
]]>Technologies doi: 10.3390/technologies11050136
Authors: Zhen Pan Shunqi Yuan Xi Ren Zhibin He Zhenzhong Wang Shujun Han Yuexin Qi Haifeng Yu Jingang Liu
Nanotechnologies are increasingly used in advanced energy fields. Triboelectric nanogenerators (TENGs) represent a class of new-type flexible energy-harvesting devices with promising application prospects in future human societies. As one of the most important parts of TENG devices, triboelectric materials play key roles in the achievement of high-efficiency power generation. Conventional polymer tribo-negative materials, such as polytetrafluoroethylene (PTFE), polyvinylidene difluoride (PVDF), and the standard polyimide (PI) film with the Kapton® trademark based on pyromellitic anhydride (PMDA) and 4,4′-oxydianiline (ODA), usually suffer from low output performance. In addition, the relationship between molecular structure and triboelectric properties remains a challenge in the search for novel triboelectric materials. In the current work, by incorporating trifluoromethyl (–CF3) functional groups with strong electron-withdrawing character into the backbone, a series of fluorine-containing polyimide (FPI) negative friction layers have been designed and prepared. The derived FPI-1 (6FDA-6FODA), FPI-2 (6FDA-TFMB), and FPI-3 (6FDA-TFMDA) resins possessed good solubility in polar aprotic solvents, such as N,N-dimethylacetamide (DMAc) and N-methyl-2-pyrrolidone (NMP). The PI films obtained via the solution-casting procedure showed glass transition temperatures (Tg) higher than 280 °C according to differential scanning calorimetry (DSC) analyses. The TENG prototypes were successfully fabricated using the developed PI films as the tribo-negative layers. The electron-withdrawing trifluoromethyl (–CF3) units in the molecular backbones of the PI layers provided the devices with an apparently enhanced output performance. The FPI-3 (6FDA-TFMDA) layer-based TENG devices showcased an especially impressive open-circuit voltage and short-circuit current, measuring 277.8 V and 9.54 μA, respectively. These values were 4 to 5 times greater than those of TENGs manufactured using the readily accessible Kapton® film.
]]>Technologies doi: 10.3390/technologies11050135
Authors: Sharon P. Varughese S. Merlin Gilbert Raj T. Jesse Joel Sneha Gautam
The persistent threat posed by infectious pathogens remains a formidable challenge for humanity. Rapidly spreading infectious diseases caused by airborne microorganisms have far-reaching global consequences, imposing substantial costs on society. While various detection technologies have emerged, including biochemical, immunological, and molecular approaches, these methods still exhibit significant limitations such as time-intensive procedures, instability, and the need for specialized operators. This study presents an innovative solution that harnesses the potential of surface acoustic wave (SAW) sensors for the detection of airborne microorganisms. The research involves the establishment of a sensor model within the framework of COMSOL Multiphysics, utilizing a predefined piezoelectric multi-physics interface and employing a 2D modeling approach. Chitosan, selected as the sensing film for the model, interfaces with lithium niobate (LiNbO3), the chosen piezoelectric material responsible for detecting airborne pathogens. The analysis of microbe presence centers on solid displacement and electric potential frequencies, operating within the 850–900 MHz range. Notably, the first and second resonant frequencies are identified at 856 and 859 MHz, respectively. To enhance understanding, this study proposes a novel mathematical model grounded in Stokes’ Law and mass balance equations. This model serves to analyze microbe concentration, offering a fresh perspective on quantifying the presence of airborne pathogens. Through these endeavors, this research contributes to advancing the field of airborne microorganism detection, offering a promising avenue for addressing the challenges posed by infectious diseases.
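The proposed mass-balance model is not given in the abstract, but its Stokes' Law ingredient can be illustrated with the standard terminal settling velocity of a small airborne sphere; the particle radius and density below are illustrative values for a bacterial aerosol, not parameters from the study.

```python
def stokes_settling_velocity(radius_m, rho_particle, rho_fluid=1.2,
                             mu=1.8e-5, g=9.81):
    """Terminal settling velocity of a small sphere in air (Stokes' law):
    v = 2 r^2 (rho_p - rho_f) g / (9 mu), valid at low Reynolds number.
    Defaults: air density 1.2 kg/m^3, air viscosity 1.8e-5 Pa s."""
    return 2.0 * radius_m**2 * (rho_particle - rho_fluid) * g / (9.0 * mu)

# A 1 um-radius aerosol particle of density ~1000 kg/m^3 settles at ~0.12 mm/s,
# which bounds how quickly such microbes reach the sensing film by gravity alone
v = stokes_settling_velocity(1e-6, 1000.0)
```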
]]>Technologies doi: 10.3390/technologies11050134
Authors: Ovi Sarkar Md. Robiul Islam Md. Khalid Syfullah Md. Tohidul Islam Md. Faysal Ahamed Mominul Ahsan Julfikar Haider
Lung-related diseases continue to be a leading cause of global mortality. Timely and precise diagnosis is crucial to save lives, but the availability of testing equipment remains a challenge, often coupled with issues of reliability. Recent research has highlighted the potential of Chest X-ray (CXR) images in identifying various lung diseases, including COVID-19, fibrosis, pneumonia, and more. In this comprehensive study, four publicly accessible datasets have been combined to create a robust dataset comprising 6650 CXR images, categorized into seven distinct disease groups. To effectively distinguish between normal and six different lung-related diseases (namely, bacterial pneumonia, COVID-19, fibrosis, lung opacity, tuberculosis, and viral pneumonia), a Deep Learning (DL) architecture called a Multi-Scale Convolutional Neural Network (MS-CNN) is introduced. The model is adapted to classify a larger number of lung disease classes, a persistent challenge in the field. While prior studies have demonstrated high accuracy in binary and limited-class scenarios, the proposed framework maintains this accuracy across a diverse range of lung conditions. The innovative model harnesses the power of combining predictions from multiple feature maps at different resolution scales, significantly enhancing disease classification accuracy. The approach aims to shorten testing duration compared to the state-of-the-art models, offering a potential solution for expediting medical interventions for patients with lung-related diseases, and integrates explainable AI (XAI) to enhance prediction capability. The results demonstrated an impressive accuracy of 96.05%, with average values for precision, recall, F1-score, and AUC at 0.97, 0.95, 0.95, and 0.94, respectively, for the seven-class classification.
The model exhibited exceptional performance across multi-class classifications, achieving accuracy rates of 100%, 99.65%, 99.21%, 98.67%, and 97.47% for two, three, four, five, and six-class scenarios, respectively. The novel approach not only surpasses many pre-existing state-of-the-art (SOTA) methodologies but also sets a new standard for the diagnosis of lung-affected diseases using multi-class CXR data. Furthermore, the integration of XAI techniques such as SHAP and Grad-CAM enhanced the transparency and interpretability of the model’s predictions. The findings hold immense promise for accelerating and improving the accuracy and confidence of diagnostic decisions in the field of lung disease identification.
]]>Technologies doi: 10.3390/technologies11050133
Authors: Marcos Severt Roberto Casado-Vara Angel Martín del Rey
Malware propagation is a growing concern due to its potential impact on the security and integrity of connected devices in Internet of Things (IoT) network environments. This study investigates parameter estimation for Susceptible-Infectious-Recovered (SIR) and Susceptible–Infectious–Recovered–Susceptible (SIRS) models modeling malware propagation in an IoT network. Synthetic data of malware propagation in the IoT network is generated and a comprehensive comparison is made between two approaches: algorithms based on Monte Carlo methods and Physics-Informed Neural Networks (PINNs). The results show that, based on the infection curve measured in the IoT network, both methods are able to provide accurate estimates of the parameters of the malware propagation model. Furthermore, the results show that the choice of the appropriate method depends on the dynamics of the spreading malware and computational constraints. This work highlights the importance of considering both classical and AI-based approaches and provides a basis for future research on parameter estimation in epidemiological models applied to malware propagation in IoT networks.
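Synthetic infection curves of the kind described here can be generated by integrating the SIR equations directly; a minimal Euler sketch follows, with illustrative parameter values (infection rate β, recovery rate γ) rather than ones fitted to any IoT network.

```python
def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Euler integration of the SIR model:
        dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
    with S, I, R as fractions of the device population.
    Returns one (S, I, R) sample per day."""
    s, i, r = s0, i0, r0
    curve = [(s, i, r)]
    steps_per_day = round(1 / dt)
    for _ in range(days):
        for _ in range(steps_per_day):
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            dr = gamma * i
            s += ds * dt
            i += di * dt
            r += dr * dt
        curve.append((s, i, r))
    return curve

# Illustrative outbreak: R0 = beta/gamma = 4, 1% of devices initially infected
curve = simulate_sir(beta=0.4, gamma=0.1, s0=0.99, i0=0.01, r0=0.0, days=120)
```

A parameter-estimation method (Monte Carlo or PINN) would then be scored on how well it recovers β and γ from the infected fraction of such a curve.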
]]>Technologies doi: 10.3390/technologies11050132
Authors: Francesc Auli-Llinas
The compression of data is fundamental to alleviating the costs of transmitting and storing massive datasets employed in myriad fields of our society. Most compression systems employ an entropy coder in their coding pipeline to remove the redundancy of coded symbols. The entropy-coding stage needs to be efficient, to yield high compression ratios, and fast, to process large amounts of data rapidly. Despite their widespread use, entropy coders are commonly assessed for some particular scenario or coding system. This work provides a general framework to assess and optimize different entropy coders. First, the paper describes three main families of entropy coders, namely those based on variable-to-variable length codes (V2VLC), arithmetic coding (AC), and tabled asymmetric numeral systems (tANS). Then, a low-complexity architecture for the most representative coders of each family is presented: a general version of V2VLC; the MQ, the M, and a fixed-length version of AC; and two different implementations of tANS. These coders are evaluated under different coding conditions in terms of compression efficiency and computational throughput. The results obtained suggest that V2VLC and tANS achieve the highest compression ratios for most coding rates and that the AC coder that uses fixed-length codewords attains the highest throughput. The experimental evaluation discloses the advantages and shortcomings of each entropy-coding scheme, providing insights that may help to select this stage in forthcoming compression systems.
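For readers unfamiliar with these families, a fixed-to-variable prefix coder (Huffman) is the simplest relative of the V2VLC family and shows what an entropy-coding stage does; it is an illustrative toy, not one of the coders evaluated in the paper.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix code assigning shorter bitstrings to more
    frequent symbols (fixed-to-variable; V2VLC generalizes this
    to variable-length input groups)."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate: a single distinct symbol
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, unique tie-breaker, {symbol: code})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def encode(data, code):
    return "".join(code[s] for s in data)

def decode(bits, code):
    inv = {c: s for s, c in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:  # prefix property makes this unambiguous
            out.append(inv[cur])
            cur = ""
    return "".join(out)
```

AC and tANS improve on this by spending fractional bits per symbol, which is where the compression-ratio differences measured in the paper come from.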
]]>Technologies doi: 10.3390/technologies11050131
Authors: Maha Gharaibeh Wlla Abedalaziz Noor Aldeen Alawad Hasan Gharaibeh Ahmad Nasayreh Mwaffaq El-Heis Maryam Altalhi Agostino Forestiero Laith Abualigah
The intricate neuroinflammatory diseases multiple sclerosis (MS) and neuromyelitis optica (NMO) often present similar clinical symptoms, creating challenges in their precise detection via magnetic resonance imaging (MRI). This challenge is further compounded when detecting the active and inactive states of MS. To address this diagnostic problem, we introduce an innovative framework that incorporates state-of-the-art machine learning algorithms applied to features culled from MRI scans by pre-trained deep learning models, VGG-NET and InceptionV3. To develop and test this methodology, we utilized a robust dataset obtained from the King Abdullah University Hospital in Jordan, encompassing cases diagnosed with both MS and NMO. We benchmarked thirteen distinct machine learning algorithms and discovered that support vector machine (SVM) and K-nearest neighbor (KNN) algorithms performed superiorly in our context. Our results demonstrated KNN’s exceptional performance in differentiating between MS and NMO, with precision, recall, F1-score, and accuracy values of 0.98, 0.99, 0.99, and 0.99, respectively, leveraging features extracted from VGG16. In contrast, SVM excelled in classifying active versus inactive states of MS, achieving precision, recall, F1-score, and accuracy values of 0.99, 0.97, 0.98, and 0.98, respectively, leveraging features extracted from VGG16 and VGG19. Our advanced methodology outshines previous studies, providing clinicians with a highly accurate, efficient tool for diagnosing these diseases. The immediate implication of our research is the potential to streamline treatment processes, thereby delivering timely, appropriate care to patients suffering from these complex diseases.
]]>Technologies doi: 10.3390/technologies11050130
Authors: Seetha S Esther Daniel S Durga Jennifer Eunice R Andrew J
The academic and research communities are showing significant interest in the modern and highly promising technology of wireless mesh networks (WMNs) due to their low-cost deployment, self-configuration, self-organization, robustness, scalability, and reliable service coverage. Multicasting is a one-to-many communication technique in which a transmission initiated by a single user is delivered concurrently to one or more groups of destinations. The multicasting protocols are focused on building accurate paths with proper channel optimization techniques. The forwarder nodes of the multicast protocol may behave maliciously, for example by dropping packets or delaying transmissions, causing heavy packet loss in the network. This leads to a reduced packet delivery ratio and throughput of the network. Hence, the forwarder node validation is critical for building a secure network. This research paper presents a secure forwarder selection between a sender and the batch of receivers by utilizing the node’s communication behavior. The parameters of the malicious nodes are analyzed using orthogonal projection and statistical methods to distinguish malicious node behaviors from normal node behaviors based on node actions. The protocol then validates the malicious behaviors and subsequently eliminates them from the forwarder selection process using secure path finding strategies, which lead to dynamic and scalable multicast mesh networks for communication.
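The orthogonal-projection analysis is not detailed in the abstract; as a simple statistical stand-in for the same idea, the sketch below flags candidate forwarders whose packet delivery ratio falls far below the group mean (the threshold k and the packet counts are illustrative assumptions).

```python
from statistics import mean, stdev

def flag_malicious(forwarding, k=1.5):
    """Flag forwarder nodes whose delivery ratio is more than k standard
    deviations below the group mean. A hypothetical stand-in for the
    paper's orthogonal-projection behavior analysis."""
    ratios = {node: delivered / received
              for node, (received, delivered) in forwarding.items()}
    mu = mean(ratios.values())
    sigma = stdev(ratios.values())
    return {node for node, r in ratios.items() if r < mu - k * sigma}

# (received, delivered) packet counts observed per candidate forwarder
observed = {"n1": (200, 194), "n2": (180, 176), "n3": (210, 202),
            "n4": (190, 188), "n5": (205, 82)}  # n5 silently drops packets
```

Nodes flagged this way would simply be excluded from the forwarder set when the multicast tree is rebuilt.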
]]>Technologies doi: 10.3390/technologies11050129
Authors: Alejandro Villanueva Cerón Eduardo López Domínguez Saúl Domínguez Isidro María Auxilio Medina Nieto Jorge De La Calleja Saúl Eduardo Pomares Hernández
In the field of eHealth, several works have proposed telemonitoring systems focused on patients with chronic kidney disease (CKD) undergoing peritoneal dialysis (PD) treatment. Nevertheless, no secondary study presents a comparative analysis of these works regarding the technology readiness level (TRL) framework. The TRL scale goes from 1 to 9, with 1 being the lowest level of readiness and 9 being the highest. This paper analyzes works that propose telemonitoring systems focused on patients with CKD undergoing PD treatment to determine their TRL. We also analyzed the requirements and parameters that the systems in the selected works provide to users for telemonitoring the treatment of patients undergoing PD. Fourteen works were relevant to the present study. Of these works, eight were classified within TRL 9, two were categorized within TRL 7, three were identified within TRL 6, and one within TRL 4. The works reported with the highest TRL partially cover the requirements for appropriate telemonitoring of patients based on the specialized literature; in addition, those works are focused on the treatment of patients in the automated peritoneal dialysis (APD) modality, which limits the care of patients undergoing the continuous ambulatory peritoneal dialysis (CAPD) modality.
]]>Technologies doi: 10.3390/technologies11050128
Authors: Usharani Bhimavarapu Nalini Chintalapudi Gopi Battineni
Lung disease is a respiratory disease that poses a high risk to people worldwide and includes pneumonia and COVID-19. As such, quick and precise identification of lung disease is vital in medical treatment. Early detection and diagnosis can significantly reduce the life-threatening nature of lung diseases and improve the quality of life of human beings. Chest X-ray and computed tomography (CT) scan images are currently the best techniques to detect and diagnose lung infection. Increasing the number of chest X-ray or CT scan images at training time helps address the overfitting dilemma, and multi-class classification of lung diseases must retain meaningful information while avoiding overfitting. Overfitting deteriorates the performance of the model and gives inaccurate results. This study reduces the overfitting issue and computational complexity by proposing a new enhanced kernel convolution function. Alongside the enhanced kernel convolution function, this study used convolutional neural network (CNN) models to determine pneumonia and COVID-19. Each CNN model was applied to the collected dataset to extract the features and later applied these features as input to the classification models. This study shows that extracting deep features from the common layers of the CNN models increased the performance of the classification procedure. The multi-class classification improves the diagnostic performance, and the evaluation metrics improved significantly with the improved support vector machine (SVM). The best results were obtained using the improved SVM classifier fed with the features provided by CNN, and the success rate of the improved SVM was 99.8%.
]]>Technologies doi: 10.3390/technologies11050127
Authors: Shatha Abu Rass Omer Cohen Eliav Bareli Sigal Portnoy
Audio guidance is a common means of helping visually impaired individuals to navigate, thereby increasing their independence. However, the differences between different guidance modalities for locating objects in 3D space have yet to be investigated. The aim of this study was to compare the time, the hand’s path length, and the satisfaction levels of visually impaired individuals using three automatic cueing modalities: pitch sonification, verbal, and vibration. We recruited 30 visually impaired individuals (11 women, average age 39.6 ± 15.0 years), who were asked to locate a small cube, guided by one of three cueing modalities: sonification (a continuous beep that increases in frequency as the hand approaches the cube), verbal prompting (“right”, “forward”, etc.), and vibration (via five motors, attached to different locations on the hand). The three cueing modalities were automatically activated by computerized motion capture systems. The subjects separately answered satisfaction questions for each cueing modality. The main finding was that the time to find the cube was longer using the sonification cueing (p = 0.016). There were no significant differences in the hand path length or the subjects’ satisfaction. It can be concluded that verbal guidance may be the most effective for guiding people with visual impairment to locate an object in a 3D space.
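The sonification cue maps hand-to-cube distance to beep frequency. A minimal mapping of that kind can be sketched as follows; the frequency range and maximum distance are illustrative values, not parameters reported in the study.

```python
def distance_to_pitch(distance_m, d_max=0.6, f_near=1500.0, f_far=300.0):
    """Map hand-to-target distance to beep frequency (Hz): pitch rises
    linearly as the hand approaches the target, clamped at d_max.
    All numeric defaults are illustrative, not from the study."""
    d = min(max(distance_m, 0.0), d_max)
    return f_far + (f_near - f_far) * (1.0 - d / d_max)
```

Driven by a motion-capture estimate of hand position, such a function would update the beep continuously as the participant's hand moves.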
]]>