Algorithms, Volume 17, Issue 7 (July 2024) – 39 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 370 KiB  
Article
Messy Broadcasting in Grid
by Aria Adibi and Hovhannes A. Harutyunyan
Algorithms 2024, 17(7), 310; https://doi.org/10.3390/a17070310 - 12 Jul 2024
Abstract
In classical broadcast models, information is disseminated in synchronous rounds under the constant communication time model, wherein a node may only inform one of its neighbors in each time-unit—also known as the processor-bound model. These models assume either a coordinating leader or that each node has a set of coordinated actions optimized for each originator, which may require nodes to have sufficient storage, processing power, and the ability to determine the originator. This assumption is not always ideal, and a broadcast model based on the node’s local knowledge can sometimes be more effective. Messy models address these issues by eliminating the need for a leader, knowledge of the starting time, and the identity of the originator, relying solely on local knowledge available to each node. This paper investigates the messy broadcast time and optimal scheme in a grid graph, a structure widely used in computer networking systems, particularly in parallel computing, due to its robustness, scalability, fault tolerance, and simplicity. The focus is on scenarios where the originator is located at one of the corner vertices, aiming to understand the efficiency and performance of messy models in such grid structures. Full article
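As a companion sketch to the abstract above, the snippet below simulates round-based dissemination in an m × n grid under the processor-bound model (each informed node calls at most one uninformed neighbour per round). It is a toy with a greedy, fixed neighbour order of my own choosing; it does not implement the paper's messy schemes, and the function name is illustrative.

```python
def broadcast_rounds(m, n, origin=(0, 0)):
    """Round-based broadcasting in an m x n grid, processor-bound model:
    each round, every informed node informs at most one uninformed
    neighbour (greedy, fixed neighbour order)."""
    informed = {origin}
    rounds, total = 0, m * n
    while len(informed) < total:
        newly = set()
        for (x, y) in informed:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (x + dx, y + dy)
                if (0 <= nb[0] < m and 0 <= nb[1] < n
                        and nb not in informed and nb not in newly):
                    newly.add(nb)
                    break  # one call per node per round
        informed |= newly
        rounds += 1
    return rounds
```

The round count from a corner originator is bounded below by the corner-to-corner graph distance m + n − 2, which the simulation respects.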
20 pages, 29618 KiB  
Article
Real-Time Tracking and Detection of Cervical Cancer Precursor Cells: Leveraging SIFT Descriptors in Mobile Video Sequences for Enhanced Early Diagnosis
by Jesus Eduardo Alcaraz-Chavez, Adriana del Carmen Téllez-Anguiano, Juan Carlos Olivares-Rojas and Ricardo Martínez-Parrales
Algorithms 2024, 17(7), 309; https://doi.org/10.3390/a17070309 - 12 Jul 2024
Abstract
Cervical cancer ranks among the leading causes of mortality in women worldwide, underscoring the critical need for early detection to ensure patient survival. While the Pap smear test is widely used, its effectiveness is hampered by the inherent subjectivity of cytological analysis, impacting its sensitivity and specificity. This study introduces an innovative methodology for detecting and tracking precursor cervical cancer cells using SIFT descriptors in video sequences captured with mobile devices. More than one hundred digital images were analyzed from Papanicolaou smears provided by the State Public Health Laboratory of Michoacán, Mexico, along with over 1800 unique examples of cervical cancer precursor cells. SIFT descriptors enabled real-time correspondence of precursor cells, yielding results demonstrating 98.34% accuracy, 98.3% precision, 98.2% recovery rate, and an F-measure of 98.05%. These methods were meticulously optimized for real-time analysis, showcasing significant potential to enhance the accuracy and efficiency of the Pap smear test in early cervical cancer detection. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Computer Vision Applications)
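The descriptor-correspondence step the abstract describes commonly uses Lowe's nearest-neighbour ratio test. The sketch below applies that standard test to toy descriptor vectors with only the standard library; the descriptors, the 0.75 ratio, and the function names are illustrative assumptions, not values taken from the paper.

```python
import math

def euclidean(d1, d2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def ratio_test_matches(query, train, ratio=0.75):
    """Lowe-style ratio test: keep a match only when the nearest train
    descriptor is clearly closer than the second nearest."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((euclidean(q, t), ti) for ti, t in enumerate(train))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

Ambiguous correspondences (two train descriptors at nearly the same distance) are rejected, which is what makes the test robust for tracking cells across frames.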
20 pages, 4618 KiB  
Article
Automatic Vertical Parking Reference Trajectory Based on Improved Immune Shark Smell Optimization
by Yan Chen, Gang Liu, Longda Wang and Bing Xia
Algorithms 2024, 17(7), 308; https://doi.org/10.3390/a17070308 - 11 Jul 2024
Abstract
Parking path optimization is the principal problem of automatic vertical parking (AVP); however, it is difficult to determine a collision avoiding, smooth, and accurate optimized parking path using traditional parking reference trajectory optimization methods. In order to implement high-performance automatic parking reference trajectory optimization, we establish an automatic parking reference trajectory optimization model using cubic spline interpolation, and we propose an improved immune shark smell optimization (IISSO) to solve it. Firstly, we take the length of the parking reference trajectory as the optimization objective, and we introduce an intelligent automatic parking path optimization model using cubic spline interpolation. Secondly, the improved immune shark optimization algorithm combines the immune, refraction, and Gaussian variation mechanisms, thus effectively improving its global optimization ability. The simulation results for the parking path optimization experiments indicate that the proposed IISSO has a higher optimization accuracy and faster calculation speed; hence, it can obtain a parking path with higher optimization performance. Full article
(This article belongs to the Special Issue Algorithms for Complex Problems)
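To illustrate the kind of objective the abstract optimises (parking reference trajectory length), here is a generic Gaussian-mutation improver over intermediate waypoints. This is a hedged stand-in, not the paper's IISSO: it has no immune, refraction, or spline components, and all names and parameters are my own.

```python
import math
import random

def path_length(points):
    return sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))

def optimize_waypoints(start, goal, n_mid=3, iters=500, seed=1):
    """Toy metaheuristic: shorten a path by Gaussian mutation of
    intermediate waypoints, keeping improvements only."""
    rng = random.Random(seed)
    mid = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n_mid)]
    best = path_length([start] + mid + [goal])
    for _ in range(iters):
        cand = [(x + rng.gauss(0, 0.5), y + rng.gauss(0, 0.5))
                for x, y in mid]
        cost = path_length([start] + cand + [goal])
        if cost < best:
            mid, best = cand, cost
    return best
```

The straight-line distance between start and goal is a lower bound on any feasible value, which gives a quick sanity check on the optimizer.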
14 pages, 777 KiB  
Article
Evaluating the Expressive Range of Super Mario Bros Level Generators
by Hans Schaa and Nicolas A. Barriga
Algorithms 2024, 17(7), 307; https://doi.org/10.3390/a17070307 - 11 Jul 2024
Abstract
Procedural Content Generation for video games (PCG) is widely used by today’s video game industry to create huge open worlds or enhance replayability. However, there is little scientific evidence that these systems produce high-quality content. In this document, we evaluate three open-source automated level generators for Super Mario Bros in addition to the original levels used for training. These are based on Genetic Algorithms, Generative Adversarial Networks, and Markov Chains. The evaluation was performed through an Expressive Range Analysis (ERA) on 200 levels with nine metrics. The results show how analyzing the algorithms’ expressive range can help us evaluate the generators as a preliminary measure to study whether they respond to users’ needs. This method allows us to recognize potential problems early in the content generation process, in addition to taking action to guarantee quality content when a generator is used. Full article
(This article belongs to the Special Issue Algorithms for Games AI)
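Expressive Range Analysis, as used above, histograms generated levels over a space of metrics. The sketch below does this for two toy metrics on string-encoded levels; the metrics, tile symbols, and bin count are illustrative assumptions, not the paper's nine metrics.

```python
def leniency(level):
    # toy metric: fraction of safe tiles ('-'); 'x' marks a hazard
    return level.count('-') / len(level)

def linearity(level):
    # toy metric: fewer terrain raises ('^') means a flatter, more linear level
    return 1.0 - level.count('^') / len(level)

def expressive_range(levels, bins=10):
    """ERA sketch: count generated levels per cell of a 2-D metric grid."""
    grid = {}
    for lv in levels:
        cell = (min(int(leniency(lv) * bins), bins - 1),
                min(int(linearity(lv) * bins), bins - 1))
        grid[cell] = grid.get(cell, 0) + 1
    return grid
```

A generator whose levels all land in a few cells has a narrow expressive range, which is exactly the early warning sign the article looks for.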
20 pages, 694 KiB  
Article
Performance Evaluation of Fractional Proportional–Integral–Derivative Controllers Tuned by Heuristic Algorithms for Nonlinear Interconnected Tanks
by Raúl Pazmiño, Wilson Pavon, Matthew Armstrong and Silvio Simani
Algorithms 2024, 17(7), 306; https://doi.org/10.3390/a17070306 - 10 Jul 2024
Abstract
This article presents an in-depth analysis of three advanced strategies to tune fractional PID (FOPID) controllers for a nonlinear system of interconnected tanks, simulated using MATLAB. The study focuses on evaluating the performance characteristics of system responses controlled by FOPID controllers tuned through three heuristic algorithms: Ant Colony Optimization (ACO), Grey Wolf Optimizer (GWO), and Flower Pollination Algorithm (FPA). Each algorithm aims to minimize its respective cost function using various performance metrics. The nonlinear model was linearized around an equilibrium point using Taylor Series expansion and Laplace transforms to facilitate control. The FPA algorithm performed better with the lowest Integral Square Error (ISE) criterion value (297.83) and faster convergence in constant values and fractional orders. This comprehensive evaluation underscores the importance of selecting the appropriate tuning strategy and performance index, demonstrating that the FPA provides the most efficient and robust tuning for FOPID controllers in nonlinear systems. The results highlight the efficacy of meta-heuristic algorithms in optimizing complex control systems, providing valuable insights for future research and practical applications, thereby contributing to the advancement of control systems engineering. Full article
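The ISE criterion used above to compare tunings can be sketched concretely. The snippet below scores an integer-order PID (not a fractional FOPID, which requires fractional-order operators) on a toy first-order tank model dh/dt = (u − h)/τ; the model, gains, and names are illustrative assumptions.

```python
def simulate_ise(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """Integer-order PID on a first-order tank, scored by the
    Integral Square Error (ISE) criterion."""
    tau, h, integ, ise = 1.0, 0.0, 0.0, 0.0
    prev_err = setpoint  # avoids a derivative kick at the first step
    for _ in range(steps):
        err = setpoint - h
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        h += (u - h) / tau * dt      # forward-Euler tank dynamics
        prev_err = err
        ise += err * err * dt
    return ise
```

A reasonable tuning should yield a much lower ISE than a sluggish one, which is the comparison the heuristic tuners automate.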
18 pages, 910 KiB  
Article
Equity in Transportation Asset Management: A Proposed Framework
by Sara Arezoumand and Omar Smadi
Algorithms 2024, 17(7), 305; https://doi.org/10.3390/a17070305 - 9 Jul 2024
Abstract
Transportation asset management has historically overlooked equity considerations. However, recently, there has been a significant increase in concerns about this issue, leading to a range of research and practices aimed at achieving more equitable outcomes. Yet, addressing equity is challenging and time-consuming, given its complexity and multifaceted nature. Several factors can significantly impact the outcome of an analysis, including the definition of equity, the evaluation and quantification of its impacts, and the community classification. As a result, there can be a wide range of interpretations of what constitutes equity. Therefore, there is no single correct or incorrect approach for equity evaluation, and different perspectives, impacts, and analysis methods could be considered for this purpose. This study reviews previous research on how transportation agencies are integrating equity into transportation asset management, particularly pavement management systems. The primary objective is to investigate important equity factors for pavement management and propose a prototype framework that integrates economic, environmental, and social equity considerations into the decision-making process for pavement maintenance, rehabilitation, and reconstruction projects. The proposed framework consists of two main steps: (1) defining objectives based on the three equity dimensions, and (2) analyzing key factors for identifying underserved areas through a case study approach. The case study analyzed pavement condition and sociodemographic data for California’s Bay Area. Statistical analysis and a machine learning method revealed that areas with higher poverty rates and worse air quality tend to have poorer pavement conditions, highlighting the need to consider these factors when defining underserved areas in Bay Area and promoting equity in pavement management decision-making. 
The proposed framework incorporates an optimization problem to simultaneously minimize disparities in pavement conditions between underserved and other areas, reduce greenhouse gas emissions from construction and traffic disruptions, and maximize overall network pavement condition subject to budget constraints. By incorporating all three equity aspects into a quantitative decision-support framework with specific objectives, this study proposes a novel approach for transportation agencies to promote sustainable and equitable asset management practices. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities)
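One concrete way to fold an equity objective into budget-constrained project selection, in the spirit of the framework above, is to boost the score of projects in underserved areas before a greedy knapsack pass. This is a minimal sketch under my own assumptions (field names, the boost factor, and the greedy rule are all illustrative, not the paper's optimization model).

```python
def select_projects(projects, budget, equity_weight=2.0):
    """Greedy budget allocation: rank by condition gain per unit cost,
    boosted for projects in underserved areas."""
    def score(p):
        boost = equity_weight if p["underserved"] else 1.0
        return boost * p["gain"] / p["cost"]
    chosen, spent = [], 0.0
    for p in sorted(projects, key=score, reverse=True):
        if spent + p["cost"] <= budget:
            chosen.append(p["id"])
            spent += p["cost"]
    return chosen
```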
20 pages, 1477 KiB  
Article
Sequential Convex Programming for Nonlinear Optimal Control in UAV Trajectory Planning
by Yong Li, Qidan Zhu and Ahsan Elahi
Algorithms 2024, 17(7), 304; https://doi.org/10.3390/a17070304 - 8 Jul 2024
Abstract
In this paper, an algorithm is proposed to solve the non-convex optimization problem using sequential convex programming. An approximation method was used to solve the collision avoidance constraint. An iterative approach was utilized to estimate the non-convex constraints, replacing them with their linear approximations. Through the simulation, we can see that this method allows quadcopters to take off from a given initial position and fly to the desired final position within a specified flight time. It is guaranteed that the quadcopters will not collide with each other in different scenarios. Full article
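The core trick the abstract describes (replacing a non-convex avoidance constraint by its linearization about the current iterate, then re-solving) can be shown on a one-dimensional toy: minimize (x − x0)² subject to |x| ≥ d. This sketch is my own reduction, not the paper's multi-quadcopter formulation; each convex subproblem is solved in closed form.

```python
def scp_avoid(x0, d, iters=10):
    """Sequential convex programming on a 1-D toy: minimize (x - x0)^2
    s.t. |x| >= d, linearizing the constraint about the iterate."""
    x = x0 if x0 != 0 else d  # initial guess
    for _ in range(iters):
        s = 1.0 if x >= 0 else -1.0       # linearize |x| ~= s * x
        # convex subproblem: min (x - x0)^2  s.t.  s*x >= d
        x = x0 if s * x0 >= d else s * d  # projection onto the half-space
    return x
```

The iterate settles on the feasible point on the same side as the current guess, which is the characteristic (local) behaviour of SCP.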
20 pages, 483 KiB  
Article
On Implementing a Two-Step Interior Point Method for Solving Linear Programs
by Sajad Fathi Hafshejani, Daya Gaur and Robert Benkoczi
Algorithms 2024, 17(7), 303; https://doi.org/10.3390/a17070303 - 8 Jul 2024
Abstract
A new two-step interior point method for solving linear programs is presented. The technique uses a convex combination of the auxiliary and central points to compute the search direction. To update the central point, we find the best value for step size such that the feasibility condition is held. Since we use the information from the previous iteration to find the search direction, the inverse of the system is evaluated only once every iteration. A detailed empirical evaluation is performed on NETLIB instances, which compares two variants of the approach to the primal-dual log barrier interior point method. Results show that the proposed method is faster. The method reduces the number of iterations and CPU time(s) by 27% and 18%, respectively, on NETLIB instances tested compared to the classical interior point algorithm. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
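For readers unfamiliar with interior-point methods, the central-path idea behind both the classical and the two-step variant can be shown on a one-variable LP. The sketch below follows the log-barrier central path for min c·x subject to 0 ≤ x ≤ 1 with damped Newton steps; it is a pedagogical toy, not the paper's two-step method, and all parameters are my own.

```python
def barrier_minimize(c, mu0=1.0, shrink=0.5, outer=30, inner=50):
    """Log-barrier interior point on the toy LP min c*x, 0 <= x <= 1.
    Newton steps on f(x) = c*x - mu*(log x + log(1 - x)) trace the
    central path as mu -> 0."""
    x, mu = 0.5, mu0
    for _ in range(outer):
        for _ in range(inner):
            g = c - mu / x + mu / (1 - x)      # f'(x)
            h = mu / x**2 + mu / (1 - x)**2    # f''(x) > 0 (convex barrier)
            step = g / h
            while not (0 < x - step < 1):      # damp to stay strictly interior
                step *= 0.5
            x -= step
            if abs(g) < 1e-12:
                break
        mu *= shrink
    return x
```

As mu shrinks, the iterate converges to the LP optimum at the appropriate boundary vertex.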
25 pages, 2248 KiB  
Article
SCMs: Systematic Conglomerated Models for Audio Cough Signal Classification
by Sunil Kumar Prabhakar and Dong-Ok Won
Algorithms 2024, 17(7), 302; https://doi.org/10.3390/a17070302 - 8 Jul 2024
Abstract
A common and natural physiological response of the human body is cough, which tries to push air and other wastage thoroughly from the airways. Due to environmental factors, allergic responses, pollution or some diseases, cough occurs. A cough can be either dry or wet depending on the amount of mucus produced. A characteristic feature of the cough is the sound, which is a quacking sound mostly. Human cough sounds can be monitored continuously, and so, cough sound classification has attracted a lot of interest in the research community in the last decade. In this research, three systematic conglomerated models (SCMs) are proposed for audio cough signal classification. The first conglomerated technique utilizes the concept of robust models like the Cross-Correlation Function (CCF) and Partial Cross-Correlation Function (PCCF) model, Least Absolute Shrinkage and Selection Operator (LASSO) model, elastic net regularization model with Gabor dictionary analysis and efficient ensemble machine learning techniques, the second technique utilizes the concept of stacked conditional autoencoders (SAEs) and the third technique utilizes the concept of using some efficient feature extraction schemes like Tunable Q Wavelet Transform (TQWT), sparse TQWT, Maximal Information Coefficient (MIC), Distance Correlation Coefficient (DCC) and some feature selection techniques like the Binary Tunicate Swarm Algorithm (BTSA), aggregation functions (AFs), factor analysis (FA), explanatory factor analysis (EFA) classified with machine learning classifiers, kernel extreme learning machine (KELM), arc-cosine ELM, Rat Swarm Optimization (RSO)-based KELM, etc. The techniques are utilized on publicly available datasets, and the results show that the highest classification accuracy of 98.99% was obtained when sparse TQWT with AF was implemented with an arc-cosine ELM classifier. Full article
(This article belongs to the Special Issue Quantum and Classical Artificial Intelligence)
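The first conglomerated model above builds on the Cross-Correlation Function (CCF). A minimal, standard-library version of the normalised sample CCF on short signals is sketched below; the signals and the lag convention (y delayed relative to x) are my own illustrative assumptions.

```python
import math

def ccf(x, y, lag):
    """Normalised sample cross-correlation of two equal-length signals
    at a non-negative lag (y delayed relative to x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((x[i] - mx) * (y[i + lag] - my) for i in range(n - lag))
    den = math.sqrt(sum((v - mx) ** 2 for v in x)
                    * sum((v - my) ** 2 for v in y))
    return num / den
```

A cough template shifted by one sample correlates more strongly at lag 1 than at lag 0, which is the alignment cue such features exploit.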
20 pages, 2923 KiB  
Article
To Cache or Not to Cache
by Steven Lyons, Jr. and Raju Rangaswami
Algorithms 2024, 17(7), 301; https://doi.org/10.3390/a17070301 - 7 Jul 2024
Abstract
Unlike conventional CPU caches, non-datapath caches, such as host-side flash caches which are extensively used as storage caches, have distinct requirements. While every cache miss results in a cache update in a conventional cache, non-datapath caches allow for the flexibility of selective caching, i.e., the option of not having to update the cache on each miss. We propose a new, generalized, bimodal caching algorithm, Fear Of Missing Out (FOMO), for managing non-datapath caches. Being generalized has the benefit of allowing any datapath cache replacement policy, such as LRU, ARC, or LIRS, to be augmented by FOMO to make these datapath caching algorithms better suited for non-datapath caches. Operating in two states, FOMO is selective—it selectively disables cache insertion and replacement depending on the learned behavior of the workload. FOMO is lightweight and tracks inexpensive metrics in order to identify these workload behaviors effectively. FOMO is evaluated using three different cache replacement policies against the current state-of-the-art non-datapath caching algorithms, using five different storage system workload repositories (totaling 176 workloads) for six different cache size configurations, each sized as a percentage of each workload’s footprint. Our extensive experimental analysis reveals that FOMO can improve upon other non-datapath caching algorithms across a range of production storage workloads, while also reducing the write rate. Full article
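The selective-insertion idea above (a non-datapath cache may simply decline to cache a miss) can be sketched with an LRU base policy. This is a simplified bimodal heuristic in the spirit of FOMO, not the paper's actual state machine: insertion is disabled while the recent hit rate is low, so cold scans do not pollute the cache. The class name and thresholds are my own.

```python
from collections import OrderedDict

class SelectiveLRU:
    """LRU cache with bimodal selective insertion (FOMO-flavoured toy)."""
    def __init__(self, capacity, window=100, threshold=0.1):
        self.cap, self.window, self.threshold = capacity, window, threshold
        self.cache = OrderedDict()
        self.recent = []              # sliding window of hit/miss flags

    def access(self, key):
        hit = key in self.cache
        self.recent.append(hit)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        if hit:
            self.cache.move_to_end(key)
            return True
        hit_rate = sum(self.recent) / len(self.recent)
        # insert on miss only if the workload is showing reuse,
        # or while the cache is still filling
        if hit_rate >= self.threshold or len(self.cache) < self.cap:
            self.cache[key] = True
            if len(self.cache) > self.cap:
                self.cache.popitem(last=False)   # evict LRU victim
        return False
```

Because insertions are skipped during scan-like phases, the write rate to the cache device also drops, one of the benefits the evaluation reports.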
13 pages, 2390 KiB  
Article
Continuous Recognition of Teachers’ Hand Signals for Students with Attention Deficits
by Ivane Delos Santos Chen, Chieh-Ming Yang, Shang-Shu Wu, Chih-Kang Yang, Mei-Juan Chen, Chia-Hung Yeh and Yuan-Hong Lin
Algorithms 2024, 17(7), 300; https://doi.org/10.3390/a17070300 - 7 Jul 2024
Abstract
In the era of inclusive education, students with attention deficits are integrated into the general classroom. To ensure a seamless transition of students’ focus towards the teacher’s instruction throughout the course and to align with the teaching pace, this paper proposes a continuous recognition algorithm for capturing teachers’ dynamic gesture signals. This algorithm aims to offer instructional attention cues for students with attention deficits. According to the body landmarks of the teacher’s skeleton by using vision and machine learning-based MediaPipe BlazePose, the proposed method uses simple rules to detect the teacher’s hand signals dynamically and provides three kinds of attention cues (Pointing to left, Pointing to right, and Non-pointing) during the class. Experimental results show the average accuracy, sensitivity, specificity, precision, and F1 score achieved 88.31%, 91.03%, 93.99%, 86.32%, and 88.03%, respectively. By analyzing non-verbal behavior, our method of competent performance can replace verbal reminders from the teacher and be helpful for students with attention deficits in inclusive education. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
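A simple-rule classifier over skeleton landmarks, of the kind the abstract describes, can be sketched as follows. The landmark layout mimics normalised pose output (x in [0, 1], image coordinates); the rule, the margin, and the dictionary keys are my own illustrative assumptions, not the paper's thresholds.

```python
def classify_hand_signal(landmarks, margin=0.15):
    """Classify a pointing gesture from normalised pose landmarks:
    a wrist extended sideways past its shoulder by `margin` counts
    as pointing (simplified rule; ignores image mirroring)."""
    lw, rw = landmarks["left_wrist"], landmarks["right_wrist"]
    ls, rs = landmarks["left_shoulder"], landmarks["right_shoulder"]
    if rw["x"] > rs["x"] + margin:
        return "Pointing to right"
    if lw["x"] < ls["x"] - margin:
        return "Pointing to left"
    return "Non-pointing"
```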
32 pages, 2218 KiB  
Article
Mixed Graph Colouring as Scheduling a Partially Ordered Set of Interruptible Multi-Processor Tasks with Integer Due Dates
by Evangelina I. Mihova and Yuri N. Sotskov
Algorithms 2024, 17(7), 299; https://doi.org/10.3390/a17070299 - 6 Jul 2024
Abstract
We investigate relationships between scheduling problems with the bottleneck objective functions (minimising makespan or maximal lateness) and problems of optimal colourings of the mixed graphs. The investigated scheduling problems have integer durations of the multi-processor tasks (operations), integer release dates and integer due dates of the given jobs. In the studied scheduling problems, it is required to find an optimal schedule for processing the partially ordered operations, given that operation interruptions are allowed and indicated subsets of the unit-time operations must be processed simultaneously. First, we show that the input data for any considered scheduling problem can be completely determined by the corresponding mixed graph. Second, we prove that solvable scheduling problems can be reduced to problems of finding optimal colourings of corresponding mixed graphs. Third, finding an optimal colouring of the mixed graph is equivalent to the considered scheduling problem determined by the same mixed graph. Finally, due to the proven equivalence of the considered optimisation problems, most of the results that were proven for the optimal colourings of mixed graphs generate similar results for considered scheduling problems, and vice versa. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
20 pages, 1368 KiB  
Article
A Sparsity-Invariant Model via Unifying Depth Prediction and Completion
by Shuling Wang, Fengze Jiang and Xiaojin Gong
Algorithms 2024, 17(7), 298; https://doi.org/10.3390/a17070298 - 6 Jul 2024
Abstract
The development of a sparse-invariant depth completion model capable of handling varying levels of input depth sparsity is highly desirable in real-world applications. However, existing sparse-invariant models tend to degrade when the input depth points are extremely sparse. In this paper, we propose a new model that combines the advantageous designs of depth completion and monocular depth estimation tasks to achieve sparse invariance. Specifically, we construct a dual-branch architecture with one branch dedicated to depth prediction and the other to depth completion. Additionally, we integrate the multi-scale local planar module in the decoders of both branches. Experimental results on the NYU Depth V2 benchmark and the OPPO prototype dataset equipped with the Spot-iToF316 sensor demonstrate that our model achieves reliable results even in cases with irregularly distributed, limited or absent depth information. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Image Understanding and Analysis)
21 pages, 2544 KiB  
Article
Crystal Symmetry-Inspired Algorithm for Optimal Design of Contemporary Mono Passivated Emitter and Rear Cell Solar Photovoltaic Modules
by Ram Ishwar Vais, Kuldeep Sahay, Tirumalasetty Chiranjeevi, Ramesh Devarapalli and Łukasz Knypiński
Algorithms 2024, 17(7), 297; https://doi.org/10.3390/a17070297 - 6 Jul 2024
Abstract
A metaheuristic algorithm named the Crystal Structure Algorithm (CrSA), which is inspired by the symmetric arrangement of atoms, molecules, or ions in crystalline minerals, has been used for the accurate modeling of Mono Passivated Emitter and Rear Cell (PERC) WSMD-545 and CS7L-590 MS solar photovoltaic (PV) modules. The suggested algorithm is a concise and parameter-free approach that does not need the identification of any intrinsic parameter during the optimization stage. It is based on crystal structure generation by combining the basis and lattice point. The proposed algorithm is adopted to minimize the sum of the squares of the errors at the maximum power point, as well as the short circuit and open circuit points. Several runs are carried out to examine the V-I characteristics of the PV panels under consideration and the nature of the derived parameters. The parameters generated by the proposed technique offer the lowest error over several executions, indicating that it should be implemented in the present scenario. To validate the performance of the proposed approach, convergence curves of Mono PERC WSMD-545 and CS7L-590 MS PV modules obtained using the CrSA are compared with the convergence curves obtained using the recent optimization algorithms (OAs) in the literature. It has been observed that the proposed approach exhibited the fastest rate of convergence on each of the PV panels. Full article
(This article belongs to the Collection Feature Paper in Metaheuristic Algorithms and Applications)
11 pages, 250 KiB  
Article
Hardness and Approximability of Dimension Reduction on the Probability Simplex
by Roberto Bruno
Algorithms 2024, 17(7), 296; https://doi.org/10.3390/a17070296 - 6 Jul 2024
Abstract
Dimension reduction is a technique used to transform data from a high-dimensional space into a lower-dimensional space, aiming to retain as much of the original information as possible. This approach is crucial in many disciplines like engineering, biology, astronomy, and economics. In this paper, we consider the following dimensionality reduction instance: Given an n-dimensional probability distribution p and an integer m<n, we aim to find the m-dimensional probability distribution q that is the closest to p, using the Kullback–Leibler divergence as the measure of closeness. We prove that the problem is strongly NP-hard, and we present an approximation algorithm for it. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers from IWOCA 2024)
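The KL-based closeness measure above is easy to make concrete. Below, `kl` is the standard Kullback–Leibler divergence, and `reduce_dims` is a hedged illustrative heuristic (not the paper's approximation algorithm): it merges the two smallest masses until m bins remain and reports the divergence of p from the merged distribution spread back uniformly over each group.

```python
import math

def kl(p, q):
    """D(p || q); assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def reduce_dims(p, m):
    """Greedy merge heuristic: repeatedly combine the two smallest
    masses until m bins remain. Returns the m-dim q and the KL cost
    of representing p by q spread uniformly over each merged group."""
    groups = [[i] for i in range(len(p))]
    mass = list(p)
    while len(mass) > m:
        i, j = sorted(range(len(mass)), key=mass.__getitem__)[:2]
        i, j = min(i, j), max(i, j)
        groups[i] += groups.pop(j)     # j > i, so index i stays valid
        mass[i] += mass.pop(j)
    q_full = [0.0] * len(p)
    for g, qm in zip(groups, mass):
        for idx in g:
            q_full[idx] = qm / len(g)
    return mass, kl(p, q_full)
```

The NP-hardness result above says no polynomial algorithm is expected to find the truly closest q, which is why such approximations and heuristics matter.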
14 pages, 358 KiB  
Article
VMP-ER: An Efficient Virtual Machine Placement Algorithm for Energy and Resources Optimization in Cloud Data Center
by Hasanein D. Rjeib and Gabor Kecskemeti
Algorithms 2024, 17(7), 295; https://doi.org/10.3390/a17070295 - 5 Jul 2024
Abstract
Cloud service providers deliver computing services on demand using the Infrastructure as a Service (IaaS) model. In a cloud data center, several virtual machines (VMs) can be hosted on a single physical machine (PM) with the help of virtualization. The virtual machine placement (VMP) involves assigning VMs across various physical machines, which is a crucial process impacting energy draw and resource usage in the cloud data center. Nonetheless, finding an effective settlement is challenging owing to factors like hardware heterogeneity and the scalability of cloud data centers. This paper proposes an efficient algorithm named VMP-ER aimed at optimizing power consumption and reducing resource wastage. Our algorithm achieves this by decreasing the number of running physical machines, and it gives priority to energy-efficient servers. Additionally, it improves resource utilization across physical machines, thus minimizing wastage and ensuring balanced resource allocation. Full article
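The placement strategy the abstract outlines (fewer running physical machines, energy-efficient servers first) can be sketched as a greedy first-fit-decreasing packer. This is a generic sketch under my own assumptions (single resource dimension, field names, and efficiency metric), not the VMP-ER algorithm itself.

```python
def place_vms(vms, hosts):
    """Greedy VM placement: largest VMs first, onto the most
    energy-efficient host with room; unused hosts can be powered off."""
    placement = {}
    free = {h["id"]: h["capacity"] for h in hosts}
    order = sorted(hosts, key=lambda h: h["watts_per_unit"])  # efficient first
    for vm in sorted(vms, key=lambda v: v["demand"], reverse=True):
        for h in order:
            if free[h["id"]] >= vm["demand"]:
                placement[vm["id"]] = h["id"]
                free[h["id"]] -= vm["demand"]
                break
    used = {placement[v] for v in placement}
    return placement, len(used)
```

Consolidating all three VMs onto the single efficient host, as in the test below, is exactly the "fewer running PMs" effect the algorithm targets.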
25 pages, 3881 KiB  
Article
Logical Execution Time and Time-Division Multiple Access in Multicore Embedded Systems: A Case Study
by Carlos-Antonio Mosqueda-Arvizu, Julio-Alejandro Romero-González, Diana-Margarita Córdova-Esparza, Juan Terven, Ricardo Chaparro-Sánchez and Juvenal Rodríguez-Reséndiz
Algorithms 2024, 17(7), 294; https://doi.org/10.3390/a17070294 - 5 Jul 2024
Abstract
The automotive industry has recently adopted multicore processors and microcontrollers to meet the requirements of new features, such as autonomous driving, and comply with the latest safety standards. However, inter-core communication poses a challenge in ensuring real-time requirements such as time determinism and low latencies. Concurrent access to shared buffers makes predicting the flow of data difficult, leading to decreased algorithm performance. This study explores the integration of Logical Execution Time (LET) and Time-Division Multiple Access (TDMA) models in multicore embedded systems to address the challenges in inter-core communication by synchronizing read/write operations across different cores, significantly reducing latency variability and improving system predictability and consistency. Experimental results demonstrate that this integrated approach eliminates data loss and maintains fixed operation rates, achieving a consistent latency of 11 ms. The LET-TDMA method reduces latency variability to approximately 1 ms, maintaining a maximum delay of 1.002 ms and a minimum delay of 1.001 ms, compared to the variability in the LET-only method, which ranged from 3.2846 ms to 8.9257 ms for different configurations. Full article
(This article belongs to the Special Issue Scheduling: Algorithms and Real-World Applications)
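The mechanism the abstract describes can be sketched in a few lines of Python (the period length and slot layout below are hypothetical, not the paper's configuration): each core may write the shared buffer only during its TDMA slot, and under LET the written value becomes visible only at the next logical period boundary, so the latency observed by the reader is fixed regardless of when the writer actually finishes.

```python
# Toy model of LET + TDMA inter-core communication (illustrative only).
PERIOD = 4             # logical period in ticks (hypothetical)
SLOTS = {0: 0, 1: 2}   # tick offset at which each core owns the bus

def publish_tick(core, t):
    """Earliest tick >= t at which `core`'s TDMA slot occurs."""
    offset = SLOTS[core]
    return (t - offset + PERIOD - 1) // PERIOD * PERIOD + offset

def visible_tick(write_tick):
    """LET semantics: a written value becomes visible to readers only
    at the next logical period boundary, making latency deterministic."""
    return (write_tick // PERIOD + 1) * PERIOD

# Core 0 finishes computing at tick 5: it waits for its slot (tick 8),
# and the value becomes readable at the following boundary (tick 12).
w = publish_tick(0, 5)   # -> 8
v = visible_tick(w)      # -> 12
```

The fixed `visible_tick` boundary is what removes the latency jitter reported above for the LET-only configuration: every reader observes the same, period-aligned delay.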
21 pages, 1004 KiB  
Article
A Histogram Publishing Method under Differential Privacy That Involves Balancing Small-Bin Availability First
by Jianzhang Chen, Shuo Zhou, Jie Qiu, Yixin Xu, Bozhe Zeng, Wanchuan Fang, Xiangying Chen, Yipeng Huang, Zhengquan Xu and Youqin Chen
Algorithms 2024, 17(7), 293; https://doi.org/10.3390/a17070293 - 4 Jul 2024
Viewed by 346
Abstract
Differential privacy (DP), a cornerstone of privacy-preserving techniques, plays an indispensable role in ensuring the secure handling and sharing of sensitive data analyses across domains such as census, healthcare, and social networks. Histograms, serving as a visually compelling tool for presenting analytical outcomes, are widely employed in these sectors. Currently, numerous algorithms for publishing histograms under differential privacy have been developed, striving to balance privacy protection with the provision of useful data. Nonetheless, the pivotal challenge of effectively enhancing the precision of small bins (those intervals that are narrowly defined or contain a relatively small number of data points) within histograms has yet to receive adequate attention and in-depth investigation from experts. In standard DP histogram publishing, adding noise without regard for bin size can result in small data bins being disproportionately influenced by noise, potentially severely impairing the overall accuracy of the histogram. In response to this challenge, this paper introduces the SReB_GCA sanitization algorithm designed to enhance the accuracy of small bins in DP histograms. The SReB_GCA approach involves sorting the bins from smallest to largest and applying a greedy grouping strategy, with a predefined lower bound on the mean relative error required for a bin to be included in a group. Our theoretical analysis reveals that sorting bins in ascending order prior to grouping effectively prioritizes the accuracy of smaller bins. SReB_GCA ensures strict ϵ-DP compliance and strikes a careful balance between reconstruction error and noise error, thereby not only improving the accuracy of small bins but also approximately optimizing the mean relative error of the entire histogram. 
To validate the efficiency of our proposed SReB_GCA method, we conducted extensive experiments using four diverse datasets, including two real-life datasets and two synthetic ones. The experimental results, quantified by the Kullback–Leibler Divergence (KLD), show that the SReB_GCA algorithm achieves substantial performance enhancement compared to the baseline method (DP_BASE) and several other established approaches for differential privacy histogram publication. Full article
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)
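The sort-then-group idea behind SReB_GCA can be sketched as follows; the merge threshold `group_gap` and the per-group Laplace budget are illustrative stand-ins for the paper's mean-relative-error criterion, not the authors' exact algorithm.

```python
import random

def laplace(scale):
    """Laplace(0, scale) noise as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def grouped_dp_histogram(counts, epsilon, group_gap=5.0):
    """Sketch of small-bins-first grouping: sort bins ascending, greedily
    merge runs of similar counts, spend one Laplace draw per group, and
    give every member bin the noisy group mean."""
    order = sorted(range(len(counts)), key=lambda i: counts[i])
    groups, cur = [], [order[0]]
    for i in order[1:]:
        if counts[i] - counts[cur[0]] <= group_gap:
            cur.append(i)                # small, similar bins merge first
        else:
            groups.append(cur)
            cur = [i]
    groups.append(cur)

    out = [0.0] * len(counts)
    for g in groups:
        mean = sum(counts[i] for i in g) / len(g)
        noisy = mean + laplace(1.0 / epsilon) / len(g)  # noise averages down
        for i in g:
            out[i] = noisy
    return out
```

Because every bin in a group shares one noisy mean, the noise error per bin shrinks with the group size, while the spread of counts inside a group contributes reconstruction error — the trade-off described above.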
19 pages, 925 KiB  
Article
Central Kurdish Text-to-Speech Synthesis with Novel End-to-End Transformer Training
by Hawraz A. Ahmad and Tarik A. Rashid
Algorithms 2024, 17(7), 292; https://doi.org/10.3390/a17070292 - 3 Jul 2024
Viewed by 485
Abstract
Recent advancements in text-to-speech (TTS) models have aimed to streamline the two-stage process into a single-stage training approach. However, many single-stage models still lag behind in audio quality, particularly when handling Kurdish text and speech. There is a critical need to enhance text-to-speech conversion for the Kurdish language, particularly for the Sorani dialect, which has been relatively neglected and is underrepresented in recent text-to-speech advancements. This study introduces an end-to-end TTS model for efficiently generating high-quality Kurdish audio. The proposed method leverages a variational autoencoder (VAE) that is pre-trained for audio waveform reconstruction and is augmented by adversarial training. This involves aligning the prior distribution established by the pre-trained encoder with the posterior distribution of the text encoder within latent variables. Additionally, a stochastic duration predictor is incorporated to imbue synthesized Kurdish speech with diverse rhythms. By aligning latent distributions and integrating the stochastic duration predictor, the proposed method facilitates the real-time generation of natural Kurdish speech audio, offering flexibility in pitch and rhythm. Empirical evaluation via the mean opinion score (MOS) on a custom dataset confirms the superior performance of our approach (MOS of 3.94) compared with that of a one-stage system and other two-stage systems as assessed through a subjective human evaluation. Full article
(This article belongs to the Special Issue AI Algorithms for Positive Change in Digital Futures)
32 pages, 1243 KiB  
Article
Prime Time Tactics—Sieve Tweaks and Boosters
by Mircea Ghidarcea and Decebal Popescu
Algorithms 2024, 17(7), 291; https://doi.org/10.3390/a17070291 - 3 Jul 2024
Viewed by 272
Abstract
In a landscape where interest in prime sieving has waned and practitioners are few, we are still hoping for a domain renaissance, fueled by a resurgence of interest and a fresh wave of innovation. Building upon years of extensive research and experimentation, this article aims to contribute by presenting a heterogeneous compilation of generic tweaks and boosters aimed at revitalizing prime sieving methodologies. Drawing from a wealth of resurfaced knowledge and refined sieving algorithms, techniques, and optimizations, we unveil a diverse array of strategies designed to elevate the efficiency, accuracy, and scalability of prime sieving algorithms; these tweaks and boosters represent a synthesis of old wisdom and new discoveries, offering practical guidance for researchers and practitioners alike. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
22 pages, 2721 KiB  
Article
Federated Learning-Based Security Attack Detection for Multi-Controller Software-Defined Networks
by Abrar Alkhamisi, Iyad Katib and Seyed M. Buhari
Algorithms 2024, 17(7), 290; https://doi.org/10.3390/a17070290 - 2 Jul 2024
Viewed by 351
Abstract
Multi-controller Software-Defined Networking (MC-SDN) is a promising architecture for evolving, complex, and expansive large-scale modern network environments. Despite the rich operational flexibility of MC-SDN, it is imperative to protect the network deployment against potential vulnerabilities that lead to misuse and malicious activities on data planes. Security holes in the MC-SDN significantly impact network survivability, leaving the data plane vulnerable to security threats and unintended consequences. Accordingly, this work designs a Federated learning-based Security (FedSec) strategy that detects attacks on the MC-SDN. The FedSec ensures packet routing services among the nodes by maintaining a flow table frequently updated according to the global model knowledge. By executing the FedSec algorithm only on network-centric nodes selected based on importance measurements, FedSec reduces system complexity and enhances attack detection and classification accuracy. Finally, the experimental results illustrate the significance of the proposed FedSec strategy regarding various metrics. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms (2nd Edition))
34 pages, 628 KiB  
Article
Fuzzy Fractional Brownian Motion: Review and Extension
by Georgy Urumov, Panagiotis Chountas and Thierry Chaussalet
Algorithms 2024, 17(7), 289; https://doi.org/10.3390/a17070289 - 1 Jul 2024
Viewed by 364
Abstract
In traditional finance, option prices are typically calculated using crisp sets of variables. However, as reported in the literature, these parameters possess a degree of fuzziness or uncertainty. This allows participants to estimate option prices based on their risk preferences and beliefs, considering a range of possible values for the parameters. This paper presents a comprehensive review of existing work on fuzzy fractional Brownian motion and proposes an extension in the context of financial option pricing. In this paper, we define a unified framework combining fractional Brownian motion with fuzzy processes, creating a joint product measure space that captures both randomness and fuzziness. The approach allows for the consideration of individual risk preferences and beliefs about parameter uncertainties. By extending Merton’s jump-diffusion model to include fuzzy fractional Brownian motion, this paper addresses the modelling needs of hybrid systems with uncertain variables. The proposed model, which includes fuzzy Poisson processes and fuzzy volatility, demonstrates advantageous properties such as long-range dependence and self-similarity, providing a robust tool for modelling financial markets. By incorporating fuzzy numbers and the belief degree, this approach provides a more flexible framework for practitioners to make their investment decisions. Full article
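For readers unfamiliar with fractional Brownian motion, a minimal pure-Python sketch of sampling it from its covariance function follows (the Hurst index, time grid, and seed are arbitrary illustrative choices; the paper's fuzzy extension is not reproduced here):

```python
import math
import random

def fbm_cov(s, t, H):
    """Covariance of fractional Brownian motion with Hurst index H;
    H = 0.5 recovers ordinary Brownian motion, H > 0.5 gives the
    long-range dependence mentioned in the abstract."""
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

def cholesky(A):
    """Plain Cholesky factorization A = L L^T for a small SPD matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def fbm_path(times, H, seed=0):
    """Color i.i.d. standard normals with the Cholesky factor of the
    fBm covariance matrix to obtain one sample path."""
    rng = random.Random(seed)
    C = [[fbm_cov(s, t, H) for t in times] for s in times]
    L = cholesky(C)
    z = [rng.gauss(0, 1) for _ in times]
    return [sum(L[i][k] * z[k] for k in range(len(times)))
            for i in range(len(times))]
```

With H = 0.5 the covariance reduces to min(s, t), the ordinary Brownian case; the fuzzy model reviewed above layers membership degrees on top of such a process.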
16 pages, 2836 KiB  
Article
Optimal Design of I-PD and PI-D Industrial Controllers Based on Artificial Intelligence Algorithm
by Olga Shiryayeva, Batyrbek Suleimenov and Yelena Kulakova
Algorithms 2024, 17(7), 288; https://doi.org/10.3390/a17070288 - 1 Jul 2024
Viewed by 526
Abstract
This research aims to apply Artificial Intelligence (AI) methods, specifically Artificial Immune Systems (AIS), to design an optimal control strategy for a multivariable control plant. Two specific industrial control approaches are investigated: I-PD (Integral-Proportional Derivative) and PI-D (Proportional-Integral Derivative) control. The motivation for using these variations of PID controllers is that they are functionally implemented in modern industrial controllers, where they provide precise process control. The research results in a novel solution to the control synthesis problem for the industrial system. In particular, the research deals with the synthesis of I-P control for a two-loop system in the technological process of a distillation column. This synthesis is carried out using the AIS algorithm, which is the first application of this technique in this specific context. Methodological approaches are proposed to improve the performance of industrial multivariable control systems by effectively using optimization algorithms and establishing modified quality criteria. The numerical performance index ISE justifies the effectiveness of the AIS-based controllers in comparison with conventional PID controllers (ISE1 = 1.865, ISE2 = 1.56). The problem of synthesis of the multi-input multi-output (MIMO) control system is solved, considering the interconnections due to the decoupling procedure. Full article
20 pages, 424 KiB  
Article
Enhancing Program Synthesis with Large Language Models Using Many-Objective Grammar-Guided Genetic Programming
by Ning Tao, Anthony Ventresque, Vivek Nallur and Takfarinas Saber
Algorithms 2024, 17(7), 287; https://doi.org/10.3390/a17070287 - 1 Jul 2024
Viewed by 427
Abstract
The ability to automatically generate code, i.e., program synthesis, is one of the most important applications of artificial intelligence (AI). Currently, two AI techniques are leading the way: large language models (LLMs) and genetic programming (GP) methods—each with its strengths and weaknesses. While LLMs have shown success in program synthesis from a task description, they often struggle to generate the correct code due to ambiguity in task specifications, complex programming syntax, and lack of reliability in the generated code. Furthermore, their generative nature limits their ability to fix erroneous code with iterative LLM prompting. Grammar-guided genetic programming (G3P, i.e., one of the top GP methods) has been shown capable of evolving programs that fit a defined Backus–Naur-form (BNF) grammar based on a set of input/output tests that help guide the search process while ensuring that the generated code does not include calls to untrustworthy libraries or poorly structured snippets. However, G3P still faces issues generating code for complex tasks. A recent study attempting to combine both approaches (G3P and LLMs) by seeding an LLM-generated program into the initial population of the G3P has shown promising results. However, the approach rapidly loses the seeded information over the evolutionary process, which hinders its performance. In this work, we propose combining an LLM (specifically ChatGPT) with a many-objective G3P (MaOG3P) framework in two parts: (i) provide the LLM-generated code as a seed to the evolutionary process following a grammar-mapping phase that creates an avenue for program evolution and error correction; and (ii) leverage many-objective similarity measures towards the LLM-generated code to guide the search process throughout the evolution. The idea behind using the similarity measures is that the LLM-generated code is likely to be close to the correct fitting code. 
Our approach compels any generated program to adhere to the BNF grammar, ultimately mitigating security risks and improving code quality. Experiments on a well-known and widely used program synthesis dataset show that our approach successfully improves the synthesis of grammar-fitting code for several tasks. Full article
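The similarity guidance toward the LLM-generated seed can be illustrated with a plain edit-distance objective (a stand-in measure; the MaOG3P framework may use different program-similarity metrics):

```python
def levenshtein(a, b):
    """Edit distance between two strings, computed row by row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def objectives(candidate_src, failed_tests, llm_src):
    """Many-objective fitness sketch: minimize both the number of failed
    input/output tests and the distance to the LLM-generated seed, on
    the premise that the seed is close to the correct program."""
    return (failed_tests, levenshtein(candidate_src, llm_src))
```

Pareto-based selection over such objective tuples keeps the seeded information influential throughout evolution, rather than losing it after a few generations as plain seeding does.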
31 pages, 14424 KiB  
Article
Enhancing Video Anomaly Detection Using a Transformer Spatiotemporal Attention Unsupervised Framework for Large Datasets
by Mohamed H. Habeb, May Salama and Lamiaa A. Elrefaei
Algorithms 2024, 17(7), 286; https://doi.org/10.3390/a17070286 - 1 Jul 2024
Viewed by 465
Abstract
This work introduces an unsupervised framework for video anomaly detection, leveraging a hybrid deep learning model that combines a vision transformer (ViT) with a convolutional spatiotemporal relationship (STR) attention block. The proposed model addresses the challenges of anomaly detection in video surveillance by capturing both local and global relationships within video frames, a task that traditional convolutional neural networks (CNNs) often struggle with due to their localized field of view. We have utilized a pre-trained ViT as an encoder for feature extraction, which is then processed by the STR attention block to enhance the detection of spatiotemporal relationships among objects in videos. The novelty of this work lies in utilizing the ViT with STR attention to detect video anomalies effectively in large and heterogeneous datasets, which is important given the diverse environments and scenarios encountered in real-world surveillance. The framework was evaluated on three benchmark datasets, i.e., UCSD-Ped2, CUHK Avenue, and ShanghaiTech, achieving area under the receiver operating characteristic curve (AUC ROC) values of 95.6, 86.8, and 82.1, respectively. This demonstrates the model’s superior performance in detecting anomalies compared to state-of-the-art methods and showcases its potential to significantly enhance automated video surveillance systems. To show the effectiveness of the proposed framework in detecting anomalies in extra-large datasets, we trained the model on a subset of the huge contemporary CHAD dataset that contains over 1 million frames, achieving AUC ROC values of 71.8 and 64.2 for CHAD-Cam 1 and CHAD-Cam 2, respectively, which outperforms the state-of-the-art techniques. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
13 pages, 4099 KiB  
Article
The Novel EfficientNet Architecture-Based System and Algorithm to Predict Complex Human Emotions
by Mavlonbek Khomidov and Jong-Ha Lee
Algorithms 2024, 17(7), 285; https://doi.org/10.3390/a17070285 - 1 Jul 2024
Viewed by 324
Abstract
Facial expressions are often considered the primary indicators of emotions. However, it is challenging to detect genuine emotions because they can be controlled. Many studies on emotion recognition have been conducted actively in recent years. In this study, we designed a convolutional neural network (CNN) model and proposed an algorithm that combines the analysis of bio-signals with facial expression templates to effectively predict emotional states. We utilized the EfficientNet-B0 architecture for network design and validation, known for achieving maximum performance with minimal parameters. The accuracy for emotion recognition using facial expression images alone was 74%, while the accuracy for emotion recognition combining biological signals reached 88.2%. These results demonstrate that integrating these two types of data leads to significantly improved accuracy. By combining the image and bio-signals captured in facial expressions, our model offers a more comprehensive and accurate understanding of emotional states. Full article
23 pages, 4962 KiB  
Article
Ensemble Learning with Pre-Trained Transformers for Crash Severity Classification: A Deep NLP Approach
by Shadi Jaradat, Richi Nayak, Alexander Paz and Mohammed Elhenawy
Algorithms 2024, 17(7), 284; https://doi.org/10.3390/a17070284 - 30 Jun 2024
Viewed by 476
Abstract
Transfer learning has gained significant traction in natural language processing due to the emergence of state-of-the-art pre-trained language models (PLMs). Unlike traditional word embedding methods such as TF-IDF and Word2Vec, PLMs are context-dependent and outperform conventional techniques when fine-tuned for specific tasks. This paper proposes an innovative hard voting classifier to enhance crash severity classification by combining machine learning and deep learning models with various word embedding techniques, including BERT, RoBERTa, Word2Vec, and TF-IDF. Our study involves two comprehensive experiments using motorists’ crash data from the Missouri State Highway Patrol. The first experiment evaluates the performance of three machine learning models—XGBoost (XGB), random forest (RF), and naive Bayes (NB)—paired with TF-IDF, Word2Vec, and BERT feature extraction techniques. Additionally, BERT and RoBERTa are fine-tuned with a Bidirectional Long Short-Term Memory (Bi-LSTM) classification model. All models are initially evaluated on the original dataset. The second experiment repeats the evaluation using an augmented dataset to address the severe data imbalance. The results from the original dataset show strong performance for all models in the “Fatal” and “Personal Injury” classes but a poor classification of the minority “Property Damage” class. In the augmented dataset, while the models continued to excel with the majority classes, only XGB/TFIDF and BERT-LSTM showed improved performance for the minority class. The ensemble model outperformed individual models in both datasets, achieving an F1 score of 99% for “Fatal” and “Personal Injury” and 62% for “Property Damage” on the augmented dataset. These findings suggest that ensemble models, combined with data augmentation, are highly effective for crash severity classification and potentially other textual classification tasks. Full article
(This article belongs to the Special Issue AI Algorithms for Positive Change in Digital Futures)
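The hard-voting ensemble described above can be sketched directly (the model names and labels below are illustrative, not the study's exact label set):

```python
from collections import Counter

def hard_vote(predictions):
    """Majority (hard) vote across classifiers for each sample.
    `predictions` is a list of per-model label lists; ties resolve to
    the label first encountered, since Counter preserves insertion
    order for equal counts."""
    voted = []
    for labels in zip(*predictions):
        voted.append(Counter(labels).most_common(1)[0][0])
    return voted

# Three hypothetical models voting on four crash reports:
xgb  = ["Fatal", "Injury", "Damage", "Injury"]
rf   = ["Fatal", "Injury", "Injury", "Injury"]
bert = ["Injury", "Injury", "Damage", "Fatal"]
voted = hard_vote([xgb, rf, bert])  # -> ['Fatal', 'Injury', 'Damage', 'Injury']
```

Each base model contributes one vote per sample, so a single model's error on a minority class can be outvoted by the others, which is consistent with the ensemble outperforming individual models in both datasets.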
19 pages, 768 KiB  
Article
Maximizing the Average Environmental Benefit of a Fleet of Drones under a Periodic Schedule of Tasks
by Vladimir Kats and Eugene Levner
Algorithms 2024, 17(7), 283; https://doi.org/10.3390/a17070283 - 28 Jun 2024
Viewed by 267
Abstract
Unmanned aerial vehicles (UAVs, drones) are not just a technological achievement based on modern ideas of artificial intelligence; they also provide a sustainable solution for green technologies in logistics, transport, and material handling. In particular, using battery-powered UAVs to transport products can significantly decrease energy and fuel expenses, reduce environmental pollution, and improve the efficiency of clean technologies through improved energy-saving efficiency. We consider the problem of maximizing the average environmental benefit of a fleet of drones given a periodic schedule of tasks performed by the fleet of vehicles. To solve the problem efficiently, we formulate it as an optimization problem on an infinite periodic graph and reduce it to a special type of parametric assignment problem. We exactly solve the problem under consideration in O(n3) time, where n is the number of flights performed by UAVs. Full article
(This article belongs to the Special Issue Scheduling Theory and Algorithms for Sustainable Manufacturing)
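The reduction of "maximize the average benefit" to a parametric assignment problem can be illustrated with Dinkelbach's classical scheme for ratio objectives (a brute-force inner assignment solver is used here for clarity; the paper achieves O(n³) overall with a dedicated parametric assignment algorithm, and the matrices below are hypothetical):

```python
from itertools import permutations

def best_assignment(weight):
    """Assignment maximizing total weight; n! enumeration for a toy n
    (a Hungarian-method solver would give O(n^3))."""
    n = len(weight)
    return max((sum(weight[i][p[i]] for i in range(n)), p)
               for p in permutations(range(n)))

def max_average_benefit(benefit, cost, tol=1e-9):
    """Dinkelbach's method: the best achievable ratio
    sum(benefit)/sum(cost) over assignments is the lambda at which the
    parametric assignment max sum(benefit - lambda * cost) hits zero."""
    n = len(benefit)
    lam = 0.0
    while True:
        _, p = best_assignment(
            [[benefit[i][j] - lam * cost[i][j] for j in range(n)]
             for i in range(n)])
        ratio = (sum(benefit[i][p[i]] for i in range(n))
                 / sum(cost[i][p[i]] for i in range(n)))
        if abs(ratio - lam) < tol:
            return ratio, p
        lam = ratio
```

Here `benefit[i][j]` might be the environmental saving and `cost[i][j]` the cycle time of assigning flight i to drone j; each Dinkelbach iteration solves one ordinary assignment problem with the shifted weights.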
28 pages, 2896 KiB  
Article
Qualitative Perturbation Analysis and Machine Learning: Elucidating Bacterial Optimization of Tryptophan Production
by Miguel Angel Ramos-Valdovinos, Prisciluis Caheri Salas-Navarrete, Gerardo R. Amores, Ana Lilia Hernández-Orihuela and Agustino Martínez-Antonio
Algorithms 2024, 17(7), 282; https://doi.org/10.3390/a17070282 - 27 Jun 2024
Viewed by 484
Abstract
L-tryptophan is an essential amino acid widely used in the pharmaceutical and feed industries. Enhancing its production in microorganisms necessitates activating and inactivating specific genes to direct more resources toward its synthesis. In this study, we developed a classification model based on Qualitative Perturbation Analysis and Machine Learning (QPAML). The model uses pFBA to obtain optimal reactions for tryptophan production and FSEOF to introduce perturbations on fluxes of the optimal reactions while registering all changes over the iML1515a Genome-Scale Metabolic Network model. The altered reaction fluxes and their relationship with tryptophan and biomass production are translated to qualitative variables classified with GBDT. In the end, groups of enzymatic reactions are predicted to be deleted, overexpressed, or attenuated for tryptophan and 30 other metabolites in E. coli with a 92.34% F1-Score. The QPAML model can integrate diverse data types, promising improved predictions and the discovery of complex patterns in microbial metabolic engineering. It has broad potential applications and offers valuable insights for optimizing microbial production in biotechnology. Full article
23 pages, 1378 KiB  
Article
Optimizing Automated Brain Extraction for Moderate to Severe Traumatic Brain Injury Patients: The Role of Intensity Normalization and Bias-Field Correction
by Patrick Carbone, Celina Alba, Alexis Bennett, Kseniia Kriukova and Dominique Duncan
Algorithms 2024, 17(7), 281; https://doi.org/10.3390/a17070281 - 27 Jun 2024
Viewed by 753
Abstract
Accurate brain extraction is crucial for the validity of MRI analyses, particularly in the context of traumatic brain injury (TBI), where conventional automated methods frequently fall short. This study investigates the interplay between intensity normalization, bias-field correction (also called intensity inhomogeneity correction), and automated brain extraction in MRIs of individuals with TBI. We analyzed 125 T1-weighted Magnetization-Prepared Rapid Gradient-Echo (T1-MPRAGE) and 72 T2-weighted Fluid-Attenuated Inversion Recovery (T2-FLAIR) MRI sequences from a cohort of 143 patients with moderate to severe TBI. Our study combined 14 different intensity processing procedures, each using a configuration of N3 inhomogeneity correction, Z-score normalization, KDE-based normalization, or WhiteStripe intensity normalization, with 10 different configurations of the Brain Extraction Tool (BET) and the Optimized Brain Extraction Tool (optiBET). Our results demonstrate that optiBET with N3 inhomogeneity correction produces the most accurate brain extractions, specifically with one iteration of N3 for T1-MPRAGE and four iterations for T2-FLAIR, and pipelines incorporating N3 inhomogeneity correction significantly improved the accuracy of BET as well. Conversely, intensity normalization demonstrated a complex relationship with brain extraction, with effects varying by the normalization algorithm and BET parameter configuration combination. This study elucidates the interactions between intensity processing and the accuracy of brain extraction. Understanding these relationships is essential to the effective and efficient preprocessing of TBI MRI data, laying the groundwork for the development of robust preprocessing pipelines optimized for multi-site TBI MRI data. Full article
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis)