Algorithms, Volume 17, Issue 11 (November 2024) – 63 articles

Cover Story: We present challenging model classes arising in the context of finding optimized object packings (OPs). Except for the smallest and simplest general OP model instances, it is not possible to find their exact (closed-form) solution. Most OP problem instances become increasingly difficult to handle, even numerically, as the number of packed objects increases. In our article, scalable irregular OP problem classes, aimed at packing given collections of general circles, spheres, ellipses, and ovals, are discussed, with numerical results reported for a selection of model instances. To illustrate, the figure shows an optimized configuration of spheres of different sizes packed into a container sphere. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
19 pages, 1117 KiB  
Article
Automatic Simplification of Lithuanian Administrative Texts
by Justina Mandravickaitė, Eglė Rimkienė, Danguolė Kotryna Kapkan, Danguolė Kalinauskaitė and Tomas Krilavičius
Algorithms 2024, 17(11), 533; https://doi.org/10.3390/a17110533 - 20 Nov 2024
Abstract
Text simplification reduces the complexity of text while preserving essential information, thus making it more accessible to a broad range of readers, including individuals with cognitive disorders, non-native speakers, children, and the general public. In this paper, we present experiments on text simplification for the Lithuanian language, aiming to simplify administrative texts to a Plain Language level. We fine-tuned mT5 and mBART models for this task and evaluated the effectiveness of ChatGPT as well. We assessed simplification results via both quantitative metrics and qualitative evaluation. Our findings indicated that mBART performed the best as it achieved the best scores across all evaluation metrics. The qualitative analysis further supported these findings. ChatGPT experiments showed that it responded quite well to a short and simple prompt to simplify the given text; however, it ignored most of the rules given in a more elaborate prompt. Finally, our analysis revealed that BERTScore and ROUGE aligned moderately well with human evaluations, while BLEU and readability scores indicated lower or even negative correlations. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
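As a rough sketch of the model side of such a pipeline, the snippet below runs simplification-style generation with an mBART-50 checkpoint via Hugging Face Transformers; the fine-tuned checkpoint name is a placeholder, and the decoding settings are generic choices rather than the authors' configuration.

```python
# Hypothetical inference sketch for an mBART model fine-tuned on Lithuanian
# text simplification; the checkpoint name below is a placeholder.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

ckpt = "your-org/mbart-lt-simplification"   # placeholder fine-tuned checkpoint
tokenizer = MBart50TokenizerFast.from_pretrained(ckpt)
model = MBartForConditionalGeneration.from_pretrained(ckpt)

tokenizer.src_lang = "lt_LT"                # Lithuanian language code in mBART-50
text = "Administracinis tekstas ..."        # a complex administrative sentence
batch = tokenizer(text, return_tensors="pt", truncation=True)
out = model.generate(
    **batch,
    forced_bos_token_id=tokenizer.lang_code_to_id["lt_LT"],
    max_length=128,
    num_beams=4,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```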
13 pages, 1085 KiB  
Article
Exponential Functions Permit Estimation of Anaerobic Work Capacity and Critical Power from Less than 2 Min All-Out Test
by Ming-Chang Tsai, Scott Thomas and Marc Klimstra
Algorithms 2024, 17(11), 532; https://doi.org/10.3390/a17110532 - 20 Nov 2024
Abstract
The Critical Power Model (CPM) is key for assessing athletes’ aerobic and anaerobic energy systems but typically involves lengthy, exhausting protocols. The 3 min all-out test (3MT) simplifies CPM assessment, yet its duration remains demanding. Exponential decay models, specifically mono- and bi-exponential functions, offer a more efficient alternative by accurately capturing the nonlinear energy dynamics in high-intensity efforts. This study explores shortening the 3MT using these functions to reduce athlete strain while preserving the accuracy of critical power (CP) and work capacity (W′) estimates. Seventy-six competitive cyclists and triathletes completed a 3MT on a cycle ergometer, with CP and W′ calculated at shorter intervals. Results showed that a 90 s test using the bi-exponential model yielded CP and W′ values similar to those of the full 3MT. Meanwhile, the mono-exponential model required at least 135 s. Bland–Altman and linear regression analyses confirmed that a 120 s test with the mono-exponential model reliably estimated CP and W′ with minimal physical strain. These findings support a shortened, less-demanding 3MT as a valid alternative for CPM assessment. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
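The mono-exponential idea lends itself to a compact illustration: assuming a power decay of the form P(t) = CP + (P0 − CP)e^(−kt), the area above CP gives W′ = (P0 − CP)/k, and all three constants can be fitted by nonlinear least squares. The sketch below uses synthetic data and an illustrative parameterization, not necessarily the authors' exact model.

```python
# Fit a mono-exponential power decay P(t) = CP + (P0 - CP) * exp(-k * t)
# to all-out test data; the area above CP then gives W' = (P0 - CP) / k.
import numpy as np
from scipy.optimize import curve_fit

def power_decay(t, cp, p0, k):
    return cp + (p0 - cp) * np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.arange(0, 120, 1.0)                     # first 120 s of the test, 1 Hz
p_true = power_decay(t, cp=250.0, p0=750.0, k=0.05)
p_obs = p_true + rng.normal(0, 10, t.size)     # noisy synthetic power samples

(cp, p0, k), _ = curve_fit(power_decay, t, p_obs, p0=[200, 700, 0.1])
w_prime = (p0 - cp) / k                        # integral of P(t) - CP over [0, inf)
print(f"CP ~ {cp:.0f} W, W' ~ {w_prime / 1000:.1f} kJ")
```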
18 pages, 1508 KiB  
Article
Adversarial Validation in Image Classification Datasets by Means of Cumulative Spectral Gradient
by Diego Renza, Ernesto Moya-Albor and Adrian Chavarro
Algorithms 2024, 17(11), 531; https://doi.org/10.3390/a17110531 - 19 Nov 2024
Viewed by 230
Abstract
The main objective of a machine learning (ML) system is to obtain a trained model from input data in such a way that it allows predictions to be made on new i.i.d. (Independently and Identically Distributed) data with the lowest possible error. However, how can we assess whether the training and test data have a similar distribution? To answer this question, this paper presents a proposal to determine the degree of distribution shift of two datasets. To this end, a metric for evaluating complexity in datasets is used, which can be applied in multi-class problems, comparing each pair of classes of the two sets. The proposed methodology has been applied to three well-known datasets: MNIST, CIFAR-10 and CIFAR-100, together with corrupted versions of these. Through this methodology, it is possible to evaluate which types of modification have a greater impact on the generalization of the models without the need to train multiple models multiple times, also allowing us to determine which classes are more affected by corruption. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Image Understanding and Analysis)
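The adversarial-validation idea in the title has a classic generic form: label training rows 0 and test rows 1, train a classifier to tell them apart, and read the cross-validated AUC (near 0.5 means similar distributions). The sketch below shows that generic check, not the cumulative-spectral-gradient metric the article builds on.

```python
# Generic adversarial-validation check: AUC near 0.5 suggests the two sets
# are similarly distributed; well above 0.5 indicates a detectable shift.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def adversarial_auc(x_a, x_b, seed=0):
    x = np.vstack([x_a, x_b])
    y = np.r_[np.zeros(len(x_a)), np.ones(len(x_b))]
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    return cross_val_score(clf, x, y, cv=5, scoring="roc_auc").mean()

rng = np.random.default_rng(0)
clean = rng.normal(0, 1, (500, 32))
clean2 = rng.normal(0, 1, (500, 32))       # an i.i.d. draw from the same source
shifted = rng.normal(0.3, 1.2, (500, 32))  # e.g., a corrupted test split
print(adversarial_auc(clean, clean2))      # ~0.5: no detectable shift
print(adversarial_auc(clean, shifted))     # well above 0.5: detectable shift
```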
20 pages, 3221 KiB  
Article
A VIKOR-Based Sequential Three-Way Classification Ranking Method
by Wentao Xu, Jin Qian, Yueyang Wu, Shaowei Yan, Yongting Ni and Guangjin Yang
Algorithms 2024, 17(11), 530; https://doi.org/10.3390/a17110530 - 19 Nov 2024
Viewed by 214
Abstract
VIKOR uses the idea of overall utility maximization and individual regret minimization to afford a compromise result for multi-attribute decision-making problems with conflicting attributes. Many researchers have proposed corresponding improvements and expansions to make it more suitable for sorting optimization in their respective research fields. However, these improvements and extensions only rank the alternatives without classifying them. For this purpose, this paper introduces the sequential three-way decisions method and combines it with the VIKOR method to design a three-way VIKOR method that can deal with both ranking and classification. By using the final negative ideal solution (NIS) and the final positive ideal solution (PIS) for all alternatives, the individual regret value and group utility value of each alternative were calculated. Different three-way VIKOR models were obtained by four different combinations of individual regret value and group utility value. In the ranking process, the characteristics of the VIKOR method are introduced, and the subjective preference of decision makers is considered by using individual regret, group utility, and decision index values. In the classification process, the corresponding alternatives are divided into the corresponding decision domains by sequential three-way decisions, and the risk of direct acceptance or rejection is avoided by putting the uncertain alternatives into the boundary region to delay the decision. The alternatives are divided into decision domains through sequential three-way decisions, sorted according to the collation rules within the same decision domain, and the final sorting results are obtained according to the collation rules across different decision domains. Finally, the effectiveness and correctness of the proposed method are verified by a project investment example, and the results are compared and evaluated. The experimental results show that the proposed method has a significant correlation with the results of other methods, is effective and feasible, and is simpler and more effective in dealing with some problems. Errors caused by misclassification are reduced by sequential three-way decisions. Full article
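For readers unfamiliar with VIKOR, the sketch below computes the standard group utility S, individual regret R, and compromise index Q for a toy benefit-criteria matrix; the article's sequential three-way classification on top of these scores is not reproduced.

```python
# Standard VIKOR scores for a small decision matrix (rows = alternatives,
# columns = benefit criteria): S = group utility, R = individual regret,
# Q = compromise index.
import numpy as np

F = np.array([[7., 5., 8.],
              [8., 7., 6.],
              [6., 9., 7.]])
w = np.array([0.4, 0.35, 0.25])    # criterion weights
v = 0.5                            # weight of the "majority rule" strategy

f_best, f_worst = F.max(axis=0), F.min(axis=0)   # PIS and NIS per criterion
D = w * (f_best - F) / (f_best - f_worst)        # weighted normalized gaps
S, R = D.sum(axis=1), D.max(axis=1)
Q = v * (S - S.min()) / (S.max() - S.min()) + \
    (1 - v) * (R - R.min()) / (R.max() - R.min())
print("S:", S.round(3), "R:", R.round(3), "Q:", Q.round(3))
print("ranking (best first):", np.argsort(Q))
```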
28 pages, 6900 KiB  
Article
A New Approach to Recognize Faces Amidst Challenges: Fusion Between the Opposite Frequencies of the Multi-Resolution Features
by Regina Lionnie, Julpri Andika and Mudrik Alaydrus
Algorithms 2024, 17(11), 529; https://doi.org/10.3390/a17110529 - 17 Nov 2024
Viewed by 507
Abstract
This paper proposes a new approach to pixel-level fusion using the opposite frequency from the discrete wavelet transform with Gaussian or Difference of Gaussian. The low-frequency sub-band from the discrete wavelet transform was fused with the Difference of Gaussian, while the high-frequency sub-bands were fused with Gaussian. The final fusion was reconstructed using an inverse discrete wavelet transform into one enhanced reconstructed image. These enhanced images were utilized to improve recognition performance in the face recognition system. The proposed method was tested against benchmark face datasets such as The Database of Faces (AT&T), the Extended Yale B Face Dataset, the BeautyREC Face Dataset, and the FEI Face Dataset. The results showed that our proposed method was robust and accurate against challenges such as lighting conditions, facial expressions, head pose, 180-degree rotation of the face profile, dark images, acquisition with time gap, and conditions where the person uses attributes such as glasses. The proposed method is comparable to state-of-the-art methods and generates high recognition performance (more than 99% accuracy). Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
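A minimal sketch of the opposite-frequency pairing described above, using PyWavelets and SciPy: the DWT approximation band is fused with a Difference-of-Gaussian image and the detail bands with a Gaussian-smoothed image before the inverse transform. Plain averaging stands in for the article's fusion rule, and the random array stands in for a face image.

```python
# Opposite-frequency fusion sketch: low band paired with DoG, high bands
# paired with Gaussian, then inverse DWT. Averaging is a placeholder rule.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

img = np.random.rand(128, 128)                  # stand-in face image
dog = gaussian_filter(img, 1.0) - gaussian_filter(img, 2.0)
gauss = gaussian_filter(img, 1.0)

cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
dA, _ = pywt.dwt2(dog, "haar")                  # low band of the DoG image
_, (gH, gV, gD) = pywt.dwt2(gauss, "haar")      # high bands of the Gaussian image

fused = pywt.idwt2(
    (0.5 * (cA + dA), (0.5 * (cH + gH), 0.5 * (cV + gV), 0.5 * (cD + gD))),
    "haar",
)
print(fused.shape)
```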
19 pages, 5212 KiB  
Article
Assessment of Solar Energy Generation Toward Net-Zero Energy Buildings
by Rayan Khalil, Guilherme Vieira Hollweg, Akhtar Hussain, Wencong Su and Van-Hai Bui
Algorithms 2024, 17(11), 528; https://doi.org/10.3390/a17110528 - 16 Nov 2024
Viewed by 297
Abstract
With the continuous rise in the energy consumption of buildings, the study and integration of net-zero energy buildings (NZEBs) are essential for mitigating the harmful effects associated with this trend. However, developing an energy management system for such buildings is challenging due to uncertainties surrounding NZEBs. This paper introduces an optimization framework comprising two major stages: (i) renewable energy prediction and (ii) multi-objective optimization. A prediction model is developed to accurately forecast photovoltaic (PV) system output, while a multi-objective optimization model is designed to identify the most efficient ways to produce cooling, heating, and electricity at minimal operational costs. These two stages not only help mitigate uncertainties in NZEBs but also reduce dependence on imported power from the utility grid. Finally, to facilitate the deployment of the proposed framework, a graphical user interface (GUI) has been developed, providing a user-friendly environment for building operators to determine optimal scheduling and oversee the entire system. Full article
22 pages, 3297 KiB  
Article
Sleep Apnea Classification Using the Mean Euler–Poincaré Characteristic and AI Techniques
by Moises Ramos-Martinez, Felipe D. J. Sorcia-Vázquez, Gerardo Ortiz-Torres, Mario Martínez García, Mayra G. Mena-Enriquez, Estela Sarmiento-Bustos, Juan Carlos Mixteco-Sánchez, Erasmo Misael Rentería-Vargas, Jesús E. Valdez-Resendiz and Jesse Yoe Rumbo-Morales
Algorithms 2024, 17(11), 527; https://doi.org/10.3390/a17110527 - 15 Nov 2024
Viewed by 350
Abstract
Sleep apnea is a sleep disorder that disrupts breathing during sleep. This study aims to classify sleep apnea using a machine learning approach and a Euler–Poincaré characteristic (EPC) model derived from electrocardiogram (ECG) signals. An ensemble K-nearest neighbors classifier and a feedforward neural network were implemented using the EPC model as inputs. ECG signals were preprocessed with a polynomial-based scheme to reduce noise, and the processed signals were transformed into a non-Gaussian physiological random field (NGPRF) for EPC model extraction from excursion sets. The classifiers were then applied to the EPC model inputs. Using the Apnea-ECG dataset, the proposed method achieved an accuracy of 98.5%, sensitivity of 94.5%, and specificity of 100%. Combining machine learning methods and geometrical features can effectively diagnose sleep apnea from single-lead ECG signals. The EPC model enhances clinical decision-making for evaluating this disease. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Medicine (2nd Edition))
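For a one-dimensional signal, the Euler characteristic of the excursion set {x ≥ u} reduces to the number of connected segments above the threshold, so sweeping u yields an EPC curve that can feed a classifier. The sketch below is a simplified reading of that construction on a stand-in RR-interval series, not the article's full NGPRF pipeline.

```python
# For a 1D signal, the Euler characteristic of the excursion set {x >= u}
# is the number of connected segments above the threshold; sweeping u gives
# an EPC curve usable as a feature vector.
import numpy as np

def euler_characteristic_1d(x, u):
    above = x >= u
    # count rising edges, plus one if the signal starts above the threshold
    return int(above[0]) + int(np.sum(~above[:-1] & above[1:]))

def epc_curve(x, thresholds):
    return np.array([euler_characteristic_1d(x, u) for u in thresholds])

rng = np.random.default_rng(0)
rr = rng.normal(0.8, 0.05, 600)          # stand-in RR-interval series
features = epc_curve(rr, np.linspace(rr.min(), rr.max(), 20))
print(features)
```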
22 pages, 2446 KiB  
Review
A Comprehensive Review of Autonomous Driving Algorithms: Tackling Adverse Weather Conditions, Unpredictable Traffic Violations, Blind Spot Monitoring, and Emergency Maneuvers
by Cong Xu and Ravi Sankar
Algorithms 2024, 17(11), 526; https://doi.org/10.3390/a17110526 - 15 Nov 2024
Viewed by 333
Abstract
With the rapid development of autonomous driving technology, ensuring the safety and reliability of vehicles under various complex and adverse conditions has become increasingly important. Although autonomous driving algorithms perform well in regular driving scenarios, they still face significant challenges when dealing with adverse weather conditions, unpredictable traffic rule violations (such as jaywalking and aggressive lane changes), inadequate blind spot monitoring, and emergency handling. This review aims to comprehensively analyze these critical issues, systematically review current research progress and solutions, and propose further optimization suggestions. By deeply analyzing the logic of autonomous driving algorithms in these complex situations, we hope to provide strong support for enhancing the safety and reliability of autonomous driving technology. Additionally, we will comprehensively analyze the limitations of existing driving technologies and compare Advanced Driver Assistance Systems (ADASs) with Full Self-Driving (FSD) to gain a thorough understanding of the current state and future development directions of autonomous driving technology. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities (2nd Edition))
31 pages, 7153 KiB  
Article
You Only Look Once Version 5 and Deep Simple Online and Real-Time Tracking Algorithms for Real-Time Customer Behavior Tracking and Retail Optimization
by Mohamed Shili, Osama Sohaib and Salah Hammedi
Algorithms 2024, 17(11), 525; https://doi.org/10.3390/a17110525 - 15 Nov 2024
Viewed by 326
Abstract
The speedy progress of computer vision and machine learning engineering has inaugurated novel means for improving the purchasing experience in brick-and-mortar stores. This paper examines the utilization of YOLOv5 (You Only Look Once) and DeepSORT (Deep Simple Online and Real-Time Tracking) algorithms for the real-time detection and analysis of purchasing behavior in brick-and-mortar market surroundings. By leveraging these algorithms, stores can track customer behavior, identify popular products, and monitor high-traffic areas, enabling businesses to adapt quickly to customer preferences and optimize store layout and inventory management. The methodology involves the integration of YOLOv5 for accurate and rapid object detection combined with DeepSORT for the effective tracking of customer movements and interactions with products. Data collected from in-store cameras and sensors are processed to detect patterns in customer behavior, such as repeatedly inspected products, time spent in specific areas, and product handling. The results indicate a modest improvement in customer engagement, with conversion rates increasing by approximately 3 percentage points, and a decline in inventory waste levels, from 88% to 75%, after system implementation. This study provides essential insights into the further integration of algorithm technology in physical retail locations and demonstrates the revolutionary potential of real-time behavior tracking in the retail industry. This research lays the foundation for future developments in functional strategies and customer experience optimization by offering a solid framework for creating intelligent retail systems. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
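A minimal detection loop in the spirit of the paper, using the public YOLOv5 hub model; a naive zone counter stands in for DeepSORT's identity-preserving track association, and the video path and zone coordinates are placeholders.

```python
# Per-frame person detection with the public YOLOv5 hub model; a DeepSORT
# tracker would replace the naive zone counting with per-customer tracks.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
cap = cv2.VideoCapture("store_camera.mp4")   # placeholder video source
x0, y0, x1, y1 = 100, 100, 400, 400          # hypothetical shelf zone (pixels)
zone_hits = 0                                # person-frames seen in the zone

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    for *box, conf, cls in results.xyxy[0].tolist():  # [x1, y1, x2, y2, conf, cls]
        if int(cls) == 0:                             # COCO class 0 = person
            cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
            if x0 <= cx <= x1 and y0 <= cy <= y1:
                zone_hits += 1                        # DeepSORT would assign IDs here
cap.release()
print("person-frames in zone:", zone_hits)
```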
17 pages, 681 KiB  
Article
Subsampling Algorithms for Irregularly Spaced Autoregressive Models
by Jiaqi Liu, Ziyang Wang, HaiYing Wang and Nalini Ravishanker
Algorithms 2024, 17(11), 524; https://doi.org/10.3390/a17110524 - 15 Nov 2024
Viewed by 294
Abstract
With the exponential growth of data across diverse fields, applying conventional statistical methods directly to large-scale datasets has become computationally infeasible. To overcome this challenge, subsampling algorithms are widely used to perform statistical analyses on smaller, more manageable subsets of the data. The effectiveness of these methods depends on their ability to identify and select data points that improve the estimation efficiency according to some optimality criteria. While much of the existing research has focused on subsampling techniques for independent data, there is considerable potential for developing methods tailored to dependent data, particularly in time-dependent contexts. In this study, we extend subsampling techniques to irregularly spaced time series data which are modeled by irregularly spaced autoregressive models. We present frameworks for various subsampling approaches, including optimal subsampling under A-optimality, information-based optimal subdata selection, and sequential thinning on streaming data. These methods use A-optimality or D-optimality criteria to assess the usefulness of each data point and prioritize the inclusion of the most informative ones. We then assess the performance of these subsampling methods using numerical simulations, providing insights into their suitability and effectiveness for handling irregularly spaced long time series. Numerical results show that our algorithms have promising performance. Their estimation efficiency can be ten times as high as that of the uniform sampling estimator. They also significantly reduce the computational time and can be up to forty times faster than the full-data estimator. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
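The common recipe behind A-optimality subsampling is easy to sketch for a plain linear model: sample rows with probability proportional to the norm of (XᵀX)⁻¹xᵢ and refit by inverse-probability-weighted least squares. The article derives analogous scores for irregularly spaced autoregressive models; the sketch below shows only the generic recipe.

```python
# Generic A-optimality-style subsampling for a linear model.
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 100_000, 5, 1_000
X = rng.normal(size=(n, p))
beta = np.arange(1.0, p + 1)
y = X @ beta + rng.normal(size=n)

M_inv = np.linalg.inv(X.T @ X)
scores = np.linalg.norm(X @ M_inv, axis=1)   # A-optimality sampling scores
probs = scores / scores.sum()
idx = rng.choice(n, size=r, replace=False, p=probs)

w = 1.0 / np.sqrt(probs[idx])                # inverse-probability weights
beta_sub, *_ = np.linalg.lstsq(X[idx] * w[:, None], y[idx] * w, rcond=None)
print(beta_sub.round(2))                     # close to [1, 2, 3, 4, 5]
```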
21 pages, 388 KiB  
Article
The Nelder–Mead Simplex Algorithm Is Sixty Years Old: New Convergence Results and Open Questions
by Aurél Galántai
Algorithms 2024, 17(11), 523; https://doi.org/10.3390/a17110523 - 14 Nov 2024
Viewed by 330
Abstract
We investigate and compare two versions of the Nelder–Mead simplex algorithm for function minimization. Two types of convergence are studied: the convergence of function values at the simplex vertices and convergence of the simplex sequence. For the first type of convergence, we generalize the main result of Lagarias, Reeds, Wright and Wright (1998). For the second type of convergence, we also improve recent results which indicate that the Lagarias et al.’s version of the Nelder–Mead algorithm has better convergence properties than the original Nelder–Mead method. This paper concludes with some open questions. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
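For reference, the variant shipped in SciPy exposes tolerances that map loosely onto the two convergence notions studied here (function values at the vertices versus the simplex itself):

```python
# Nelder-Mead on the Rosenbrock function; xatol bounds simplex movement,
# fatol bounds the spread of vertex function values.
from scipy.optimize import minimize

rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
res = minimize(rosen, x0=[-1.2, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 10_000})
print(res.x, res.nit)
```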
5 pages, 188 KiB  
Editorial
Metaheuristic Algorithms in Optimal Design of Engineering Problems
by Łukasz Knypiński, Ramesh Devarapalli and Marcin Kamiński
Algorithms 2024, 17(11), 522; https://doi.org/10.3390/a17110522 - 14 Nov 2024
Viewed by 324
Abstract
Metaheuristic optimization algorithms (MOAs) are widely used to optimize the design process of engineering problems [...] Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimal Design of Engineering Problems)
16 pages, 8588 KiB  
Article
Quotient Network: A Network Similar to ResNet but Learning Quotients
by Peng Hui, Jiamuyang Zhao, Changxin Li and Qingzhen Zhu
Algorithms 2024, 17(11), 521; https://doi.org/10.3390/a17110521 - 13 Nov 2024
Viewed by 279
Abstract
The emergence of ResNet provides a powerful tool for training extremely deep networks. The core idea behind it is to change the learning goals of the network. It no longer learns new features from scratch but learns the difference between the target and existing features. However, the difference between the two kinds of features does not have an independent and clear meaning, and the amount of learning is based on the absolute rather than the relative difference, which is sensitive to the size of existing features. We propose a new network that perfectly solves these two problems while still having the advantages of ResNet. Specifically, it chooses to learn the quotient of the target features with the existing features, so we call it the quotient network. In order to enable this network to learn successfully and achieve higher performance, we propose some design rules for this network so that it can be trained efficiently and achieve better performance than ResNet. Experiments on the CIFAR10, CIFAR100, and SVHN datasets prove that this network can stably achieve considerable improvements over ResNet by simply making tiny corresponding changes to the original ResNet network without adding new parameters. Full article
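The abstract's core contrast is easy to state in code: a residual block learns a difference and adds it back, while a quotient block learns a ratio and multiplies it back. The sketch below is a minimal PyTorch reading of that idea; the bounded sigmoid gate is one plausible way to keep the ratio well behaved, not the article's actual design rules.

```python
# Residual block: learn target - input. Quotient block: learn target / input.
import torch
import torch.nn as nn

class Branch(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

class ResidualBlock(Branch):
    def forward(self, x):
        return torch.relu(self.body(x) + x)       # additive skip connection

class QuotientBlock(Branch):
    def forward(self, x):
        # multiplicative skip connection; sigmoid * 2 keeps the learned
        # quotient in (0, 2), one plausible stabilization choice
        return self.body(x).sigmoid() * 2.0 * x

x = torch.randn(1, 16, 8, 8)
print(ResidualBlock(16)(x).shape, QuotientBlock(16)(x).shape)
```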
25 pages, 3540 KiB  
Article
Minimum-Energy Scheduling of Flexible Job-Shop Through Optimization and Comprehensive Heuristic
by Oludolapo Akanni Olanrewaju, Fabio Luiz Peres Krykhtine and Felix Mora-Camino
Algorithms 2024, 17(11), 520; https://doi.org/10.3390/a17110520 - 12 Nov 2024
Viewed by 377
Abstract
This study considers a flexible job-shop scheduling problem where energy cost savings are the primary objective and where the classical objective of the minimization of the make-span is replaced by the satisfaction of due times for each job. An original two-level mixed-integer formulation of this optimization problem is proposed, where the processed flows of material and their timing are explicitly considered. Its exact solution is discussed, and, considering its computational complexity, a comprehensive heuristic, balancing energy performance and due time constraint satisfaction, is developed to provide acceptable solutions in polynomial time to the minimum-energy flexible job-shop scheduling problem, even when considering its dynamic environment. The proposed approach is illustrated through a small-scale example. Full article
(This article belongs to the Special Issue Scheduling Theory and Algorithms for Sustainable Manufacturing)
26 pages, 862 KiB  
Article
Can the Plantar Pressure and Temperature Data Trend Show the Presence of Diabetes? A Comparative Study of a Variety of Machine Learning Techniques
by Eduardo A. Gerlein, Francisco Calderón, Martha Zequera-Díaz and Roozbeh Naemi
Algorithms 2024, 17(11), 519; https://doi.org/10.3390/a17110519 - 12 Nov 2024
Viewed by 466
Abstract
This study aimed to explore the potential of predicting diabetes by analyzing trends in plantar thermal and plantar pressure data, either individually or in combination, using various machine learning techniques. A total of twenty-six participants, comprising thirteen individuals diagnosed with diabetes and thirteen healthy individuals, walked along a 20 m path. In-shoe plantar pressure data were collected and the plantar temperature was measured both immediately before and after the walk. Each participant completed the trial three times, and the average data between the trials were calculated. The research was divided into three experiments: the first evaluated the correlations between the plantar pressure and temperature data; the second focused on predicting diabetes using each data type independently; and the third combined both data types and assessed the effect of this combination on the predictive accuracy. For the experiments, 20 regression models and 16 classification algorithms were employed, and the performance was evaluated using a five-fold cross-validation strategy. The outcomes of the initial set of experiments indicated that the machine learning models did not find significant correlations between the thermal data and pressure estimates. This was consistent with the findings from the prior correlation analysis, which showed weak relationships between these two data modalities. However, a shift in focus towards predicting diabetes by aggregating the temperature and pressure data led to encouraging results, demonstrating the effectiveness of this approach in accurately predicting the presence of diabetes. The analysis revealed that, while several classifiers demonstrated reasonable metrics when using standalone variables, the integration of thermal and pressure data significantly improved the predictive accuracy. Specifically, when only plantar pressure data were used, the Logistic Regression model achieved the highest accuracy at 68.75%. Predictions based solely on temperature data showed the Naive Bayes model in the lead with an accuracy of 87.5%. Notably, the highest accuracy of 93.75% was observed when both the temperature and pressure data were combined, with the Extra Trees Classifier performing the best. These results suggest that combining temperature and pressure data enhances the model’s predictive accuracy and indicates the importance of multimodal data integration and its potential in diabetes prediction. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (2nd Edition))
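The experimental design, comparing pressure-only, temperature-only, and combined feature blocks under five-fold cross-validation, can be mimicked on synthetic stand-in data (26 participants, two feature blocks):

```python
# Five-fold comparison of single-modality versus combined feature blocks;
# the data here are synthetic stand-ins, not the study's measurements.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
y = np.r_[np.zeros(13), np.ones(13)]                  # healthy vs diabetes
pressure = rng.normal(y[:, None] * 0.4, 1.0, (26, 10))
temperature = rng.normal(y[:, None] * 0.6, 1.0, (26, 6))

for name, X in [("pressure", pressure),
                ("temperature", temperature),
                ("combined", np.hstack([pressure, temperature]))]:
    acc = cross_val_score(ExtraTreesClassifier(random_state=0), X, y, cv=5)
    print(f"{name:11s} acc = {acc.mean():.2f}")
```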
21 pages, 623 KiB  
Article
Attribute Relevance Score: A Novel Measure for Identifying Attribute Importance
by Pablo Neirz, Hector Allende and Carolina Saavedra
Algorithms 2024, 17(11), 518; https://doi.org/10.3390/a17110518 - 9 Nov 2024
Viewed by 492
Abstract
This study introduces a novel measure for evaluating attribute relevance, specifically designed to accurately identify attributes that are intrinsically related to a phenomenon, while being sensitive to the asymmetry of those relationships and noise conditions. Traditional variable selection techniques, such as filter and wrapper methods, often fall short in capturing these complexities. Our methodology, grounded in decision trees but extendable to other machine learning models, was rigorously evaluated across various data scenarios. The results demonstrate that our measure effectively distinguishes relevant from irrelevant attributes and highlights how relevance is influenced by noise, providing a more nuanced understanding compared to established methods such as Pearson, Spearman, Kendall, MIC, MAS, MEV, GMIC, and Phik. This research underscores the importance of phenomenon-centric explainability, reproducibility, and robust attribute relevance evaluation in the development of predictive models. By enhancing both the interpretability and contextual accuracy of models, our approach not only supports more informed decision making but also contributes to a deeper understanding of the underlying mechanisms in diverse application domains, such as biomedical research, financial modeling, astronomy, and others. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
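As a simplified stand-in for the idea (not the article's relevance measure), one can score attributes with a decision tree and permutation importance and watch noise erode the score of the truly relevant attribute:

```python
# Tree-based attribute scoring under increasing noise; a generic companion
# experiment, not the Attribute Relevance Score itself.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 2_000
relevant = rng.normal(size=n)
irrelevant = rng.normal(size=n)
for noise in (0.1, 1.0, 3.0):
    y = np.sin(relevant) + rng.normal(0, noise, n)
    X = np.c_[relevant, irrelevant]
    tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
    imp = permutation_importance(tree, X, y, n_repeats=10, random_state=0)
    print(f"noise={noise}: importances = {imp.importances_mean.round(3)}")
```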
13 pages, 1056 KiB  
Article
A Framework for Evaluating Dynamic Directed Brain Connectivity Estimation Methods Using Synthetic EEG Signal Generation
by Zoran Šverko, Saša Vlahinić and Peter Rogelj
Algorithms 2024, 17(11), 517; https://doi.org/10.3390/a17110517 - 9 Nov 2024
Viewed by 409
Abstract
This study presents a method for generating synthetic electroencephalography (EEG) signals to test dynamic directed brain connectivity estimation methods. Current methods for evaluating dynamic brain connectivity estimation techniques face challenges due to the lack of ground truth in real EEG signals. To address this, we propose a framework for generating synthetic EEG signals with predefined dynamic connectivity changes. Our approach allows for evaluating and optimizing dynamic connectivity estimation methods, particularly Granger causality (GC). We demonstrate the framework’s utility by identifying optimal window sizes and regression orders for GC analysis. The findings could guide the development of more accurate dynamic connectivity techniques. Full article
(This article belongs to the Special Issue Artificial Intelligence and Signal Processing: Circuits and Systems)
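The framework's key trick, known ground truth, can be imitated with two synthetic channels whose directed coupling x → y switches on halfway through, then scanned with a sliding-window Granger test; the window size and regression order below are arbitrary choices of this sketch.

```python
# Synthetic channels with a known coupling x -> y that starts at t = 2000,
# scanned with windowed Granger causality tests.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n, half = 4_000, 2_000
x = rng.normal(size=n)
y = rng.normal(size=n)
y[half:] += 0.8 * x[half - 1:n - 1]      # lag-1 coupling in the second half

win, order = 500, 1
for start in range(0, n - win + 1, win):
    seg = np.column_stack([y[start:start + win], x[start:start + win]])
    res = grangercausalitytests(seg, maxlag=order, verbose=False)
    p = res[order][0]["ssr_ftest"][1]    # p-value of the F-test at lag 1
    print(f"t={start:4d}..{start + win:4d}  p(x->y) = {p:.4f}")
```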
16 pages, 2594 KiB  
Article
Topological Reinforcement Adaptive Algorithm (TOREADA) Application to the Alerting of Convulsive Seizures and Validation with Monte Carlo Numerical Simulations
by Stiliyan Kalitzin
Algorithms 2024, 17(11), 516; https://doi.org/10.3390/a17110516 - 8 Nov 2024
Viewed by 393
Abstract
The detection of adverse events—for example, convulsive epileptic seizures—can be critical for patients suffering from a variety of pathological syndromes. Algorithms using remote sensing modalities, such as a video camera input, can be effective for real-time alerting, but the broad variability of environments and numerous nonstationary factors may limit their precision. In this work, we address the issue of adaptive reinforcement that can provide flexible applications in alerting devices. The general concept of our approach is the topological reinforced adaptive algorithm (TOREADA). Three essential steps—embedding, assessment, and envelope—act iteratively during the operation of the system, thus providing continuous, on-the-fly, reinforced learning. We apply this concept in the case of detecting convulsive epileptic seizures, where three parameters define the decision manifold. Monte Carlo-type simulations validate the effectiveness and robustness of the approach. We show that the adaptive procedure finds the correct detection parameters, providing optimal accuracy from a large variety of initial states. With respect to the separation quality between simulated seizure and normal epochs, the detection reinforcement algorithm is robust within the broad margins of signal-generation scenarios. We conclude that our technique is applicable to a large variety of event detection systems. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
28 pages, 5224 KiB  
Article
Unsupervised Image Segmentation on 2D Echocardiogram
by Gabriel Farias Cacao, Dongping Du and Nandini Nair
Algorithms 2024, 17(11), 515; https://doi.org/10.3390/a17110515 - 7 Nov 2024
Viewed by 411
Abstract
Echocardiography is a widely used, non-invasive imaging technique for diagnosing and monitoring heart conditions. However, accurate segmentation of cardiac structures, particularly the left ventricle, remains a complex task due to the inherent variability and noise in echocardiographic images. Current supervised models have achieved state-of-the-art results but are highly dependent on large, annotated datasets, which are costly and time-consuming to obtain and depend on the quality of the annotated data. These limitations motivate the need for unsupervised methods that can generalize across different image conditions without relying on annotated data. In this study, we propose an unsupervised approach for segmenting 2D echocardiographic images. By combining customized objective functions with convolutional neural networks (CNNs), our method effectively segments cardiac structures, addressing the challenges posed by low-resolution and gray-scale images. Our approach leverages techniques traditionally used outside of medical imaging, optimizing feature extraction through CNNs in a data-driven manner and with a new and smaller network design. Another key contribution of this work is the introduction of a post-processing algorithm that refines the segmentation to isolate the left ventricle in both diastolic and systolic positions, enabling the calculation of the ejection fraction (EF). This calculation serves as a benchmark for evaluating the performance of our unsupervised method. Our results demonstrate the potential of unsupervised learning to improve echocardiogram analysis by overcoming the limitations of supervised approaches, particularly in settings where labeled data are scarce or unavailable. Full article
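The EF benchmark at the end of the pipeline is a small computation once the left-ventricle masks are available; the sketch below uses the single-plane area-length volume estimate V = 8A²/(3πL) on synthetic elliptical masks as stand-ins for real segmentations.

```python
# Ejection fraction from LV masks at end-diastole and end-systole, using the
# single-plane area-length volume estimate V = 8*A^2 / (3*pi*L).
import numpy as np

def lv_volume(mask, mm_per_px):
    area = mask.sum() * mm_per_px ** 2            # cavity area A (mm^2)
    ys, _ = np.nonzero(mask)
    length = (ys.max() - ys.min()) * mm_per_px    # long-axis length L (mm)
    return 8 * area ** 2 / (3 * np.pi * length)

def ellipse_mask(h, w, ry, rx):
    yy, xx = np.ogrid[:h, :w]
    return ((yy - h / 2) / ry) ** 2 + ((xx - w / 2) / rx) ** 2 <= 1

ed = ellipse_mask(256, 256, 80, 45)   # end-diastolic cavity (stand-in)
es = ellipse_mask(256, 256, 60, 35)   # end-systolic cavity (stand-in)
edv, esv = lv_volume(ed, 0.5), lv_volume(es, 0.5)
print(f"EF = {(edv - esv) / edv * 100:.1f}%")
```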
22 pages, 436 KiB  
Article
Data-Driven Formation Control for Multi-Vehicle Systems Induced by Leader Motion
by Gianfranco Parlangeli
Algorithms 2024, 17(11), 514; https://doi.org/10.3390/a17110514 - 7 Nov 2024
Viewed by 296
Abstract
In this paper, a leader motion mechanism is studied for the finite-time achievement of any desired formation of a multi-agent system. The approach adopted in this paper applies a recent leader-motion-based technique to the formation control problem of second-order systems, with a special focus on networks of mobile devices and teams of vehicles. After a thorough description of the problem framework, the leader motion mechanism is designed to accomplish the prescribed formation attainment in finite time. Both asymptotic and transient behavior are thoroughly analyzed to derive the appropriate analytical conditions for the controller design. The overall algorithm is then finalized by two procedures that allow the exploitation of local data only, and the leader motion mechanism is performed based on data collected by the leader during a preliminary experimental stage. A final section of simulation results closes the paper, confirming the effectiveness of the proposed strategy for formation control of a multi-agent system. Full article
(This article belongs to the Special Issue Intelligent Algorithms for Networked Robotic Systems)
16 pages, 2633 KiB  
Article
Bus Network Adjustment Pre-Evaluation Based on Biometric Recognition and Travel Spatio-Temporal Deduction
by Qingbo Wei, Nanfeng Zhang, Yuan Gao, Cheng Chen, Li Wang and Jingfeng Yang
Algorithms 2024, 17(11), 513; https://doi.org/10.3390/a17110513 - 7 Nov 2024
Viewed by 322
Abstract
A critical component of bus network adjustment is the accurate prediction of potential risks, such as the likelihood of complaints from passengers. Traditional simulation methods, however, face limitations in identifying passengers and understanding how their travel patterns may change. To address this issue, a pre-evaluation method has been developed, leveraging the spatial distribution of bus networks and the spatio-temporal behavior of passengers. The method includes stages of travel demand analysis, accessible path set calculation, passenger assignment, and evaluation of key indicators. First, we extract the actual passengers’ origin and destination (OD) stops from bus card (or passenger code) payment data and biometric recognition data, with the OD as one of the main input parameters. Second, a digital bus network model is constructed to represent the logical and spatial relationships between routes and stops. Upon inputting bus line adjustment parameters, these relationships allow for the precise and automatic identification of the affected areas, as well as the calculation of the accessible paths of each OD pair. Third, the factors influencing passengers’ path selection are analyzed, and a predictive model is built to estimate post-adjustment path choices. A genetic algorithm is employed to optimize the model’s weights. Finally, various metrics, such as changes in travel routes and ride times, are analyzed by integrating passenger profiles. The proposed method was tested on the case of the Guangzhou 543 route adjustment. Results show that the accuracy of the number of predicted trips after adjustment is 89.6%, and the predicted flow of each associated bus line is also consistent with the actual situation. The main source of error is that path selection has a certain level of irrationality: the proportion of passengers who choose the minimum-cost path for direct travel is about 65%, while the proportion for one-transfer passengers is only about 50%. Overall, the proposed algorithm can quantitatively analyze the impact of bus line adjustment on rigid travel groups, occasional travel groups, elderly groups, and other groups that are prone to making complaints. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
14 pages, 2862 KiB  
Article
Optimizing Parameters for Enhanced Iterative Image Reconstruction Using Extended Power Divergence
by Takeshi Kojima, Yusaku Yamaguchi, Omar M. Abou Al-Ola and Tetsuya Yoshinaga
Algorithms 2024, 17(11), 512; https://doi.org/10.3390/a17110512 - 7 Nov 2024
Viewed by 336
Abstract
In this paper, we propose a method for optimizing the parameter values in iterative reconstruction algorithms that include adjustable parameters in order to optimize the reconstruction performance. Specifically, we focus on the power divergence-based expectation-maximization algorithm, which includes two power indices as adjustable parameters. Through numerical and physical experiments, we demonstrate that optimizing the evaluation function based on the extended power-divergence and weighted extended power-divergence measures yields high-quality image reconstruction. Notably, the optimal parameter values derived from the proposed method produce reconstruction results comparable to those obtained using the true image, even when using distance functions based on differences between forward projection data and measured projection data, as verified by numerical experiments. These results suggest that the proposed method effectively improves reconstruction quality without the need for machine-learning techniques in parameter selection. Our findings also indicate that this approach is useful for enhancing the performance of iterative reconstruction algorithms, especially in medical imaging, where high-accuracy reconstruction under noisy conditions is required. Full article
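For orientation, the classical MLEM update x ← x · Aᵀ(y/Ax)/Aᵀ1 is the baseline that power-divergence EM algorithms generalize with adjustable power indices; the sketch below shows only that baseline on a toy system, not the article's parameterized variant.

```python
# Classical MLEM iteration on a toy system matrix and Poisson-noisy
# projections; the article's power-divergence EM generalizes this update.
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0, 1, (60, 40))           # toy forward-projection matrix
x_true = rng.uniform(0.5, 2.0, 40)
y = rng.poisson(A @ x_true * 50) / 50.0   # noisy measured projections

x = np.ones(40)
sens = A.T @ np.ones(60)                  # sensitivity image A^T 1
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens     # multiplicative EM update
print(np.abs(x - x_true).mean())
```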
13 pages, 696 KiB  
Article
PIPET: A Pipeline to Generate PET Phantom Datasets for Reconstruction Based on Convolutional Neural Network Training
by Alejandro Sanz-Sanchez, Francisco B. García, Pablo Mesas-Lafarga, Joan Prats-Climent and María José Rodríguez-Álvarez
Algorithms 2024, 17(11), 511; https://doi.org/10.3390/a17110511 - 7 Nov 2024
Viewed by 423
Abstract
There has been a strong interest in using neural networks to solve several tasks in PET medical imaging. One of the main problems faced when using neural networks is the quality, quantity, and availability of data to train the algorithms. In order to address this issue, we have developed a pipeline that enables the generation of voxelized synthetic PET phantoms, simulates the acquisition of a PET scan, and reconstructs the image from the simulated data. In order to achieve these results, several pieces of software are used in the different steps of the pipeline. This pipeline solves the problem of generating diverse PET datasets and images of high quality for different types of phantoms and configurations. The data obtained from this pipeline can be used to train convolutional neural networks for PET reconstruction. Full article
28 pages, 4502 KiB  
Article
Improved Bacterial Foraging Optimization Algorithm with Machine Learning-Driven Short-Term Electricity Load Forecasting: A Case Study in Peninsular Malaysia
by Farah Anishah Zaini, Mohamad Fani Sulaima, Intan Azmira Wan Abdul Razak, Mohammad Lutfi Othman and Hazlie Mokhlis
Algorithms 2024, 17(11), 510; https://doi.org/10.3390/a17110510 - 6 Nov 2024
Viewed by 425
Abstract
Accurate electricity demand forecasting is crucial for ensuring the sustainability and reliability of power systems. Least square support vector machines (LSSVM) are well suited to handle complex non-linear power load series. However, the less optimal regularization parameter and the Gaussian kernel function in the LSSVM model have contributed to flawed forecasting accuracy and random generalization ability. Thus, these parameters of LSSVM need to be chosen appropriately using intelligent optimization algorithms. This study proposes a new hybrid model based on the LSSVM optimized by the improved bacterial foraging optimization algorithm (IBFOA) for forecasting the short-term daily electricity load in Peninsular Malaysia. The IBFOA based on the sine cosine equation addresses the limitations of fixed chemotaxis constants in the original bacterial foraging optimization algorithm (BFOA), enhancing its exploration and exploitation capabilities. Finally, the load forecasting model based on LSSVM-IBFOA is constructed using mean absolute percentage error (MAPE) as the objective function. The comparative analysis demonstrates that the model achieves the highest determination coefficient (R²) of 0.9880 and significantly reduces the average MAPE value by 28.36%, 27.72%, and 5.47% compared to the deep neural network (DNN), LSSVM, and LSSVM-BFOA, respectively. Additionally, IBFOA exhibits faster convergence times compared to BFOA, highlighting the practicality of LSSVM-IBFOA for short-term load forecasting. Full article
14 pages, 6820 KiB  
Article
Local Search Heuristic for the Two-Echelon Capacitated Vehicle Routing Problem in Educational Decision Support Systems
by José Pedro Gomes da Cruz, Matthias Winkenbach and Hugo Tsugunobu Yoshida Yoshizaki
Algorithms 2024, 17(11), 509; https://doi.org/10.3390/a17110509 - 6 Nov 2024
Viewed by 381
Abstract
This study focuses on developing a heuristic for Decision Support Systems (DSS) in e-commerce logistics education, specifically addressing the Two-Echelon Capacitated Vehicle Routing Problem (2E-CVRP). The 2E-CVRP involves using Urban Transshipment Points (UTPs) to optimize deliveries. To tackle the complexity of the 2E-CVRP, DSS can employ fast and effective techniques for visual problem-solving. Therefore, the objective of this work is to develop a local search heuristic to solve the 2E-CVRP quickly and efficiently for implementation in DSS. The efficiency of the heuristic is assessed through benchmarks from the literature and applied to real-world problems from a Brazilian e-commerce retailer, contributing to advancements in the 2E-CVRP approach and promoting operational efficiency in e-commerce logistics education. The heuristic yielded promising results, solving problems almost instantly, for instances in the literature on average in 1.06 s, with average gaps of 6.3% in relation to the best-known solutions and, for real problems with hundreds of customers, in 1.4 s, with gaps of 8.3%, demonstrating its effectiveness in achieving the study’s objectives. Full article
(This article belongs to the Special Issue New Insights in Algorithms for Logistics Problems and Management)
18 pages, 9219 KiB  
Article
Automated Evaluation Method for Risk Behaviors of Quay Crane Operators at Ports Using Virtual Reality
by Mengjie He, Yujie Zhang, Yi Liu, Yang Shen and Chao Mi
Algorithms 2024, 17(11), 508; https://doi.org/10.3390/a17110508 - 5 Nov 2024
Viewed by 440
Abstract
Currently, the operational risk assessment of quay crane operators at ports relies on manual evaluations based on experience, but this method lacks objectivity and fairness. As port throughput continues to grow, the port accident rate has also increased, making it crucial to scientifically evaluate the risk behaviors of operators and improve their safety awareness. This paper proposes an automated evaluation method based on a Deep Q-Network (DQN) to assess the risk behaviors of quay crane operators in virtual scenarios. A risk simulation module has been added to the existing automated quay crane remote operation simulation system to simulate potential risks during operations. Based on the collected data, a DQN-based benchmark model reflecting the operational behaviors and decision-making processes of skilled operators has been developed. This model enables a quantitative evaluation of operators’ behaviors, ensuring the objectivity and accuracy of the assessment process. The experimental results show that, compared with traditional manual scoring methods, the proposed method is more stable and objective, effectively reducing subjective biases and providing a reliable alternative to conventional manual evaluations. Additionally, this method enhances operators’ safety awareness and their ability to handle risks, helping them identify and avoid risks during actual operations, thereby ensuring both operational safety and efficiency. Full article
(This article belongs to the Special Issue Algorithms for Virtual and Augmented Environments)
20 pages, 3504 KiB  
Article
On the Estimation of Logistic Models with Banking Data Using Particle Swarm Optimization
by Moch. Fandi Ansori, Kuntjoro Adji Sidarto, Novriana Sumarti and Iman Gunadi
Algorithms 2024, 17(11), 507; https://doi.org/10.3390/a17110507 - 5 Nov 2024
Viewed by 400
Abstract
This paper presents numerical works on estimating some logistic models using particle swarm optimization (PSO). The considered models are the Verhulst model, Pearl and Reed generalization model, von Bertalanffy model, Richards model, Gompertz model, hyper-Gompertz model, Blumberg model, Turner et al. model, and Tsoularis model. We employ data on commercial and rural banking assets in Indonesia due to their tendency to correspond with logistic growth. Most banking asset forecasting uses statistical methods concentrating solely on short-term data forecasting. In banking asset forecasting, deterministic models are seldom employed, despite their capacity to predict data behavior for an extended time. Consequently, this paper employs logistic model forecasting. To improve the speed of the algorithm execution, we use the Cauchy criterion as one of the stopping criteria. For choosing the best model out of the nine models, we analyze several considerations such as the mean absolute percentage error, the root mean squared error, and the value of the carrying capacity in determining which models can be unselected. Consequently, we obtain the best-fitted model for each commercial and rural bank. We evaluate the performance of PSO against another metaheuristic algorithm known as spiral optimization for benchmarking purposes. We assess the robustness of the algorithm employing the Taguchi method. Ultimately, we present a novel logistic model which is a generalization of the existing models. We evaluate its parameters and compare the result with the best-obtained model. Full article
(This article belongs to the Special Issue New Insights in Algorithms for Logistics Problems and Management)
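A minimal particle swarm fitting the simplest of the nine models, the Verhulst equation x(t) = K/(1 + ((K − x0)/x0)e^(−rt)), by sum of squared errors; the swarm constants are generic textbook choices and the asset series is synthetic.

```python
# Bare-bones PSO fitting the Verhulst logistic model to a synthetic series.
import numpy as np

def verhulst(t, r, K, x0):
    return K / (1 + ((K - x0) / x0) * np.exp(-r * t))

rng = np.random.default_rng(0)
t = np.arange(60.0)
data = verhulst(t, 0.15, 100.0, 5.0) + rng.normal(0, 1.0, t.size)
sse = lambda p: np.sum((verhulst(t, *p) - data) ** 2)

n, dim = 40, 3
lo, hi = np.array([0.01, 50, 1]), np.array([1.0, 200, 20])  # bounds on r, K, x0
pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
for _ in range(300):
    gbest = pbest[pbest_f.argmin()]
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([sse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
print("r, K, x0 =", pbest[pbest_f.argmin()].round(2))
```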
20 pages, 2003 KiB  
Article
Enhanced Curvature-Based Fabric Defect Detection: An Experimental Study with Gabor Transform and Deep Learning
by Mehmet Erdogan and Mustafa Dogan
Algorithms 2024, 17(11), 506; https://doi.org/10.3390/a17110506 - 5 Nov 2024
Viewed by 462
Abstract
Quality control at every stage of production in the textile industry is essential for maintaining competitiveness in the global market. Manual fabric defect inspections are often characterized by low precision and high time costs, in contrast to intelligent anomaly detection systems implemented in the early stages of fabric production. To achieve successful automated fabric defect identification, significant challenges must be addressed, including accurate detection, classification, and decision-making processes. Traditionally, fabric defect classification has relied on inefficient and labor-intensive human visual inspection, particularly as the variety of fabric defects continues to increase. Despite the global chip crisis and its adverse effects on supply chains, electronic hardware costs for quality control systems have become more affordable. This presents a notable advantage, as vision systems can now be easily developed with the use of high-resolution, advanced cameras. In this study, we propose a discrete curvature algorithm, integrated with the Gabor transform, which demonstrates significant success in near real-time defect classification. The primary contribution of this work is the development of a modified curvature algorithm that achieves high classification performance without the need for training. This method is particularly efficient due to its low data storage requirements and minimal processing time, making it ideal for real-time applications. Furthermore, we implemented and evaluated several other methods from the literature, including Gabor and Convolutional Neural Networks (CNNs), within a unified coding framework. Each defect type was analyzed individually, with results indicating that the proposed algorithm exhibits comparable success and robust performance relative to deep learning-based approaches. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
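A small Gabor filter bank of the kind the method builds on can be assembled directly with OpenCV; abnormal response energy flags defect candidates. The discrete-curvature step is not reproduced here, and the decision threshold is arbitrary.

```python
# Gabor filter bank over four orientations on a fabric patch; unusually
# high response energy in one orientation suggests a defect candidate.
import cv2
import numpy as np

patch = np.random.randint(0, 255, (128, 128), np.uint8)  # stand-in fabric image
energies = []
for theta in np.arange(0, np.pi, np.pi / 4):             # 4 orientations
    kern = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                              lambd=10.0, gamma=0.5, psi=0)
    resp = cv2.filter2D(patch, cv2.CV_32F, kern)
    energies.append(float((resp ** 2).mean()))

print("orientation energies:", np.round(energies, 1))
print("defect candidate:", max(energies) > 2.0 * np.median(energies))
```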
23 pages, 7960 KiB  
Article
Novelty in Intelligent Controlled Oscillations in Smart Structures
by Amalia Moutsopoulou, Markos Petousis, Georgios E. Stavroulakis, Anastasios Pouliezos and Nectarios Vidakis
Algorithms 2024, 17(11), 505; https://doi.org/10.3390/a17110505 - 4 Nov 2024
Viewed by 411
Abstract
Structural control techniques can be used to protect engineering structures. By computing instantaneous control forces based on the input from the observed reactions and adhering to a strong control strategy, intelligent control in structural engineering can be achieved. In this study, we employed intelligent piezoelectric patches to reduce vibrations in structures. The actuators and sensors were implemented using piezoelectric patches. We reduced structural oscillations by employing sophisticated intelligent control methods. Examples of such control methods include H-infinity and H2. An advantage of this study is that the results are presented for both static and dynamic loading, as well as for the frequency domain. Oscillation suppression must be achieved over the entire frequency range. In this study, advanced programming was used to solve this problem and complete oscillation suppression was achieved. This study explored in detail the methods and control strategies that can be used to address the problem of oscillations. These techniques have been thoroughly described and analyzed, offering valuable insights into their effective applications. The ability to reduce oscillations has significant implications for applications that extend to various structures and systems such as airplanes, metal bridges, and large metallic structures. Full article
24 pages, 2294 KiB  
Article
Fast Algorithm for Cyber-Attack Estimation and Attack Path Extraction Using Attack Graphs with AND/OR Nodes
by Eugene Levner and Dmitry Tsadikovich
Algorithms 2024, 17(11), 504; https://doi.org/10.3390/a17110504 - 4 Nov 2024
Viewed by 496
Abstract
This paper studies the security issues for cyber–physical systems, aimed at countering potential malicious cyber-attacks. The main focus is on solving the problem of extracting the most vulnerable attack path in a known attack graph, where an attack path is a sequence of steps that an attacker can take to compromise the underlying network. Determining an attacker’s possible attack path is critical to cyber defenders as it helps identify threats, harden the network, and thwart attacker’s intentions. We formulate this problem as a path-finding optimization problem with logical constraints represented by AND and OR nodes. We propose a new Dijkstra-type algorithm that combines elements from Dijkstra’s shortest path algorithm and the critical path method. Although the path extraction problem is generally NP-hard, for the studied special case, the proposed algorithm determines the optimal attack path in polynomial time, O(nm), where n is the number of nodes and m is the number of edges in the attack graph. To our knowledge this is the first exact polynomial algorithm that can solve the path extraction problem for different attack graphs, both cycle-containing and cycle-free. Computational experiments with real and synthetic data have shown that the proposed algorithm consistently and quickly finds optimal solutions to the problem. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
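A sketch of Dijkstra-style labeling on an AND/OR graph, in the spirit described above: an OR node is finalized at the cheapest incoming label, while an AND node waits until all of its predecessors are finalized and takes the dearest one, critical-path style. This is a simplified reading of the abstract, not the authors' exact algorithm.

```python
# Dijkstra-style labeling on an AND/OR attack graph with per-node costs.
import heapq

def and_or_shortest(graph, kind, cost, source, target):
    # graph: node -> list of successors; kind: node -> "AND" | "OR"
    dist = {source: 0.0}
    pending = {v: set(p for p in graph if v in graph[p]) for v in kind}
    done, heap = set(), [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if v in done:
            continue
        done.add(v)
        for u in graph.get(v, []):
            cand = d + cost[u]
            if kind[u] == "OR":                    # cheapest incoming label wins
                if cand < dist.get(u, float("inf")):
                    dist[u] = cand
                    heapq.heappush(heap, (cand, u))
            else:                                  # AND: wait for all predecessors
                pending[u].discard(v)
                dist[u] = max(dist.get(u, 0.0), cand)
                if not pending[u]:
                    heapq.heappush(heap, (dist[u], u))
    return dist.get(target)

graph = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
kind = {"s": "OR", "a": "OR", "b": "OR", "t": "AND"}
cost = {"s": 0, "a": 2, "b": 5, "t": 1}
print(and_or_shortest(graph, kind, cost, "s", "t"))  # 6.0: the AND waits for b
```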