Algorithms, Volume 15, Issue 9 (September 2022) – 38 articles

Cover Story: In configuration design, the task is to compose a system out of a set of predefined, modular building blocks. Product configuration systems implement reasoning techniques to model and explore the resulting solution spaces. Among others, the formulation of constraint satisfaction problems (CSPs) is state of the art and forms the background of many proprietary configuration engine software packages. Configuration design tasks can also be implemented in computer-aided design (CAD) systems, as these contain different techniques for knowledge-based product modeling, but the literature reports little about modeling examples or training materials. This article aims to bridge this gap and presents a step-by-step implementation guide for CSP-based CAD configurators, using Autodesk Inventor as an example.
19 pages, 845 KiB  
Article
Non-Interactive Decision Trees and Applications with Multi-Bit TFHE
by Jestine Paul, Benjamin Hong Meng Tan, Bharadwaj Veeravalli and Khin Mi Mi Aung
Algorithms 2022, 15(9), 333; https://doi.org/10.3390/a15090333 - 18 Sep 2022
Cited by 4 | Viewed by 1850
Abstract
Machine learning classification algorithms, such as decision trees and random forests, are commonly used in many applications. Clients who want to classify their data send it to a server that performs the inference using a trained model. The client must trust the server and provide the data in plaintext. Moreover, if the classification is done by a third-party cloud service, the model owner also needs to trust the cloud service. In this paper, we propose a protocol for privately evaluating decision trees. The protocol uses a novel private comparison function based on the fully homomorphic encryption over the torus (TFHE) scheme and a programmable bootstrapping technique. Our comparison function for 32-bit and 64-bit integers is 26% faster than the naive TFHE implementation. The protocol is designed to be non-interactive and is less complex than existing interactive protocols. Our experimental results show that our technique scales linearly with the depth of the decision tree and efficiently evaluates large decision trees on real datasets. Compared with the state of the art, ours is the only non-interactive protocol that evaluates a decision tree with high precision on encrypted parameters. The final download bandwidth is also 50% lower than the state of the art. Full article
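To make the evaluation flow concrete, the sketch below walks a decision tree in which every node comparison is delegated to a stand-in `private_compare` function. In the actual protocol, this comparison runs homomorphically under TFHE and every path is processed obliviously; the plaintext recursion here is purely illustrative.

```python
# Illustrative sketch of decision-tree evaluation with an abstract
# comparison primitive; `private_compare` stands in for the TFHE-based
# encrypted comparison described in the paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: int = 0                # index of the feature compared here
    threshold: int = 0              # comparison threshold
    left: Optional["Node"] = None   # taken when x[feature] <= threshold
    right: Optional["Node"] = None  # taken when x[feature] >  threshold
    label: Optional[int] = None     # set on leaves only

def private_compare(a: int, b: int) -> int:
    """Placeholder for the homomorphic comparison; returns 1 if a > b."""
    return int(a > b)

def evaluate(node: Node, x: list[int]) -> int:
    # The real protocol processes all paths so the server learns nothing
    # about which branch was taken; this sketch simply recurses.
    if node.label is not None:
        return node.label
    bit = private_compare(x[node.feature], node.threshold)
    return evaluate(node.right if bit else node.left, x)

tree = Node(feature=0, threshold=5,
            left=Node(label=0),
            right=Node(feature=1, threshold=2,
                       left=Node(label=1), right=Node(label=2)))
print(evaluate(tree, [7, 1]))  # -> 1
```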
15 pages, 999 KiB  
Article
Tree-Based Classifier Ensembles for PE Malware Analysis: A Performance Revisit
by Maya Hilda Lestari Louk and Bayu Adhi Tama
Algorithms 2022, 15(9), 332; https://doi.org/10.3390/a15090332 - 17 Sep 2022
Cited by 10 | Viewed by 2936
Abstract
Given the escalating number and variety of malware, combating it is becoming increasingly strenuous. Machine learning techniques are often used in the literature to automatically discover the models and patterns behind such challenges and create solutions that can keep up with the rapid pace at which malware evolves. This article compares various tree-based ensemble learning methods that have been proposed for the analysis of PE malware. A tree-based ensemble is an unconventional learning paradigm that constructs and combines a collection of base learners (e.g., decision trees), as opposed to the conventional learning paradigm, which aims to construct individual learners from training data. Several tree-based ensemble techniques, such as random forest, XGBoost, CatBoost, GBM, and LightGBM, are taken into consideration and are appraised using different performance measures, such as accuracy, MCC, precision, recall, AUC, and F1. In addition, the experiment includes several public datasets, such as BODMAS, Kaggle, and CIC-MalMem-2022, to demonstrate the generalizability of the classifiers in a variety of contexts. Based on the test findings, all tree-based ensembles performed well, and performance differences between algorithms are not statistically significant, particularly when their respective hyperparameters are appropriately configured. The proposed tree-based ensemble techniques also outperformed other, similar PE malware detectors published in recent years. Full article
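A minimal sketch of the kind of benchmarking loop such a comparison entails, using scikit-learn ensembles on a synthetic stand-in dataset rather than the malware corpora named above (XGBoost, CatBoost, and LightGBM live in external packages but expose the same fit/predict interface):

```python
# Several tree ensembles, several metrics, one (synthetic) dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    y_hat = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, y_hat):.3f} "
          f"f1={f1_score(y_te, y_hat):.3f} "
          f"mcc={matthews_corrcoef(y_te, y_hat):.3f}")
```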
13 pages, 3774 KiB  
Article
Irregular Workpiece Template-Matching Algorithm Using Contour Phase
by Shaohui Su, Jiadong Wang and Dongyang Zhang
Algorithms 2022, 15(9), 331; https://doi.org/10.3390/a15090331 - 16 Sep 2022
Cited by 1 | Viewed by 1610
Abstract
The current template-matching algorithm can match the target workpiece but cannot give the position and orientation of an irregular workpiece. To address this problem, this paper proposes a template-matching algorithm for irregular workpieces based on the contour phase difference. With this method, one first obtains the profile curve of the irregular workpiece by measuring its radius in an orderly fashion; then calculates the similarity between the template workpiece profile and the target one by phase shifting; and finally computes the rotation between the two from the number of phase movements. The experimental results showed that this approach not only accurately matches shaped workpieces in the template base, but also accurately calculates the rotation angle of the irregular workpiece relative to the template, with the maximum error kept within 1%. Full article
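The core idea (describe each contour as an ordered radius-versus-angle signature, then find the cyclic shift that best aligns template and target) can be sketched as follows; the binning scheme and scoring are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def radius_signature(points: np.ndarray, n_bins: int = 360) -> np.ndarray:
    """Sample contour points into an ordered radius-vs-angle profile."""
    center = points.mean(axis=0)
    d = points - center
    angles = np.arctan2(d[:, 1], d[:, 0])
    radii = np.hypot(d[:, 0], d[:, 1])
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    sig = np.zeros(n_bins)
    np.maximum.at(sig, bins, radii)   # keep outermost radius per angle bin
    return sig

def best_phase_shift(template: np.ndarray, target: np.ndarray) -> int:
    """Cyclic shift (in bins) that best aligns the two radius profiles."""
    scores = [np.dot(template, np.roll(target, k))
              for k in range(len(template))]
    return int(np.argmax(scores))

# rotation angle in degrees = best_phase_shift(...) * (360 / n_bins)
```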
8 pages, 1976 KiB  
Review
Polymer Models of Chromatin Imaging Data in Single Cells
by Mattia Conte, Andrea M. Chiariello, Alex Abraham, Simona Bianco, Andrea Esposito, Mario Nicodemi, Tommaso Matteuzzi and Francesca Vercellone
Algorithms 2022, 15(9), 330; https://doi.org/10.3390/a15090330 - 16 Sep 2022
Cited by 3 | Viewed by 1685
Abstract
Recent super-resolution imaging technologies enable tracing chromatin conformation with nanometer-scale precision at the single-cell level. They revealed, for example, that human chromosomes fold into a complex three-dimensional structure within the cell nucleus that is essential for establishing biological activities, such as the regulation of genes. Yet, to decode from imaging data the molecular mechanisms that shape the structure of the genome, quantitative methods are required. In this review, we consider polymer-physics models of chromosome folding that we benchmark against multiplexed FISH data available for human loci in IMR90 fibroblast cells. By combining polymer theory, numerical simulations, and machine learning strategies, the predictions of the models are validated at the single-cell level, showing that chromosome structure is controlled by the interplay of distinct physical processes, such as active loop extrusion and thermodynamic phase separation. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
20 pages, 3066 KiB  
Article
Classification of Program Texts Represented as Markov Chains with Biology-Inspired Algorithms-Enhanced Extreme Learning Machines
by Liliya A. Demidova and Artyom V. Gorchakov
Algorithms 2022, 15(9), 329; https://doi.org/10.3390/a15090329 - 15 Sep 2022
Cited by 5 | Viewed by 1999
Abstract
The massive scale of modern university programming courses increases the burden on academic workers. The Digital Teaching Assistant (DTA) system addresses this issue by automating the generation and checking of unique programming exercises, and provides means for analyzing programs received from students by the end of the semester. In this paper, we propose a machine learning-based approach to the classification of student programs represented as Markov chains. The proposed approach enables real-time analysis of student submissions in the DTA system. We compare the performance of different multi-class classification algorithms, such as support vector machine (SVM), the k-nearest neighbors (KNN) algorithm, random forest (RF), and extreme learning machine (ELM). ELM is a single-hidden-layer feedforward network (SLFN) learning scheme that drastically speeds up the SLFN training process. This is achieved by randomly initializing the weights of connections between input and hidden neurons, and explicitly computing the weights of connections between hidden and output neurons. The experimental results show that ELM is the most computationally efficient of the considered algorithms. In addition, we apply biology-inspired algorithms to the fine-tuning of ELM input weights in order to further improve the generalization capabilities of this algorithm. The obtained results show that ELMs fine-tuned with biology-inspired algorithms achieve the best accuracy on test data in most of the considered problems. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications III)
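The ELM training step summarized above (random, fixed input weights; closed-form output weights) fits in a few lines of NumPy; this generic sketch is not the DTA system's implementation.

```python
import numpy as np

def train_elm(X, Y, n_hidden=128, seed=0):
    """Random fixed input weights; output weights via least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                         # hidden activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # explicit output solve
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```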
21 pages, 2743 KiB  
Concept Paper
A Model Architecture for Public Transport Networks Using a Combination of a Recurrent Neural Network Encoder Library and an Attention Mechanism
by Thilo Reich, David Hulbert and Marcin Budka
Algorithms 2022, 15(9), 328; https://doi.org/10.3390/a15090328 - 14 Sep 2022
Cited by 1 | Viewed by 1395
Abstract
This study presents a working concept of a model architecture that leverages the state of an entire transport network to make estimated arrival time (ETA) and next-step location predictions. To this end, an attention mechanism is combined with a dynamically changing recurrent neural network (RNN)-based encoder library: the attention mechanism incorporates the states of other vehicles in the network, whose positions are encoded by gated recurrent units (GRUs), one per bus line, capturing each line's current state. By muting specific parts of the input information, their impact on prediction accuracy can be estimated on a subset of the available data. The results of the experimental investigation show that the full model with access to all the network data performed better in some scenarios. However, a model limited to vehicles of the same line ahead of the target was the best-performing model, suggesting that incorporating additional data can harm prediction accuracy if it does not add useful information. This could be caused by poor data quality but also by a lack of interaction between the included lines and the target line. The technical aspects of this study are challenging and resulted in a very inefficient training procedure. We highlight several areas where improvements to our presented method are required to make it a viable alternative to current methods. The findings of this study should be considered a possible and promising avenue for further research into this novel architecture. As such, it is a stepping stone for future research to improve public transport predictions, provided network operators supply high-quality datasets. Full article
(This article belongs to the Special Issue Machine Learning for Time Series Analysis)
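As a rough illustration of the general shape such an architecture could take, the PyTorch sketch below encodes each bus line's vehicles with its own GRU and lets the target vehicle's state attend over the per-line encodings; all layer sizes, input dimensions, and the regression head are invented for the example.

```python
import torch
import torch.nn as nn

class NetworkAttention(nn.Module):
    """Sketch: per-line GRU encoders plus attention over their final states."""
    def __init__(self, n_lines: int, in_dim: int = 4, hid: int = 32):
        super().__init__()
        self.line_enc = nn.ModuleList(
            nn.GRU(in_dim, hid, batch_first=True) for _ in range(n_lines))
        self.target_enc = nn.GRU(in_dim, hid, batch_first=True)
        self.attn = nn.MultiheadAttention(hid, num_heads=4, batch_first=True)
        self.head = nn.Linear(hid, 1)                 # ETA regression head

    def forward(self, target_seq, line_seqs):
        # target_seq: (B, T, in_dim); line_seqs: one (B, T, in_dim) per line
        _, h_t = self.target_enc(target_seq)          # (1, B, hid)
        states = [self.line_enc[i](s)[1] for i, s in enumerate(line_seqs)]
        keys = torch.cat(states, dim=0).transpose(0, 1)   # (B, n_lines, hid)
        query = h_t.transpose(0, 1)                       # (B, 1, hid)
        ctx, _ = self.attn(query, keys, keys)             # network context
        return self.head(ctx.squeeze(1))                  # predicted ETA

# usage: model = NetworkAttention(n_lines=3); model(target, [s1, s2, s3])
```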
16 pages, 6029 KiB  
Article
Infrared Image Deblurring Based on Lp-Pseudo-Norm and High-Order Overlapping Group Sparsity Regularization
by Zhen Ye, Xiaoming Ou, Juhua Huang and Yingpin Chen
Algorithms 2022, 15(9), 327; https://doi.org/10.3390/a15090327 - 14 Sep 2022
Cited by 1 | Viewed by 1277
Abstract
A traditional total variation (TV) model for infrared image deblurring amid salt-and-pepper noise produces a severe staircase effect. A TV model with low-order overlapping group sparsity (LOGS) suppresses this effect; however, it considers only the prior information of the low-order gradient of the image. This study proposes an image-deblurring model (Lp_HOGS), based on the LOGS model, to mine the high-order prior information of an infrared (IR) image amid salt-and-pepper noise. An Lp-pseudo-norm was used to model the salt-and-pepper noise and obtain a more accurate noise model. Simultaneously, a second-order total variation regularization term with overlapping group sparsity was introduced into the proposed model to further mine the high-order prior information of the image and preserve additional image details. The proposed model is solved using the alternating direction method of multipliers (ADMM), which obtains the optimal solution of the overall model by solving several simple decoupled subproblems. Experimental results show that the model has better subjective and objective performance than Lp_LOGS and other advanced models, especially when eliminating motion blur. Full article
22 pages, 2300 KiB  
Article
Autonomous Intersection Management by Using Reinforcement Learning
by P. Karthikeyan, Wei-Lun Chen and Pao-Ann Hsiung
Algorithms 2022, 15(9), 326; https://doi.org/10.3390/a15090326 - 13 Sep 2022
Cited by 4 | Viewed by 2236
Abstract
Developing a safer and more effective intersection-control system is essential given the trends of rising populations and vehicle numbers. Additionally, as vehicle communication and self-driving technologies evolve, we may create a more intelligent control system to reduce traffic accidents. We recommend deep reinforcement learning-inspired autonomous intersection management (DRLAIM) to improve traffic-environment efficiency and safety. The three primary components of this methodology are the priority assignment model, intersection-control model learning, and brake-safe control. The brake-safe control module ensures that each vehicle travels safely, and we train the system to acquire an effective model by using reinforcement learning. We have simulated our proposed method by using the Simulation of Urban MObility (SUMO) tool. Experimental results show that our approach outperforms the traditional method. Full article
(This article belongs to the Special Issue Neural Network for Traffic Forecasting)
15 pages, 600 KiB  
Article
Fed-DeepONet: Stochastic Gradient-Based Federated Training of Deep Operator Networks
by Christian Moya and Guang Lin
Algorithms 2022, 15(9), 325; https://doi.org/10.3390/a15090325 - 12 Sep 2022
Cited by 3 | Viewed by 2335
Abstract
The Deep Operator Network (DeepONet) framework is a class of neural network architectures that are trained to learn nonlinear operators, i.e., mappings between infinite-dimensional spaces. Traditionally, DeepONets are trained using a centralized strategy that requires transferring the training data to a central location. Such a strategy, however, limits our ability to secure data privacy or use high-performance distributed/parallel computing platforms. To alleviate such limitations, in this paper, we study the federated training of DeepONets for the first time. That is, we develop a framework, which we refer to as Fed-DeepONet, that allows multiple clients to train DeepONets collaboratively under the coordination of a centralized server. To achieve Fed-DeepONets, we propose an efficient stochastic gradient-based algorithm that enables the distributed optimization of the DeepONet parameters by averaging first-order estimates of the DeepONet loss gradient. Then, to accelerate the training convergence of Fed-DeepONets, we propose a moment-enhanced (i.e., adaptive) stochastic gradient-based strategy. Finally, we verify the performance of Fed-DeepONet by learning, for different configurations of the number of clients and fractions of available clients, (i) the solution operator of a gravity pendulum and (ii) the dynamic response of a parametric library of pendulums. Full article
(This article belongs to the Special Issue Gradient Methods for Optimization)
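The gradient-averaging step at the heart of such federated training can be illustrated with a plain least-squares model standing in for the DeepONet: each client computes a first-order estimate on its local data and the server averages them.

```python
import numpy as np

def client_grad(theta, X, y):
    """First-order estimate of the loss gradient on one client's data."""
    return 2.0 * X.T @ (X @ theta - y) / len(y)

def fed_train(clients, theta, lr=0.1, rounds=200):
    """Each round, the server averages the clients' gradient estimates."""
    for _ in range(rounds):
        grads = [client_grad(theta, X, y) for X, y in clients]
        theta = theta - lr * np.mean(grads, axis=0)
    return theta

rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
clients = []
for _ in range(4):                        # four clients with private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_theta))
print(fed_train(clients, np.zeros(2)))    # converges near [ 2. -1.]
```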
11 pages, 650 KiB  
Article
Accounting for Round-Off Errors When Using Gradient Minimization Methods
by Dmitry Lukyanenko, Valentin Shinkarev and Anatoly Yagola
Algorithms 2022, 15(9), 324; https://doi.org/10.3390/a15090324 - 9 Sep 2022
Cited by 2 | Viewed by 1855
Abstract
This paper discusses a method for taking into account rounding errors when constructing a stopping criterion for the iterative process in gradient minimization methods. The main aim of this work was to develop methods for improving the quality of the solutions for real applied minimization problems, which require significant amounts of calculations and, as a result, can be sensitive to the accumulation of rounding errors. However, this paper demonstrates that the developed approach can also be useful in solving computationally small problems. The main ideas of this work are demonstrated using one of the possible implementations of the conjugate gradient method for solving an overdetermined system of linear algebraic equations with a dense matrix. Full article
(This article belongs to the Special Issue Gradient Methods for Optimization)
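For orientation, here is a standard conjugate-gradient loop on the normal equations of an overdetermined system, with the usual residual-based stopping test; the paper's contribution, a criterion that additionally accounts for accumulated round-off, is not reproduced here.

```python
import numpy as np

def cgls(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradients on the normal equations A^T A x = A^T b."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)          # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        # naive stopping rule; the paper replaces this with one that
        # takes the accumulation of rounding errors into account
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```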
2 pages, 171 KiB  
Editorial
Special Issue: Stochastic Algorithms and Their Applications
by Stéphanie Allassonnière
Algorithms 2022, 15(9), 323; https://doi.org/10.3390/a15090323 - 9 Sep 2022
Viewed by 1054
Abstract
Stochastic algorithms are at the core of machine learning and artificial intelligence [...] Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
11 pages, 355 KiB  
Article
Projection onto the Set of Rank-Constrained Structured Matrices for Reduced-Order Controller Design
by Masaaki Nagahara, Yu Iwai and Noboru Sebe
Algorithms 2022, 15(9), 322; https://doi.org/10.3390/a15090322 - 9 Sep 2022
Cited by 1 | Viewed by 1706
Abstract
In this paper, we propose an efficient numerical computation method of reduced-order controller design for linear time-invariant systems. The design problem is described by linear matrix inequalities (LMIs) with a rank constraint on a structured matrix, due to which the problem is non-convex. Instead of the heuristic method that approximates the matrix rank by the nuclear norm, we propose a numerical projection onto the rank-constrained set based on the alternating direction method of multipliers (ADMM). Then the controller is obtained by alternating projection between the rank-constrained set and the LMI set. We show the effectiveness of the proposed method compared with existing heuristic methods, by using 95 benchmark models from the COMPLeib library. Full article
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
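The non-convex half of the alternation is a projection onto the set of matrices of rank at most r, which is a truncated SVD by the Eckart-Young theorem. The sketch below alternates it with a toy convex projection (symmetrization) standing in for the paper's structured LMI set.

```python
import numpy as np

def project_rank(M: np.ndarray, r: int) -> np.ndarray:
    """Nearest matrix of rank <= r in Frobenius norm (truncated SVD)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

def alternating_projection(M, r, project_convex, iters=200):
    """Alternate between a convex set and the (non-convex) rank set."""
    X = M.copy()
    for _ in range(iters):
        X = project_rank(project_convex(X), r)
    return X

def symmetrize(X):
    # toy convex set standing in for the LMI constraints of the paper
    return 0.5 * (X + X.T)

X0 = np.random.default_rng(0).normal(size=(5, 5))
X = alternating_projection(X0, r=2, project_convex=symmetrize)
print(np.linalg.matrix_rank(X, tol=1e-9))  # -> 2
```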
20 pages, 2794 KiB  
Article
A Practical Staff Scheduling Strategy Considering Various Types of Employment in the Construction Industry
by Chan Hee Park and Young Dae Ko
Algorithms 2022, 15(9), 321; https://doi.org/10.3390/a15090321 - 9 Sep 2022
Cited by 2 | Viewed by 2239
Abstract
The Korean government implemented a 52-hour workweek policy for employees' welfare. Consequently, companies face reduced workforce availability with the same number of employees; that is, labor-dependent companies suffer from a workforce shortage. To handle the shortage, they hire more irregular employees, who are paid relatively less. However, a 'no-show' problem arises from the stochastic characteristics of irregular employees' absences. Therefore, this study proposes a staff scheduling strategy that considers irregular employee absence and the new labor policy by using linear programming. By deriving a deterministic staff schedule through system parameters drawn from the features and rules of an actual company in the numerical experiment, the practicality and applicability of the developed mathematical model are proven. Furthermore, sensitivity analysis and simulation considering the stochastic characteristics of absences provide various proactive cases. These cases show how changes in the average rate of irregular employees' absences affect total labor costs and staff schedules, and give the expected number of employees who would not come to work when the model is applied in practice. This finding can help decision-makers prepare precautionary measures, such as assigning extra employees in case of an irregular employee's absence. Full article
(This article belongs to the Special Issue Scheduling: Algorithms and Applications)
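A toy flavor of such a staffing model, written with scipy.optimize.linprog: minimize daily labor cost over regular and irregular headcounts subject to a coverage floor and a mix policy. All numbers are invented; the paper's model additionally encodes shift rules, absence rates, and the 52-hour cap.

```python
from scipy.optimize import linprog

# Decision variables: x = [regular, irregular] headcount (toy model).
cost = [150.0, 90.0]                    # hypothetical wage per worker-day
A_ub = [[-1.0, -1.0],                   # regular + irregular >= 20 (coverage)
        [-1.0,  1.0]]                   # irregular <= regular (mix policy)
b_ub = [-20.0, 0.0]
res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])
print(res.x)   # -> [10. 10.]: cheapest mix meeting both constraints
```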
27 pages, 1604 KiB  
Article
Adaptive Piecewise Poly-Sinc Methods for Ordinary Differential Equations
by Omar Khalil, Hany El-Sharkawy, Maha Youssef and Gerd Baumann
Algorithms 2022, 15(9), 320; https://doi.org/10.3390/a15090320 - 8 Sep 2022
Cited by 2 | Viewed by 1768
Abstract
We propose a new method of adaptive piecewise approximation based on Sinc points for ordinary differential equations. The adaptive method is a piecewise collocation method which utilizes Poly-Sinc interpolation to reach a preset level of accuracy for the approximation. Our work extends the adaptive piecewise Poly-Sinc method to function approximation, for which we derived an a priori error estimate for our adaptive method and showed its exponential convergence in the number of iterations. In this work, we show the exponential convergence in the number of iterations of the a priori error estimate obtained from the piecewise collocation method, provided that a good estimate of the exact solution of the ordinary differential equation at the Sinc points exists. We use a statistical approach for partition refinement. The adaptive greedy piecewise Poly-Sinc algorithm is validated on regular and stiff ordinary differential equations. Full article
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
22 pages, 939 KiB  
Article
Federated Optimization of ℓ0-norm Regularized Sparse Learning
by Qianqian Tong, Guannan Liang, Jiahao Ding, Tan Zhu, Miao Pan and Jinbo Bi
Algorithms 2022, 15(9), 319; https://doi.org/10.3390/a15090319 - 6 Sep 2022
Viewed by 2036
Abstract
Regularized sparse learning with the ℓ0-norm is important in many areas, including statistical learning and signal processing. Iterative hard thresholding (IHT) methods are the state of the art for nonconvex-constrained sparse learning due to their capability of recovering true support and scalability with large datasets. The current theoretical analysis of IHT assumes the use of centralized IID data. In realistic large-scale scenarios, however, data are distributed, seldom IID, and private to edge computing devices at the local level. Consequently, it is required to study the property of IHT in a federated environment, where local devices update the sparse model individually and communicate with a central server for aggregation infrequently without sharing local data. In this paper, we propose the first group of federated IHT methods: Federated Hard Thresholding (Fed-HT) and Federated Iterative Hard Thresholding (FedIter-HT), both with theoretical guarantees. We prove that both algorithms have a linear convergence rate and guarantee recovery of the optimal sparse estimator, comparable to classic IHT methods, but with decentralized, non-IID, and unbalanced data. Empirical results demonstrate that Fed-HT and FedIter-HT outperform their competitor, a distributed IHT, in terms of reducing objective values with fewer communication rounds and lower bandwidth requirements. Full article
(This article belongs to the Special Issue Gradient Methods for Optimization)
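The operator that gives IHT its name keeps only the k largest-magnitude coordinates. A federated round in this spirit then averages local gradient steps before re-thresholding, roughly as in this sketch with generic least-squares clients (not the paper's exact update):

```python
import numpy as np

def hard_threshold(x: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def fed_ht(clients, dim, k, lr=0.05, rounds=300):
    """Server averages local gradient steps, then re-thresholds."""
    theta = np.zeros(dim)
    for _ in range(rounds):
        steps = [theta - lr * 2 * X.T @ (X @ theta - y) / len(y)
                 for X, y in clients]
        theta = hard_threshold(np.mean(steps, axis=0), k)
    return theta

rng = np.random.default_rng(0)
truth = np.zeros(20)
truth[[3, 11]] = [1.5, -2.0]                  # a 2-sparse ground truth
clients = []
for _ in range(3):                            # three clients, local data only
    X = rng.normal(size=(40, 20))
    clients.append((X, X @ truth))
print(np.flatnonzero(fed_ht(clients, dim=20, k=2)))   # -> [ 3 11]
```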
28 pages, 4194 KiB  
Article
Joining Constraint Satisfaction Problems and Configurable CAD Product Models: A Step-by-Step Implementation Guide
by Paul Christoph Gembarski
Algorithms 2022, 15(9), 318; https://doi.org/10.3390/a15090318 - 6 Sep 2022
Cited by 2 | Viewed by 1739
Abstract
In configuration design, the task is to compose a system out of a set of predefined, modular building blocks assembled by defined interfaces. Product configuration systems, both with and without integration of geometric models, implement reasoning techniques to model and explore the resulting solution spaces. Among others, the formulation of constraint satisfaction problems (CSPs) is state of the art and the informational background in many proprietary configuration engine software packages. Configuration design tasks can also be implemented in modern computer-aided design (CAD) systems, as these contain different techniques for knowledge-based product modeling, but the literature reports little about detailed application examples, best practices, or training materials. This article aims at bridging this gap and presents a step-by-step implementation guide for CSP-based CAD configurators for combinatorial designs, using Autodesk Inventor as an example. Full article
(This article belongs to the Special Issue Combinatorial Designs: Theory and Applications)
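Stripped of the CAD layer, a configuration CSP is just finite-domain variables plus constraints over them. The brute-force sketch below enumerates a made-up two-component catalog; the article's actual implementation expresses the same structure inside Autodesk Inventor.

```python
from itertools import product

# Toy configuration CSP: pick a motor and a gearbox so that the output
# torque lands in a required window (catalog and rule are invented).
domains = {
    "motor":   ["M100", "M200", "M300"],
    "gearbox": ["G5", "G10"],
}
torque = {"M100": 100, "M200": 200, "M300": 300}   # motor torque, Nm
ratio = {"G5": 5, "G10": 10}                       # gearbox ratios

def satisfies(cfg):
    return 900 <= torque[cfg["motor"]] * ratio[cfg["gearbox"]] <= 2100

names = list(domains)
solutions = []
for values in product(*(domains[n] for n in names)):
    cfg = dict(zip(names, values))
    if satisfies(cfg):
        solutions.append(cfg)
print(solutions)   # the four feasible motor/gearbox combinations
```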
25 pages, 981 KiB  
Article
Improved Slime Mold Algorithm with Dynamic Quantum Rotation Gate and Opposition-Based Learning for Global Optimization and Engineering Design Problems
by Yunyang Zhang, Shiyu Du and Quan Zhang
Algorithms 2022, 15(9), 317; https://doi.org/10.3390/a15090317 - 4 Sep 2022
Cited by 5 | Viewed by 1755
Abstract
The slime mold algorithm (SMA) is a swarm-based metaheuristic algorithm inspired by the natural oscillatory patterns of slime molds. Compared with other algorithms, the SMA is competitive but still suffers from unbalanced development and exploration and the tendency to fall into local optima. To overcome these drawbacks, an improved SMA with a dynamic quantum rotation gate and opposition-based learning (DQOBLSMA) is proposed in this paper. Specifically, for the first time, two mechanisms are used simultaneously to improve the robustness of the original SMA: the dynamic quantum rotation gate and opposition-based learning. The dynamic quantum rotation gate proposes an adaptive parameter control strategy based on the fitness to achieve a balance between exploitation and exploration compared to the original quantum rotation gate. The opposition-based learning strategy enhances population diversity and avoids falling into the local optima. Twenty-three benchmark test functions verify the superiority of the DQOBLSMA. Three typical engineering design problems demonstrate the ability of the DQOBLSMA to solve practical problems. Experimental results show that the proposed algorithm outperforms other comparative algorithms in convergence speed, convergence accuracy, and reliability. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
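Of the two mechanisms, opposition-based learning is the simpler to state: for each candidate x in the box [lb, ub], also evaluate its opposite lb + ub - x and keep the better of the two. A generic sketch, not tied to the SMA update:

```python
import numpy as np

def opposition_step(pop, fitness, lb, ub):
    """For each candidate, also try its opposite point; keep the better."""
    opposite = lb + ub - pop                        # elementwise opposition
    f_pop = np.apply_along_axis(fitness, 1, pop)
    f_opp = np.apply_along_axis(fitness, 1, opposite)
    return np.where((f_opp < f_pop)[:, None], opposite, pop)  # minimization

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(6, 2))
pop = opposition_step(pop, sphere, lb=-5.0, ub=5.0)
```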
17 pages, 1342 KiB  
Article
Sustainable Risk Identification Using Formal Ontologies
by Avi Shaked and Oded Margalit
Algorithms 2022, 15(9), 316; https://doi.org/10.3390/a15090316 - 2 Sep 2022
Cited by 4 | Viewed by 1865
Abstract
The cyber threat landscape is highly dynamic, posing a significant risk to the operations of systems and organisations. An organisation should, therefore, continuously monitor for new threats and properly contextualise them to identify and manage the resulting risks. Risk identification is typically performed manually, relying on the integration of information from various systems as well as subject matter expert knowledge. This manual risk identification hinders the systematic consideration of new, emerging threats. This paper describes a novel method to promote automated cyber risk identification: OnToRisk. This artificial intelligence method integrates information from various sources using formal ontology definitions, and then relies on these definitions to robustly frame cybersecurity threats and provide risk-related insights. We describe a successful case study implementation of the method to frame the threat from a newly disclosed vulnerability and identify its induced organisational risk. The case study is representative of common and widespread real-life challenges, and, therefore, showcases the feasibility of using OnToRisk to sustainably identify new risks. Further applications may contribute to establishing OnToRisk as a comprehensive, disciplined mechanism for risk identification. Full article
10 pages, 879 KiB  
Article
High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms
by Moshe Sipper
Algorithms 2022, 15(9), 315; https://doi.org/10.3390/a15090315 - 2 Sep 2022
Cited by 5 | Viewed by 2855
Abstract
Hyperparameters in machine learning (ML) have received a fair amount of attention, and hyperparameter tuning has come to be regarded as an important step in the ML pipeline. However, just how useful is said tuning? While smaller-scale experiments have been previously conducted, herein we carry out a large-scale investigation, specifically one involving 26 ML algorithms, 250 datasets (regression and both binary and multinomial classification), 6 score metrics, and 28,857,600 algorithm runs. Analyzing the results, we conclude that for many ML algorithms, we should not expect considerable gains from hyperparameter tuning on average; however, there may be some datasets for which default hyperparameters perform poorly, especially for some algorithms. By defining a single hp_score value, which combines an algorithm’s accumulated statistics, we are able to rank the 26 ML algorithms from those expected to gain the most from hyperparameter tuning to those expected to gain the least. We believe such a study shall serve ML practitioners at large. Full article
(This article belongs to the Special Issue Algorithms for Natural Computing Models)
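The default-versus-tuned comparison underlying such a study reduces, per algorithm and dataset, to a few lines; the grid below is a hypothetical example on a single small dataset, purely for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# score with library defaults
default_score = cross_val_score(
    RandomForestClassifier(random_state=0), X, y).mean()

# score after tuning over a (hypothetical, tiny) grid
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_features": ["sqrt", None]},
    cv=5,
)
tuned_score = grid.fit(X, y).best_score_
print(f"default={default_score:.4f} tuned={tuned_score:.4f}")
```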
22 pages, 1304 KiB  
Article
CVE2ATT&CK: BERT-Based Mapping of CVEs to MITRE ATT&CK Techniques
by Octavian Grigorescu, Andreea Nica, Mihai Dascalu and Razvan Rughinis
Algorithms 2022, 15(9), 314; https://doi.org/10.3390/a15090314 - 31 Aug 2022
Cited by 12 | Viewed by 6721
Abstract
Since cyber-attacks are ever-increasing in number, intensity, and variety, a strong need for a global, standardized cyber-security knowledge database has emerged as a means to prevent and fight cybercrime. Attempts already exist in this regard. The Common Vulnerabilities and Exposures (CVE) list documents numerous reported software and hardware vulnerabilities, thus building a community-based dictionary of existing threats. The MITRE ATT&CK Framework describes adversary behavior and offers mitigation strategies for each reported attack pattern. While extremely powerful on their own, the tremendous extra benefit gained when linking these tools cannot be overlooked. This paper introduces a dataset of 1813 CVEs annotated with all corresponding MITRE ATT&CK techniques and proposes models to automatically link a CVE to one or more techniques based on the text description from the CVE metadata. We establish a strong baseline that considers classical machine learning models and state-of-the-art pre-trained BERT-based language models while counteracting the highly imbalanced training set with data augmentation strategies based on the TextAttack framework. We obtain promising results, as the best model achieved an F1-score of 47.84%. In addition, we perform a qualitative analysis that uses Lime explanations to point out limitations and potential inconsistencies in CVE descriptions. Our model plays a critical role in finding kill chain scenarios inside complex infrastructures and enables the prioritization of CVE patching by the threat level. We publicly release our code together with the dataset of annotated CVEs. Full article
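A classical-baseline version of this mapping task is multi-label text classification; the sketch below wires a TF-IDF representation into a one-vs-rest logistic regression on invented toy CVE descriptions and technique IDs, far simpler than the BERT-based models of the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# toy stand-ins for CVE descriptions and their ATT&CK technique labels
texts = ["buffer overflow allows remote code execution",
         "improper authentication allows credential access",
         "sql injection allows data exfiltration"]
labels = [["T1203"], ["T1110"], ["T1190"]]

Y = MultiLabelBinarizer().fit_transform(labels)      # binary label matrix
clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression()))
clf.fit(texts, Y)
print(clf.predict(["remote buffer overflow"]))       # one indicator row
```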
22 pages, 1059 KiB  
Review
Artificial Intelligence for Cell Segmentation, Event Detection, and Tracking for Label-Free Microscopy Imaging
by Lucia Maddalena, Laura Antonelli, Alexandra Albu, Aroj Hada and Mario Rosario Guarracino
Algorithms 2022, 15(9), 313; https://doi.org/10.3390/a15090313 - 31 Aug 2022
Cited by 9 | Viewed by 4077
Abstract
Background: Time-lapse microscopy imaging is a key approach for an increasing number of biological and biomedical studies to observe the dynamic behavior of cells over time, which helps quantify important data, such as the number of cells and their sizes, shapes, and dynamic interactions across time. Label-free imaging is an essential strategy for such studies as it ensures that native cell behavior remains uninfluenced by the recording process. Computer vision and machine/deep learning approaches have made significant progress in this area. Methods: In this review, we present an overview of methods, software, data, and evaluation metrics for the automatic analysis of label-free microscopy imaging. We aim to provide the interested reader with a unique source of information, with links for further detailed information. Results: We review the most recent methods for cell segmentation, event detection, and tracking. Moreover, we provide lists of publicly available software and datasets. Finally, we summarize the metrics most frequently adopted for evaluating the methods under examination. Conclusions: We provide hints on open challenges and future research directions. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
17 pages, 4178 KiB  
Article
Images Segmentation Based on Cutting the Graph into Communities
by Sergey V. Belim and Svetlana Yu. Belim
Algorithms 2022, 15(9), 312; https://doi.org/10.3390/a15090312 - 30 Aug 2022
Cited by 1 | Viewed by 1744
Abstract
This article considers the problem of image segmentation based on representing the image as an undirected weighted graph. Image segmentation is then equivalent to partitioning the graph into communities, with one image segment corresponding to each community. A region-growing algorithm searches for communities on the graph, using the average edge weight within a community as a measure of separation quality. The correlation radius determines the number of next-nearest neighbors connected by edges. Edge weight is a function of the difference between the color and geometric coordinates of pixels and is computed by an exponential law. Computer experiments determine the parameters of the algorithm. Full article
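The exponential edge-weight law alluded to above can be written directly; the two bandwidth parameters below are illustrative assumptions, not the values determined in the paper.

```python
import numpy as np

def edge_weight(p, q, sigma_color=10.0, sigma_xy=5.0):
    """Exponential-law weight between two pixels p and q.

    Each pixel is (x, y, r, g, b); the weight decays with both the
    color difference and the geometric distance.
    """
    xy = np.linalg.norm(np.subtract(p[:2], q[:2]))
    color = np.linalg.norm(np.subtract(p[2:], q[2:]))
    return float(np.exp(-(color / sigma_color) ** 2 - (xy / sigma_xy) ** 2))

print(edge_weight((0, 0, 120, 80, 40), (1, 0, 118, 82, 40)))  # similar pixels
print(edge_weight((0, 0, 120, 80, 40), (9, 0, 30, 200, 90)))  # near zero
```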
19 pages, 1284 KiB  
Article
Accelerating the Sinkhorn Algorithm for Sparse Multi-Marginal Optimal Transport via Fast Fourier Transforms
by Fatima Antarou Ba and Michael Quellmalz
Algorithms 2022, 15(9), 311; https://doi.org/10.3390/a15090311 - 30 Aug 2022
Cited by 2 | Viewed by 2233
Abstract
We consider the numerical solution of the discrete multi-marginal optimal transport (MOT) by means of the Sinkhorn algorithm. In general, the Sinkhorn algorithm suffers from the curse of dimensionality with respect to the number of marginals. If the MOT cost function decouples according to a tree or circle, its complexity is linear in the number of marginal measures. In this case, we speed up the convolution with the radial kernel required in the Sinkhorn algorithm via non-uniform fast Fourier methods. Each step of the proposed accelerated Sinkhorn algorithm with a tree-structured cost function has a complexity of O(KN) instead of the classical O(KN²) for straightforward matrix–vector operations, where K is the number of marginals and each marginal measure is supported on, at most, N points. In the case of a circle-structured cost function, the complexity improves from O(KN³) to O(KN²). This is confirmed through numerical experiments. Full article
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)
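For reference, the classical two-marginal Sinkhorn iteration that the paper generalizes and accelerates alternates diagonal scalings against a Gibbs kernel; the dense K @ v products below are exactly the O(N²) operations that the fast-Fourier convolutions replace.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, iters=500):
    """Classical two-marginal Sinkhorn: alternate scalings against K."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # dense O(N^2) products; the paper
        u = mu / (K @ v)                 # replaces them with fast transforms
    return u[:, None] * K * v[None, :]   # entropic transport plan

x = np.linspace(0.0, 1.0, 50)
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost
mu = np.full(50, 1.0 / 50)
nu = np.full(50, 1.0 / 50)
P = sinkhorn(mu, nu, C)
print(P.sum().round(6), np.allclose(P.sum(axis=1), mu))  # 1.0 True
```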
30 pages, 863 KiB  
Article
Implicit A-Stable Peer Triplets for ODE Constrained Optimal Control Problems
by Jens Lang and Bernhard A. Schmitt
Algorithms 2022, 15(9), 310; https://doi.org/10.3390/a15090310 - 29 Aug 2022
Cited by 1 | Viewed by 1314
Abstract
This paper is concerned with the construction and convergence analysis of novel implicit Peer triplets of two-step nature with four stages for nonlinear ODE constrained optimal control problems. We combine the property of superconvergence of some standard Peer method for inner grid points with carefully designed starting and end methods to achieve order four for the state variables and order three for the adjoint variables in a first-discretize-then-optimize approach together with A-stability. The notion of triplets emphasizes that these three different Peer methods have to satisfy additional matching conditions. Four such Peer triplets of practical interest are constructed. In addition, as a benchmark method, the well-known backward differentiation formula BDF4, which is only A(73.35°)-stable, is extended to a special Peer triplet to supply an adjoint-consistent method of higher order and BDF type with equidistant nodes. Within the class of Peer triplets, we found a diagonally implicit A(84°)-stable method with nodes symmetric in [0, 1] to a common center that performs equally well. Numerical tests with four well-established optimal control problems confirm the theoretical findings, also concerning A-stability. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
38 pages, 1397 KiB  
Systematic Review
Integrated Industrial Reference Architecture for Smart Healthcare in Internet of Things: A Systematic Investigation
by Aswani Devi Aguru, Erukala Suresh Babu, Soumya Ranjan Nayak, Abhisek Sethy and Amit Verma
Algorithms 2022, 15(9), 309; https://doi.org/10.3390/a15090309 - 29 Aug 2022
Cited by 14 | Viewed by 3409
Abstract
Internet of Things (IoT) is one of the efflorescing technologies of recent years with splendid real-time applications in the fields of healthcare, agriculture, transportation, industry, and environmental monitoring. In addition to the dominant applications and services of IoT, many challenges exist. As there is a lack of standardization for IoT technologies, the architecture emerged as the foremost challenge. The salient issues in designing an IoT architecture encompass connectivity, data handling, heterogeneity, privacy, scalability, and security. The standard IoT architectures are the ETSI IoT Standard, the ITU-T IoT Reference Model, the IoT-A Reference Model, Intel's IoT Architecture, the Three-Layer Architecture, Middleware-Based Architecture, Service-Oriented Architecture, Five-Layer Architecture, and IWF Architecture. In this paper, we have reviewed these architectures and concluded that IWF Architecture is most suitable for the effortless development of IoT applications because of its immediacy and depth of insight in dealing with IoT data. We carried out this review with respect to smart healthcare, as it is among the major industries that have been leaders and forerunners in IoT technologies. Motivated by this, we designed the novel Smart Healthcare Reference Architecture (SHRA) based on IWF Architecture. Finally, we present the significance of smart healthcare during the COVID-19 pandemic. We have synthesized our findings in a systematic way to address the research questions on IoT challenges. To the best of our knowledge, our paper is the first to provide an exhaustive investigation of IoT architectural challenges with a use case in a smart healthcare system. Full article
(This article belongs to the Special Issue Deep Learning for Internet of Things)
15 pages, 2158 KiB  
Article
Early Prediction of Chronic Kidney Disease: A Comprehensive Performance Analysis of Deep Learning Models
by Chaity Mondol, F. M. Javed Mehedi Shamrat, Md. Robiul Hasan, Saidul Alam, Pronab Ghosh, Zarrin Tasnim, Kawsar Ahmed, Francis M. Bui and Sobhy M. Ibrahim
Algorithms 2022, 15(9), 308; https://doi.org/10.3390/a15090308 - 29 Aug 2022
Cited by 9 | Viewed by 3241
Abstract
Chronic kidney disease (CKD) is one of the most life-threatening disorders. To improve survivability, early discovery and good management are encouraged. In this paper, CKD was diagnosed using multiple optimized neural networks compared against traditional neural networks on the UCI machine learning dataset, to identify the most efficient model for the task. The study works on the binary classification of CKD from 24 attributes. For classification, optimized CNN (OCNN), ANN (OANN), and LSTM (OLSTM) models were used, as well as traditional CNN, ANN, and LSTM models. Using various performance metrics, error measures, loss values, AUC values, and compilation times, the implemented models are compared to identify the most competent model for the classification of CKD. It is observed that, overall, the optimized models perform better than the traditional models. The highest validation accuracy among the traditional models was achieved by CNN with 92.71%, whereas OCNN, OANN, and OLSTM achieved higher accuracies of 98.75%, 96.25%, and 98.5%, respectively. Additionally, OCNN has the highest AUC score of 0.99 and the lowest compilation time for classification at 0.00447 s, making it the most efficient model for the diagnosis of CKD. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Medicine)
12 pages, 2349 KiB  
Article
Traffic Demand Estimations Considering Route Trajectory Reconstruction in Congested Networks
by Wenyun Tang, Jiahui Chen, Chao Sun, Hanbing Wang and Gen Li
Algorithms 2022, 15(9), 307; https://doi.org/10.3390/a15090307 - 28 Aug 2022
Viewed by 1340
Abstract
Traffic parameter characteristics in congested road networks are explored based on traffic flow theory, and observed variables are transformed to a uniform format. The Gaussian mixture model is used to reconstruct route trajectories based on data regarding travel routes containing only the origin and destination information. Using a bi-level optimization framework, a Bayesian traffic demand estimation model was built using route trajectory reconstruction in congested networks. Numerical examples demonstrate that traffic demand estimation errors, without considering a congested network, are within ±12; whereas estimation demands considering traffic congestion are close to the real values. Using the Gaussian mixture model’s technology of trajectory reconstruction, the mean of the traffic demand root mean square error can be stabilized to approximately 1.3. Traffic demand estimation accuracy decreases with an increase in observed data usage, and the designed iterative algorithm can predict convergence with 0.06 accuracy. The evolution rules of urban traffic demands and road flows in congested networks are uncovered, and a theoretical basis for alleviating urban traffic congestion is provided to determine traffic management and control strategies. Full article
15 pages, 3492 KiB  
Article
MIMO Radar Imaging Method with Non-Orthogonal Waveforms Based on Deep Learning
by Hongbing Li and Qunfei Zhang
Algorithms 2022, 15(9), 306; https://doi.org/10.3390/a15090306 - 28 Aug 2022
Viewed by 1295
Abstract
Transmitting orthogonal waveforms is the basis for fully exploiting the advantages of MIMO radar imaging technology, but the commonly used waveforms with the same frequency cannot meet the orthogonality requirement, resulting in serious coupling noise in traditional imaging methods and degrading the imaging effect. To effectively suppress the mutual coupling interference caused by non-orthogonal waveforms, a new non-orthogonal-waveform MIMO radar imaging method based on deep learning is proposed in this paper: exploiting the powerful nonlinear fitting ability of deep learning, the mapping relationship between the non-orthogonal-waveform MIMO radar echo and the ideal target image is automatically learned by constructing a deep imaging network and training on a large amount of simulated training data. The learned imaging network can effectively suppress the coupling interference between non-ideal orthogonal waveforms and improve the imaging quality of MIMO radar. Finally, the effectiveness of the proposed method is verified by experiments with point-scattering model data and electromagnetic scattering calculation data. Full article
18 pages, 4981 KiB  
Article
A Systematic Approach for Developing a Robust Artwork Recognition Framework Using Smartphone Cameras
by Zenonas Theodosiou, Marios Thoma, Harris Partaourides and Andreas Lanitis
Algorithms 2022, 15(9), 305; https://doi.org/10.3390/a15090305 - 27 Aug 2022
Cited by 1 | Viewed by 1526
Abstract
The provision of information encourages people to visit cultural sites more often. Exploiting the great potential of smartphone cameras and egocentric vision, we describe the development of a robust artwork recognition algorithm to assist users when visiting an art space. The algorithm recognizes artworks under any physical museum conditions and from any camera point of view, making it suitable for different use scenarios towards an enhanced visiting experience. The algorithm was developed following a multiphase approach, including requirements gathering, experimentation in a virtual environment, development of the algorithm in real environment conditions, implementation of a demonstration smartphone app for artwork recognition and provision of assistive information, and its evaluation. During the algorithm development process, a convolutional neural network (CNN) model was trained for automatic artwork recognition using data collected in an art gallery, followed by extensive evaluations related to the parameters that may affect recognition accuracy, while the optimized algorithm was also evaluated through a dedicated app by a group of volunteers with promising results. The overall algorithm design and evaluation adopted for this work can also be applied in numerous applications, especially in cases where the algorithm performance under varying conditions and end-user satisfaction are critical factors. Full article
26 pages, 20610 KiB  
Article
Computational Analysis of PDE-Based Shape Analysis Models by Exploring the Damped Wave Equation
by Alexander Köhler and Michael Breuß
Algorithms 2022, 15(9), 304; https://doi.org/10.3390/a15090304 - 27 Aug 2022
Viewed by 1324
Abstract
The computation of correspondences between shapes is a principal task in shape analysis. In this work, we consider correspondences constructed by a numerical solution of partial differential equations (PDEs). The underlying model of interest is thereby the classic wave equation, since this may give the most accurate shape matching. As has been observed in previous works, numerical time discretisation has a substantial influence on matching quality. Therefore, it is of interest to understand the underlying mechanisms and to investigate at the same time whether there is an analytical model that could best describe the most suitable method for shape matching. To this end, we study the damped wave equation, which mainly serves as a tool to understand and model properties of time discretisation. Through a detailed study of possible parameters, we illustrate that the method that gives the most reasonable feature descriptors benefits from a damping mechanism, which can be introduced numerically or within the PDE. This sheds light on some basic mechanisms of the underlying computational and analytic models, as our investigation suggests that an ideal model could be composed of a transport mechanism and a diffusive component that helps to counter grid effects. Full article
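For concreteness, the damped wave equation referred to above can be written, under the common normalization of unit wave speed, as:

```latex
% a = 0 recovers the pure (transport-like) wave equation, while a > 0
% adds the diffusive component conjectured above to counter grid effects.
\[
  \partial_{tt} u + a\,\partial_t u = \Delta u , \qquad a \ge 0 .
\]
```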