Algorithms, Volume 17, Issue 3 (March 2024) – 42 articles

Cover Story: This study conducts an experiment comparing real street observations with immersive virtual reality (VR) visits to evaluate user perceptions and assess the quality of public spaces. For this experiment, a high-resolution 3D city model of a large-scale neighborhood was created, including dynamic elements representing various urban environments: a public area with a tramway station, a commercial street with a road, and a residential playground with green spaces. Participants were presented with identical views of existing urban scenes, both in reality and through reconstructed 3D scenes, using a head-mounted display. From this audit, the quality of the streetscapes was evaluated through indicators; the study quantifies the relevance of these indicators in a VR setup and correlates them with critical factors influencing the experience of using and spending time on a street.
15 pages, 4539 KiB  
Article
A Comprehensive Brain MRI Image Segmentation System Based on Contourlet Transform and Deep Neural Networks
by Navid Khalili Dizaji and Mustafa Doğan
Algorithms 2024, 17(3), 130; https://doi.org/10.3390/a17030130 - 21 Mar 2024
Abstract
Brain tumors are one of the deadliest types of cancer. Rapid and accurate identification of brain tumors, followed by appropriate surgical intervention or chemotherapy, increases the probability of survival. Accurate segmentation of brain tumors in MRI scans determines the exact location for surgical intervention or chemotherapy. However, accurate segmentation of brain tumors, due to their diverse morphologies in MRI scans, poses challenges that require significant expertise and accuracy in image interpretation. Despite significant advances in this field, proper data collection remains difficult, particularly in the medical sciences, owing to concerns about the confidentiality of patient information; learning systems and proposed networks therefore often rely on standardized datasets. The proposed system combines unsupervised learning in its generative adversarial network component with supervised learning in its segmentation networks. The system is fully automated and can be applied to tumor segmentation on various datasets, including those with sparse data. To improve the learning process, the brain MRI segmentation network is trained with a generative adversarial network that augments the number of training images. The U-Net model with residual blocks was employed during the segmentation step. In the processing and mask-preparation phase, the contourlet transform produces the ground truth for each MRI image, both those obtained from the adversarial generator network and the original images. The adversarial generator network produces high-quality images whose histograms are similar to those of the original images. Finally, the system improves segmentation performance by combining residual blocks with the U-Net network. Segmentation is evaluated using brain magnetic resonance images obtained from Istanbul Medipol Hospital.
The results show that the proposed method and image segmentation network, which achieves a Dice coefficient of 0.9434 among other criteria, can be effectively used on any dataset as a fully automatic system for segmenting brain MRI images. Full article
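The Dice coefficient reported above is a standard overlap measure between a predicted mask and the ground truth. A minimal sketch of its computation (not the authors' implementation; masks are represented here as sets of pixel indices for brevity):

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks given as sets of pixel indices."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy example: two overlapping "tumor" masks on a flattened image grid.
predicted = {10, 11, 12, 13, 20, 21}
ground_truth = {11, 12, 13, 14, 21, 22}
score = dice_coefficient(predicted, ground_truth)  # 2*4 / (6+6) = 0.666...
```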
Show Figures

Figure 1

17 pages, 950 KiB  
Article
PDEC: A Framework for Improving Knowledge Graph Reasoning Performance through Predicate Decomposition
by Xin Tian and Yuan Meng
Algorithms 2024, 17(3), 129; https://doi.org/10.3390/a17030129 - 21 Mar 2024
Abstract
The judicious configuration of predicates is a crucial but often overlooked aspect in the field of knowledge graphs. While previous research has primarily focused on the precision of triples in assessing knowledge graph quality, the rationality of predicates has been largely ignored. This paper introduces an innovative approach aimed at enhancing knowledge graph reasoning by addressing the issue of predicate polysemy. Predicate polysemy refers to instances where a predicate possesses multiple meanings, introducing ambiguity into the knowledge graph. We present an adaptable optimization framework that effectively addresses predicate polysemy, thereby enhancing reasoning capabilities within knowledge graphs. Our approach serves as a versatile and generalized framework applicable to any reasoning model, offering a scalable and flexible solution to enhance performance across various domains and applications. Through rigorous experimental evaluations, we demonstrate the effectiveness and adaptability of our methodology, showing significant improvements in knowledge graph reasoning accuracy. Our findings underscore that discerning predicate polysemy is a crucial step towards achieving a more dependable and efficient knowledge graph reasoning process. Even in the age of large language models, the optimization and induction of predicates remain relevant in ensuring interpretable reasoning. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
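One way to picture predicate decomposition: a polysemous predicate is split into type-specific variants so that each variant carries a single meaning. The sketch below uses a hypothetical entity-typing table as a stand-in for the paper's decomposition machinery; names and types are illustrative only:

```python
# Hypothetical entity-type table; in practice this would come from the
# knowledge graph's schema or from clustering, not be hand-written.
entity_type = {
    "Paris": "City", "France": "Country",
    "Acme Corp": "Organization",
}

def decompose(triples):
    """Rewrite (head, predicate, tail) triples so that each predicate
    variant is specific to one object type, removing polysemy."""
    return [(h, f"{p}_{entity_type.get(t, 'Thing')}", t) for h, p, t in triples]

# 'locatedIn' is polysemous: geographic containment vs. organizational
# membership. Decomposition separates the two senses.
triples = [("Alice", "locatedIn", "Paris"),
           ("Alice", "locatedIn", "Acme Corp"),
           ("Paris", "locatedIn", "France")]
decomposed = decompose(triples)
```

A reasoning model trained on the decomposed predicates no longer has to average over conflicting senses of `locatedIn`.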
Show Figures

Figure 1

13 pages, 1103 KiB  
Article
On the Need for Accurate Brushstroke Segmentation of Tablet-Acquired Kinematic and Pressure Data: The Case of Unconstrained Tracing
by Karly S. Franz, Grace Reszetnik and Tom Chau
Algorithms 2024, 17(3), 128; https://doi.org/10.3390/a17030128 - 20 Mar 2024
Abstract
Brushstroke segmentation algorithms are critical in computer-based analysis of fine motor control via handwriting, drawing, or tracing tasks. Current segmentation approaches typically rely on only one type of feature: spatial, temporal, kinematic, or pressure. We introduce a segmentation algorithm that leverages both spatiotemporal and pressure features to accurately identify brushstrokes during a tracing task. The algorithm was tested on both a clinical and a validation dataset. Using validation trials with incorrectly identified brushstrokes, we evaluated the impact of segmentation errors on commonly derived biomechanical features used in the literature to detect graphomotor pathologies. The algorithm exhibited robust performance on the validation and clinical datasets, effectively identifying brushstrokes while simultaneously eliminating spurious, noisy data. Spatial and temporal features were most affected by incorrect segmentation, particularly those related to the distance between brushstrokes and in-air time, which experienced propagated errors of 99% and 95%, respectively. In contrast, kinematic features, such as velocity and acceleration, were minimally affected, with propagated errors between 0% and 12%. The proposed algorithm may help improve brushstroke segmentation in future studies of handwriting, drawing, or tracing tasks. Spatial and temporal features derived from tablet-acquired data should be considered with caution, given their sensitivity to segmentation errors and instrumentation characteristics. Full article
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis)
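The pressure component of such a segmentation can be sketched very simply: a brushstroke is a contiguous run of samples whose pen pressure exceeds a contact threshold, with very short runs discarded as noise. The threshold and minimum length below are illustrative, not the paper's values:

```python
def segment_strokes(pressure, threshold=0.05, min_len=2):
    """Return (start, end) index pairs of samples where pen pressure exceeds
    a contact threshold; runs shorter than min_len are discarded as noise.
    threshold and min_len are illustrative, not the paper's parameters."""
    strokes, start = [], None
    for i, p in enumerate(pressure):
        if p > threshold and start is None:
            start = i                      # pen-down transition
        elif p <= threshold and start is not None:
            if i - start >= min_len:
                strokes.append((start, i))  # pen-up: close the stroke
            start = None
    if start is not None and len(pressure) - start >= min_len:
        strokes.append((start, len(pressure)))
    return strokes

# Two brushstrokes separated by an in-air gap, plus a one-sample pressure
# spike that is filtered out as noise.
p = [0.0, 0.3, 0.4, 0.35, 0.0, 0.0, 0.2, 0.0, 0.5, 0.6, 0.55, 0.0]
strokes = segment_strokes(p)  # -> [(1, 4), (8, 11)]
```

A spatiotemporal variant would additionally split runs at large jumps in pen position or timestamp, which is where the paper's combined approach differs from this pressure-only sketch.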

18 pages, 1419 KiB  
Article
Fast Algorithm for High-Throughput Screening Scheduling Based on the PERT/CPM Project Management Technique
by Eugene Levner, Vladimir Kats, Pengyu Yan and Ada Che
Algorithms 2024, 17(3), 127; https://doi.org/10.3390/a17030127 - 19 Mar 2024
Abstract
High-throughput screening systems are robotic cells that automatically scan and analyze thousands of biochemical samples and reagents in real time. The problem under consideration is to find an optimal cyclic schedule of robot moves that ensures maximum cell performance. To address this issue, we proposed a new efficient version of the parametric PERT/CPM project management method that works in conjunction with a combinatorial subalgorithm capable of rejecting unfeasible schedules. The main result obtained is that the new fast PERT/CPM method finds optimal robust schedules for solving large size problems in strongly polynomial time, which cannot be achieved using existing algorithms. Full article
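At the core of any PERT/CPM-style method is a longest-path computation over a directed acyclic graph of activities. A minimal sketch of that core (the task data are illustrative, not the paper's robot-cell model, which additionally handles cyclic schedules and parametric durations):

```python
def critical_path(durations, preds):
    """durations: task -> duration; preds: task -> list of predecessor tasks.
    Returns (earliest finish time per task, project makespan) for a DAG."""
    finish = {}
    def ef(t):
        if t not in finish:
            finish[t] = durations[t] + max((ef(p) for p in preds.get(t, [])),
                                           default=0)
        return finish[t]
    for t in durations:
        ef(t)
    return finish, max(finish.values())

# Illustrative activities of a screening cycle with precedence constraints.
durations = {"load": 3, "scan": 5, "wash": 2, "unload": 4}
preds = {"scan": ["load"], "wash": ["load"], "unload": ["scan", "wash"]}
finish, makespan = critical_path(durations, preds)  # makespan = 12
```

The makespan here is 12 because the critical path load → scan → unload dominates (3 + 5 + 4); the wash activity has slack.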

21 pages, 2540 KiB  
Article
Analysis of a Two-Step Gradient Method with Two Momentum Parameters for Strongly Convex Unconstrained Optimization
by Gerasim V. Krivovichev and Valentina Yu. Sergeeva
Algorithms 2024, 17(3), 126; https://doi.org/10.3390/a17030126 - 18 Mar 2024
Abstract
The paper is devoted to the theoretical and numerical analysis of a two-step method, constructed as a modification of Polyak’s heavy ball method with the inclusion of an additional momentum parameter. For the quadratic case, the convergence conditions are obtained with the use of the first Lyapunov method. For the non-quadratic case of sufficiently smooth, strongly convex functions, convergence conditions that guarantee local convergence are obtained. An approach to finding optimal parameter values based on the solution of a constrained optimization problem is proposed. The effect of the additional parameter on the convergence rate is analyzed. With the use of an ordinary differential equation equivalent to the method, the damping effect of this parameter on the oscillations typical of the non-monotonic convergence of the heavy ball method is demonstrated. In numerical examples with non-quadratic convex and non-convex test functions and machine learning problems (regularized smoothed elastic net regression, logistic regression, and recurrent neural network training), the positive influence of the additional parameter on the convergence process is demonstrated. Full article
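The iteration being analyzed has the general shape of a heavy-ball update with a second momentum term reaching one step further back. A sketch of that structure on a strongly convex quadratic, with illustrative parameter values (the paper derives the optimal ones; these are not them):

```python
# Two-step method with two momentum parameters, sketched as
#   x_{k+1} = x_k - a*grad(x_k) + b*(x_k - x_{k-1}) + c*(x_{k-1} - x_{k-2}).
# Parameter values here are illustrative, chosen only to converge on the toy
# problem below; the paper's optimal choices come from a constrained program.
def two_step_momentum(grad, x0, a=0.1, b=0.5, c=-0.1, iters=200):
    xm2 = xm1 = x = x0
    for _ in range(iters):
        x_new = x - a * grad(x) + b * (x - xm1) + c * (xm1 - xm2)
        xm2, xm1, x = xm1, x, x_new
    return x

# Strongly convex quadratic f(x) = 2*x^2 with gradient 4*x; minimizer at 0.
x_star = two_step_momentum(lambda x: 4.0 * x, x0=5.0)
```

On this quadratic the three-term recurrence is linear, so convergence is governed by the roots of its characteristic polynomial; the extra parameter c gives one more degree of freedom for damping the oscillations mentioned in the abstract.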

16 pages, 808 KiB  
Article
GDUI: Guided Diffusion Model for Unlabeled Images
by Xuanyuan Xie and Jieyu Zhao
Algorithms 2024, 17(3), 125; https://doi.org/10.3390/a17030125 - 18 Mar 2024
Abstract
The diffusion model has made progress in the field of image synthesis, especially in the area of conditional image synthesis. However, this improvement is highly dependent on large annotated datasets. To tackle this challenge, we present the Guided Diffusion model for Unlabeled Images (GDUI) framework in this article. It utilizes the inherent feature similarity and semantic differences in the data, as well as the downstream transferability of Contrastive Language-Image Pretraining (CLIP), to guide the diffusion model in generating high-quality images. We design two semantic-aware algorithms, namely, the pseudo-label-matching algorithm and label-matching refinement algorithm, to match the clustering results with the true semantic information and provide more accurate guidance for the diffusion model. First, GDUI encodes the image into a semantically meaningful latent vector through clustering. Then, pseudo-label matching is used to complete the matching of the true semantic information of the image. Finally, the label-matching refinement algorithm is used to adjust the irrelevant semantic information in the data, thereby improving the quality of the guided diffusion model image generation. Our experiments on labeled datasets show that GDUI outperforms diffusion models without any guidance and significantly reduces the gap between it and models guided by ground-truth labels. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
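The pseudo-label-matching step can be pictured as assigning each cluster the semantic label that best agrees with its members. The sketch below uses a simple majority vote as a greedy stand-in; GDUI's actual matching and refinement algorithms are more involved:

```python
from collections import Counter

def match_pseudo_labels(cluster_ids, label_hints):
    """Map each cluster id to the most frequent hinted label among its
    members -- a greedy stand-in for GDUI's pseudo-label-matching step.
    label_hints plays the role of (noisy) semantic information, e.g. from CLIP."""
    votes = {}
    for cid, lab in zip(cluster_ids, label_hints):
        votes.setdefault(cid, Counter())[lab] += 1
    return {cid: c.most_common(1)[0][0] for cid, c in votes.items()}

# Three clusters with mostly consistent, partially noisy label hints.
clusters = [0, 0, 0, 1, 1, 2, 2, 2]
hints    = ["cat", "cat", "dog", "dog", "dog", "car", "car", "cat"]
mapping = match_pseudo_labels(clusters, hints)  # {0: 'cat', 1: 'dog', 2: 'car'}
```

The refinement stage described in the abstract would then revisit members whose hint disagrees with their cluster's assigned label.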

20 pages, 7017 KiB  
Article
Exploring Virtual Environments to Assess the Quality of Public Spaces
by Rachid Belaroussi, Elie Issa, Leonardo Cameli, Claudio Lantieri and Sonia Adelé
Algorithms 2024, 17(3), 124; https://doi.org/10.3390/a17030124 - 16 Mar 2024
Abstract
Human impression plays a crucial role in effectively designing infrastructures that support active mobility such as walking and cycling. By involving users early in the design process, valuable insights can be gathered before physical environments are constructed. This proactive approach enhances the attractiveness and safety of designed spaces for users. This study conducts an experiment comparing real street observations with immersive virtual reality (VR) visits to evaluate user perceptions and assess the quality of public spaces. For this experiment, a high-resolution 3D city model of a large-scale neighborhood was created, utilizing Building Information Modeling (BIM) and Geographic Information System (GIS) data. The model incorporated dynamic elements representing various urban environments: a public area with a tramway station, a commercial street with a road, and a residential playground with green spaces. Participants were presented with identical views of existing urban scenes, both in reality and through reconstructed 3D scenes using a Head-Mounted Display (HMD). They were asked questions related to the quality of the streetscape, its walkability, and cyclability. From the questionnaire, algorithms for assessing public spaces were computed, namely Sustainable Mobility Indicators (SUMI) and Pedestrian Level of Service (PLOS). The study quantifies the relevance of these indicators in a VR setup and correlates them with critical factors influencing the experience of using and spending time on a street. This research contributes to understanding the suitability of these algorithms in a VR environment for predicting the quality of future spaces before occupancy. Full article
(This article belongs to the Special Issue Algorithms for Virtual and Augmented Environments)
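Indicator-based assessments such as SUMI and PLOS ultimately aggregate questionnaire responses into a score. The sketch below shows the general shape of such an aggregation only; the indicator names and weights are hypothetical and are not the SUMI or PLOS formulas, which are defined in the cited literature:

```python
# Hypothetical aggregation of Likert-scale questionnaire indicators into a
# single streetscape quality score in [0, 1]. Indicator names and weights
# are illustrative, NOT the actual SUMI/PLOS definitions.
WEIGHTS = {"safety": 0.3, "comfort": 0.25, "greenery": 0.2, "sidewalk_width": 0.25}

def streetscape_score(ratings):
    """ratings: indicator -> value on a 1-5 Likert scale."""
    return sum(WEIGHTS[k] * (ratings[k] - 1) / 4 for k in WEIGHTS)

# One participant's ratings for a (fictional) tramway-station scene.
tram_square = {"safety": 4, "comfort": 3, "greenery": 2, "sidewalk_width": 5}
score = streetscape_score(tram_square)  # 0.65 on this example
```

Comparing such per-scene scores between the real visit and the VR visit is, in spirit, what the study's correlation analysis does with the actual indicators.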

12 pages, 1526 KiB  
Article
An Efficient Third-Order Scheme Based on Runge–Kutta and Taylor Series Expansion for Solving Initial Value Problems
by Noori Y. Abdul-Hassan, Zainab J. Kadum and Ali Hasan Ali
Algorithms 2024, 17(3), 123; https://doi.org/10.3390/a17030123 - 16 Mar 2024
Abstract
In this paper, we propose a new numerical scheme based on a variation of the standard formulation of the Runge–Kutta method using Taylor series expansion for solving initial value problems (IVPs) in ordinary differential equations. Analytically, the accuracy, consistency, and absolute stability of the new method are discussed. It is established that the new method is consistent and stable and has third-order convergence. Numerically, we present two models involving applications from physics and engineering to illustrate the efficiency and accuracy of our new method and to compare it with other pertinent techniques of the same order. Full article
(This article belongs to the Special Issue Mathematical Modelling in Engineering and Human Behaviour)
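For reference, the accuracy class being matched is that of classical third-order Runge–Kutta schemes. The sketch below is Kutta's classical third-order method, shown only as the standard formulation the paper varies; it is not the paper's new scheme:

```python
import math

def rk3_step(f, t, y, h):
    """One step of Kutta's classical third-order Runge-Kutta method
    (the standard scheme; the paper derives a different third-order variant)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h * (k1 + 4 * k2 + k3) / 6

# IVP y' = y, y(0) = 1, integrated to t = 1; exact solution is e.
t, y, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:
    y = rk3_step(lambda s, v: v, t, y, h)
    t += h
error = abs(y - math.e)  # third-order global error, small at h = 0.1
```

Halving h should shrink this error by roughly a factor of eight, which is the practical signature of third-order convergence.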

16 pages, 5690 KiB  
Article
Highly Imbalanced Classification of Gout Using Data Resampling and Ensemble Method
by Xiaonan Si, Lei Wang, Wenchang Xu, Biao Wang and Wenbo Cheng
Algorithms 2024, 17(3), 122; https://doi.org/10.3390/a17030122 - 15 Mar 2024
Abstract
Gout is one of the most painful diseases in the world. Accurate classification of gout is crucial for diagnosis and treatment, which can potentially save lives. However, the current methods for classifying gout periods have demonstrated poor performance and have received little attention. This is due to a significant data imbalance problem that affects the learning attention for the majority and minority classes. To overcome this problem, a resampling method called ENaNSMOTE-Tomek link is proposed. It uses extended natural neighbors to generate samples that fall within the minority class and then applies the Tomek link technique to eliminate instances that contribute to noise. The model combines the ensemble ’bagging’ technique with the proposed resampling technique to improve the quality of generated samples. The performance of individual classifiers and hybrid models is evaluated on an imbalanced gout dataset taken from the electronic medical records of a hospital. The classification results demonstrate that the proposed strategy is more accurate than some existing imbalanced gout diagnosis techniques, with an accuracy of 80.87% and an AUC of 87.10%. This indicates that the proposed algorithm can alleviate the problems caused by imbalanced gout data and help experts better diagnose their patients. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms in Healthcare)
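The interpolation step underlying all SMOTE variants is simple: a synthetic minority sample is placed at a random point on the segment between a minority sample and one of its neighbors. The sketch below shows only that basic step; ENaNSMOTE additionally restricts neighbor choice via extended natural neighbors, and the Tomek-link pass removes noisy pairs afterwards:

```python
import random

def smote_point(sample, neighbor, rng):
    """Synthesize a minority-class point on the segment between a sample and
    one of its neighbors -- the basic SMOTE interpolation step.
    (ENaNSMOTE-Tomek link adds neighbor selection and noise filtering.)"""
    g = rng.random()  # gap in [0, 1): 0 reproduces the sample, 1 the neighbor
    return tuple(s + g * (n - s) for s, n in zip(sample, neighbor))

rng = random.Random(0)
a, b = (1.0, 2.0), (3.0, 6.0)
synthetic = smote_point(a, b, rng)  # lies on the segment between a and b
```

Because the synthetic point is a convex combination of two real minority samples, it stays inside the minority region rather than drifting into majority-class territory.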

23 pages, 1098 KiB  
Article
Modeling of Some Classes of Extended Oscillators: Simulations, Algorithms, Generating Chaos, and Open Problems
by Nikolay Kyurkchiev, Tsvetelin Zaevski, Anton Iliev, Vesselin Kyurkchiev and Asen Rahnev
Algorithms 2024, 17(3), 121; https://doi.org/10.3390/a17030121 - 15 Mar 2024
Abstract
In this article, we propose some extended oscillator models. Various experiments are performed. The models are studied using the Melnikov approach. We show some integral units for researching the behavior of these hypothetical oscillators. These will be implemented as add-on modules of a web-based application for research computations. One of the main goals of the study is to share the difficulties that researchers (who are not necessarily professional mathematicians) encounter in using contemporary computer algebra systems (CASs) to examine in detail the dynamics of modifications of classical and newer models emerging in the literature (for large values of the model parameters). The present article is a natural continuation of research in the direction indicated and discussed in our previous investigations. One possible application of the Melnikov function to modeling a radiating antenna diagram is also discussed. Some probability-based constructions are also presented. We hope that some of these notes will be reflected in upcoming revisions of the CASs. The aim of studying the design realization (scheme, manufacture, output, etc.) of the explored differential models has not yet been met. Full article

16 pages, 1017 KiB  
Article
Efficient Estimation of Generative Models Using Tukey Depth
by Minh-Quan Vo, Thu Nguyen, Michael A. Riegler and Hugo L. Hammer
Algorithms 2024, 17(3), 120; https://doi.org/10.3390/a17030120 - 13 Mar 2024
Abstract
Generative models have recently received a lot of attention. However, a challenge with such models is that it is usually not possible to compute the likelihood function, which makes parameter estimation or training of the models challenging. The most commonly used alternative strategy, called likelihood-free estimation, is based on finding values of the model parameters such that a set of selected statistics have similar values in the dataset and in samples generated from the model. However, a challenge is how to select statistics that are efficient in estimating unknown parameters. The most commonly used statistics are the mean vector, variances, and correlations between variables, but they may be less relevant in estimating the unknown parameters. We suggest utilizing Tukey depth contours (TDCs) as statistics in likelihood-free estimation. TDCs are highly flexible and can capture almost any property of multivariate data; in addition, they appear to be as yet unexplored for likelihood-free estimation. We demonstrate that TDC statistics are able to estimate the unknown parameters more efficiently than the mean, variance, and correlation in likelihood-free estimation. We further apply the TDC statistics to estimate the properties of requests to a computer system, demonstrating their real-life applicability. The suggested method is able to efficiently find the unknown parameters of the request distribution and quantify the estimation uncertainty. Full article
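Tukey (halfspace) depth measures how central a point is in a data cloud: the minimum, over all halfplanes through the point, of the fraction of data they contain. A direction-sampling approximation in 2D (a sketch of the depth notion itself, not the paper's contour-based estimation procedure):

```python
import math, random

def tukey_depth(point, data, n_dirs=500, seed=1):
    """Approximate Tukey (halfspace) depth of a 2D point: the minimum, over
    sampled directions, of the fraction of data on the non-negative side of
    the line through the point. Random directions give an upper bound on the
    exact depth; exact 2D algorithms sort by angle instead."""
    rng = random.Random(seed)
    depth = 1.0
    for _ in range(n_dirs):
        theta = rng.uniform(0, 2 * math.pi)
        ux, uy = math.cos(theta), math.sin(theta)
        frac = sum(1 for (x, y) in data
                   if (x - point[0]) * ux + (y - point[1]) * uy >= 0) / len(data)
        depth = min(depth, frac)
    return depth

# 20 points evenly spaced on the unit circle: the center is maximally deep,
# while a far-away point has depth near zero.
data = [(math.cos(2 * math.pi * k / 20), math.sin(2 * math.pi * k / 20))
        for k in range(20)]
center_depth = tukey_depth((0.0, 0.0), data)
outlier_depth = tukey_depth((5.0, 0.0), data)
```

Depth contours (the TDCs of the abstract) are then the level sets of this function, which is what makes them such a rich summary statistic for matching model samples to data.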

15 pages, 15233 KiB  
Article
A Preprocessing Method for Coronary Artery Stenosis Detection Based on Deep Learning
by Yanjun Li, Takaaki Yoshimura, Yuto Horima and Hiroyuki Sugimori
Algorithms 2024, 17(3), 119; https://doi.org/10.3390/a17030119 - 13 Mar 2024
Abstract
The detection of coronary artery stenosis is one of the most important indicators for the diagnosis of coronary artery disease. However, stenosis in branch vessels is often difficult to detect with computer-aided systems, and even for radiologists, because of several factors, such as imaging angle and contrast agent inhomogeneity. Traditional coronary artery stenosis localization algorithms often detect only aortic stenosis and ignore branch vessels that may also pose major health threats. Improving the localization of branch vessel stenosis in coronary angiographic images is therefore a promising direction for development. In this study, we propose a preprocessing approach that combines vessel enhancement and image fusion as a prerequisite for deep learning. The sensitivity of the neural network to stenosis features is improved by enhancing the blurry features in coronary angiographic images. By validating five neural networks, such as YOLOv4 and R-FCN-Inceptionresnetv2, we show that the proposed method can improve the performance of deep learning networks on images from six common imaging angles. The results showed that the proposed method is suitable as a preprocessing step for deep-learning-based coronary angiographic image processing and can improve the recognition ability of deep models for fine vessel stenosis. Full article

23 pages, 5003 KiB  
Article
Active Data Selection and Information Seeking
by Thomas Parr, Karl Friston and Peter Zeidman
Algorithms 2024, 17(3), 118; https://doi.org/10.3390/a17030118 - 12 Mar 2024
Abstract
Bayesian inference typically focuses upon two issues. The first is estimating the parameters of some model from data, and the second is quantifying the evidence for alternative hypotheses—formulated as alternative models. This paper focuses upon a third issue. Our interest is in the selection of data—either through sampling subsets of data from a large dataset or through optimising experimental design—based upon the models we have of how those data are generated. Optimising data-selection ensures we can achieve good inference with fewer data, saving on computational and experimental costs. This paper aims to unpack the principles of active sampling of data by drawing from neurobiological research on animal exploration and from the theory of optimal experimental design. We offer an overview of the salient points from these fields and illustrate their application in simple toy examples, ranging from function approximation with basis sets to inference about processes that evolve over time. Finally, we consider how this approach to data selection could be applied to the design of (Bayes-adaptive) clinical trials. Full article
(This article belongs to the Special Issue Bayesian Networks and Causal Reasoning)
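A minimal instance of this data-selection principle: in Bayesian linear regression, the most informative next query is (under common optimality criteria) the candidate input with the largest posterior predictive variance. The sketch below is a toy version of that idea, not the paper's full active-sampling scheme:

```python
import numpy as np

def next_query(X, candidates, noise=0.1, prior=1.0):
    """Active data selection for Bayesian linear regression with Gaussian
    noise and an isotropic Gaussian prior on the weights: return the
    candidate input with the largest posterior predictive variance.
    A toy instance of variance-seeking design, not the paper's scheme."""
    d = X.shape[1]
    # Posterior precision of the weights given the inputs observed so far.
    precision = np.eye(d) / prior + X.T @ X / noise**2
    cov = np.linalg.inv(precision)
    # Predictive variance x^T cov x for each candidate row x.
    variances = np.einsum("ij,jk,ik->i", candidates, cov, candidates)
    return candidates[np.argmax(variances)]

# Features (1, x): data were sampled densely near x = 0, so the most
# informative next query is the candidate farthest from the observed inputs.
X = np.array([[1.0, 0.0], [1.0, 0.1], [1.0, -0.1]])
cands = np.array([[1.0, 0.0], [1.0, 0.5], [1.0, 2.0]])
best = next_query(X, cands)  # -> the candidate with x = 2.0
```

Choosing where the model is most uncertain is exactly the "saving on experimental costs" intuition of the abstract: each query is placed where it is expected to reduce posterior uncertainty the most.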

22 pages, 8906 KiB  
Article
Field Programmable Gate Array-Based Acceleration Algorithm Design for Dynamic Star Map Parallel Computing
by Bo Cui, Lingyun Wang, Guangxi Li and Xian Ren
Algorithms 2024, 17(3), 117; https://doi.org/10.3390/a17030117 - 12 Mar 2024
Abstract
The dynamic star simulator is a commonly used ground-test calibration device for star sensors. To address the slow calculation speed, low integration, and high power consumption of traditional star map simulation methods, this paper designs an FPGA-based star map display algorithm for a dynamic star simulator. The design adopts the USB 2.0 protocol to obtain the attitude data, uses SDRAM to cache the attitude data and video stream, extracts the effective navigation star points by searching equidistant right-ascension and declination partitions of the sky, and realizes pipelined display of the star map by using the parallel computing capability of the FPGA. Test results show that under a field of view of Φ20° and simulated magnitudes of 2.0–6.0 Mv, the longest time for calculating a star map is 72 μs at a clock of 148.5 MHz, which effectively improves the display speed of the dynamic star simulator. The FPGA-based star map display algorithm removes the existing algorithm's dependence on a host computer, reduces the volume and power consumption of the dynamic star simulator, and meets the demand for miniaturization and portability. Full article
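The partition-search idea is the software analogue of what the FPGA pipeline does: pre-bin the star catalog into equal right-ascension/declination zones so that only zones overlapping the field of view are scanned. A sketch with an illustrative zone size and a tiny made-up catalog:

```python
# Catalog pre-partitioned into equal RA/dec zones; only the zones around the
# boresight are scanned for navigation stars. Zone size and catalog entries
# are illustrative, not the paper's parameters.
ZONE = 10  # zone width/height in degrees

def build_zones(catalog):
    """catalog: list of (ra_deg, dec_deg, magnitude). Returns zone -> stars."""
    zones = {}
    for star in catalog:
        key = (int(star[0] // ZONE), int((star[1] + 90) // ZONE))
        zones.setdefault(key, []).append(star)
    return zones

def stars_near(zones, ra, dec):
    """Collect stars from the 3x3 block of zones around the boresight,
    wrapping in right ascension (360/ZONE = 36 RA zones)."""
    kr, kd = int(ra // ZONE), int((dec + 90) // ZONE)
    found = []
    for dr in (-1, 0, 1):
        for dd in (-1, 0, 1):
            found += zones.get(((kr + dr) % 36, kd + dd), [])
    return found

catalog = [(12.0, 5.0, 2.1), (15.0, -3.0, 4.0), (200.0, 45.0, 3.3)]
zones = build_zones(catalog)
nearby = stars_near(zones, 10.0, 0.0)  # the two stars near (10, 0), not the third
```

The payoff is that lookup cost depends on the field of view, not on the catalog size, which is what makes a 72 μs per-frame budget plausible in hardware.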

23 pages, 1470 KiB  
Article
Progressive Multiple Alignment of Graphs
by Marcos E. González Laffitte and Peter F. Stadler
Algorithms 2024, 17(3), 116; https://doi.org/10.3390/a17030116 - 11 Mar 2024
Abstract
The comparison of multiple (labeled) graphs with unrelated vertex sets is an important task in diverse areas of application. Conceptually, it is often closely related to multiple sequence alignment, since one aims to determine a correspondence, or more precisely, a multipartite matching between the vertex sets. There, the goal is to match vertices that are similar in terms of labels and local neighborhoods. Alignments of sequences and ordered forests, however, have a second aspect that does not seem to be considered for graph comparison, namely the idea that an alignment is a superobject from which the constituent input objects can be recovered faithfully as well-defined projections. Progressive alignment algorithms are based on the idea of computing multiple alignments as a pairwise alignment of the alignments of two disjoint subsets of the input objects. Our formal framework guarantees that alignments have compositional properties that make alignments of alignments well-defined. The various similarity-based graph matching constructions do not share this property and solve substantially different optimization problems. We demonstrate that optimal multiple graph alignments can be approximated well by means of progressive alignment schemes. The solution of the pairwise alignment problem is reduced formally to computing maximal common induced subgraphs. Similar to the ambiguities arising from consecutive indels, pairwise alignments of graph alignments require the consideration of ambiguous edges that may appear between alignment columns with complementary gap patterns. We report a simple reference implementation in Python/NetworkX intended to serve as a starting point for further developments. The computational feasibility of our approach is demonstrated on test sets of small graphs that mimic, in particular, applications to molecular graphs. Full article
(This article belongs to the Special Issue Graph Algorithms and Graph Labeling)
Show Figures

Figure 1
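The pairwise alignment step above is reduced to computing maximal common induced subgraphs. As an illustrative sketch (not the authors' NetworkX reference implementation), a brute-force search that works for the very small graphs the paper targets, with graphs represented as adjacency dicts:

```python
from itertools import combinations, permutations

def max_common_induced_subgraph(g1, g2):
    """Largest injective vertex map g1 -> g2 preserving edges AND non-edges.

    Graphs are dicts: vertex -> set of neighbours. Exponential brute force,
    only suitable for very small graphs.
    """
    for k in range(min(len(g1), len(g2)), 0, -1):
        for nodes1 in combinations(g1, k):
            for nodes2 in permutations(g2, k):
                phi = dict(zip(nodes1, nodes2))
                # Induced-subgraph isomorphism: edge status must match exactly
                if all((v in g1[u]) == (phi[v] in g2[phi[u]])
                       for u, v in combinations(nodes1, 2)):
                    return phi
    return {}
```

For anything beyond toy inputs, dedicated algorithms such as NetworkX's ISMAGS `largest_common_subgraph` are the practical choice; the sketch only makes the optimization problem concrete.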

28 pages, 14896 KiB  
Article
IWO-IGA—A Hybrid Whale Optimization Algorithm Featuring Improved Genetic Characteristics for Mapping Real-Time Applications onto 2D Network on Chip
by Sharoon Saleem, Fawad Hussain and Naveed Khan Baloch
Algorithms 2024, 17(3), 115; https://doi.org/10.3390/a17030115 - 10 Mar 2024
Viewed by 758
Abstract
Network on Chip (NoC) has emerged as a potential substitute for the communication model in modern computer systems with extensive integration. Among the numerous design challenges, application mapping on the NoC system poses one of the most complex and demanding optimization problems. In this research, we propose a hybrid improved whale optimization algorithm with enhanced genetic properties (IWOA-IGA) to optimally map real-time applications onto the 2D NoC platform. The IWOA-IGA is a novel approach combining an improved whale optimization algorithm with the ability of a refined genetic algorithm to optimally map application tasks. A comprehensive comparison is performed between the proposed method and other state-of-the-art algorithms through rigorous analysis. The evaluation consists of real-time applications, benchmarks, and a collection of arbitrarily scaled and procedurally generated large task graphs. The proposed IWOA-IGA achieves average improvements in power consumption, energy consumption, and latency over state-of-the-art algorithms. This work also introduces the Convergence Factor, a performance measure that assesses how efficiently the algorithm converges within a specific number of iterations relative to other well-developed techniques. These results demonstrate the algorithm’s superior convergence performance when applied to real-world and synthetic task graphs. Our research findings spotlight the superior performance of the hybrid improved whale optimization algorithm integrated with enhanced GA features, emphasizing its potential for application mapping in NoC-based systems. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimal Design of Engineering Problems)
Show Figures

Figure 1

20 pages, 1984 KiB  
Article
Deep-Shallow Metaclassifier with Synthetic Minority Oversampling for Anomaly Detection in a Time Series
by MohammadHossein Reshadi, Wen Li, Wenjie Xu, Precious Omashor, Albert Dinh, Scott Dick, Yuntong She and Michael Lipsett
Algorithms 2024, 17(3), 114; https://doi.org/10.3390/a17030114 - 10 Mar 2024
Viewed by 904
Abstract
Anomaly detection in data streams (and particularly time series) is today a vitally important task. Machine learning algorithms are a common design for achieving this goal. In particular, deep learning has, in the last decade, proven to be substantially more accurate than shallow learning in a wide variety of machine learning problems, and deep anomaly detection is very effective for point anomalies. However, deep semi-supervised contextual anomaly detection (in which anomalies within a time series are rare and none at all occur in the algorithm’s training data) is a more difficult problem. Hybrid anomaly detectors (a “normal model” followed by a comparator) are one approach to these problems, but the separate loss functions for the two components can lead to inferior performance. We investigate a novel synthetic-example oversampling technique to harmonize the two components of a hybrid system, thus improving the anomaly detector’s performance. We evaluate our algorithm on two distinct problems: identifying pipeline leaks and patient-ventilator asynchrony. Full article
(This article belongs to the Special Issue Hybrid Intelligent Algorithms)
Show Figures

Figure 1
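To make the "normal model followed by a comparator" architecture of a hybrid detector concrete, here is a minimal generic sketch: an AR(1) forecaster as the normal model and a residual threshold as the comparator. This is an illustration of the architecture only, not the paper's deep-shallow metaclassifier or its oversampling technique:

```python
def fit_ar1(series):
    """Least-squares fit of the 'normal model' x[t] ~ a*x[t-1] + b."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def detect_anomalies(series, k=4.0):
    """Comparator: flag points whose one-step residual exceeds k std deviations."""
    a, b = fit_ar1(series)
    resid = [y - (a * x + b) for x, y in zip(series[:-1], series[1:])]
    mu = sum(resid) / len(resid)
    sd = (sum((r - mu) ** 2 for r in resid) / len(resid)) ** 0.5
    return [i + 1 for i, r in enumerate(resid) if abs(r - mu) > k * sd]
```

The problem the paper addresses is visible even here: the forecaster is fit with a squared-error loss while the comparator optimizes detection, and nothing harmonizes the two.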

15 pages, 2467 KiB  
Article
Evaluation of Neural Network Effectiveness on Sliding Mode Control of Delta Robot for Trajectory Tracking
by Anni Zhao, Arash Toudeshki, Reza Ehsani, Joshua H. Viers and Jian-Qiao Sun
Algorithms 2024, 17(3), 113; https://doi.org/10.3390/a17030113 - 08 Mar 2024
Viewed by 911
Abstract
The Delta robot is an over-actuated parallel robot with highly nonlinear kinematics and dynamics. Designing the control for a Delta robot to carry out various operations is a challenging task. Various advanced control algorithms, such as adaptive control, sliding mode control, and model predictive control, have been investigated for trajectory tracking of the Delta robot. However, these control algorithms require a reliable input–output model of the Delta robot. To address this issue, we have created a control-affine neural network model of the Delta robot with stepper motors. This is a completely data-driven model intended for control design consideration and is not derivable from Newton’s law or Lagrange’s equation. The neural networks are trained with randomly sampled data in a sufficiently large workspace. The sliding mode control for trajectory tracking is then designed with the help of the neural network model. Extensive numerical results are obtained to show that the neural network model together with the sliding mode control exhibits outstanding performance, achieving a trajectory tracking error below 5 cm on average for the Delta robot. Future work will include experimental validation of the proposed neural network input–output model for control design for the Delta robot. Furthermore, transfer learning can be conducted to further refine the neural network input–output model and the sliding mode control when new experimental data become available. Full article
Show Figures

Figure 1

11 pages, 4441 KiB  
Article
Exploratory Data Analysis and Searching Cliques in Graphs
by András Hubai, Sándor Szabó and Bogdán Zaválnij
Algorithms 2024, 17(3), 112; https://doi.org/10.3390/a17030112 - 07 Mar 2024
Viewed by 755
Abstract
Principal component analysis is a well-known and widely used technique to determine the essential dimension of a data set. Broadly speaking, it aims to find a low-dimensional linear manifold that retains a large part of the information contained in the original data set. It may be the case that one cannot approximate the entirety of the original data set using a single low-dimensional linear manifold even though large subsets of it are amenable to such approximations. For these cases we raise the related but distinct challenge of locating subsets of a high dimensional data set that are approximately 1-dimensional. Naturally, we are interested in the largest of such subsets. We propose a method for finding these 1-dimensional manifolds by finding cliques in a purpose-built auxiliary graph. Full article
Show Figures

Figure 1
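The clique search in the auxiliary graph can be performed with any maximal-clique enumerator. The abstract does not specify the solver, so as an illustration, a minimal sketch of the classic Bron–Kerbosch algorithm with pivoting:

```python
def bron_kerbosch(R, P, X, adj, out):
    """Enumerate maximal cliques; adj maps vertex -> set of neighbours.

    R: current clique, P: candidate vertices, X: already-processed vertices.
    """
    if not P and not X:
        out.append(R)  # R is maximal: no candidate extends it
        return
    # Pivot on the vertex covering the most candidates to prune branches
    pivot = max(P | X, key=lambda v: len(adj[v] & P))
    for v in list(P - adj[pivot]):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.discard(v)
        X.add(v)

def maximum_clique(adj):
    out = []
    bron_kerbosch(set(), set(adj), set(), adj, out)
    return max(out, key=len)
```

Finding the largest approximately 1-dimensional subset then amounts to calling `maximum_clique` on the auxiliary graph whose edges connect compatible data points.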

12 pages, 826 KiB  
Article
A Markov Chain Genetic Algorithm Approach for Non-Parametric Posterior Distribution Sampling of Regression Parameters
by Parag C. Pendharkar
Algorithms 2024, 17(3), 111; https://doi.org/10.3390/a17030111 - 07 Mar 2024
Viewed by 777
Abstract
This paper proposes a genetic algorithm-based Markov Chain approach that can be used for non-parametric estimation of regression coefficients and their statistical confidence bounds. The proposed approach can generate samples from an unknown probability density function if a formal functional form of its likelihood is known. The approach is tested in the non-parametric estimation of regression coefficients, where the least-square minimizing function is considered the maximum likelihood of a multivariate distribution. This approach has an advantage over traditional Markov Chain Monte Carlo methods because it provably converges and generates unbiased samples in a computationally efficient manner. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning (2nd Edition))
Show Figures

Figure 1
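For context, the traditional MCMC baseline the approach is compared against can be sketched as a random-walk Metropolis sampler over the regression coefficients, with the least-squares objective playing the role of the negative log-likelihood. This is a generic illustration under those assumptions, not the paper's GA-based chain:

```python
import math
import random

def metropolis_regression(x, y, n_steps=20000, step=0.1, seed=1):
    """Random-walk Metropolis over (a, b) for y ~ a + b*x.

    Likelihood proportional to exp(-SSE/2), flat prior; returns
    post-burn-in samples of the coefficient pair.
    """
    rng = random.Random(seed)

    def neg_log_post(a, b):
        return 0.5 * sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))

    a, b = 0.0, 0.0
    cur = neg_log_post(a, b)
    samples = []
    for _ in range(n_steps):
        a_new = a + rng.gauss(0.0, step)
        b_new = b + rng.gauss(0.0, step)
        new = neg_log_post(a_new, b_new)
        # Accept with probability min(1, exp(cur - new))
        if new < cur or rng.random() < math.exp(cur - new):
            a, b, cur = a_new, b_new, new
        samples.append((a, b))
    return samples[n_steps // 2:]  # discard first half as burn-in

```

Confidence bounds for the coefficients fall out of the sample quantiles; the burn-in and step-size tuning this sampler needs are exactly the costs the proposed GA-based chain aims to avoid.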

17 pages, 595 KiB  
Article
Electric Vehicle Ordered Charging Planning Based on Improved Dual-Population Genetic Moth–Flame Optimization
by Shuang Che, Yan Chen, Longda Wang and Chuanfang Xu
Algorithms 2024, 17(3), 110; https://doi.org/10.3390/a17030110 - 06 Mar 2024
Cited by 1 | Viewed by 811
Abstract
This work discusses the electric vehicle (EV) ordered charging planning (OCP) optimization problem. To address this issue, an improved dual-population genetic moth–flame optimization (IDPGMFO) is proposed. Specifically, to obtain a satisfactory solution to the EV OCP problem, a dual-population genetic mechanism is integrated into moth–flame optimization. To enhance the global optimization performance, adaptive nonlinear decreasing strategies for the selection, crossover and mutation probabilities, as well as for the weight coefficient, are also designed. Additionally, opposition-based learning (OBL) is introduced. The simulation results show that the proposed improvement strategies effectively improve the global optimization performance and that IDPGMFO obtains a more satisfactory solution to the EV OCP optimization problem. Full article
Show Figures

Graphical abstract
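Opposition-based learning, one of the ingredients above, is simple to state: for a candidate x in [lb, ub], its opposite is lb + ub - x, and keeping the fitter of each pair widens early exploration. A minimal initialization sketch (illustrative only, not the paper's full IDPGMFO):

```python
import random

def obl_init(pop_size, dim, lb, ub, fitness, seed=0):
    """Opposition-based initialization: keep the fitter half of the union of a
    random population and its opposite (lb + ub - x per coordinate)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    opp = [[lb + ub - x for x in ind] for ind in pop]
    # Evaluate both populations and retain the pop_size best individuals
    return sorted(pop + opp, key=fitness)[:pop_size]
```

The same trick is often applied per generation, not only at initialization, which is typically where the convergence gains come from.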

16 pages, 4886 KiB  
Article
Application of Split Coordinate Channel Attention Embedding U2Net in Salient Object Detection
by Yuhuan Wu and Yonghong Wu
Algorithms 2024, 17(3), 109; https://doi.org/10.3390/a17030109 - 06 Mar 2024
Viewed by 802
Abstract
Salient object detection (SOD) aims to identify the most visually striking objects in a scene, simulating the function of the biological visual attention system. The attention mechanism in deep learning is commonly used as an enhancement strategy which enables the neural network to concentrate on the relevant parts when processing input data, effectively improving the model’s learning and prediction abilities. Existing salient object detection methods based on RGB deep learning typically treat all regions equally by using the extracted features, overlooking the fact that different regions have varying contributions to the final predictions. Based on the U2Net algorithm, this paper incorporates the split coordinate channel attention (SCCA) mechanism into the feature extraction stage. SCCA conducts spatial transformation in the width and height dimensions to efficiently extract the location information of the target to be detected. While pixel-level semantic segmentation based on annotation has been successful, it assigns the same weight to each pixel, which leads to poor performance in detecting the boundary of objects. In this paper, the Canny edge detection loss is incorporated into the loss calculation stage to improve the model’s ability to detect object edges. Based on the DUTS and HKU-IS datasets, experiments confirm that the proposed strategies effectively enhance the model’s detection performance, resulting in a 0.8% and 0.7% increase in the F1-score of U2Net. This paper also compares traditional attention modules with the newly proposed one, and the SCCA attention module achieves a top-three performance in prediction time, mean absolute error (MAE), F1-score, and model size on both experimental datasets. Full article
Show Figures

Figure 1

25 pages, 1190 KiB  
Article
Data Mining Techniques for Endometriosis Detection in a Data-Scarce Medical Dataset
by Pablo Caballero, Luis Gonzalez-Abril, Juan A. Ortega and Áurea Simon-Soro
Algorithms 2024, 17(3), 108; https://doi.org/10.3390/a17030108 - 04 Mar 2024
Viewed by 1071
Abstract
Endometriosis (EM) is a chronic inflammatory estrogen-dependent disorder that affects 10% of women worldwide. It affects the female reproductive tract and its resident microbiota, as well as distal body sites that can serve as surrogate markers of EM. Currently, no single definitive biomarker can diagnose EM. For this pilot study, we analyzed a cohort of 21 patients with endometriosis and infertility-associated conditions. A microbiome dataset was created using five sample types taken from the reproductive and gastrointestinal tracts of each patient. We evaluated several machine learning algorithms for EM detection using these features. The characteristics of the dataset were derived from endometrial biopsy, endometrial fluid, vaginal, oral, and fecal samples. Despite limited data, the algorithms demonstrated high performance with respect to the F1 score. In addition, they suggested that disease diagnosis could potentially be improved by using less medically invasive procedures. Overall, the results indicate that machine learning algorithms can be useful tools for diagnosing endometriosis in low-resource settings where data availability is limited. We recommend that future studies explore the complexities of the EM disorder using artificial intelligence and prediction modeling to further define the characteristics of the endometriosis phenotype. Full article
Show Figures

Figure 1

15 pages, 318 KiB  
Article
Application of the Parabola Method in Nonconvex Optimization
by Anton Kolosnitsyn, Oleg Khamisov, Eugene Semenkin and Vladimir Nelyub
Algorithms 2024, 17(3), 107; https://doi.org/10.3390/a17030107 - 01 Mar 2024
Viewed by 863
Abstract
We consider the Golden Section and Parabola Methods for solving univariate optimization problems. For multivariate problems, we use these methods as line search procedures in combination with well-known zero-order methods such as the coordinate descent method, the Hooke and Jeeves method, and the Rosenbrock method. A comprehensive numerical comparison of the obtained versions of zero-order methods is given in the present work. The set of test problems includes nonconvex functions with a large number of local and global optimum points. Zero-order methods combined with the Parabola method demonstrate high performance and quite frequently find the global optimum even for large problems (up to 100 variables). Full article
(This article belongs to the Special Issue Biology-Inspired Algorithms and optimization)
Show Figures

Figure 1

27 pages, 11496 KiB  
Article
Automatic Optimization of Deep Learning Training through Feature-Aware-Based Dataset Splitting
by Somayeh Shahrabadi, Telmo Adão, Emanuel Peres, Raul Morais, Luís G. Magalhães and Victor Alves
Algorithms 2024, 17(3), 106; https://doi.org/10.3390/a17030106 - 29 Feb 2024
Viewed by 1102
Abstract
The proliferation of classification-capable artificial intelligence (AI) across a wide range of domains (e.g., agriculture and construction) has made it possible to optimize and complement several tasks typically operationalized by humans. The computational training that allows providing such support is frequently hindered by various challenges related to datasets, including the scarcity of examples and imbalanced class distributions, which have detrimental effects on the production of accurate models. For a proper approach to these challenges, strategies smarter than the traditional brute force-based K-fold cross-validation or the naivety of hold-out are required, with the following main goals in mind: (1) carrying out one-shot, close-to-optimal data arrangements, accelerating conventional training optimization; and (2) maximizing the capacity of inference models to its fullest extent while relieving computational burden. To that end, in this paper, two image-based feature-aware dataset splitting approaches are proposed, hypothesizing a contribution towards attaining classification models that are closer to their full inference potential. Both rely on strategic image harvesting: while one of them hinges on weighted random selection out of a feature-based clusters set, the other involves a balanced picking process from a sorted list that stores data features’ distances to the centroid of a whole feature space. Comparative tests on datasets related to grapevine leaf phenotyping and bridge defects showcase promising results, highlighting a viable alternative to K-fold cross-validation and hold-out methods. Full article
Show Figures

Figure 1
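The second splitting strategy, balanced picking from a distance-sorted list, can be sketched as follows. This is a simplified stand-in for the paper's method; feature extraction is assumed to have happened upstream, and the round-robin deal is one plausible reading of "balanced picking":

```python
def centroid_split(features, train_frac=0.8):
    """Sort samples by distance to the global feature centroid, then deal them
    out so both splits cover the whole distance spectrum."""
    dim = len(features[0])
    centroid = [sum(f[d] for f in features) / len(features) for d in range(dim)]

    def dist(f):
        return sum((a - b) ** 2 for a, b in zip(f, centroid)) ** 0.5

    order = sorted(range(len(features)), key=lambda i: dist(features[i]))
    stride = round(1 / (1 - train_frac))  # e.g. every 5th sample -> validation
    val = set(order[stride - 1::stride])
    train = [i for i in order if i not in val]
    return train, sorted(val)
```

Unlike a random hold-out, both splits are guaranteed samples near and far from the centroid, which is the feature-awareness the paper argues brings one-shot splits closer to optimal.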

6 pages, 174 KiB  
Editorial
Artificial Intelligence Algorithms for Healthcare
by Dmytro Chumachenko and Sergiy Yakovlev
Algorithms 2024, 17(3), 105; https://doi.org/10.3390/a17030105 - 28 Feb 2024
Viewed by 1018
Abstract
In an era where technological advancements are rapidly transforming industries, healthcare is the primary beneficiary of such progress [...] Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Healthcare)
28 pages, 933 KiB  
Article
A Systematic Evaluation of Recurrent Neural Network Models for Edge Intelligence and Human Activity Recognition Applications
by Varsha S. Lalapura, Veerender Reddy Bhimavarapu, J. Amudha and Hariram Selvamurugan Satheesh
Algorithms 2024, 17(3), 104; https://doi.org/10.3390/a17030104 - 28 Feb 2024
Viewed by 921
Abstract
The Recurrent Neural Networks (RNNs) are an essential class of supervised learning algorithms. Complex tasks like speech recognition, machine translation, sentiment classification, weather prediction, etc., are now performed by well-trained RNNs. Local or cloud-based GPU machines are used to train them. However, inference is now shifting to miniature, mobile, IoT devices and even micro-controllers. Due to their colossal memory and computing requirements, mapping RNNs directly onto resource-constrained platforms is arcane and challenging. Edge-intelligent RNNs (EI-RNNs) must satisfy both performance and memory-fitting requirements at the same time without compromising one for the other. This study’s aim was to provide an empirical evaluation and optimization of historic as well as recent RNN architectures for high-performance and low-memory footprint goals. We focused on Human Activity Recognition (HAR) tasks based on wearable sensor data for embedded healthcare applications. We evaluated and optimized six different recurrent units, namely Vanilla RNNs, Long Short-Term Memory (LSTM) units, Gated Recurrent Units (GRUs), Fast Gated Recurrent Neural Networks (FGRNNs), Fast Recurrent Neural Networks (FRNNs), and Unitary Gated Recurrent Neural Networks (UGRNNs) on eight publicly available time-series HAR datasets. We used the hold-out and cross-validation protocols for training the RNNs. We used low-rank parameterization, iterative hard thresholding, and sparse retraining compression for RNNs. We found that efficient training (i.e., dataset handling and preprocessing procedures, hyperparameter tuning, and so on) and suitable compression methods (like low-rank parameterization and iterative pruning) are critical in optimizing RNNs for performance and memory efficiency. We implemented the inference of the optimized models on Raspberry Pi. Full article
Show Figures

Figure 1
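The memory payoff of low-rank parameterization is easy to quantify: an m-by-n weight matrix W is replaced by factors U (m-by-r) and V (r-by-n), so storage drops from m*n to r*(m+n) parameters. A small helper to make the trade-off concrete (illustrative, not tied to the paper's specific layer shapes):

```python
def lowrank_params(m, n, r):
    """Parameter count when W (m x n) is replaced by U (m x r) @ V (r x n)."""
    return m * r + r * n

def compression_ratio(m, n, r):
    """How many times smaller the factorized weights are than the dense W."""
    return (m * n) / lowrank_params(m, n, r)
```

For a 256-unit recurrent layer, rank 32 already cuts the weight storage fourfold, which is the kind of headroom needed before an RNN fits on a micro-controller.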

36 pages, 804 KiB  
Review
Object Detection in Autonomous Vehicles under Adverse Weather: A Review of Traditional and Deep Learning Approaches
by Noor Ul Ain Tahir, Zuping Zhang, Muhammad Asim, Junhong Chen and Mohammed ELAffendi
Algorithms 2024, 17(3), 103; https://doi.org/10.3390/a17030103 - 26 Feb 2024
Viewed by 1819
Abstract
Enhancing the environmental perception of autonomous vehicles (AVs) in intelligent transportation systems requires computer vision technology to be effective in detecting objects and obstacles, particularly in adverse weather conditions. Adverse weather circumstances present serious difficulties for object-detecting systems, which are essential to contemporary safety procedures, infrastructure for monitoring, and intelligent transportation. AVs primarily depend on image processing algorithms that utilize a wide range of onboard visual sensors for guidance and decision-making. Ensuring the consistent identification of critical elements such as vehicles, pedestrians, and road lanes, even in adverse weather, is a paramount objective. This paper not only provides a comprehensive review of the literature on object detection (OD) under adverse weather conditions but also delves into the ever-evolving realm of the architecture of AVs, challenges for automated vehicles in adverse weather, and the basic structure of OD, and explores the landscape of traditional and deep learning (DL) approaches for OD within the realm of AVs. These approaches are essential for advancing the capabilities of AVs in recognizing and responding to objects in their surroundings. This paper further investigates previous research that has employed both traditional and DL methodologies for the detection of vehicles, pedestrians, and road lanes, effectively linking these approaches with the evolving field of AVs. Moreover, this paper offers an in-depth analysis of the datasets commonly employed in AV research, with a specific focus on the detection of key elements in various environmental conditions, and then summarizes the evaluation metrics. We expect that this review paper will help scholars to gain a better understanding of this area of research. Full article
Show Figures

Figure 1

19 pages, 689 KiB  
Article
Root Cause Tracing Using Equipment Process Accuracy Evaluation for Looper in Hot Rolling
by Fengwei Jing, Fenghe Li, Yong Song, Jie Li, Zhanbiao Feng and Jin Guo 
Algorithms 2024, 17(3), 102; https://doi.org/10.3390/a17030102 - 26 Feb 2024
Viewed by 901
Abstract
The concept of production stability in hot strip rolling encapsulates the ability of a production line to consistently maintain its output levels and uphold the quality of its products, thus embodying the steady and uninterrupted nature of the production yield. This scholarly paper focuses on the paramount looper equipment in the finishing rolling area, utilizing it as a case study to investigate approaches for identifying the origins of instabilities, specifically when faced with inadequate looper performance. Initially, the paper establishes the equipment process accuracy evaluation (EPAE) model for the looper, grounded in the precision of the looper’s operational process, to accurately depict the looper’s functioning state. Subsequently, it delves into the interplay between the EPAE metrics and overall production stability, advocating for the use of EPAE scores as direct indicators of production stability. The study further introduces a novel algorithm designed to trace the root causes of issues, categorizing them into material, equipment, and control factors, thereby facilitating on-site fault rectification. Finally, the practicality and effectiveness of this methodology are substantiated through its application on the 2250 hot rolling equipment production line. This paper provides a new approach for fault tracing in the hot rolling process. Full article
Show Figures

Figure 1

20 pages, 1674 KiB  
Article
Application of Genetic Algorithms for Periodicity Recognition and Finite Sequences Sorting
by Mukhtar Zhassuzak, Marat Akhmet, Yedilkhan Amirgaliyev and Zholdas Buribayev
Algorithms 2024, 17(3), 101; https://doi.org/10.3390/a17030101 - 26 Feb 2024
Viewed by 916
Abstract
Unpredictable strings are sequences of data with complex and erratic behavior, which makes them an object of interest in various scientific fields. Unpredictable strings, which are related to chaos theory, were investigated using a genetic algorithm. This paper presents a new genetic algorithm for converting large binary sequences into their periodic form. The MakePeriod method is also presented, which is aimed at optimizing the search for such periodic sequences and significantly reduces the number of generations needed to solve the problem under consideration. The deviation of a nonperiodic sequence from its considered periodic transformation was analyzed, and methods of crossover and mutation were investigated. The proposed algorithm and its associated conclusions can be applied to the processing of large sequences with different period values, and they also emphasize the importance of choosing the right methods of crossover and mutation when applying genetic algorithms to this task. Full article
Show Figures

Figure 1
