Algorithms doi: 10.3390/a17030126
Authors: Gerasim V. Krivovichev Valentina Yu. Sergeeva
The paper is devoted to the theoretical and numerical analysis of the two-step method, constructed as a modification of Polyak’s heavy ball method with the inclusion of an additional momentum parameter. For the quadratic case, the convergence conditions are obtained with the use of the first Lyapunov method. For the non-quadratic case of sufficiently smooth, strongly convex functions, conditions guaranteeing local convergence are obtained. An approach to finding optimal parameter values based on the solution of a constrained optimization problem is proposed. The effect of the additional parameter on the convergence rate is analyzed. With the use of an ordinary differential equation equivalent to the method, the damping effect of this parameter on the oscillations typical for the non-monotonic convergence of the heavy ball method is demonstrated. In different numerical examples for non-quadratic convex and non-convex test functions and machine learning problems (regularized smoothed elastic net regression, logistic regression, and recurrent neural network training), the positive influence of the additional parameter on the convergence process is demonstrated.
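As an illustrative sketch only (the paper's exact scheme and parameterization are not reproduced here), a heavy-ball-style iteration with one additional momentum term might look as follows; the names alpha, beta, and gamma for the step size and the two momentum weights are assumptions:

```python
import numpy as np

def two_step_momentum(grad, x0, alpha=0.1, beta=0.5, gamma=0.1, iters=200):
    """Heavy-ball-style iteration with an additional momentum term:
    x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1}) + gamma*(x_{k-1} - x_{k-2}).
    Parameter names are illustrative, not taken from the paper."""
    x_prev2 = x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_new = x - alpha * grad(x) + beta * (x - x_prev) + gamma * (x_prev - x_prev2)
        x_prev2, x_prev, x = x_prev, x, x_new
    return x

# Quadratic test case f(x) = 0.5 * x^T A x with minimum at the origin.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
x_star = two_step_momentum(lambda x: A @ x, x0=[5.0, -3.0])
```

On a well-conditioned quadratic such an iteration converges to the minimizer for suitable parameter choices, which is the setting the paper analyzes with the first Lyapunov method.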
Algorithms doi: 10.3390/a17030125
Authors: Xuanyuan Xie Jieyu Zhao
The diffusion model has made progress in the field of image synthesis, especially in the area of conditional image synthesis. However, this improvement is highly dependent on large annotated datasets. To tackle this challenge, we present the Guided Diffusion model for Unlabeled Images (GDUI) framework in this article. It utilizes the inherent feature similarity and semantic differences in the data, as well as the downstream transferability of Contrastive Language-Image Pretraining (CLIP), to guide the diffusion model in generating high-quality images. We design two semantic-aware algorithms, namely, the pseudo-label-matching algorithm and label-matching refinement algorithm, to match the clustering results with the true semantic information and provide more accurate guidance for the diffusion model. First, GDUI encodes the image into a semantically meaningful latent vector through clustering. Then, pseudo-label matching is used to complete the matching of the true semantic information of the image. Finally, the label-matching refinement algorithm is used to adjust the irrelevant semantic information in the data, thereby improving the quality of the guided diffusion model image generation. Our experiments on labeled datasets show that GDUI outperforms diffusion models without any guidance and significantly reduces the gap between it and models guided by ground-truth labels.
Algorithms doi: 10.3390/a17030124
Authors: Rachid Belaroussi Elie Issa Leonardo Cameli Claudio Lantieri Sonia Adelé
Human impression plays a crucial role in effectively designing infrastructures that support active mobility such as walking and cycling. By involving users early in the design process, valuable insights can be gathered before physical environments are constructed. This proactive approach enhances the attractiveness and safety of designed spaces for users. This study conducts an experiment comparing real street observations with immersive virtual reality (VR) visits to evaluate user perceptions and assess the quality of public spaces. For this experiment, a high-resolution 3D city model of a large-scale neighborhood was created, utilizing Building Information Modeling (BIM) and Geographic Information System (GIS) data. The model incorporated dynamic elements representing various urban environments: a public area with a tramway station, a commercial street with a road, and a residential playground with green spaces. Participants were presented with identical views of existing urban scenes, both in reality and through reconstructed 3D scenes using a Head-Mounted Display (HMD). They were asked questions related to the quality of the streetscape, its walkability, and cyclability. From the questionnaire, algorithms for assessing public spaces were computed, namely Sustainable Mobility Indicators (SUMI) and Pedestrian Level of Service (PLOS). The study quantifies the relevance of these indicators in a VR setup and correlates them with critical factors influencing the experience of using and spending time on a street. This research contributes to understanding the suitability of these algorithms in a VR environment for predicting the quality of future spaces before occupancy.
Algorithms doi: 10.3390/a17030123
Authors: Noori Y. Abdul-Hassan Zainab J. Kadum Ali Hasan Ali
In this paper, we propose a new numerical scheme based on a variation of the standard formulation of the Runge–Kutta method using Taylor series expansion for solving initial value problems (IVPs) in ordinary differential equations. Analytically, the accuracy, consistency, and absolute stability of the new method are discussed. It is established that the new method is consistent and stable and has third-order convergence. Numerically, we present two models involving applications from physics and engineering to illustrate the efficiency and accuracy of our new method and compare it with other pertinent techniques of the same order.
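The paper's scheme differs, but as a generic reference point for third-order methods of this kind, Kutta's classical third-order Runge–Kutta method for IVPs can be sketched as:

```python
import math

def rk3_step(f, t, y, h):
    """One step of Kutta's classical third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h / 6 * (k1 + 4 * k2 + k3)

def solve_ivp_rk3(f, t0, y0, t_end, n):
    """Integrate y' = f(t, y), y(t0) = y0 up to t_end with n RK3 steps."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y

# Test problem y' = -y, y(0) = 1, whose exact solution at t = 1 is e^{-1}.
approx = solve_ivp_rk3(lambda t, y: -y, 0.0, 1.0, 1.0, 100)
err = abs(approx - math.exp(-1.0))
```

With 100 steps the global error of a third-order scheme on this problem is far below single-precision noise.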
Algorithms doi: 10.3390/a17030122
Authors: Xiaonan Si Lei Wang Wenchang Xu Biao Wang Wenbo Cheng
Gout is one of the most painful diseases in the world. Accurate classification of gout is crucial for diagnosis and treatment, which can potentially save lives. However, the current methods for classifying gout periods have demonstrated poor performance and have received little attention. This is due to a significant data imbalance problem that affects the learning attention for the majority and minority classes. To overcome this problem, a resampling method called ENaNSMOTE-Tomek link is proposed. It uses extended natural neighbors to generate samples that fall within the minority class and then applies the Tomek link technique to eliminate instances that contribute to noise. The model combines the ensemble ’bagging’ technique with the proposed resampling technique to improve the quality of generated samples. The performance of individual classifiers and hybrid models on an imbalanced gout dataset taken from the electronic medical records of a hospital is evaluated. The results of the classification demonstrate that the proposed strategy is more accurate than some imbalanced gout diagnosis techniques, with an accuracy of 80.87% and an AUC of 87.10%. This indicates that the proposed algorithm can alleviate the problems caused by imbalanced gout data and help experts better diagnose their patients.
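As a hedged, simplified stand-in for the two building blocks of ENaNSMOTE-Tomek link (the paper uses extended natural neighbors, not the plain k-nearest-neighbor step shown here), the oversampling and cleaning steps can be sketched as:

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """SMOTE-style interpolation: each synthetic sample lies on the segment
    between a minority point and one of its k nearest minority neighbors.
    (ENaNSMOTE replaces the k-NN step with extended natural neighbors.)"""
    rng = rng or np.random.default_rng(0)
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest minority neighbors
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        lam = rng.random()                     # interpolation weight in [0, 1)
        new.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(new)

def tomek_links(X, y):
    """Tomek links: pairs of mutual nearest neighbors with opposite labels,
    typically removed (or their majority member removed) to clean noise."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)
    return [(i, j) for i, j in enumerate(nn) if nn[j] == i and y[i] != y[j] and i < j]
```

In a full pipeline the oversampled minority data would be pooled with the majority class, Tomek-linked pairs cleaned, and the result fed to the bagging ensemble.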
Algorithms doi: 10.3390/a17030121
Authors: Nikolay Kyurkchiev Tsvetelin Zaevski Anton Iliev Vesselin Kyurkchiev Asen Rahnev
In this article, we propose some extended oscillator models. Various experiments are performed. The models are studied using the Melnikov approach. We present some integral units for studying the behavior of these hypothetical oscillators; these will be implemented as add-on sections of a larger web-based application for research computations. One of the main goals of the study is to share the difficulties that researchers (who are not necessarily professional mathematicians) encounter in using contemporary computer algebra systems (CASs) for scientific research to examine in detail the dynamics of modifications of classical and newer models that are emerging in the literature (for large values of the parameters of the models). The present article is a natural continuation of the research in the direction that has been indicated and discussed in our previous investigations. One possible application that the Melnikov function may find in the modeling of a radiating antenna diagram is also discussed. Some probability-based constructions are also presented. We hope that some of these notes will be reflected in upcoming revisions of the CASs. The aim of studying the design realization (scheme, manufacture, output, etc.) of the explored differential models can be viewed as not yet being met.
Algorithms doi: 10.3390/a17030120
Authors: Minh-Quan Vo Thu Nguyen Michael A. Riegler Hugo L. Hammer
Generative models have recently received a lot of attention. However, a challenge with such models is that it is usually not possible to compute the likelihood function, which makes parameter estimation or training of the models challenging. The most commonly used alternative strategy is likelihood-free estimation, which is based on finding values of the model parameters such that a set of selected statistics have similar values in the dataset and in samples generated from the model. However, a challenge is how to select statistics that are efficient in estimating unknown parameters. The most commonly used statistics are the mean vector, variances, and correlations between variables, but they may be less relevant in estimating the unknown parameters. We suggest utilizing Tukey depth contours (TDCs) as statistics in likelihood-free estimation. TDCs are highly flexible and can capture almost any property of multivariate data; in addition, they appear to be as yet unexplored for likelihood-free estimation. We demonstrate that TDC statistics are able to estimate the unknown parameters more efficiently than the mean, variance, and correlation in likelihood-free estimation. We further apply the TDC statistics to estimate the properties of requests to a computer system, demonstrating their real-life applicability. The suggested method is able to efficiently find the unknown parameters of the request distribution and quantify the estimation uncertainty.
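A minimal sketch of likelihood-free estimation in the rejection-ABC style, assuming simple mean/standard-deviation summary statistics in place of the Tukey depth contours the paper proposes:

```python
import numpy as np

def abc_rejection(data, simulate, stats, prior_draw, n_draws=2000, keep=0.05, rng=None):
    """Likelihood-free rejection sampling: keep parameter draws whose
    simulated summary statistics are closest to those of the observed data.
    The paper would use Tukey depth contours as `stats`; mean/std here."""
    rng = rng or np.random.default_rng(1)
    s_obs = stats(data)
    thetas, dists = [], []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        s_sim = stats(simulate(theta, len(data), rng))
        thetas.append(theta)
        dists.append(np.linalg.norm(s_sim - s_obs))
    cutoff = np.quantile(dists, keep)
    kept = [t for t, d in zip(thetas, dists) if d <= cutoff]
    return np.mean(kept)

# Toy example: estimate the mean of a Gaussian with known sd = 1.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=500)
est = abc_rejection(
    data,
    simulate=lambda th, n, r: r.normal(th, 1.0, size=n),
    stats=lambda x: np.array([x.mean(), x.std()]),
    prior_draw=lambda r: r.uniform(-5, 5),
)
```

The quality of `est` hinges entirely on how informative `stats` is about the unknown parameter, which is the gap TDC statistics are meant to fill.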
Algorithms doi: 10.3390/a17030119
Authors: Yanjun Li Takaaki Yoshimura Yuto Horima Hiroyuki Sugimori
The detection of coronary artery stenosis is one of the most important indicators for the diagnosis of coronary artery disease. However, stenosis in branch vessels is often difficult to detect using computer-aided systems, and even for radiologists, because of several factors, such as imaging angle and contrast agent inhomogeneity. Traditional coronary artery stenosis localization algorithms often only detect aortic stenosis and ignore branch vessels that may also cause major health threats. Therefore, improving the localization of branch vessel stenosis in coronary angiographic images is a promising direction for development. In this study, we propose a preprocessing approach that combines vessel enhancement and image fusion as a prerequisite for deep learning. The sensitivity of the neural network to stenosis features is improved by enhancing the blurry features in coronary angiographic images. By validating five neural networks, such as YOLOv4 and R-FCN-Inceptionresnetv2, we show that the proposed method can improve the performance of deep learning network applications on images from six common imaging angles. The results showed that the proposed method is suitable as a preprocessing step for coronary angiographic image processing based on deep learning and can be used to improve the recognition ability of the deep model for fine vessel stenosis.
Algorithms doi: 10.3390/a17030118
Authors: Thomas Parr Karl Friston Peter Zeidman
Bayesian inference typically focuses upon two issues. The first is estimating the parameters of some model from data, and the second is quantifying the evidence for alternative hypotheses—formulated as alternative models. This paper focuses upon a third issue. Our interest is in the selection of data—either through sampling subsets of data from a large dataset or through optimising experimental design—based upon the models we have of how those data are generated. Optimising data-selection ensures we can achieve good inference with fewer data, saving on computational and experimental costs. This paper aims to unpack the principles of active sampling of data by drawing from neurobiological research on animal exploration and from the theory of optimal experimental design. We offer an overview of the salient points from these fields and illustrate their application in simple toy examples, ranging from function approximation with basis sets to inference about processes that evolve over time. Finally, we consider how this approach to data selection could be applied to the design of (Bayes-adaptive) clinical trials.
Algorithms doi: 10.3390/a17030117
Authors: Bo Cui Lingyun Wang Guangxi Li Xian Ren
The dynamic star simulator is a commonly used ground-test calibration device for star sensors. To address the slow calculation speed, low integration, and high power consumption of the traditional star chart simulation method, this paper designs an FPGA-based star chart display algorithm for a dynamic star simulator. The design adopts the USB 2.0 protocol to obtain the attitude data, uses SDRAM to cache the attitude data and video stream, extracts the effective navigation star points by searching equidistant right ascension and declination partitions of the starry sky, and realizes pipelined display of the star map by using the parallel computing capability of the FPGA. Test results show that under the conditions of a field of view of Φ20° and simulated magnitudes of 2.0∼6.0 Mv, the longest time for calculating a chart is 72 μs under a clock of 148.5 MHz, which effectively improves the chart display speed of the dynamic star simulator. The FPGA-based star map display algorithm removes the dependence of the existing algorithm on a host computer, reduces the volume and power consumption of the dynamic star simulator, and meets the demand for a miniaturized, portable dynamic star simulator.
Algorithms doi: 10.3390/a17030116
Authors: Marcos E. González Laffitte Peter F. Stadler
The comparison of multiple (labeled) graphs with unrelated vertex sets is an important task in diverse areas of applications. Conceptually, it is often closely related to multiple sequence alignments since one aims to determine a correspondence, or more precisely, a multipartite matching between the vertex sets. There, the goal is to match vertices that are similar in terms of labels and local neighborhoods. Alignments of sequences and ordered forests, however, have a second aspect that does not seem to be considered for graph comparison, namely the idea that an alignment is a superobject from which the constituent input objects can be recovered faithfully as well-defined projections. Progressive alignment algorithms are based on the idea of computing multiple alignments as a pairwise alignment of the alignments of two disjoint subsets of the input objects. Our formal framework guarantees that alignments have compositional properties that make alignments of alignments well-defined. The various similarity-based graph matching constructions do not share this property and solve substantially different optimization problems. We demonstrate that optimal multiple graph alignments can be approximated well by means of progressive alignment schemes. The solution of the pairwise alignment problem is reduced formally to computing maximal common induced subgraphs. Similar to the ambiguities arising from consecutive indels, pairwise alignments of graph alignments require the consideration of ambiguous edges that may appear between alignment columns with complementary gap patterns. We report a simple reference implementation in Python/NetworkX intended to serve as a starting point for further developments. The computational feasibility of our approach is demonstrated on test sets of small graphs that mimic, in particular, applications to molecular graphs.
Algorithms doi: 10.3390/a17030115
Authors: Sharoon Saleem Fawad Hussain Naveed Khan Baloch
Network on Chip (NoC) has emerged as a potential substitute for the communication model in modern computer systems with extensive integration. Among the numerous design challenges, application mapping on the NoC system poses one of the most complex and demanding optimization problems. In this research, we propose a hybrid improved whale optimization algorithm with enhanced genetic properties (IWOA-IGA) to optimally map real-time applications onto the 2D NoC platform. The IWOA-IGA is a novel approach combining an improved whale optimization algorithm with the ability of a refined genetic algorithm to optimally map application tasks. A comprehensive comparison is performed between the proposed method and other state-of-the-art algorithms through rigorous analysis. The evaluation consists of real-time applications, benchmarks, and a collection of arbitrarily scaled and procedurally generated large-task graphs. The proposed IWOA-IGA achieves average improvements in power consumption, energy consumption, and latency over state-of-the-art algorithms. A performance measure based on the Convergence Factor, which assesses the algorithm’s efficiency in achieving better convergence within a specific number of iterations compared to other techniques, is introduced in this research work. These results demonstrate the algorithm’s superior convergence performance when applied to real-world and synthetic task graphs. Our research findings spotlight the superior performance of hybrid improved whale optimization integrated with enhanced GA features, emphasizing its potential for application mapping in NoC-based systems.
Algorithms doi: 10.3390/a17030114
Authors: MohammadHossein Reshadi Wen Li Wenjie Xu Precious Omashor Albert Dinh Scott Dick Yuntong She Michael Lipsett
Anomaly detection in data streams (and particularly time series) is today a vitally important task. Machine learning algorithms are a common design for achieving this goal. In particular, deep learning has, in the last decade, proven to be substantially more accurate than shallow learning in a wide variety of machine learning problems, and deep anomaly detection is very effective for point anomalies. However, deep semi-supervised contextual anomaly detection (in which anomalies within a time series are rare and none at all occur in the algorithm’s training data) is a more difficult problem. Hybrid anomaly detectors (a “normal model” followed by a comparator) are one approach to these problems, but the separate loss functions for the two components can lead to inferior performance. We investigate a novel synthetic-example oversampling technique to harmonize the two components of a hybrid system, thus improving the anomaly detector’s performance. We evaluate our algorithm on two distinct problems: identifying pipeline leaks and patient-ventilator asynchrony.
Algorithms doi: 10.3390/a17030113
Authors: Anni Zhao Arash Toudeshki Reza Ehsani Joshua H. Viers Jian-Qiao Sun
The Delta robot is an over-actuated parallel robot with highly nonlinear kinematics and dynamics. Designing the control for a Delta robot to carry out various operations is a challenging task. Various advanced control algorithms, such as adaptive control, sliding mode control, and model predictive control, have been investigated for trajectory tracking of the Delta robot. However, these control algorithms require a reliable input–output model of the Delta robot. To address this issue, we have created a control-affine neural network model of the Delta robot with stepper motors. This is a completely data-driven model intended for control design consideration and is not derivable from Newton’s law or Lagrange’s equation. The neural networks are trained with randomly sampled data in a sufficiently large workspace. The sliding mode control for trajectory tracking is then designed with the help of the neural network model. Extensive numerical results show that the neural network model together with the sliding mode control exhibits outstanding performance, achieving a trajectory tracking error below 5 cm on average for the Delta robot. Future work will include experimental validation of the proposed neural network input–output model for control design for the Delta robot. Furthermore, transfer learning can be conducted to further refine the neural network input–output model and the sliding mode control when new experimental data become available.
Algorithms doi: 10.3390/a17030112
Authors: András Hubai Sándor Szabó Bogdán Zaválnij
The principal component analysis is a well-known and widely used technique to determine the essential dimension of a data set. Broadly speaking, it aims to find a low-dimensional linear manifold that retains a large part of the information contained in the original data set. It may be the case that one cannot approximate the entirety of the original data set using a single low-dimensional linear manifold even though large subsets of it are amenable to such approximations. For these cases we raise the related but different problem of locating subsets of a high-dimensional data set that are approximately 1-dimensional. Naturally, we are interested in the largest of such subsets. We propose a method for finding these approximately 1-dimensional manifolds by finding cliques in a purpose-built auxiliary graph.
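One crude, illustrative auxiliary-graph construction (the paper's actual construction may well differ): join two points when the chord between them is nearly parallel to the data's first principal direction, so that a clique corresponds to an approximately collinear subset. The brute-force clique search below is only viable for tiny inputs:

```python
import numpy as np
from itertools import combinations

def collinear_clique(X, tol=0.1):
    """Find the largest subset of points whose pairwise chords are nearly
    parallel to the data's first principal direction, via a maximum clique
    in an auxiliary graph on the points. Illustrative heuristic only."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    _, _, vt = np.linalg.svd(X - X.mean(axis=0))
    u = vt[0]                                  # first principal direction
    def aligned(i, j):
        d = X[j] - X[i]
        d = d / np.linalg.norm(d)
        return abs(abs(d @ u) - 1.0) < tol     # chord parallel to u, up to sign
    adj = {(i, j) for i, j in combinations(range(n), 2) if aligned(i, j)}
    for size in range(n, 1, -1):               # try the largest cliques first
        for cand in combinations(range(n), size):
            if all(p in adj for p in combinations(cand, 2)):
                return set(cand)
    return set()

# Five nearly collinear points plus two off-line outliers.
X = [[0, 0], [1, 0.01], [2, -0.01], [3, 0.02], [4, 0], [1, 0.8], [3, -0.8]]
subset = collinear_clique(X)
```

On this toy input the largest clique recovers exactly the five near-collinear points.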
Algorithms doi: 10.3390/a17030111
Authors: Parag C. Pendharkar
This paper proposes a genetic algorithm-based Markov Chain approach that can be used for non-parametric estimation of regression coefficients and their statistical confidence bounds. The proposed approach can generate samples from an unknown probability density function if a formal functional form of its likelihood is known. The approach is tested on the non-parametric estimation of regression coefficients, where the least-squares minimizing function is considered the maximum likelihood of a multivariate distribution. This approach has an advantage over traditional Markov Chain Monte Carlo methods because it is proven to converge and to generate unbiased samples in a computationally efficient manner.
Algorithms doi: 10.3390/a17030110
Authors: Shuang Che Yan Chen Longda Wang Chuanfang Xu
This work discusses the electric vehicle (EV) ordered charging planning (OCP) optimization problem. To address this issue, an improved dual-population genetic moth–flame optimization (IDPGMFO) algorithm is proposed. Specifically, to obtain a satisfactory solution to the EV OCP problem, a design for a dual-population genetic mechanism integrated into moth–flame optimization is provided. To enhance the global optimization performance, adaptive nonlinear decreasing strategies for the selection, crossover, and mutation probabilities, as well as for the weight coefficient, are also designed. Additionally, opposition-based learning (OBL) is introduced. The simulation results show that the proposed improvement strategies can effectively improve the global optimization performance, and a better solution to the EV OCP optimization problem can be obtained by using IDPGMFO.
Algorithms doi: 10.3390/a17030109
Authors: Yuhuan Wu Yonghong Wu
Salient object detection (SOD) aims to identify the most visually striking objects in a scene, simulating the function of the biological visual attention system. The attention mechanism in deep learning is commonly used as an enhancement strategy which enables the neural network to concentrate on the relevant parts when processing input data, effectively improving the model’s learning and prediction abilities. Existing salient object detection methods based on RGB deep learning typically treat all regions equally by using the extracted features, overlooking the fact that different regions have varying contributions to the final predictions. Based on the U2Net algorithm, this paper incorporates the split coordinate channel attention (SCCA) mechanism into the feature extraction stage. SCCA conducts spatial transformation in the width and height dimensions to efficiently extract the location information of the target to be detected. While pixel-level semantic segmentation based on annotation has been successful, it assigns the same weight to each pixel, which leads to poor performance in detecting the boundaries of objects. In this paper, the Canny edge detection loss is incorporated into the loss calculation stage to improve the model’s ability to detect object edges. Based on the DUTS and HKU-IS datasets, experiments confirm that the proposed strategies effectively enhance the model’s detection performance, resulting in a 0.8% and 0.7% increase in the F1-score of U2Net. This paper also compares traditional attention modules with the newly proposed attention module; the SCCA attention module achieves a top-three performance in prediction time, mean absolute error (MAE), F1-score, and model size on both experimental datasets.
Algorithms doi: 10.3390/a17030108
Authors: Pablo Caballero Luis Gonzalez-Abril Juan A. Ortega Áurea Simon-Soro
Endometriosis (EM) is a chronic inflammatory estrogen-dependent disorder that affects 10% of women worldwide. It affects the female reproductive tract and its resident microbiota, as well as distal body sites that can serve as surrogate markers of EM. Currently, no single definitive biomarker can diagnose EM. For this pilot study, we analyzed a cohort of 21 patients with endometriosis and infertility-associated conditions. A microbiome dataset was created using five sample types taken from the reproductive and gastrointestinal tracts of each patient. We evaluated several machine learning algorithms for EM detection using these features. The characteristics of the dataset were derived from endometrial biopsy, endometrial fluid, vaginal, oral, and fecal samples. Despite limited data, the algorithms demonstrated high performance with respect to the F1 score. In addition, they suggested that disease diagnosis could potentially be improved by using less medically invasive procedures. Overall, the results indicate that machine learning algorithms can be useful tools for diagnosing endometriosis in low-resource settings where data availability is limited. We recommend that future studies explore the complexities of the EM disorder using artificial intelligence and prediction modeling to further define the characteristics of the endometriosis phenotype.
Algorithms doi: 10.3390/a17030107
Authors: Anton Kolosnitsyn Oleg Khamisov Eugene Semenkin Vladimir Nelyub
We consider the Golden Section and Parabola Methods for solving univariate optimization problems. For multivariate problems, we use these methods as line search procedures in combination with well-known zero-order methods such as the coordinate descent method, the Hooke and Jeeves method, and the Rosenbrock method. A comprehensive numerical comparison of the obtained versions of zero-order methods is given in the present work. The set of test problems includes nonconvex functions with a large number of local and global optimum points. Zero-order methods combined with the Parabola method demonstrate high performance and quite frequently find the global optimum even for large problems (up to 100 variables).
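As a reference for one of the two line search procedures, the Golden Section method for a unimodal function on an interval can be sketched as follows (a textbook sketch, not the authors' implementation):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Golden Section search for the minimizer of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0      # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                        # minimizer lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                  # minimizer lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0

x_min = golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

The Parabola method replaces the golden-ratio bracketing with the minimizer of a quadratic fitted through three trial points, which is what makes it faster on smooth functions.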
Algorithms doi: 10.3390/a17030106
Authors: Somayeh Shahrabadi Telmo Adão Emanuel Peres Raul Morais Luís G. Magalhães Victor Alves
The proliferation of classification-capable artificial intelligence (AI) across a wide range of domains (e.g., agriculture, construction, etc.) has made it possible to optimize and complement several tasks typically operationalized by humans. The computational training that provides such support is frequently hindered by various dataset-related challenges, including the scarcity of examples and imbalanced class distributions, which have detrimental effects on the production of accurate models. A proper approach to these challenges requires strategies smarter than traditional brute-force K-fold cross-validation or naive hold-out, with the following main goals in mind: (1) carrying out one-shot, close-to-optimal data arrangements, accelerating conventional training optimization; and (2) maximizing the capacity of inference models to its fullest extent while relieving computational burden. To that end, in this paper, two image-based feature-aware dataset splitting approaches are proposed, hypothesized to contribute towards attaining classification models that are closer to their full inference potential. Both rely on strategic image harvesting: one hinges on weighted random selection from a set of feature-based clusters, while the other involves a balanced picking process from a sorted list that stores the distances of data features to the centroid of the whole feature space. Comparative tests on datasets related to grapevine leaf phenotyping and bridge defects showcase promising results, highlighting a viable alternative to K-fold cross-validation and hold-out methods.
Algorithms doi: 10.3390/a17030105
Authors: Dmytro Chumachenko Sergiy Yakovlev
In an era where technological advancements are rapidly transforming industries, healthcare is the primary beneficiary of such progress [...]
Algorithms doi: 10.3390/a17030104
Authors: Varsha S. Lalapura Veerender Reddy Bhimavarapu J. Amudha Hariram Selvamurugan Satheesh
Recurrent Neural Networks (RNNs) are an essential class of supervised learning algorithms. Complex tasks like speech recognition, machine translation, sentiment classification, weather prediction, etc., are now performed by well-trained RNNs. Local or cloud-based GPU machines are used to train them. However, inference is now shifting to miniature, mobile, IoT devices and even micro-controllers. Due to their colossal memory and computing requirements, mapping RNNs directly onto resource-constrained platforms is arcane and challenging. The efficacy of edge-intelligent RNNs (EI-RNNs) must satisfy both performance and memory-fitting requirements at the same time without compromising one for the other. This study’s aim was to provide an empirical evaluation and optimization of historic as well as recent RNN architectures for high-performance and low-memory-footprint goals. We focused on Human Activity Recognition (HAR) tasks based on wearable sensor data for embedded healthcare applications. We evaluated and optimized six different recurrent units, namely Vanilla RNNs, Long Short-Term Memory (LSTM) units, Gated Recurrent Units (GRUs), Fast Gated Recurrent Neural Networks (FGRNNs), Fast Recurrent Neural Networks (FRNNs), and Unitary Gated Recurrent Neural Networks (UGRNNs) on eight publicly available time-series HAR datasets. We used the hold-out and cross-validation protocols for training the RNNs. We used low-rank parameterization, iterative hard thresholding, and sparse retraining compression for the RNNs. We found that efficient training (i.e., dataset handling and preprocessing procedures, hyperparameter tuning, and so on) and suitable compression methods (like low-rank parameterization and iterative pruning) are critical in optimizing RNNs for performance and memory efficiency. We implemented the inference of the optimized models on a Raspberry Pi.
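As a minimal sketch of one of the compression techniques mentioned, low-rank parameterization of a weight matrix can be illustrated with a truncated SVD (illustrative only; in the paper's setting the low-rank factors are learned during RNN training rather than obtained post hoc):

```python
import numpy as np

def low_rank_compress(W, r):
    """Low-rank parameterization via truncated SVD: an m x n weight matrix W
    is replaced by factors U (m x r) and V (r x n), cutting the parameter
    count from m*n to r*(m + n)."""
    u, s, vt = np.linalg.svd(W, full_matrices=False)
    U = u[:, :r] * s[:r]                       # absorb singular values into U
    V = vt[:r]
    return U, V

# A 64 x 64 matrix of true rank 8: rank-8 truncation loses nothing,
# while the parameter count drops from 4096 to 8 * (64 + 64) = 1024.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 64))
U, V = low_rank_compress(W, 8)
rel_err = np.linalg.norm(W - U @ V) / np.linalg.norm(W)
```

At inference time the matrix–vector product `W @ x` is replaced by `U @ (V @ x)`, which is also cheaper when r is much smaller than m and n.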
Algorithms doi: 10.3390/a17030103
Authors: Noor Ul Ain Tahir Zuping Zhang Muhammad Asim Junhong Chen Mohammed ELAffendi
Enhancing the environmental perception of autonomous vehicles (AVs) in intelligent transportation systems requires computer vision technology to be effective in detecting objects and obstacles, particularly in adverse weather conditions. Adverse weather circumstances present serious difficulties for object-detection systems, which are essential to contemporary safety procedures, infrastructure for monitoring, and intelligent transportation. AVs primarily depend on image processing algorithms that utilize a wide range of onboard visual sensors for guidance and decision-making. Ensuring the consistent identification of critical elements such as vehicles, pedestrians, and road lanes, even in adverse weather, is a paramount objective. This paper not only provides a comprehensive review of the literature on object detection (OD) under adverse weather conditions but also delves into the ever-evolving realm of the architecture of AVs, challenges for automated vehicles in adverse weather, the basic structure of OD, and explores the landscape of traditional and deep learning (DL) approaches for OD within the realm of AVs. These approaches are essential for advancing the capabilities of AVs in recognizing and responding to objects in their surroundings. This paper further investigates previous research that has employed both traditional and DL methodologies for the detection of vehicles, pedestrians, and road lanes, effectively linking these approaches with the evolving field of AVs. Moreover, this paper offers an in-depth analysis of the datasets commonly employed in AV research, with a specific focus on the detection of key elements in various environmental conditions, and then summarizes the evaluation matrix. We expect that this review paper will help scholars to gain a better understanding of this area of research.
]]>Algorithms doi: 10.3390/a17030102
Authors: Fengwei Jing Fenghe Li Yong Song Jie Li Zhanbiao Feng Jin Guo
The concept of production stability in hot strip rolling encapsulates the ability of a production line to consistently maintain its output levels and uphold the quality of its products, thus embodying the steady and uninterrupted nature of the production yield. This scholarly paper focuses on the paramount looper equipment in the finishing rolling area, utilizing it as a case study to investigate approaches for identifying the origins of instabilities, specifically when faced with inadequate looper performance. Initially, the paper establishes the equipment process accuracy evaluation (EPAE) model for the looper, grounded in the precision of the looper’s operational process, to accurately depict the looper’s functioning state. Subsequently, it delves into the interplay between the EPAE metrics and overall production stability, advocating for the use of EPAE scores as direct indicators of production stability. The study further introduces a novel algorithm designed to trace the root causes of issues, categorizing them into material, equipment, and control factors, thereby facilitating on-site fault rectification. Finally, the practicality and effectiveness of this methodology are substantiated through its application on the 2250 hot rolling equipment production line. This paper provides a new approach for fault tracing in the hot rolling process.
]]>Algorithms doi: 10.3390/a17030101
Authors: Mukhtar Zhassuzak Marat Akhmet Yedilkhan Amirgaliyev Zholdas Buribayev
Unpredictable strings are sequences of data with complex and erratic behavior, which makes them an object of interest in various scientific fields. Unpredictable strings related to chaos theory were investigated using a genetic algorithm. This paper presents a new genetic algorithm for converting large binary sequences into their periodic form. The MakePeriod method is also presented, which is aimed at optimizing the search for such periodic sequences and significantly reduces the number of generations needed to achieve the result of the problem under consideration. The deviation of a nonperiodic sequence from its considered periodic transformation was analyzed, and methods of crossover and mutation were investigated. The proposed algorithm and its associated conclusions can be applied to processing large sequences and different values of the period, and they also emphasize the importance of choosing the right methods of crossover and mutation when applying genetic algorithms to this task.
]]>Algorithms doi: 10.3390/a17030100
Authors: Andrea Adriani Stefano Serra-Capizzano Cristina Tablino-Possio
We consider the Helmholtz equation and the fractional Laplacian in the case of the complex-valued unbounded variable coefficient wave number μ, approximated by finite differences. In a recent analysis, singular value clustering and eigenvalue clustering have been proposed for a τ preconditioning when the variable coefficient wave number μ is uniformly bounded. Here, we extend the analysis to the unbounded case by focusing on the case of a power singularity. Several numerical experiments concerning the spectral behavior and convergence of the related preconditioned GMRES are presented.
]]>Algorithms doi: 10.3390/a17030099
Authors: Saikat Das Mohammad Ashrafuzzaman Frederick T. Sheldon Sajjan Shiva
The distributed denial of service (DDoS) attack is one of the most pernicious threats in cyberspace. DDoS attacks over the past two decades have resulted in catastrophic and costly disruption of services across all sectors and critical infrastructure. Machine-learning-based approaches have shown promise in developing intrusion detection systems (IDSs) for detecting cyber-attacks, such as DDoS. Herein, we present a solution to detect DDoS attacks through an ensemble-based machine learning approach that combines supervised and unsupervised ensemble frameworks. This combination produces higher performance in detecting known DDoS attacks using the supervised ensemble and zero-day DDoS attacks using the unsupervised ensemble. The unsupervised ensemble, which employs novelty and outlier detection, is effective in identifying previously unseen attacks. The ensemble framework is tested using three well-known benchmark datasets: NSL-KDD, UNSW-NB15, and CICIDS2017. The results show that ensemble classifiers significantly outperform single-classifier-based approaches. Our model with combined supervised and unsupervised ensembles correctly detects up to 99.1% of the DDoS attacks, with a negligible rate of false alarms.
]]>Algorithms doi: 10.3390/a17030098
Authors: Panagiotis D. Paraschos Georgios K. Koulinas Dimitrios E. Koulouriotis
The manufacturing industry often faces challenges related to customer satisfaction, system degradation, product sustainability, inventory, and operation management. If not addressed, these challenges can be substantially harmful and costly for the sustainability of manufacturing plants. Paradigms such as Industry 4.0 and smart manufacturing provide effective and innovative solutions, aiming at managing manufacturing operations and controlling the quality of completed goods offered to the customers. To that end, this paper endeavors to mitigate the described challenges in a multi-stage degrading manufacturing/remanufacturing system through the implementation of an intelligent machine learning-based decision-making mechanism. To carry out decision-making, reinforcement learning is coupled with lean green manufacturing. The scope of this implementation is the creation of a smart, lean, and sustainable production environment with minimal environmental impact. Considering the latter, this effort is made to reduce material consumption and extend the lifecycle of manufactured products using pull production, predictive maintenance, and circular economy strategies. To validate this, a well-defined experimental analysis meticulously investigates the behavior and performance of the proposed mechanism. Results obtained by this analysis support the presented reinforcement learning/ad hoc control mechanism’s capability of achieving both high system sustainability and enhanced material reuse.
]]>Algorithms doi: 10.3390/a17030097
Authors: Anibal Pedraza Lucia Gonzalez Oscar Deniz Gloria Bueno
HER2 overexpression is a prognostic and predictive factor observed in about 15% to 20% of breast cancer cases. The assessment of its expression directly affects the selection of treatment and prognosis. The measurement of HER2 status is performed by an expert pathologist, who assigns a score of 0, 1, 2+, or 3+ based on the gene expression. There is a high probability of interobserver variability in this evaluation, especially when it comes to class 2+. This is reasonable, as the primary cause of error in multiclass classification problems typically arises in the intermediate classes. This work proposes a novel approach to expand the decision limit and divide it into two additional classes, that is, 1.5+ and 2.5+. This subdivision facilitates both feature learning and pathology assessment. The method was evaluated using various neural network models capable of performing patch-wise grading of HER2 whole slide images (WSI). Then, the outcomes of the 7-class classification were merged back into 5 classes in accordance with the pathologists’ criteria in order to compare the results with the initial 5-class model. Optimal outcomes were achieved by employing colour transfer for data augmentation and the ResNet-101 architecture with 7 classes. A sensitivity of 0.91 was achieved for class 2+ and 0.97 for 3+. Furthermore, this model offers the highest level of confidence, ranging from 92% to 94% for 2+ and 96% to 97% for 3+. In contrast, a dataset containing only 5 classes demonstrates a sensitivity performance that is 5% lower for the same network.
]]>Algorithms doi: 10.3390/a17030096
Authors: Monika Rybczak Krystian Kozakiewicz
Today, specific convolutional neural network (CNN) models assigned to specific tasks are often used. In this article, the authors explored three models in combination: MobileNet, EfficientNetB0, and InceptionV3. The authors were interested in investigating how quickly an artificial intelligence model can be taught with limited computer resources. Three types of training bases were investigated, starting with a simple base verifying five colours, then recognizing two different orthogonal elements, followed by more complex images from different families. This research aimed to demonstrate the capabilities of the models based on training base parameters such as the number of images and epoch types. Architectures proposed by the authors in these cases were chosen based on simulation studies conducted on a virtual machine with limited hardware parameters. The proposals present the advantages and disadvantages of the different models based on the TensorFlow and Keras libraries in the Jupyter environment, based on the Python programming language. An artificial intelligence model combining MobileNet, proposed by Siemens, with EfficientNetB0 and InceptionV3, selected by the authors, allows for further work to be conducted on image classification, but with limited computer resources for industrial implementation on a programmable logic controller (PLC). The study showed a 90% success rate, with a learning time of 180 s.
]]>Algorithms doi: 10.3390/a17030095
Authors: Kenneth Lange
The current paper proposes and tests algorithms for finding the diameter of a compact convex set and the farthest point in the set to another point. For these two nonconvex problems, I construct Frank–Wolfe and projected gradient ascent algorithms. Although these algorithms are guaranteed to go uphill, they can become trapped by local maxima. To avoid this defect, I investigate a homotopy method that gradually deforms a ball into the target set. Motivated by the Frank–Wolfe algorithm, I also find the support function of the intersection of a convex cone and a ball centered at the origin and elaborate a known bisection algorithm for calculating the support function of a convex sublevel set. The Frank–Wolfe and projected gradient algorithms are tested on five compact convex sets: (a) the box whose coordinates range between −1 and 1, (b) the intersection of the unit ball and the non-negative orthant, (c) the probability simplex, (d) the Manhattan-norm unit ball, and (e) a sublevel set of the elastic net penalty. Frank–Wolfe and projected gradient ascent are about equally fast on these test problems. Ignoring homotopy, the Frank–Wolfe algorithm is more reliable. However, homotopy allows projected gradient ascent to recover from its failures.
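For intuition, projected gradient ascent on test set (a), the box whose coordinates range between −1 and 1, can be sketched in a few lines (a minimal illustration under the abstract's setup, not the paper's implementation): to find the farthest point of the set from a point y, one ascends the objective f(x) = ½‖x − y‖², whose gradient is x − y, and projects each iterate back onto the box by coordinate-wise clipping.

```python
def project_box(x):
    """Euclidean projection onto the box [-1, 1]^n is coordinate-wise clipping."""
    return [max(-1.0, min(1.0, xi)) for xi in x]

def farthest_point_pga(y, steps=200, lr=0.5):
    """Projected gradient ascent for the farthest point of the box from y:
    maximize f(x) = 0.5 * ||x - y||^2, whose gradient is (x - y)."""
    x = [0.1] * len(y)   # start slightly off-centre (the exact centre can be stationary)
    for _ in range(steps):
        x = project_box([xi + lr * (xi - yi) for xi, yi in zip(x, y)])
    return x

# The farthest corner of the box from y flips the sign of each coordinate of y.
print(farthest_point_pga([0.5, -0.3]))  # [-1.0, 1.0]
```

On this simple set the ascent reaches the correct corner; the local-maxima trapping discussed in the abstract arises for less symmetric targets, which is what motivates the homotopy deformation from a ball.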
]]>Algorithms doi: 10.3390/a17030094
Authors: Mukhtar Fatihu Hamza
Due to increased complexity and interactions between various subsystems, higher-order MIMO systems present difficulties in terms of stability and control performance. This study provides a novel, all-encompassing method for creating a decentralized fractional-order control technique for higher-order systems. Given the greater number of variables that need to be optimized for fractional-order control in higher-order, multi-input, multi-output systems, the modified flower pollination optimization algorithm (MFPOA) was chosen due to its rapid convergence speed and minimal computational effort. The goal of the design is to improve control performance. Maximum overshoot (Mp), rise time (tr), and settling time (ts) are the performance factors taken into consideration. The MFPOA approach is used to improve the settings of the proposed decentralized fractional-order proportional-integral-derivative (FOPID) controller. By exploring the parameter space and converging on the best controller settings, the MFPOA satisfies the imposed constraints while maintaining system stability. To evaluate the suggested approach, simulation studies on two systems are carried out. The results show that, by decreasing the loop interactions between subsystems with improved stability, the decentralized control with the MFPOA-based FOPID controller provides better control performance.
]]>Algorithms doi: 10.3390/a17030093
Authors: Philip Dawid
This article surveys the variety of ways in which a directed acyclic graph (DAG) can be used to represent a problem of probabilistic causality. For each of these ways, we describe the relevant formal or informal semantics governing that representation. It is suggested that the cleanest such representation is that embodied in an augmented DAG, which contains nodes for non-stochastic intervention indicators in addition to the usual nodes for domain variables.
]]>Algorithms doi: 10.3390/a17030092
Authors: Hexin Lu Xiaodong Zhu Jingwei Cui Haifeng Jiang
The process of iris recognition can suffer a decline in recognition performance when the resolution of the iris images is insufficient. In this study, a super-resolution model for iris images, namely SwinGIris, which combines the Swin Transformer and the Generative Adversarial Network (GAN), is introduced. SwinGIris performs quadruple super-resolution reconstruction for low-resolution iris images, aiming to improve the resolution of iris images and thereby the recognition accuracy of iris recognition systems. The model utilizes residual Swin Transformer blocks to extract deep global features, and the progressive upsampling method along with sub-pixel convolution is conducive to focusing on the high-frequency iris information in the presence of more non-iris information. In order to preserve high-frequency details, the discriminator employs a VGG-style relativistic classifier to guide the generator in generating super-resolution images. In the experimental section, we enhance low-resolution (56 × 56) iris images to high-resolution (224 × 224) iris images. Experimental results indicate that the SwinGIris model achieves satisfactory outcomes in restoring low-resolution iris image textures while preserving identity information.
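The sub-pixel convolution step mentioned above can be sketched independently of the model (a framework-free illustration of the general operation, not the SwinGIris code): a convolution produces r² low-resolution channels, and a "pixel shuffle" rearranges them into a single channel upscaled by a factor of r in each spatial direction, so the network learns the upscaling as extra channels rather than by interpolation.

```python
def pixel_shuffle(channels, r):
    """Rearrange r*r channels of shape (h, w) into one (h*r, w*r) channel.
    Channel c fills the sub-pixel offset (c // r, c % r) of each output block."""
    h, w = len(channels[0]), len(channels[0][0])
    out = [[0] * (w * r) for _ in range(h * r)]
    for c, ch in enumerate(channels):
        dy, dx = divmod(c, r)
        for i in range(h):
            for j in range(w):
                out[i * r + dy][j * r + dx] = ch[i][j]
    return out

# Four 1x1 channels and r = 2 assemble into one 2x2 output.
print(pixel_shuffle([[[1]], [[2]], [[3]], [[4]]], 2))  # [[1, 2], [3, 4]]
```

Applying this twice with r = 2 gives the quadruple (56 × 56 → 224 × 224) upscaling described in the abstract.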
]]>Algorithms doi: 10.3390/a17030090
Authors: Suryakant Tyagi Sándor Szénási
Machine learning and speech emotion recognition are rapidly evolving fields, significantly impacting human-centered computing. Machine learning enables computers to learn from data and make predictions, while speech emotion recognition allows computers to identify and understand human emotions from speech. These technologies contribute to the creation of innovative human–computer interaction (HCI) applications. Deep learning algorithms, capable of learning high-level features directly from raw data, have given rise to new emotion recognition approaches employing models trained on advanced speech representations like spectrograms and time–frequency representations. This study introduces CNN and LSTM models with GWO optimization, aiming to determine optimal parameters for achieving enhanced accuracy within a specified parameter set. The proposed CNN and LSTM models with GWO optimization underwent performance testing on four diverse datasets—RAVDESS, SAVEE, TESS, and EMODB. The results indicated superior performance of the models compared to linear and kernelized SVM, with or without GWO optimizers.
]]>Algorithms doi: 10.3390/a17030091
Authors: Jie Wang Jie Yang Jiafan He Dongliang Peng
Semi-supervised learning has been proven to be effective in utilizing unlabeled samples to mitigate the problem of limited labeled data. Traditional semi-supervised learning methods generate pseudo-labels for unlabeled samples and train the classifier using both labeled and pseudo-labeled samples. However, in data-scarce scenarios, reliance on labeled samples for initial classifier generation can degrade performance. Methods based on consistency regularization have shown promising results by encouraging consistent outputs for different semantic variations of the same sample obtained through diverse augmentation techniques. However, existing methods typically utilize only weak and strong augmentation variants, limiting information extraction. Therefore, a multi-augmentation contrastive semi-supervised learning method (MAC-SSL) is proposed. MAC-SSL introduces moderate augmentation, combining outputs from moderately and weakly augmented unlabeled images to generate pseudo-labels. A cross-entropy loss ensures consistency between strongly augmented image outputs and the pseudo-labels. Furthermore, MixUp is adopted to blend outputs from labeled and unlabeled images, enhancing consistency between re-augmented outputs and new pseudo-labels. The proposed method achieves state-of-the-art accuracy in extensive experiments conducted on multiple datasets with varying numbers of labeled samples. Ablation studies further investigate each component’s significance.
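The pseudo-labeling step described above can be sketched as follows (a simplified illustration with made-up probabilities and threshold, not the MAC-SSL implementation): class probabilities from the weakly and moderately augmented views of the same unlabeled image are averaged, and a pseudo-label is emitted only when the averaged confidence clears a threshold; the strongly augmented view is then trained against that label.

```python
def pseudo_label(probs_weak, probs_moderate, threshold=0.95):
    """Average the class probabilities of two augmented views; return the
    argmax as a pseudo-label only if its averaged confidence is high enough,
    otherwise None (the image is skipped for this step)."""
    avg = [(w + m) / 2 for w, m in zip(probs_weak, probs_moderate)]
    conf = max(avg)
    return avg.index(conf) if conf >= threshold else None

print(pseudo_label([0.98, 0.01, 0.01], [0.96, 0.03, 0.01]))  # 0
print(pseudo_label([0.60, 0.30, 0.10], [0.55, 0.35, 0.10]))  # None
```

Combining two views before thresholding filters out images on which the augmentations disagree, which is the intuition behind adding the moderate-augmentation branch.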
]]>Algorithms doi: 10.3390/a17030089
Authors: Sean Pascoe
Data envelopment analysis (DEA) has been proposed as a means of assessing alternative management options when there are multiple criteria with multiple indicators each. While the method has been widely applied, the implications of how the method is applied on the resultant management alternative ranking have not been previously considered. We consider the impact on option ranking of ignoring an implicit hierarchical structure when there are different numbers of indicators associated with potential higher-order objectives. We also consider the implications of the use of radial or slacks-based approaches on option ranking with and without a hierarchical structure. We use an artificial data set as well as data from a previous study to assess the implications of the approach adopted, with the aim to provide guidance for future applications of DEA for multi-criteria decision making. We find substantial benefits in applying a hierarchical approach in the evaluation of the management alternatives. We also find that slacks-based approaches are better able to differentiate between management alternatives given multiple objectives and indicators.
]]>Algorithms doi: 10.3390/a17020088
Authors: Lin Guo Anand Balu Nellippallil Warren F. Smith Janet K. Allen Farrokh Mistree
When dealing with engineering design problems, designers often encounter nonlinear and nonconvex features, multiple objectives, coupled decision making, and various levels of fidelity of sub-systems. To realize the design with limited computational resources, problems with the features above need to be linearized and then solved using solution algorithms for linear programming. The adaptive linear programming (ALP) algorithm is an extension of the Sequential Linear Programming algorithm where a nonlinear compromise decision support problem (cDSP) is iteratively linearized, and the resulting linear programming problem is solved with satisficing solutions returned. The reduced move coefficient (RMC) is used to define how far away from the boundary the next linearization is to be performed, and currently, it is determined based on a heuristic. The choice of RMC significantly affects the efficacy of the linearization process and, hence, the rapidity of finding the solution. In this paper, we propose a rule-based parameter-learning procedure to vary the RMC at each iteration, thereby significantly increasing the speed of determining the ultimate solution. To demonstrate the efficacy of the ALP algorithm with parameter learning (ALPPL), we use an industry-inspired problem, namely, the integrated design of a hot-rolling process chain for the production of a steel rod. Using the proposed ALPPL, we can incorporate domain expertise to identify the most relevant criteria to evaluate the performance of the linearization algorithm, quantify the criteria as evaluation indices, and tune the RMC to return the solutions that fall into the most desired range of each evaluation index. Compared with the old ALP algorithm using the golden section search to update the RMC, the ALPPL improves the algorithm by identifying the RMC values with better linearization performance without adding computational complexity. 
The insensitive region of the RMC is better explored using the ALPPL: the ALP explores the insensitive region only twice, whereas the ALPPL explores it four times throughout the iterations. With the ALPPL, we have a more comprehensive definition of linearization performance: given multiple design scenarios, the evaluation indices (EIs) include the statistics of deviations, the numbers of binding (active) constraints and bounds, the numbers of accumulated linear constraints, and the number of iterations. The desired range of evaluation indices (DEI) is also learned during the iterations. The RMC value that brings the most EIs into the DEI is returned as the best RMC, which ensures a balance between the accuracy of the linearization and the robustness of the solutions. For our test problem, the hot-rolling process chain, the ALP returns the best RMC in twelve iterations considering only the deviation as the linearization performance index, whereas the ALPPL returns the best RMC in fourteen iterations considering multiple EIs. The complexity of both the ALP and the ALPPL is O(n²). The parameter-learning steps can be customized to improve the parameter determination of other algorithms.
]]>Algorithms doi: 10.3390/a17020087
Authors: Gleice Kelly Barbosa Souza Samara Oliveira Silva Santos André Luiz Carvalho Ottoni Marcos Santos Oliveira Daniela Carine Ramires Oliveira Erivelton Geraldo Nepomuceno
Reinforcement learning is an important technique in various fields, particularly in automated machine learning for reinforcement learning (AutoRL). The integration of transfer learning (TL) with AutoRL in combinatorial optimization is an area that requires further research. This paper employs both AutoRL and TL to effectively tackle combinatorial optimization challenges, specifically the asymmetric traveling salesman problem (ATSP) and the sequential ordering problem (SOP). A statistical analysis was conducted to assess the impact of TL on the aforementioned problems. Furthermore, the Auto_TL_RL algorithm was introduced as a novel contribution, combining the AutoRL and TL methodologies. Empirical findings strongly support the effectiveness of this integration, resulting in solutions that were significantly more efficient than conventional techniques, with an 85.7% improvement in the preliminary analysis results. Additionally, the computational time was reduced in 13 instances (i.e., in 92.8% of the simulated problems). The TL-integrated model outperformed the optimal benchmarks, demonstrating its superior convergence. The Auto_TL_RL algorithm design allows for smooth transitions between the ATSP and SOP domains. In a comprehensive evaluation, Auto_TL_RL significantly outperformed traditional methodologies in 78% of the instances analyzed.
]]>Algorithms doi: 10.3390/a17020086
Authors: Fiza Zafar Alicia Cordero Husna Maryam Juan R. Torregrosa
Power flow problems can be solved in a variety of ways by using the Newton–Raphson approach. The nonlinear power flow equations depend upon the voltages Vi and phase angles δ. The Jacobian of an electrical power system is obtained by taking the partial derivatives of the load flow equations, which contain the active and reactive powers. In this paper, we present an efficient seventh-order iterative scheme to obtain the solutions of nonlinear systems of equations, with only three steps in its formulation. We then illustrate the computational cost of different operations, such as matrix–matrix multiplication, matrix–vector multiplication, and LU decomposition, which is used to calculate the cost of our proposed method and compare it with the cost of existing seventh-order methods. Furthermore, we elucidate the applicability of our newly developed scheme to an electrical power system. Two-bus, three-bus, and four-bus power flow problems are then solved by using the load flow equations, demonstrating the applicability of the new scheme.
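As a point of reference for the Newton–Raphson baseline, a toy two-bus load flow can be solved with the classical second-order iteration (hypothetical values throughout: a lossless line of reactance X = 0.5 p.u., slack voltage V1 = 1.0 p.u., and specified injections P and Q at bus 2; the paper's seventh-order scheme replaces this iteration but uses the same mismatch equations and Jacobian):

```python
import math

def newton2(F, J, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson for a 2x2 nonlinear system: solve J(x) @ d = -F(x)."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        a, b, c, d = J(x, y)            # Jacobian entries, rows [a b; c d]
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det   # Cramer's rule for the 2x2 solve
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# Toy two-bus model: unknowns are the angle delta and voltage magnitude V2.
V1, X, P_spec, Q_spec = 1.0, 0.5, 0.5, -0.2
P = lambda d, v: (V1 * v / X) * math.sin(d)
Q = lambda d, v: (V1 * v / X) * math.cos(d) - v * v / X
F = lambda d, v: (P(d, v) - P_spec, Q(d, v) - Q_spec)    # mismatch equations
J = lambda d, v: ((V1 * v / X) * math.cos(d), (V1 / X) * math.sin(d),
                  -(V1 * v / X) * math.sin(d), (V1 / X) * math.cos(d) - 2 * v / X)

delta, V2 = newton2(F, J, (0.0, 1.0))   # flat start: delta = 0, V2 = 1 p.u.
print(round(P(delta, V2), 6), round(Q(delta, V2), 6))  # recovers 0.5 and -0.2
```

A higher-order scheme reduces the number of such iterations (and hence Jacobian factorizations) at the cost of extra function evaluations per step, which is the trade-off the abstract's cost analysis quantifies.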
]]>Algorithms doi: 10.3390/a17020085
Authors: Mfowabo Maphosa Wesley Doorsamy Babu Paul
Academic advising has been conducted by faculty-student advisors, who often have many students to advise in a short time, making the process ineffective. The selection of an incorrect qualification increases the risk of dropping out, changing qualifications, or not finishing the qualification enrolled in within the minimum time. This study harnesses a real-world dataset comprising student records across four engineering disciplines from the 2016 and 2017 academic years at a public South African university. The study examines the relative importance of features in models for predicting student performance and determining whether students are better suited for extended or mainstream programmes. The study employs a three-step methodology, encompassing data pre-processing, feature importance selection, and model training with evaluation, to predict student performance by addressing issues such as dataset imbalance, biases, and ethical considerations. By relying exclusively on high school performance data, predictions are based solely on students’ abilities, fostering fairness and minimising biases in predictive tasks. The results show that removing demographic features like ethnicity or nationality reduces bias. The study’s findings also highlight the significance of the following features when predicting student performance: mathematics, physical sciences, and admission point scores. The models are evaluated, demonstrating their ability to provide accurate predictions. The study’s results highlight varying performance among models and their key contributions, underscoring the potential to transform academic advising and enhance student decision-making. These models can be incorporated into an academic advising recommender system, thereby improving the quality of academic guidance.
]]>Algorithms doi: 10.3390/a17020084
Authors: Nyo Me Htun Toshiaki Owari Satoshi Tsuyuki Takuya Hiroshima
High-value timber species with economic and ecological importance are usually distributed at very low densities, such that accurate knowledge of the location of these trees within a forest is critical for forest management practices. Recent technological developments integrating unmanned aerial vehicle (UAV) imagery and deep learning provide an efficient method for mapping forest attributes. In this study, we explored the applicability of high-resolution UAV imagery and a deep learning algorithm to predict the distribution of high-value deciduous broadleaf tree crowns of Japanese oak (Quercus crispula) in an uneven-aged mixed forest in Hokkaido, northern Japan. UAV images were collected in September and October 2022, before and after the color change of the leaves of Japanese oak, to identify the optimal timing of UAV image collection. RGB information extracted from the UAV images was analyzed using a ResU-Net model (a U-Net model with a Residual Network 101 (ResNet101) backbone, pre-trained on large ImageNet datasets). Our results, confirmed using validation data, showed that reliable F1 scores (>0.80) could be obtained with both UAV datasets. According to the overlay analyses of the segmentation results and all the annotated ground truth data, the best performance was that of the model with the October UAV dataset (F1 score of 0.95). Our case study thus highlights a potentially transferable approach to the management of high-value timber species in other regions.
]]>Algorithms doi: 10.3390/a17020083
Authors: Keno Jann Büscher Jan Philipp Degel Jan Oellerich
This paper provides a comprehensive overview of approaches to the determination of isocontours and isosurfaces from given data sets. Different algorithms are reported in the literature for this purpose, which originate from various application areas, such as computer graphics or medical imaging procedures. In all these applications, the challenge is to extract surfaces with a specific isovalue from a given characteristic, so called isosurfaces. These different application areas have given rise to solution approaches that all solve the problem of isocontouring in their own way. Based on the literature, the following four dominant methods can be identified: the marching cubes algorithms, the tessellation-based algorithms, the surface nets algorithms and the ray tracing algorithms. With regard to their application, it can be seen that the methods are mainly used in the fields of medical imaging, computer graphics and the visualization of simulation results. In our work, we provide a broad and compact overview of the common methods that are currently used in terms of isocontouring with respect to certain criteria and their individual limitations. In this context, we discuss the individual methods and identify possible future research directions in the field of isocontouring.
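The core idea shared by the marching cubes family surveyed above is easiest to see in its 2-D analogue, marching squares (a sketch of the cell-classification step only; the edge-interpolation lookup tables that produce the actual contour segments are omitted): each 2 × 2 cell of the scalar grid is assigned a 4-bit case index by comparing its corner values against the isovalue, and that index selects which cell edges the isocontour crosses.

```python
def cell_cases(grid, iso):
    """Marching-squares classification: every 2x2 cell gets a 4-bit index
    from its corners (one bit per corner >= isovalue); index 0 or 15 means
    the isocontour does not pass through the cell."""
    cases = {}
    for i in range(len(grid) - 1):
        for j in range(len(grid[0]) - 1):
            corners = (grid[i][j], grid[i][j + 1],
                       grid[i + 1][j + 1], grid[i + 1][j])
            cases[(i, j)] = sum(1 << k for k, v in enumerate(corners) if v >= iso)
    return cases

# A single peak in the middle of a flat field: the isocontour for iso = 1.0
# encircles the peak, so all four cells around it are crossing cells.
grid = [[0, 0, 0],
        [0, 2, 0],
        [0, 0, 0]]
cases = cell_cases(grid, iso=1.0)
crossing = sorted(c for c, idx in cases.items() if idx not in (0, 15))
print(crossing)  # all four cells touch the contour around the peak
```

Marching cubes extends the same scheme to 3-D with 8-bit cell indices and 256 cases, which is where the ambiguity and table-design issues discussed in the survey arise.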
]]>Algorithms doi: 10.3390/a17020082
Authors: Mingyoung Jeng Alvir Nobel Vinayak Jha David Levy Dylan Kneidel Manu Chaudhary Ishraq Islam Evan Baumgartner Eade Vanderhoof Audrey Facer Manish Singh Abina Arshad Esam El-Araby
Convolutional neural networks (CNNs) have proven to be a very efficient class of machine learning (ML) architectures for handling multidimensional data by maintaining data locality, especially in the field of computer vision. Data pooling, a major component of CNNs, plays a crucial role in extracting important features of the input data and downsampling its dimensionality. Multidimensional pooling, however, is not efficiently implemented in existing ML algorithms. In particular, quantum machine learning (QML) algorithms have a tendency to ignore data locality for higher dimensions by representing/flattening multidimensional data as simple one-dimensional data. In this work, we propose using the quantum Haar transform (QHT) and quantum partial measurement for performing generalized pooling operations on multidimensional data. We present the corresponding decoherence-optimized quantum circuits for the proposed techniques along with their theoretical circuit depth analysis. Our experimental work was conducted using multidimensional data, ranging from 1-D audio data to 2-D image data to 3-D hyperspectral data, to demonstrate the scalability of the proposed methods. In our experiments, we utilized both noisy and noise-free quantum simulations on a state-of-the-art quantum simulator from IBM Quantum. We also show the efficiency of our proposed techniques for multidimensional data by reporting the fidelity of results.
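The pooling effect of a single Haar level is easy to see classically (a classical analogue for intuition only; the contribution of the paper is the quantum-circuit realization of this transform on amplitude-encoded data): the low-pass half of one Haar-transform level is a scaled pairwise average, i.e. a 2:1 downsampling of the signal along the pooled dimension.

```python
import math

def haar_pool(x):
    """Low-pass half of one Haar-transform level: scaled pairwise averages,
    acting as a 2:1 pooling of the signal (length must be even)."""
    s = 1 / math.sqrt(2)
    return [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]

signal = [1.0, 1.0, 3.0, 3.0, 0.0, 2.0, 4.0, 6.0]
pooled = haar_pool(signal)
print(len(signal), len(pooled))  # 8 4
```

Applying the level once per dimension yields the generalized multidimensional pooling the abstract describes, without flattening the data and losing locality.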
]]>Algorithms doi: 10.3390/a17020081
Authors: Marian Wnuk
An important element of modern telecommunications is wireless radio networks, which enable mobile subscribers to access network services. The cell area is divided into independent sectors served by directional antennas. As the number of mobile network subscribers served by a single base station increases, the problem of interference related to the operation of the radio link increases. To minimize the disadvantages of omnidirectional antennas, base stations use antennas with directional radiation characteristics. This solution makes it possible to optimize the operating conditions of the mobile network in terms of reducing the impact of interference, managing the frequency spectrum more effectively, and improving the energy efficiency of the system. The work presents an adaptive antenna algorithm used in mobile telephony. The principle of operation of adaptive systems, the properties of their elements, and the configurations in which they are used in practice are described. On this basis, an algorithm for controlling the radiation characteristics of adaptive antennas is presented. The control is carried out using a microprocessor system. The simulation model is described. The algorithm was developed in the Mathcad mathematical program, and the simulation results, i.e., changes in radiation characteristics as a result of changing positions of mobile subscribers, are presented in the form of selected radiation characteristic charts.
Algorithms doi: 10.3390/a17020080
Authors: Antonella Nardin Fabio D’Andreagiovanni
Electric scooter sharing mobility services have recently spread in major cities all around the world. However, the bad parking behavior of users has become a major source of issues, provoking accidents and compromising the urban decorum of public areas. Improper parking habits can be reduced by establishing reserved parking spaces. In this work, we consider the problem faced by a municipality that hosts e-scooter sharing services and must choose which locations in its territory may be rented as reserved parking lots to sharing companies, with the aim of maximizing the return on renting while taking into account spatial considerations and the parking needs of local residents. Since this problem may prove difficult to solve even for state-of-the-art optimization software, we propose a hybrid metaheuristic solution algorithm combining a quantum-inspired ant colony optimization algorithm with an exact large neighborhood search. Results of computational tests on realistic instances referring to the Italian capital city of Rome show the superior performance of the proposed hybrid metaheuristic.
Algorithms doi: 10.3390/a17020079
Authors: Haojie Wang Pingqing Fan Xipei Ma Yansong Wang
The intelligent identification of coal gangue on industrial conveyor belts is a crucial technology for the precise sorting of coal gangue. To address the issues in coal gangue detection algorithms, such as high false negative rates, complex network structures, and substantial model weights, an optimized coal gangue detection algorithm based on YOLOv5s is proposed. In the backbone network, a feature refinement module is employed for feature extraction, enhancing the capability to extract features for coal and gangue. The improved BIFPN structure is employed as the feature pyramid, augmenting the model’s capability for cross-scale feature fusion. In the prediction layer, the ESIOU is utilized as the bounding box regression loss function to rectify the misalignment issue between predicted and actual box angles. This approach expedites the convergence speed of the network while concurrently enhancing the accuracy of coal gangue detection. Channel pruning is implemented on the network to diminish model computational complexity and weight, consequently augmenting detection speed. The experimental results demonstrate that the refined YOLOv5s coal gangue detection algorithm outperforms the original YOLOv5s algorithm, achieving a notable accuracy enhancement of 2.2% to reach 93.8%. Concurrently, a substantial reduction in model weight by 38.8% is observed, resulting in a notable 56.2% increase in inference speed. These advancements meet the detection requirements for scenarios involving mixed coal gangue.
Algorithms doi: 10.3390/a17020078
Authors: Marwah Abdulrazzaq Naser Aso Ahmed Majeed Muntadher Alsabah Taha Raad Al-Shaikhli Kawa M. Kaky
Cardiovascular disease is the leading cause of global mortality, responsible for millions of deaths annually. The mortality rate and overall consequences of cardiac disease can be reduced with early disease detection. However, conventional diagnostic methods encounter various challenges, including delayed treatment and misdiagnoses, which can impede the course of treatment and raise healthcare costs. The application of artificial intelligence (AI) techniques, especially machine learning (ML) algorithms, offers a promising pathway to address these challenges. This paper emphasizes the central role of machine learning in cardiac health and focuses on precise cardiovascular disease prediction. In particular, this paper is driven by the urgent need to fully utilize the potential of machine learning to enhance cardiovascular disease prediction. In light of the continued progress in machine learning and the growing public health implications of cardiovascular disease, this paper aims to offer a comprehensive analysis of the topic. This review paper encompasses a wide range of topics, including the types of cardiovascular disease, the significance of machine learning, feature selection, the evaluation of machine learning models, data collection and preprocessing, evaluation metrics for cardiovascular disease prediction, and recent trends and suggestions for future work. In addition, this paper offers a holistic view of machine learning's role in cardiovascular disease prediction and public health. We believe that our comprehensive review will contribute significantly to the existing body of knowledge in this essential area.
Algorithms doi: 10.3390/a17020077
Authors: Jacek G. Puchalski Janusz D. Fidelus Paweł Fotowicz
One of the fundamental challenges in analyzing wind turbine performance is the occurrence of torque creep under load and without load. This phenomenon significantly impacts the proper functioning of torque transducers, thus necessitating the utilization of appropriate measurement data analysis algorithms. In this regard, employing the least squares method appears to be a suitable approach. Linear regression can be employed to investigate the creep trend itself, while visualizing the creep in the form of a non-linear curve using a third-degree polynomial can provide further insights. Additionally, calculating deviations between the measurement data and the regression curves proves beneficial in accurately assessing the data.
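The regression procedure the abstract describes can be sketched in plain Python: a least-squares fit of a third-degree polynomial via the normal equations, plus the deviations between the measurements and the fitted curve. This is a generic illustration of the method, not the authors' code; function names and tolerances are assumptions.

```python
def polyfit_ls(x, y, degree):
    """Least-squares polynomial fit via the normal equations
    (Vandermonde system solved with Gaussian elimination)."""
    n = degree + 1
    # Build A^T A and A^T y for the Vandermonde matrix A[i][j] = x_i**j.
    ata = [[sum(xi ** (r + c) for xi in x) for c in range(n)] for r in range(n)]
    aty = [sum(yi * xi ** r for xi, yi in zip(x, y)) for r in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    # Back substitution.
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(ata[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (aty[r] - s) / ata[r][r]
    return coeffs  # coeffs[k] multiplies x**k

def residuals(x, y, coeffs):
    """Deviation of each measurement from the regression curve."""
    pred = [sum(c * xi ** k for k, c in enumerate(coeffs)) for xi in x]
    return [yi - pi for yi, pi in zip(y, pred)]
```

With `degree=1` this gives the linear trend of the creep, and with `degree=3` the non-linear creep curve; the residuals then quantify how closely the measurement data follow each regression curve.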
Algorithms doi: 10.3390/a17020076
Authors: Angel E. Muñoz-Zavala Jorge E. Macías-Díaz Daniel Alba-Cuéllar José A. Guerrero-Díaz-de-León
This paper reviews the application of artificial neural network (ANN) models to time series prediction tasks. We begin by briefly introducing some basic concepts and terms related to time series analysis, and by outlining some of the most popular ANN architectures considered in the literature for time series forecasting purposes: feedforward neural networks, radial basis function networks, recurrent neural networks, and self-organizing maps. We analyze the strengths and weaknesses of these architectures in the context of time series modeling. We then summarize some recent time series ANN modeling applications found in the literature, focusing mainly on the previously outlined architectures. In our opinion, these summarized techniques constitute a representative sample of the research and development efforts made in this field. We aim to provide the general reader with a good perspective on how ANNs have been employed for time series modeling and forecasting tasks. Finally, we comment on possible new research directions in this area.
Algorithms doi: 10.3390/a17020075
Authors: Stefano Alderighi Paolo Landa Elena Tànfani Angela Testi
Molecular genetic techniques allow for the prenatal diagnosis of hereditary diseases and congenital abnormalities. A high variability of treatments exists, engendering an inappropriate clinical response, an inefficient use of resources, and the violation of the principle of equal treatment for equal needs. The proposed framework is based on modeling clinical pathways, which contributes to identifying the major causes of variability in treatments, whether justified by the variability of clinical needs or depending on individual characteristics. An electronic data collection method for high-risk pregnant women addressing genetic facilities and laboratories was implemented. The collected data were analyzed retrospectively with two aims. The first is to identify how the whole activity of genetic services can be broken down into different clinical pathways. This was performed by building a flow chart with the help of doctors. The second aim consists of measuring the variability within and among the different paths due to individual characteristics. A set of statistical models was developed to determine the impact of patient characteristics on the clinical pathway and its length. The results show the importance of considering these characteristics together with the clinical information to define the care pathway and the use of resources.
Algorithms doi: 10.3390/a17020074
Authors: Matija Milanic Rok Hren
The Adding-Doubling (AD) algorithm is a general analytical solution of the radiative transfer equation (RTE). AD offers a favorable balance between accuracy and computational efficiency, surpassing other RTE solutions, such as Monte Carlo (MC) simulations, in terms of speed while outperforming approximate solutions like the Diffusion Approximation method in accuracy. While AD algorithms have traditionally been implemented on central processing units (CPUs), this study focuses on leveraging the capabilities of graphics processing units (GPUs) to achieve enhanced computational speed. In terms of processing speed, the GPU AD algorithm showed an improvement by a factor of about 5000 to 40,000 compared to the GPU MC method. The optimal number of threads for this algorithm was found to be approximately 3000. To illustrate the utility of the GPU AD algorithm, the Levenberg–Marquardt inverse solution was used to extract object parameters from optical spectral data of human skin under various hemodynamic conditions. With regards to computational efficiency, it took approximately 5 min to process a 220 × 100 × 61 image (x-axis × y-axis × spectral-axis). The development of the GPU AD algorithm presents an advancement in determining tissue properties compared to other RTE solutions. Moreover, the GPU AD method itself holds the potential to expedite machine learning techniques in the analysis of spectral images.
Algorithms doi: 10.3390/a17020073
Authors: Amalia Moutsopoulou Markos Petousis Georgios E. Stavroulakis Anastasios Pouliezos Nectarios Vidakis
In this study, we created an accurate model for a homogeneous smart structure. After modeling multiplicative uncertainty, an ideal robust controller was designed using μ-synthesis and a reduced-order H-infinity Feedback Optimal Output (Hifoo) controller, leading to the creation of an improved uncertain plant. A powerful controller was built using a larger plant that included the nominal model and the corresponding uncertainty. The designed controllers demonstrated robust and nominal performance when handling agitated plants. A comparison of the results was conducted. As an example of a general smart structure, the vibration of a collocated piezoelectric actuator and sensor was controlled using two different approaches with strong controller designs. This study presents a comprehensive simulation of the oscillation suppression problem for smart beams and provides an analytical demonstration of how uncertainty is introduced into the model. The desired outcomes were achieved by utilizing Simulink and MATLAB (v. 8.0) programming tools.
Algorithms doi: 10.3390/a17020072
Authors: Giorgio Lazzarinetti Riccardo Dondi Sara Manzoni Italo Zoppis
Solving combinatorial problems on complex networks represents a primary issue which, on a large scale, requires the use of heuristics and approximate algorithms. Recently, neural methods have been proposed in this context to find feasible solutions for relevant computational problems over graphs. However, such methods have some drawbacks: (1) they use the same neural architecture for different combinatorial problems without introducing customizations that reflect the specificity of each problem; (2) they only use a node's local information to compute the solution; (3) they do not take advantage of common heuristics or exact algorithms. In this research, we address these three main points by designing a customized attention-based mechanism that uses both local and global information from the adjacency matrix to find approximate solutions for the Minimum Vertex Cover Problem. We evaluate our proposal against a fast two-factor approximation algorithm and a widely adopted state-of-the-art heuristic, both on synthetically generated instances and on benchmark graphs of different scales. Experimental results demonstrate that, on the one hand, the proposed methodology outperforms both the two-factor approximation algorithm and the heuristic on the test datasets, scaling even better than the heuristic on harder instances, and, on the other hand, provides a representation of the nodes that reflects the combinatorial structure of the problem.
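The fast two-factor approximation used as a baseline here is, in its classic textbook form, the maximal-matching algorithm for Minimum Vertex Cover: repeatedly pick an uncovered edge and add both endpoints. A minimal Python sketch of that classic algorithm (illustrative, not the paper's implementation):

```python
def two_approx_vertex_cover(edges):
    """Classic 2-factor approximation for Minimum Vertex Cover:
    greedily pick an edge with both endpoints uncovered and add both
    endpoints. The matched edges are vertex-disjoint, so any cover needs
    at least one endpoint per matched edge; hence |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

def is_vertex_cover(edges, cover):
    """Check that every edge has at least one endpoint in the cover."""
    return all(u in cover or v in cover for u, v in edges)
```

For example, on the path 1-2-3-4 the algorithm may return all four vertices (twice the optimum {2, 3}), which is exactly the worst case the 2-factor guarantee allows.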
Algorithms doi: 10.3390/a17020071
Authors: Styliani Tassiopoulou Georgia Koukiou Vassilis Anastassopoulos
In the ever-evolving landscape of tomographic imaging algorithms, this literature review explores a diverse array of themes shaping the field's progress. It encompasses foundational principles, innovative approaches, tomographic implementation algorithms, and applications of tomography in medicine, the natural sciences, remote sensing, and seismology. This selection showcases the diversity of tomographic applications and, at the same time, the new trends in tomography in recent years. Accordingly, the evaluation of backprojection methods for breast tomographic reconstruction is highlighted. After that, multi-slice fusion takes center stage, promising real-time insights into dynamic processes and advanced diagnosis. Computational efficiency, especially in methods for accelerating tomographic reconstruction algorithms on commodity PC graphics hardware, is also presented. In geophysics, a deep learning-based approach to ground-penetrating radar (GPR) data inversion propels us into the future of the geological and environmental sciences. We then venture into the Earth sciences with global seismic tomography, examining the inverse problem and beyond, and understanding the Earth's subsurface through advanced inverse problem solutions. Lastly, optical coherence tomography is reviewed in basic applications for revealing fine biological tissue structures. This review presents the main categories of applications of tomography, providing a deep insight into the methods and algorithms developed so far, so that readers who wish to engage with the subject are fully informed.
Algorithms doi: 10.3390/a17020070
Authors: Andra Sandu Ioana Ioanăș Camelia Delcea Margareta-Stela Florescu Liviu-Adrian Cotfas
Fake news is an explosive subject, undoubtedly among the most controversial and difficult challenges facing society in the present-day environment of technology and information; it greatly affects individuals who are vulnerable and easily influenced, shaping their decisions, actions, and even beliefs. In discussing the gravity and dissemination of the fake news phenomenon, this article aims to clarify the distinctions between fake news, misinformation, and disinformation, along with conducting a thorough analysis of the most widely read academic papers that have tackled the topic of fake news research using various machine learning techniques. Utilizing specific keywords for dataset extraction from Clarivate Analytics' Web of Science Core Collection, the bibliometric analysis spans six years, offering valuable insights aimed at identifying key trends, methodologies, and notable strategies within this multidisciplinary field. The analysis encompasses the examination of prolific authors, prominent journals, collaborative efforts, prior publications, covered subjects, keywords, bigrams, trigrams, theme maps, co-occurrence networks, and various other relevant topics. One noteworthy aspect of the extracted dataset is the remarkable growth rate observed in association with the analyzed subject, indicating an impressive increase of 179.31%. The growth rate, coupled with the relatively short timeframe, further emphasizes the research community's keen interest in this subject. In light of these findings, the paper draws attention to key contributions and gaps in the existing literature, providing researchers and decision-makers with innovative viewpoints and perspectives on the ongoing battle against the spread of fake news in the age of information.
Algorithms doi: 10.3390/a17020069
Authors: Wenjun Li Xueying Yang Chao Xu Yongjie Yang
In the directed co-graph edge-deletion problem, we are given a directed graph and an integer k, and the question is whether we can delete at most k edges so that the resulting graph is a directed co-graph. In this paper, we make two minor contributions. Firstly, we show that the problem is NP-hard. Then, we show that directed co-graphs are fully characterized by eight forbidden structures, each having at most six edges. Based on the symmetry properties and several refined observations, we develop a branching algorithm with a running time of O(2.733^k), which is significantly more efficient than the brute-force algorithm, whose running time is O(6^k).
Algorithms doi: 10.3390/a17020068
Authors: Kefei Zhu Xu Yang Yanbo Zhang Mengkun Liang Jun Wu
With the rising popularity of the Advanced Driver Assistance System (ADAS), there is an increasing demand for more human-like car-following performance. In this paper, we consider the role of heterogeneity in car-following behavior within car-following modeling. We incorporate car-following heterogeneity factors into the model features. We employ the eXtreme Gradient Boosting (XGBoost) method to build the car-following model. The results show that our model achieves optimal performance with a mean squared error of 0.002181, surpassing the model that disregards heterogeneity factors. Furthermore, utilizing model importance analysis, we determined that the cumulative importance score of heterogeneity factors in the model is 0.7262. The results demonstrate the significant impact of heterogeneity factors on car-following behavior prediction and highlight the importance of incorporating heterogeneity factors into car-following models.
Algorithms doi: 10.3390/a17020067
Authors: Marko Đurasević Domagoj Jakobović Stjepan Picek Luca Mariot
The automated design of dispatching rules (DRs) with genetic programming (GP) has become an important research direction in recent years. One of the most important decisions in applying GP to generate DRs is determining the features of the scheduling problem to be used during the evolution process. Unfortunately, there are no clear rules or guidelines for the design or selection of such features, and often the features are simply defined without investigating their influence on the performance of the algorithm. However, the performance of GP can depend significantly on the features provided to it, and a poor or inadequate selection of features for a given problem can result in the algorithm performing poorly. In this study, we examine in detail the features that GP should use when developing DRs for unrelated machine scheduling problems. Different types of features are investigated, and the best combination of these features is determined using two selection methods. The obtained results show that the design and selection of appropriate features are crucial for GP, as they improve the results by about 7% when only the simplest terminal nodes are used without selection. In addition, the results show that it is not possible to outperform more sophisticated manually designed DRs when only the simplest problem features are used as terminal nodes. This shows how important it is to design appropriate composite terminal nodes to produce high-quality DRs.
Algorithms doi: 10.3390/a17020066
Authors: José Antonio López Ortí Francisco José Marco Castillo María José Martínez Usó
In the present paper, we efficiently solve the two-body problem in extreme cases, such as those with high eccentricities. Numerical methods using the usual variables cannot maintain the perihelion passage accurately. In previous articles, we have verified that this problem is treated more adequately through temporal reparametrizations related to the mean anomaly through the partition function. The biparametric family of anomalies, with an appropriate partition function, allows a systematic study of these transformations. In the present work, we consider the elliptical orbit as a meridian section of the ellipsoid of revolution, and the partition function depends on two variables raised to specific parameters. One of the variables is the mean radius of the ellipsoid at the secondary, and the other is the distance to the primary. One parameter regulates the concentration of points in the apoapsis region, and the other produces a symmetrical displacement between the polar and equatorial regions. The three most commonly used geodetic latitude variables are also studied; one of them does not belong to the biparametric family but does belong to the family introduced here, which implies an extension of the biparametric method. The results obtained with the method presented here allow a causal interpretation of the operation of numerous reparametrizations used in the study of orbital motion.
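For readers unfamiliar with why high eccentricities are numerically delicate, the standard route from the mean anomaly M to the position on the orbit is Newton iteration on Kepler's equation E - e·sin(E) = M, whose convergence degrades as e approaches 1. The sketch below is the generic textbook solver, not the authors' reparametrized method; names and the starting-guess rule are illustrative assumptions.

```python
import math

def eccentric_anomaly(mean_anomaly, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation E - e*sin(E) = M for E by Newton iteration.
    For e close to 1 the derivative 1 - e*cos(E) nearly vanishes at
    perihelion, which is exactly the regime where time reparametrizations
    (alternative anomalies) pay off."""
    # A guess of pi is robust for highly eccentric orbits; M works otherwise.
    E = math.pi if e > 0.8 else mean_anomaly
    for _ in range(max_iter):
        f = E - e * math.sin(E) - mean_anomaly  # residual of Kepler's equation
        E -= f / (1.0 - e * math.cos(E))        # Newton step
        if abs(f) < tol:
            break
    return E
```

Reparametrized anomalies replace the uniform stepping in M with a stepping that concentrates points near perihelion, avoiding the stiffness this plain iteration exhibits there.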
Algorithms doi: 10.3390/a17020065
Authors: Frank Werner
This is the third edition of a Special Issue of Algorithms; it is of a rather different nature compared to other Special Issues in the journal, which are usually dedicated to a particular subject in the area of algorithms [...]
Algorithms doi: 10.3390/a17020064
Authors: Shweta More Moad Idrissi Haitham Mahmoud A. Taufiq Asyhari
The rapid proliferation of new technologies such as the Internet of Things (IoT), cloud computing, virtualization, and smart devices has led to a massive annual production of over 400 zettabytes of network traffic data. As a result, it is crucial for companies to implement robust cybersecurity measures to safeguard sensitive data from intrusion, which can lead to significant financial losses. Existing intrusion detection systems (IDS) require further enhancements to reduce false positives as well as enhance overall accuracy. To minimize security risks, data analytics and machine learning can be utilized to create data-driven recommendations and decisions based on the input data. This study focuses on developing machine learning models that can identify cyber-attacks and enhance IDS performance. This paper employed logistic regression, support vector machine, decision tree, and random forest algorithms on the UNSW-NB15 network traffic dataset, utilizing in-depth exploratory data analysis and feature selection using correlation analysis and random sampling to compare model accuracy and effectiveness. The performance and confusion matrix results indicate that the random forest model is the best option for identifying cyber-attacks, with a remarkable F1 score of 97.80%, accuracy of 98.63%, and a low false alarm rate of 1.36%, and thus should be considered to improve IDS security.
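The scores reported above all derive from a binary confusion matrix. The helper below shows those derivations in Python; it is a generic illustration (the function name and example counts are assumptions, not the study's data).

```python
def ids_metrics(tp, fp, tn, fn):
    """Accuracy, F1 score, and false alarm rate (false positive rate)
    from a binary intrusion-detection confusion matrix, where the
    positive class is 'attack'."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # False alarm rate: benign traffic wrongly flagged as an attack.
    false_alarm_rate = fp / (fp + tn) if fp + tn else 0.0
    return {"accuracy": accuracy, "f1": f1,
            "false_alarm_rate": false_alarm_rate}
```

Note that accuracy and false alarm rate are computed over different denominators, which is why a high-accuracy IDS can still be unusable if its false alarm rate is too high on mostly-benign traffic.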
Algorithms doi: 10.3390/a17020063
Authors: Claudia Cavallaro Carolina Crespi Vincenzo Cutello Mario Pavone Francesco Zito
This paper introduces an agent-based model grounded in the ACO algorithm to investigate the impact of partitioning ant colonies on algorithmic performance. The exploration focuses on understanding the roles of group size and number within a multi-objective optimization context. The model consists of a colony of memory-enhanced ants (ME-ANTS) which, starting from a given position, must collaboratively discover the optimal path to the exit point within a grid network. The colony can be divided into groups of different sizes and its objectives are maximizing the number of ants that exit the grid while minimizing path costs. Three distinct analyses were conducted: an overall analysis assessing colony performance across different-sized groups, a group analysis examining the performance of each partitioned group, and a pheromone distribution analysis discerning correlations between temporal pheromone distribution and ant navigation. From the results, a dynamic correlation emerged between the degree of colony partitioning and solution quality within the ACO algorithm framework.
Algorithms doi: 10.3390/a17020062
Authors: Baskhad Idrisov Tim Schlippe
Our paper compares the correctness, efficiency, and maintainability of human-generated and AI-generated program code. For that, we analyzed the computational resources of AI- and human-generated program code using metrics such as time and space complexity as well as runtime and memory usage. Additionally, we evaluated maintainability using metrics such as lines of code, cyclomatic complexity, Halstead complexity, and the maintainability index. For our experiments, we had generative AIs produce program code in Java, Python, and C++ that solves problems defined on the competition coding website leetcode.com. We selected six LeetCode problems of varying difficulty, resulting in 18 program codes generated by each generative AI. GitHub Copilot, powered by Codex (GPT-3.0), performed best, solving 9 of the 18 problems (50.0%), whereas CodeWhisperer did not solve a single problem. BingAI Chat (GPT-4.0) generated correct program code for seven problems (38.9%), ChatGPT (GPT-3.5) and Code Llama (Llama 2) for four problems (22.2%), and StarCoder and InstructCodeT5+ for only one problem (5.6%). Surprisingly, although ChatGPT generated only four correct program codes, it was the only generative AI capable of providing a correct solution to a coding problem of difficulty level hard. In summary, 26 AI-generated codes (20.6%) solve the respective problem. For 11 incorrect AI-generated codes (8.7%), only minimal modifications to the program code are necessary to solve the problem, which results in time savings of between 8.9% and 71.3% compared to writing the program code from scratch.
Algorithms doi: 10.3390/a17020061
Authors: Dimitris Fotakis Panagiotis Patsilinakos Eleni Psaroudaki Michalis Xefteris
In this work, we consider the problem of shape-based time-series clustering with the widely used Dynamic Time Warping (DTW) distance. We present a novel two-stage framework based on Sparse Gaussian Modeling. In the first stage, we apply Sparse Gaussian Process Regression and obtain a sparse representation of each time series in the dataset with a logarithmic (in the original length T) number of inducing data points. In the second stage, we apply k-means with DTW Barycentric Averaging (DBA) to the sparsified dataset using a generalization of DTW, which accounts for the fact that each inducing point serves as a representative of many original data points. The asymptotic running time of our Sparse Time-Series Clustering framework is Ω(T²/log²T) times faster than the running time of applying k-means to the original dataset, because sparsification reduces the running time of DTW from Θ(T²) to Θ(log²T). Moreover, sparsification tends to smoothen outliers and particularly noisy parts of the original time series. We conduct an extensive experimental evaluation using datasets from the UCR Time-Series Classification Archive, showing that the quality of clustering computed by our Sparse Time-Series Clustering framework is comparable to the clustering computed by the standard k-means algorithm.
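The Θ(T²) cost that sparsification amortizes comes from the standard DTW dynamic program over all pairs of positions. A minimal Python sketch of that baseline for one-dimensional series (illustrative, not the paper's generalized DTW):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two numeric sequences,
    via the standard O(len(a) * len(b)) dynamic program with
    absolute-difference local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # expand a
                                 d[i][j - 1],      # expand b
                                 d[i - 1][j - 1])  # advance both
    return d[n][m]
```

Because the table has one cell per pair of positions, replacing length-T series with O(log T) inducing points shrinks this quadratic table to O(log² T) cells, which is the source of the speedup claimed above.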
Algorithms doi: 10.3390/a17020060
Authors: Ziyi Wang Xinran Li Luoyang Sun Haifeng Zhang Hualin Liu Jun Wang
Efficient yet sufficient exploration remains a critical challenge in reinforcement learning (RL), especially for Markov Decision Processes (MDPs) with vast action spaces. Previous approaches have commonly involved projecting the original action space into a latent space or employing environmental action masks to reduce the action possibilities. Nevertheless, these methods often lack interpretability or rely on expert knowledge. In this study, we introduce a novel method for automatically reducing the action space in environments with discrete action spaces while preserving interpretability. The proposed approach learns state-specific masks with a dual purpose: (1) eliminating actions with minimal influence on the MDP and (2) aggregating actions with identical behavioral consequences within the MDP. Specifically, we introduce a novel concept called Bisimulation Metrics on Actions by States (BMAS) to quantify the behavioral consequences of actions within the MDP and design a dedicated mask model to ensure their binary nature. Crucially, we present a practical learning procedure for training the mask model, leveraging transition data collected by any RL policy. Our method is designed to be plug-and-play and adaptable to all RL policies, and to validate its effectiveness, an integration into two prominent RL algorithms, DQN and PPO, is performed. Experimental results obtained from Maze, Atari, and μRTS2 reveal a substantial acceleration in the RL learning process and noteworthy performance improvements facilitated by the introduced approach.
Algorithms doi: 10.3390/a17020059
Authors: Mahammad Khalid Shaik Vadla Mahima Agumbe Suresh Vimal K. Viswanathan
Understanding customer emotions and preferences is paramount for success in the dynamic product design landscape. This paper presents a study to develop a prediction pipeline to detect the aspect and perform sentiment analysis on review data. The pre-trained Bidirectional Encoder Representations from Transformers (BERT) model and the Text-to-Text Transfer Transformer (T5) are deployed to predict customer emotions. These models were trained on synthetically generated and manually labeled datasets to detect specific features from review data; sentiment analysis was then performed to classify the data into positive, negative, and neutral reviews with respect to their aspects. This research focused on eco-friendly products to analyze customer emotions in this category. The BERT and T5 models were fine-tuned for the aspect detection task and achieved 92% and 91% accuracy, respectively. The best-performing model was selected by calculating the evaluation metrics precision, recall, F1-score, and computational efficiency. In these calculations, the BERT model outperforms T5 and is chosen as the classifier for the prediction pipeline to predict the aspect. By detecting aspects and sentiments of input data using the pre-trained BERT model, our study demonstrates its capability to comprehend and analyze customer reviews effectively. These findings can empower product designers and research developers with data-driven insights to shape exceptional products that resonate with customer expectations.
Algorithms doi: 10.3390/a17020058
Authors: Wei-Lung Mao Sung-Hua Chen Chun-Yu Kao
Gantry-type dual-axis platforms can be used to move heavy loads or perform precision CNC work. Such gantry systems drive a single axis with two linear motors, and under heavy loads, a high driving force is required. This can generate a pulling force between the drive shafts in the coupling mechanism. In these situations, when a synchronization error becomes too large, mechanisms can become deformed or damaged, leading to damaged equipment or, in industrial settings, additional power consumption. Effectively and accurately achieving synchronized movement of the platform is important to reduce energy consumption and optimize the system. In this study, a fractional-order fuzzy PID controller (FOFPID) using Oustaloup's recursive filter is used to control a synchronous X–Y gantry-type platform. The optimized controller parameters are obtained by the measurement of control errors in a simulated environment. Four optimization methods are tested and compared in order to optimize the control parameters: particle swarm optimization, invasive weed optimization, a gray wolf optimizer, and biogeography-based optimization. Each of the four algorithms is simulated on four contour shapes: a circle, bow, heart, and star. The simulations and control scheme of the experiments are implemented using MATLAB, and the reference paths are planned using non-uniform rational B-splines (NURBS). After running the simulations to determine the optimal control parameters, each set of acquired control parameters is also tested in the experiments and the results are recorded. Both the simulations and experiments show good results, and the tracking of the X–Y platform shows improved performance. Two performance indices are used to determine and validate the relative performance of the models and results.
Algorithms doi: 10.3390/a17020057
Authors: Nattakan Supajaidee Nawinda Chutsagulprom Sompop Moonchai
Ordinary kriging (OK) is a popular interpolation method for its ability to simultaneously minimize error variance and deliver statistically optimal and unbiased predictions. In this work, the adaptive moving window kriging with K-means clustering (AMWKK) technique is developed to improve the estimation obtained from the moving window kriging based on the K-means clustering proposed by Abedini et al. This technique specifically addresses the challenge of selecting appropriate windows for target points located near the borders, which can potentially be the source of errors. The AMWKK algorithm introduces a dynamic clustering approach within the moving window kriging, where each target site sequentially serves as a cluster centroid. The OK is then applied within the cluster encompassing the target point, ensuring localized and adaptive interpolation. The proposed method is compared with ordinary kriging and other moving window kriging variant approaches to estimate Thailand’s mean annual pressure and humidity in 2018. The results indicate superior estimation capabilities of the AMWKK approach in terms of distinct quantitative performance statistics. The advantage of using the AMWKK method for spatial interpolation can be attributed to the fact that it facilitates the automatic tuning of the window size at any estimation point. The algorithm is particularly effective when observations in the same cluster as target points are sparse.
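The ordinary kriging building block that AMWKK applies within each cluster can be sketched for a single target point. This is an illustrative reconstruction, not the paper's implementation; the `variogram` argument (here a hypothetical linear model) stands in for whatever fitted variogram a real study would use.

```python
import numpy as np

def ordinary_kriging(coords, values, target, variogram):
    """Ordinary kriging prediction at a single target point.

    Solves the OK system  [Gamma  1] [w ]   [gamma0]
                          [1^T    0] [mu] = [  1   ]
    where Gamma holds pairwise semivariances between observations and
    gamma0 the semivariances to the target.  The constraint sum(w) = 1
    is what makes the predictor unbiased, the optimality property the
    abstract refers to.  `variogram` maps distance -> semivariance.
    """
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(coords)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(dists)
    A[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = variogram(np.linalg.norm(coords - np.asarray(target, float), axis=1))
    w = np.linalg.solve(A, rhs)[:n]   # kriging weights, sum to 1
    return float(w @ values)
```

With a linear variogram `lambda h: h` and observations 1 and 3 at (0, 0) and (1, 0), the midpoint prediction is their average, and predicting at an observation site reproduces the observed value exactly (kriging is an exact interpolator).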
Algorithms doi: 10.3390/a17020056
Authors: João Paulo Oliveira Marum H. Conrad Cunningham J. Adam Jones Yi Liu
Two recent studies addressed the problem of reducing transitional turbulence in applications developed in C# on .NET. The first study investigated this problem in desktop and Web GUI applications and the second in virtual and augmented reality applications using the Unity3D game engine. The studies used similar solution approaches, but both were somewhat embedded in the details of their applications and implementation platforms. This paper examines these two families of applications and seeks to extract the common aspects of their problem definitions and solution approaches and codify the problem-solution pair as a new software design pattern. To do so, the paper adopts Wellhausen and Fiesser’s writer’s path methodology and follows it systematically to discover and write the pattern, recording the reasoning at each step. To evaluate the pattern, the paper applies it to an arbitrary C#/.NET GUI application. The resulting design pattern is named Dynamically Coalescing Reactive Chains (DCRC). It enables the approach to transitional turbulence reduction to be reused across a range of related applications, languages, and user interface technologies. The detailed example of the writer’s path can assist future pattern writers in navigating through the complications and subtleties of the pattern-writing process.
Algorithms doi: 10.3390/a17020055
Authors: Péter Hajnal
The binary number system is the basic number representation in computing: natural numbers are encoded as finite 0-1 sequences. However, extending this representation beyond the natural numbers poses problems and is technically not perfect. Several attempts have been made to handle integers (signed numbers); we mention only two: the balanced ternary number system and the number system with base −2. Our paper introduces new possibilities. We also shed light on the graph-theoretical background of the new number systems.
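As an illustrative sketch (of the base −2 system mentioned above, not the paper's new systems): negabinary already encodes every integer, positive or negative, as a finite 0-1 string with no sign bit.

```python
def to_negabinary(n: int) -> str:
    """Encode any integer in base -2 as a 0-1 digit string."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:          # Python's divmod may return a negative remainder
            r += 2         # normalize the digit to 0 or 1 ...
            n += 1         # ... and compensate in the quotient
        digits.append(str(r))
    return "".join(reversed(digits))

def from_negabinary(s: str) -> int:
    """Decode a base -2 digit string back to an integer."""
    return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))
```

For example, −3 is written "1101", since (−8) + 4 + 0 + 1 = −3.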
Algorithms doi: 10.3390/a17020054
Authors: Yunkang Du Zuoliang Xu
In this paper, we recover the European option volatility function σ(t) of the underlying asset and the fractional order α of the time fractional derivatives under the time fractional Vasicek model. To address the ill-posed nature of the inverse problem, we employ Tikhonov regularization. The Alternating Direction Multiplier Method (ADMM) is utilized for the simultaneous recovery of the parameter α and the volatility function σ(t). In addition, the existence of a solution to the minimization problem has been demonstrated. Finally, the effectiveness of the proposed approach is verified through numerical simulation and empirical analysis.
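The Tikhonov step can be sketched as a generic regularized least-squares solve. This is only an illustration of why regularization tames an ill-posed inverse problem; the paper's actual functional involves the time fractional Vasicek model and is minimized with ADMM, which is not shown here.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam * ||x||^2 by solving the regularized
    normal equations (A^T A + lam I) x = A^T b.  The lam * ||x||^2 term
    stabilizes an ill-posed problem: it trades a little data misfit for
    a solution that no longer blows up under noise."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

For an ill-conditioned `A`, increasing `lam` monotonically shrinks the solution norm at the cost of a slightly larger residual.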
Algorithms doi: 10.3390/a17020053
Authors: Abdullahi T. Sulaiman Habeeb Bello-Salau Adeiza J. Onumanyi Muhammed B. Mu’azu Emmanuel A. Adedokun Ahmed T. Salawudeen Abdulfatai D. Adekale
The particle swarm optimization (PSO) algorithm is widely used for optimization purposes across various domains, such as in precision agriculture, vehicular ad hoc networks, path planning, and for the assessment of mathematical test functions towards benchmarking different optimization algorithms. However, because of the inherent limitations in the velocity update mechanism of the algorithm, PSO often converges to suboptimal solutions. Thus, this paper aims to enhance the convergence rate and accuracy of the PSO algorithm by introducing a modified variant, based on a hybrid of PSO and the smell agent optimization (SAO), termed the PSO-SAO algorithm. Our specific objective involves the incorporation of the trailing mode of the SAO algorithm into the PSO framework, with the goal of effectively regulating the velocity updates of the original PSO, thus improving its overall performance. By using the trailing mode, agents are continuously introduced to track molecules with higher concentrations, thus guiding the PSO’s particles towards optimal fitness locations. We evaluated the performance of the PSO-SAO, PSO, and SAO algorithms using a set of 37 benchmark functions categorized into unimodal and non-separable (UN), multimodal and non-separable (MS), and unimodal and separable (US) classes. PSO-SAO achieved better convergence towards global solutions, outperforming the original PSO on 76% of the assessed functions. Specifically, it converged faster and attained a maximum fitness value of −2.02180678324 on the Adjiman test function at a hopping frequency of 9. Consequently, these results underscore the potential of PSO-SAO for solving engineering problems effectively, such as in vehicle routing, network design, and energy system optimization.
These findings serve as an initial stride towards the formulation of a robust hyperparameter tuning strategy applicable to supervised machine learning and deep learning models, particularly in the domains of natural language processing and path-loss modeling.
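For readers unfamiliar with the velocity update that PSO-SAO regulates, here is a minimal plain-PSO sketch. This is the standard inertia-weight PSO, not the hybrid itself, and the parameter values (`w`, `c1`, `c2`) are illustrative defaults.

```python
import random

def sphere(x):
    """Classic unimodal test function: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso(f, dim, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                     # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=pbest.__getitem__)
    G, gbest = P[g][:], pbest[g]              # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # the classical velocity update the abstract refers to:
                # inertia + cognitive pull (own best) + social pull (swarm best)
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

The hybrid's premise is that this update, left alone, can stall near a suboptimal attractor; PSO-SAO injects SAO's trailing mode to steer it.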
Algorithms doi: 10.3390/a17020052
Authors: Pin-Hung Juan Ja-Ling Wu
In this study, we present a federated learning approach that combines a multi-branch network and the Oort client selection algorithm to improve the performance of federated learning systems. This method successfully addresses the significant issue of non-iid data, a challenge not adequately tackled by the commonly used MFedAvg method. Additionally, one of the key innovations of this research is the introduction of uniformity, a metric that quantifies the disparity in training time amongst participants in a federated learning setup. This novel concept not only aids in identifying stragglers but also provides valuable insights into assessing the fairness and efficiency of the system. The experimental results underscore the merits of the integrated multi-branch network with the Oort client selection algorithm and highlight the crucial role of uniformity in designing and evaluating federated learning systems.
Algorithms doi: 10.3390/a17020051
Authors: Luis M. de Campos Juan M. Fernández-Luna Juan F. Huete Francisco J. Ribadas-Pena Néstor Bolaños
In the context of academic expert finding, this paper investigates and compares the performance of information retrieval (IR) and machine learning (ML) methods, including deep learning, to approach the problem of identifying academic figures who are experts in different domains when a potential user requests their expertise. IR-based methods construct multifaceted textual profiles for each expert by clustering information from their scientific publications. Several methods fully tailored for this problem are presented in this paper. In contrast, ML-based methods treat expert finding as a classification task, training automatic text classifiers using publications authored by experts. By comparing these approaches, we contribute to a deeper understanding of academic-expert-finding techniques and their applicability in knowledge discovery. These methods are tested with two large datasets from the biomedical field: PMSC-UGR and CORD-19. The results show how IR techniques were, in general, more robust with both datasets and more suitable than the ML-based ones, with some exceptions showing good performance.
Algorithms doi: 10.3390/a17020050
Authors: Konstantin Volkov
The opportunities provided by new information technologies, object-oriented programming tools, and modern operating systems for solving boundary value problems in CFD described by partial differential equations are discussed. An approach to organizing vectorized calculations and implementing finite-difference methods for solving boundary value problems in CFD is considered. Vectorization in CFD problems, eliminating nested loops, is ensured through appropriate data organization and the use of vectorized operations with arrays. The implementation of numerical algorithms with vectorized mesh structures, including access to internal and boundary mesh cells, is discussed. Specific examples are reported and the implementation of the developed computational algorithms is examined. Although the capabilities of the developed algorithms are illustrated by solving benchmark CFD problems, they generalize in a relatively simple way to more complex problems described by three-dimensional equations.
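The kind of loop elimination described can be sketched with a Jacobi step for the 2-D Laplace equation, where shifted array slices replace the nested i, j loops. This is a generic illustration, not the paper's code.

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi iteration for the 2-D Laplace equation: each interior
    cell becomes the average of its four neighbours.  The shifted slices
    play the roles of u[i-1,j], u[i+1,j], u[i,j-1], u[i,j+1], so the
    whole sweep is a handful of vectorized array operations and the
    boundary cells are left untouched."""
    un = u.copy()
    un[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                             + u[1:-1, :-2] + u[1:-1, 2:])
    return un
```

The slice version computes exactly what the double loop would, but dispatches the arithmetic as a few whole-array operations.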
Algorithms doi: 10.3390/a17010049
Authors: Mojtaba Nayyeri Modjtaba Rouhani Hadi Sadoghi Yazdi Marko M. Mäkelä Alaleh Maskooki Yury Nikulin
One of the main disadvantages of the traditional mean square error (MSE)-based constructive networks is their poor performance in the presence of non-Gaussian noises. In this paper, we propose a new incremental constructive network based on the correntropy objective function (correntropy-based constructive neural network (C2N2)), which is robust to non-Gaussian noises. In the proposed learning method, input and output side optimizations are separated. It is proved theoretically that the new hidden node, which is obtained from the input side optimization problem, is not orthogonal to the residual error function. Based on this fact, it is proved that the correntropy of the residual error converges to its optimum value. During the training process, the weighted linear least square problem is iteratively applied to update the parameters of the newly added node. Experiments on both synthetic and benchmark datasets demonstrate the robustness of the proposed method in comparison with the MSE-based constructive network and the radial basis function (RBF) network. Moreover, the proposed method outperforms other robust learning methods including the cascade correntropy network (CCOEN), Multi-Layer Perceptron based on the Minimum Error Entropy objective function (MLPMEE), Multi-Layer Perceptron based on the correntropy objective function (MLPMCC) and the Robust Least Square Support Vector Machine (RLS-SVM).
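The correntropy objective itself is compact enough to sketch with a Gaussian kernel. This is a generic illustration of why it is robust; the paper's training machinery (input/output side optimization, weighted least squares) is not shown, and the kernel width `sigma` is an assumed parameter.

```python
import numpy as np

def correntropy(errors, sigma=1.0):
    """Empirical correntropy of a residual vector with a Gaussian kernel.
    Unlike MSE, a gross (non-Gaussian) outlier error saturates the kernel
    near zero and so contributes almost nothing, which is the robustness
    property the abstract relies on.  Training maximizes this quantity
    instead of minimizing MSE."""
    e = np.asarray(errors, dtype=float)
    return float(np.mean(np.exp(-e ** 2 / (2.0 * sigma ** 2))))
```

With residuals (0, 0, 100), MSE is dominated by the outlier, while correntropy stays close to its outlier-free value of 2/3.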
Algorithms doi: 10.3390/a17010048
Authors: Tushar Ganguli Edwin K. P. Chong
We present a novel technique for pruning called activation-based pruning to effectively prune fully connected feedforward neural networks for multi-object classification. Our technique is based on the number of times each neuron is activated during model training. We compare the performance of activation-based pruning with a popular pruning method: magnitude-based pruning. Further analysis demonstrates that activation-based pruning can be considered a dimensionality reduction technique, as it leads to a sparse low-rank matrix approximation for each hidden layer of the neural network. We also demonstrate that the rank-reduced neural network generated using activation-based pruning has better accuracy than a rank-reduced network using principal component analysis. We provide empirical results to show that, after each successive pruning, the amount of reduction in the magnitude of singular values of each matrix representing the hidden layers of the network is equivalent to introducing the sum of singular values of the hidden layers as a regularization parameter to the objective function.
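The activation statistic at the heart of the method can be sketched for a single ReLU layer. This is an illustrative reconstruction from the abstract (count how often each neuron fires, then zero out the least active ones), not the authors' code.

```python
import numpy as np

def activation_counts(W, b, X):
    """Count how often each hidden neuron fires (ReLU output > 0) over a
    batch X -- the statistic activation-based pruning ranks neurons by."""
    H = np.maximum(X @ W + b, 0.0)
    return (H > 0).sum(axis=0)

def prune_least_active(W, b, X, k):
    """Zero out the k hidden neurons that fired least often, producing
    the sparse (column-zeroed) weight matrix the abstract describes."""
    counts = activation_counts(W, b, X)
    idx = np.argsort(counts)[:k]          # the k least-activated units
    Wp, bp = W.copy(), b.copy()
    Wp[:, idx] = 0.0
    bp[idx] = 0.0
    return Wp, bp
```

Zeroing whole columns lowers the rank of the layer's weight matrix, which is the link to low-rank approximation drawn in the abstract.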
Algorithms doi: 10.3390/a17010047
Authors: Wenny Hojas-Mazo Francisco Maciá-Pérez José Vicente Berná Martínez Mailyn Moreno-Espino Iren Lorenzo Fonseca Juan Pavón
Analysing message streams in a dynamic environment is challenging. Various methods and metrics are used to evaluate message classification solutions, but often fail to realistically simulate the actual environment. As a result, the evaluation can produce overly optimistic results, rendering current solution evaluations inadequate for real-world environments. This paper proposes a framework based on the simulation of real-world message streams to evaluate classification solutions. The framework consists of four modules: message stream simulation, processing, classification and evaluation. The simulation module uses queueing theory techniques to replicate a real-world message stream. The processing module refines the input messages for optimal classification. The classification module categorises the generated message stream using existing solutions. The evaluation module evaluates the performance of the classification solutions by measuring accuracy, precision and recall. The framework can model different behaviours from different sources, such as different spammers with different attack strategies, press media or social network sources. Each profile generates a message stream that is combined into the main stream for greater realism. A spam detection case study is developed that demonstrates the implementation of the proposed framework and identifies latency and message body obfuscation as critical classification quality parameters.
Algorithms doi: 10.3390/a17010046
Authors: Mattia Neroni Massimo Bertolini Angel A. Juan
In automated storage and retrieval systems (AS/RSs), the utilization of intelligent algorithms can reduce the makespan required to complete a series of input/output operations. This paper introduces a simulation optimization algorithm designed to minimize the makespan in a realistic AS/RS commonly found in the steel sector. This system includes weight and quality constraints for the selected items. Our hybrid approach combines discrete event simulation with biased-randomized heuristics. This combination enables us to efficiently address the complex time dependencies inherent in such dynamic scenarios. Simultaneously, it allows for intelligent decision making, resulting in feasible and high-quality solutions within seconds. A series of computational experiments illustrates the potential of our approach, which surpasses an alternative method based on traditional simulated annealing.
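Biased randomization is typically implemented as a skewed pick from a greedily sorted candidate list; a common quasi-geometric version looks like the sketch below. This is our illustrative sketch of the general technique, with an assumed skew parameter `beta`, not the paper's heuristic.

```python
import math
import random

def biased_pick(sorted_candidates, beta=0.3, rng=random):
    """Pick from a candidate list sorted best-first, with a quasi-geometric
    bias: index 0 (the pure greedy choice) is most likely, but every
    worse-ranked candidate keeps a nonzero probability.  Repeating the
    construction therefore explores many near-greedy solutions instead
    of deterministically rebuilding a single one."""
    i = int(math.log(rng.random()) / math.log(1.0 - beta))
    return sorted_candidates[i % len(sorted_candidates)]
```

Embedding such picks inside a constructive heuristic, and replaying the construction inside a discrete event simulation, is the general shape of the hybrid the abstract describes.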
Algorithms doi: 10.3390/a17010045
Authors: Yuzhu Zhang Hao Xu
This study investigates the problem of decentralized dynamic resource allocation optimization for ad-hoc network communication with the support of reconfigurable intelligent surfaces (RIS), leveraging a reinforcement learning framework. In the present context of cellular networks, device-to-device (D2D) communication stands out as a promising technique to enhance the spectrum efficiency. Simultaneously, RIS have gained considerable attention due to their ability to enhance the quality of dynamic wireless networks by maximizing the spectrum efficiency without increasing the power consumption. However, prevalent centralized D2D transmission schemes require global information, leading to a significant signaling overhead. Conversely, existing distributed schemes, while avoiding the need for global information, often demand frequent information exchange among D2D users, falling short of achieving global optimization. This paper introduces a framework comprising an outer loop and inner loop. In the outer loop, decentralized dynamic resource allocation optimization has been developed for self-organizing network communication aided by RIS. This is accomplished through the application of a multi-player multi-armed bandit approach, completing strategies for RIS and resource block selection. Notably, these strategies operate without requiring signal interaction during execution. Meanwhile, in the inner loop, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm has been adopted for cooperative learning with neural networks (NNs) to obtain optimal transmit power control and RIS phase shift control for multiple users, with a specified RIS and resource block selection policy from the outer loop. Through the utilization of optimization theory, distributed optimal resource allocation can be attained as the outer and inner reinforcement learning algorithms converge over time. 
Finally, a series of numerical simulations are presented to validate and illustrate the effectiveness of the proposed scheme.
Algorithms doi: 10.3390/a17010044
Authors: Ioannis G. Tsoulos V. N. Stavrou
In the current research, we consider the solution of dispersion relations addressed to solid state physics by using artificial neural networks (ANNs). More specifically, in a double semiconductor heterostructure, we theoretically investigate the dispersion relations of the interface polariton (IP) modes and describe the reststrahlen frequency bands between the frequencies of the transverse and longitudinal optical phonons. The numerical results obtained by the aforementioned methods are in agreement with the results obtained by the recently published literature. Two methods were used to train the neural network: a hybrid genetic algorithm and a modified version of the well-known particle swarm optimization method.
Algorithms doi: 10.3390/a17010043
Authors: Károly Héberger
Background: The development and application of machine learning (ML) methods have become so fast that almost nobody can follow their developments in every detail. It is no wonder that numerous errors and inconsistencies in their usage have also spread with a similar speed independently from the tasks: regression and classification. This work summarizes frequent errors committed by certain authors with the aim of helping scientists to avoid them. Methods: The principle of parsimony governs the train of thought. Fair method comparison can be completed with multicriteria decision-making techniques, preferably by the sum of ranking differences (SRD). Its coupling with analysis of variance (ANOVA) decomposes the effects of several factors. Earlier findings are summarized in a review-like manner: the abuse of the correlation coefficient and proper practices for model discrimination are also outlined. Results: Using an illustrative example, the correct practice and the methodology are summarized as guidelines for model discrimination, and for minimizing the prediction errors. The following factors are all prerequisites for successful modeling: proper data preprocessing, statistical tests, suitable performance parameters, appropriate degrees of freedom, fair comparison of models, and outlier detection, just to name a few. A checklist is provided in a tutorial manner on how to present ML modeling properly. The advocated practices are reviewed shortly in the discussion. Conclusions: Many of the errors can easily be filtered out with careful reviewing. It is every author’s responsibility to adhere to the rules of modeling and validation. A representative sampling of recent literature outlines correct practices and emphasizes that no error-free publication exists.
Algorithms doi: 10.3390/a17010042
Authors: Deyuan Zhong Liangda Fang Quanlong Guan
Encoding a dictionary into another representation means that all the words in the dictionary can be stored in a more efficient way. In this way, we can complete common operations on dictionaries, such as (1) searching for a word in the dictionary, (2) adding some words to the dictionary, and (3) removing some words from the dictionary, in a shorter time. Binary decision diagrams (BDDs) are one of the most famous representations for such encoding and are widely popular due to their excellent properties. Recently, encoding dictionaries into BDDs and some of their variants has been proposed and shown to be feasible. Hence, we further investigate the topic of encoding dictionaries into decision diagrams. Tagged sentential decision diagrams (TSDDs), as one of these variants based on structured decomposition, exploit both the standard and zero-suppressed trimming rules. In this paper, we first introduce how to use Boolean functions to represent dictionary files and then design an algorithm that encodes dictionaries into TSDDs with the help of tries, together with a decoding algorithm that restores TSDDs to dictionaries. The use of tries greatly accelerates the encoding process. Considering that TSDDs integrate two trimming rules, we believe that using TSDDs to represent dictionaries is more effective, and our experiments confirm this.
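The role of the trie here is to expose shared prefixes before the diagram is built, so each prefix is processed once. A minimal trie (illustrative, not the paper's implementation) is:

```python
class Trie:
    """Minimal trie: words sharing a prefix share a path, so the common
    prefix is stored and traversed only once -- the property that speeds
    up the dictionary-to-decision-diagram construction."""
    def __init__(self):
        self.children = {}   # char -> child Trie node
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def search(self, word):
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word
```

For example, "car", "cart" and "cat" share a single "c"→"a" path and diverge only afterwards.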
Algorithms doi: 10.3390/a17010040
Authors: Stanislav Kirpichenko Lev Utkin Andrei Konstantinov Vladimir Muliukha
A method for estimating the conditional average treatment effect under the condition of censored time-to-event data, called BENK (the Beran Estimator with Neural Kernels), is proposed. The main idea behind the method is to apply the Beran estimator for estimating the survival functions of controls and treatments. Instead of typical kernel functions in the Beran estimator, it is proposed to implement kernels in the form of neural networks of a specific form, called neural kernels. The conditional average treatment effect is estimated by using the survival functions as outcomes of the control and treatment neural networks, which consist of a set of neural kernels with shared parameters. The neural kernels are more flexible and can accurately model a complex location structure of feature vectors. BENK does not require a large dataset for training due to its special way of training networks by means of pairs of examples from the control and treatment groups. The proposed method extends a set of models that estimate the conditional average treatment effect. Various numerical simulation experiments illustrate BENK and compare it with the well-known T-learner, S-learner and X-learner for several types of control and treatment outcome functions based on the Cox models, the random survival forest and the Beran estimator with Gaussian kernels. The code of the proposed algorithms implementing BENK is publicly available.
Algorithms doi: 10.3390/a17010041
Authors: Ken Jom Ho Ender Özcan Peer-Olaf Siebers
Solving multiple objective optimization problems can be computationally intensive even when experiments can be performed with the help of a simulation model. There are many methodologies that can achieve good tradeoffs between solution quality and resource use. One possibility is using an intermediate “model of a model” (metamodel) built on experimental responses from the underlying simulation model and an optimization heuristic that leverages the metamodel to explore the input space more efficiently. However, determining the best metamodel and optimizer pairing for a specific problem is not directly obvious from the problem itself, and not all domains have experimental answers to this conundrum. This paper introduces a discrete multiple objective simulation metamodeling and optimization methodology that allows algorithmic testing and evaluation of four Metamodel-Optimizer (MO) pairs for different problems. For running our experiments, we have implemented a test environment in R and tested four different MO pairs on four different problem scenarios in the Operations Research domain. The results of our experiments suggest that patterns of relative performance between the four MO pairs tested differ in terms of computational time costs for the four problems studied. With additional integration of problems, metamodels and optimizers, the opportunity to identify ex ante the best MO pair to employ for a general problem can lead to a more profitable use of metamodel optimization.
Algorithms doi: 10.3390/a17010039
Authors: Jiao Su Yi An Jialin Wu Kai Zhang
Pedestrian detection has always been a difficult problem and a hot spot in computer vision research. At the same time, pedestrian detection technology plays an important role in many applications, such as intelligent transportation and security monitoring. In complex scenes, pedestrian detection often faces some challenges, such as low detection accuracy and misdetection due to small target sizes and scale variations. To solve these problems, this paper proposes a pedestrian detection network PT-YOLO based on the YOLOv5. The pedestrian detection network PT-YOLO consists of the YOLOv5 network, the squeeze-and-excitation module (SE), the weighted bi-directional feature pyramid module (BiFPN), the coordinate convolution (coordconv) module and the wise intersection over union loss function (WIoU). The SE module in the backbone allows it to focus on the important features of pedestrians and improves accuracy. The weighted BiFPN module enhances the fusion of multi-scale pedestrian features and information transfer, which can improve fusion efficiency. The prediction head design uses the WIoU loss function to reduce the regression error. The coordconv module allows the network to better perceive the location information in the feature map. The experimental results show that the pedestrian detection network PT-YOLO is more accurate compared with other target detection methods in pedestrian detection and can effectively accomplish the task of pedestrian detection in complex scenes.
Algorithms doi: 10.3390/a17010038
Authors: Romain Amyot Noriyuki Kodera Holger Flechsig
Simulation of atomic force microscopy (AFM) computationally emulates experimental scanning of a biomolecular structure to produce topographic images that can be correlated with measured images. Its application to the enormous amount of available high-resolution structures, as well as to molecular dynamics modelling data, facilitates the quantitative interpretation of experimental observations by inferring atomistic information from resolution-limited measured topographies. The computation required to generate a simulated AFM image generally includes the calculation of contacts between the scanning tip and all atoms from the biomolecular structure. However, since only contacts with surface atoms are relevant, a filtering method can greatly improve the efficiency of simulated AFM computations. In this report, we address this issue and present an elegant solution based on graphics processing unit (GPU) computations that significantly accelerates the computation of simulated AFM images. This method not only allows for the visualization of biomolecular structures combined with ultra-fast synchronized calculation and graphical representation of corresponding simulated AFM images (live simulation AFM), but, as we demonstrate, it can also reduce the computational effort during the automatized fitting of atomistic structures into measured AFM topographies by orders of magnitude. Hence, the developed method will play an important role in post-experimental computational analysis involving simulated AFM, including expected applications in machine learning approaches. The implementation is realized in our BioAFMviewer software (ver. 3) package for simulated AFM of biomolecular structures and dynamics.
Algorithms doi: 10.3390/a17010037
Authors: Jiaming Li Ning Xie Tingting Zhao
In recent years, with the rapid advancements in Natural Language Processing (NLP) technologies, large models have become widespread. Traditional reinforcement learning algorithms have also started experimenting with language models to optimize training. However, they still fundamentally rely on the Markov Decision Process (MDP) for reinforcement learning, and do not fully exploit the advantages of language models for dealing with long-sequence problems. The Decision Transformer (DT) introduced in 2021 is the initial effort to completely transform the reinforcement learning problem into a challenge within the NLP domain. It attempts to use text generation techniques to create reinforcement learning trajectories, addressing the issue of finding optimal trajectories. However, DT places the reinforcement learning training trajectories directly into a basic language model, aiming to predict the entire trajectory, including state and reward information. This approach deviates from the reinforcement learning training objective of finding the optimal action. Furthermore, it generates redundant information in the output, impacting the final training effectiveness of the agent. This paper proposes a more reasonable network model structure, the Action-Translator Transformer (ATT), to predict only the next action of the agent. This makes the language model more interpretable for the reinforcement learning problem. We test our model in simulated gaming scenarios and compare it with current mainstream methods in the offline reinforcement learning field. Based on the presented experimental results, our model demonstrates superior performance. We hope that introducing this model will inspire new ideas and solutions for combining language models and reinforcement learning, providing fresh perspectives for offline reinforcement learning research.
Algorithms doi: 10.3390/a17010036
Authors: Zheng Li Xinkai Chen Jiaqing Fu Ning Xie Tingting Zhao
With the development of electronic game technology, the content of electronic games presents a larger number of units, richer unit attributes, more complex game mechanisms, and more diverse team strategies. Multi-agent deep reinforcement learning has excelled in this type of team-based electronic game, achieving results that surpass those of professional human players. Reinforcement learning algorithms based on Q-value estimation often suffer from Q-value overestimation, which may seriously affect the performance of AI in multi-agent scenarios. We propose a multi-agent mutual evaluation method and a multi-agent softmax method to reduce the estimation bias of Q values in multi-agent scenarios, and have tested them in both the particle multi-agent environment and the multi-agent tank environment we constructed. The multi-agent tank environment we have built has achieved a good balance between experimental verification efficiency and multi-agent game task simulation. It can be easily extended for different multi-agent cooperation or competition tasks. We hope that it can be promoted in the research of multi-agent deep reinforcement learning.
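The softmax alternative to the hard max backup is the standard way to blunt overestimation of this kind, and it can be sketched in a few lines. This is a generic single-agent illustration of the operator; the paper's multi-agent variants build on the same idea but are not shown, and the temperature `tau` is an assumed parameter.

```python
import numpy as np

def softmax_value(q, tau=1.0):
    """Softmax backup over a vector of Q-values: a smooth alternative to
    the hard max, which under noisy estimates systematically overestimates.
    As tau -> 0 this recovers max(q); as tau -> inf it tends to mean(q)."""
    z = q / tau
    z = z - z.max()                       # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()       # Boltzmann weights over actions
    return float(p @ q)
```

For any finite `tau` the result lies strictly between the mean and the max of the Q-values, which is exactly the bias reduction being sought.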
Algorithms doi: 10.3390/a17010035
Authors: Alessio Cellupica Marco Cirelli Giovanni Saggio Emanuele Gruppioni Pier Paolo Valentini
In recent years, the boost in the development of hardware and software resources for building virtual reality environments has fuelled the development of tools to support training in different disciplines. The purpose of this work is to discuss a complete methodology and the supporting algorithms to develop a virtual reality environment to train the use of a sensorized upper-limb prosthesis targeted at amputees. The environment is based on the definition of a digital twin of a virtual prosthesis, able to communicate with the sensors worn by the user and reproduce its dynamic behaviour and the interaction with virtual objects. Several training tasks are developed according to standards, including the Southampton Hand Assessment Procedure, and the usability of the entire system is evaluated, too.
]]>Algorithms doi: 10.3390/a17010034
Authors: Mohammad Shokouhifar Mohamad Hasanvand Elaheh Moharamkhani Frank Werner
Heart disease is a global health concern of paramount importance, causing a significant number of fatalities and disabilities. Precise and timely diagnosis of heart disease is pivotal in preventing adverse outcomes and improving patient well-being, thereby creating a growing demand for intelligent approaches to predict heart disease effectively. This paper introduces an ensemble heuristic–metaheuristic feature fusion learning (EHMFFL) algorithm for heart disease diagnosis using tabular data. Within the EHMFFL algorithm, a diverse ensemble learning model is crafted, featuring a different feature subset for each heterogeneous base learner, including support vector machine, K-nearest neighbors, logistic regression, random forest, naive Bayes, decision tree, and XGBoost techniques. The primary objective is to identify the most pertinent features for each base learner, leveraging a combined heuristic–metaheuristic approach that integrates the heuristic knowledge of the Pearson correlation coefficient with the metaheuristic-driven grey wolf optimizer. The second objective is to aggregate the decision outcomes of the various base learners through ensemble learning. The performance of the EHMFFL algorithm is rigorously assessed using the Cleveland and Statlog datasets, yielding remarkable results with accuracies of 91.8% and 88.9%, respectively, surpassing state-of-the-art techniques in heart disease diagnosis. These findings underscore the potential of the EHMFFL algorithm in enhancing diagnostic accuracy for heart disease and providing valuable support to clinicians in making more informed decisions regarding patient care.
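As a minimal sketch of the heuristic half of the feature-selection stage, the snippet below ranks features by absolute Pearson correlation with the target. The grey wolf optimizer and the ensemble aggregation are not reproduced here, and the function name is our own, not the paper's.

```python
import numpy as np

def pearson_feature_scores(X, y):
    """Score each column of X by |Pearson correlation| with the target y."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Xc = X - X.mean(axis=0)          # center each feature
    yc = y - y.mean()                # center the target
    num = Xc.T @ yc
    den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return np.abs(num / den)

# Feature 0 tracks the target, feature 1 is pure noise:
rng = np.random.default_rng(0)
y = rng.normal(size=200)
X = np.column_stack([y + 0.1 * rng.normal(size=200), rng.normal(size=200)])
scores = pearson_feature_scores(X, y)
assert scores[0] > scores[1]
```

In a combined scheme of this kind, such scores typically seed or bias the metaheuristic search rather than select features outright.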
]]>Algorithms doi: 10.3390/a17010033
Authors: Azal Ahmad Khan Salman Hussain Rohitash Chandra
Quantum computing has opened up various opportunities for the enhancement of computational power in the coming decades. We can design algorithms inspired by the principles of quantum computing without implementing them on quantum computing infrastructure. In this paper, we present the quantum predator–prey algorithm (QPPA), which fuses the fundamentals of quantum computing and swarm optimization based on a predator–prey algorithm. Our results demonstrate the efficacy of QPPA in solving complex real-parameter optimization problems with better accuracy when compared to related algorithms in the literature. QPPA achieves highly rapid convergence for relatively low- and high-dimensional optimization problems and outperforms selected traditional and advanced algorithms. This motivates the application of QPPA to real-world problems.
]]>Algorithms doi: 10.3390/a17010032
Authors: Mahbuba Begum Sumaita Binte Shorif Mohammad Shorif Uddin Jannatul Ferdush Tony Jan Alistair Barros Md Whaiduzzaman
Digital multimedia elements such as text, image, audio, and video can be easily manipulated because of the rapid rise of multimedia technology, making data protection a prime concern. Hence, copyright protection, content authentication, and integrity verification are today’s new challenging issues. To address these issues, digital image watermarking techniques have been proposed by several researchers. Image watermarking can be conducted through several transformations, such as discrete wavelet transform (DWT), singular value decomposition (SVD), orthogonal matrix Q and upper triangular matrix R (QR) decomposition, and non-subsampled contourlet transform (NSCT). However, a single transformation cannot simultaneously satisfy all the design requirements of image watermarking, which motivates the design of a hybrid invisible image watermarking technique in this work. The proposed work combines four-level (4L) DWT and two-level (2L) SVD. The watermark image is first encrypted with the Arnold map, and 2L SVD is applied to it to extract the S components of the watermark image. A 4L DWT is applied to the host image to extract the LL sub-band, and then 2L SVD is applied to extract the S components that are embedded into the host image to generate the watermarked image. The dynamic-sized watermark maintains a balanced visual impact, and non-blind watermarking preserves the quality and integrity of the host image. We have evaluated the performance after applying several intentional and unintentional attacks and found higher imperceptibility, improved robustness, and enhanced security compared with existing state-of-the-art methods.
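The following is a hedged sketch of the SVD-embedding step under simplifying assumptions that are ours, not the paper's: single-level 2x2 block averaging stands in for the LL sub-band of the 4L DWT, one SVD level stands in for 2L SVD, Arnold-map encryption is omitted, and the additive strength `alpha` is an assumed parameter.

```python
import numpy as np

def haar2d_ll(a):
    """One level of 2x2 block averaging: a stand-in for the LL sub-band
    of the paper's 4-level DWT (not a full wavelet decomposition)."""
    a = np.asarray(a, dtype=float)
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

def embed_singular_values(host, watermark, alpha=0.05):
    """Additively embed the watermark's singular values into the singular
    values of the host's LL sub-band; `alpha` is an assumed strength."""
    ll = haar2d_ll(host)
    U, s_host, Vt = np.linalg.svd(ll)
    s_wm = np.linalg.svd(np.asarray(watermark, dtype=float), compute_uv=False)
    n = min(len(s_host), len(s_wm))
    s_new = s_host.copy()
    s_new[:n] += alpha * s_wm[:n]    # embed the S components of the watermark
    return U @ np.diag(s_new) @ Vt   # watermarked LL sub-band
```

Extraction in such schemes inverts the same steps, which is why the method is non-blind: the original singular values must be available.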
]]>Algorithms doi: 10.3390/a17010031
Authors: Sardar Anisul Haque Mohammad Tanvir Parvez Shahadat Hossain
Matrix–matrix multiplication is of singular importance in linear algebra operations with a multitude of applications in scientific and engineering computing. Data structures for storing matrix elements are designed to minimize overhead information as well as to optimize the operation count. In this study, we utilize the notion of the compact diagonal storage method (CDM), which builds upon the previously developed diagonal storage—an orientation-independent uniform scheme to store the nonzero elements of a range of matrices. This study exploits both these storage schemes and presents efficient GPU-accelerated parallel implementations of matrix multiplication when the input matrices are banded and/or structured sparse. We exploit the data layouts in the diagonal storage schemes to expose a substantial amount of fine-grained parallelism and effectively utilize the GPU shared memory to improve the locality of data access for numerical calculations. Results from an extensive set of numerical experiments with the aforementioned types of matrices demonstrate orders-of-magnitude speedups compared with the sequential performance.
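As a simplified, CPU-side illustration of the diagonal-storage idea (not the paper's CDM layout or its GPU kernels), the sketch below stores a banded matrix by its diagonals and multiplies it by a vector while touching only the stored nonzeros.

```python
import numpy as np

def to_diagonal_storage(A, bandwidth):
    """Store a banded matrix by its diagonals, keyed by offset."""
    return {k: np.diagonal(A, offset=k).copy()
            for k in range(-bandwidth, bandwidth + 1)}

def banded_matvec(diags, x):
    """y = A @ x computed diagonal by diagonal: each stored diagonal
    contributes one vectorized multiply-add over contiguous data."""
    n = len(x)
    y = np.zeros(n)
    for k, d in diags.items():
        if k >= 0:                   # diagonal and superdiagonals: A[i, i+k]
            y[:n - k] += d * x[k:]
        else:                        # subdiagonals: A[i, i+k], k < 0
            y[-k:] += d * x[:n + k]
    return y
```

Each diagonal's multiply-add is independent and operates on contiguous arrays, which hints at the fine-grained parallelism and data locality a GPU implementation can exploit.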
]]>Algorithms doi: 10.3390/a17010030
Authors: Ivan S. Maksymov
Ambiguous optical illusions have been a paradigmatic object of fascination, research and inspiration in arts, psychology and video games. However, accurate computational models of the perception of ambiguous figures have been elusive. In this paper, we design and train a deep neural network model to simulate human perception of the Necker cube, an ambiguous drawing with two alternating possible interpretations. Defining the connection weights of the neural network using a quantum generator of truly random numbers, in agreement with the emerging concepts of quantum artificial intelligence and quantum cognition, we reveal that the actual perceptual state of the Necker cube is a qubit-like superposition of the two fundamental perceptual states predicted by classical theories. Our results find applications in video games and virtual reality systems employed for the training of astronauts and operators of unmanned aerial vehicles. They are also useful for researchers working in the fields of machine learning and vision, the psychology of perception, and quantum–mechanical models of the human mind and decision making.
]]>Algorithms doi: 10.3390/a17010029
Authors: Pornrawee Tatit Kiki Adhinugraha David Taniar
Using spatial data in mobile applications has grown significantly, thereby empowering users to explore locations, navigate unfamiliar areas, find transportation routes, employ geomarketing strategies, and model environmental factors. Spatial databases are pivotal in efficiently storing, retrieving, and manipulating spatial data to fulfill users’ needs. Two fundamental spatial query types, k-nearest neighbors (kNN) and range search, enable users to access specific points of interest (POIs) based on their location, measured by actual road distance. However, retrieving the nearest POIs using actual road distance can be computationally intensive because the shortest path must be computed. Using straight-line measurements could expedite the process but might compromise accuracy. Consequently, this study aims to evaluate the accuracy of the Euclidean distance method in POI retrieval by comparing it with the road network distance method. The primary focus is determining whether the trade-off between computational time and accuracy is justified, employing the Open Source Routing Machine (OSRM) for distance extraction. The assessment encompasses diverse scenarios and analyses the factors influencing the accuracy of the Euclidean distance method. The methodology employs a quantitative approach, categorizing query points based on density and analyzing them using kNN and range query methods. Accuracy of the Euclidean distance method is evaluated against the road network distance method. The results demonstrate peak accuracy for kNN queries at k=1, exceeding 85% across classes but declining as k increases. Range queries show varied accuracy based on POI density, with higher-density classes exhibiting earlier accuracy increases. Notably, datasets with fewer POIs exhibit unexpectedly higher accuracy, providing valuable insights into spatial query processing.
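A hedged sketch of the kind of accuracy measure such a comparison needs: given both distance columns for a set of POIs, compute the overlap between the Euclidean top-k and the road-network top-k. The paper's exact metric and its OSRM pipeline are not reproduced here, and the function name is our own.

```python
import numpy as np

def knn_agreement(euclid_d, road_d, k):
    """Fraction of the true (road-distance) k nearest POIs that the
    Euclidean-distance ranking also returns."""
    by_euclid = set(np.argsort(euclid_d)[:k])
    by_road = set(np.argsort(road_d)[:k])
    return len(by_euclid & by_road) / k

# Identical rankings give perfect agreement; a swapped pair breaks it:
assert knn_agreement(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]), 2) == 1.0
assert knn_agreement(np.array([1.0, 2.0]), np.array([2.0, 1.0]), 1) == 0.0
```

Agreement can only degrade where detour factors reorder POIs near the k-th rank, which is consistent with the reported decline as k grows.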
]]>Algorithms doi: 10.3390/a17010028
Authors: Yiming Fan Meng Wang
Software specifications are of great importance for improving the quality of software. To automatically mine specifications from software systems, several specification mining approaches based on finite-state automatons have been proposed. However, these approaches are inaccurate when dealing with large-scale systems. In order to improve the accuracy of mined specifications, we propose a specification mining approach based on the ordering points to identify the clustering structure (OPTICS) clustering algorithm and model checking. In the approach, a neural network model is first used to produce the feature values of states in the traces of the program. Then, according to the feature values, finite-state automatons are generated with the OPTICS clustering algorithm, and the finite-state automaton with the highest F-measure is selected. To improve its quality, we refine it based on model checking. The proposed approach was implemented in a tool named MCLSM, and experiments on 13 target classes were conducted to evaluate its effectiveness. The experimental results show that the average F-measure of the finite-state automatons generated by our method reaches 92.19%, which is higher than that of most related tools.
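As a toy illustration of the FSA-construction step, the sketch below collapses each trace state to a cluster label and collects the transitions between labels. The `label_of` function (here simple rounding) stands in for the OPTICS clustering of the neural-network feature values; the model-checking refinement is not reproduced.

```python
from collections import defaultdict

def build_automaton(traces, label_of):
    """Collapse each trace state to a cluster label and record the
    observed transitions between labels as an FSA edge relation."""
    transitions = defaultdict(set)
    for trace in traces:
        labels = [label_of(state) for state in trace]
        for a, b in zip(labels, labels[1:]):
            transitions[a].add(b)
    return dict(transitions)

# Toy traces of feature values, "clustered" by rounding:
traces = [[0.1, 0.9, 1.1], [0.2, 1.0, 2.1]]
fsa = build_automaton(traces, label_of=round)
assert fsa == {0: {1}, 1: {1, 2}}
```

In the full approach, different clusterings yield different automatons, and the one with the highest F-measure against held-out traces is kept.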
]]>Algorithms doi: 10.3390/a17010027
Authors: Virgilijus Sakalauskas Dalia Kriksciuniene
The growing popularity of e-commerce has prompted researchers to take a greater interest in gaining a deeper understanding of online shopping behavior, consumer interest patterns, and the effectiveness of advertising campaigns. This paper presents a fresh approach for targeting high-value e-shop clients by utilizing clickstream data. We propose a new algorithm to measure customer engagement and recognize high-value customers. Clickstream data are employed in the algorithm to compute a Customer Merit (CM) index that measures the customer’s level of engagement and anticipates their purchase intent. The CM index is evaluated dynamically by the algorithm, which examines the customer’s activity level, efficiency in selecting items, and time spent browsing. It combines tracking of customers’ browsing and purchasing behaviors with other relevant factors: time spent on the website and frequency of visits to the e-shop. This strategy proves highly beneficial for e-commerce enterprises, enabling them to pinpoint potential buyers and design targeted advertising campaigns exclusively for high-value customers. It not only boosts e-shop sales but also minimizes advertising expenses. The proposed method was tested on actual clickstream data from two e-commerce websites and showed that the personalized advertising campaign outperformed the non-personalized campaign in terms of click-through and conversion rates. In general, the findings suggest that personalized advertising can be a useful tool for boosting e-commerce sales and reducing advertising costs. By utilizing clickstream data and adopting a targeted approach, e-commerce businesses can attract and retain high-value customers, leading to higher revenue and profitability.
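The abstract does not publish the CM formula, so the sketch below is purely illustrative: a weighted blend of the four engagement signals named above, with the weights and the [0, 1] normalization chosen by us as assumptions.

```python
def customer_merit(activity, efficiency, browse_time, visit_freq,
                   weights=(0.3, 0.3, 0.2, 0.2)):
    """Illustrative CM index: a weighted blend of engagement signals,
    each pre-normalized to [0, 1]. The weights are assumptions, not
    the paper's values."""
    signals = (activity, efficiency, browse_time, visit_freq)
    if not all(0.0 <= s <= 1.0 for s in signals):
        raise ValueError("normalize each signal to [0, 1] first")
    return sum(w * s for w, s in zip(weights, signals))

# A highly engaged visitor outranks a casual one:
assert customer_merit(0.9, 0.8, 0.7, 0.6) > customer_merit(0.2, 0.1, 0.3, 0.2)
```

A threshold on such an index would then separate the high-value customers targeted by the personalized campaign from the rest.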
]]>