Algorithms, Volume 16, Issue 9 (September 2023) – 57 articles

Cover Story: An autonomously driven vehicle (a robotic car) was developed based on artificial intelligence, using a supervised learning method and computer vision algorithms for dataset creation. A scaled-down robotic car containing only one camera as a sensor was built.
A classical computer vision algorithm processes the images to determine the optimal direction for the vehicle to follow on the track. Images grabbed with this algorithm were assembled into a unique dataset. After the dataset was created, several training runs were carried out to reach the final neural network model.
The agent's performance increased with machine learning methods, and the effectiveness of the proposed approach was further demonstrated through experimental results on real robotic cars, which performed better than expected.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
23 pages, 4562 KiB  
Article
Improved YOLOv5-Based Real-Time Road Pavement Damage Detection in Road Infrastructure Management
by Abdullah As Sami, Saadman Sakib, Kaushik Deb and Iqbal H. Sarker
Algorithms 2023, 16(9), 452; https://doi.org/10.3390/a16090452 - 21 Sep 2023
Cited by 4 | Viewed by 1962
Abstract
Deep learning has enabled a straightforward, convenient method of road pavement infrastructure management that facilitates a secure, cost-effective, and efficient transportation network. Manual road pavement inspection is time-consuming and dangerous, making timely road repair difficult. This research showcases You Only Look Once version 5 (YOLOv5), the most commonly employed object detection model, trained on the latest benchmark road damage dataset, Road Damage Detection 2022 (RDD 2022). The RDD 2022 dataset includes four common types of road pavement damage, namely vertical cracks, horizontal cracks, alligator cracks, and potholes. This paper presents an improved deep neural network model based on YOLOv5 for real-time road pavement damage detection in photographic representations of outdoor road surfaces, making it an indispensable tool for efficient, real-time, and cost-effective road infrastructure management. The YOLOv5 model was modified to incorporate several techniques that improve its accuracy and generalization performance: the Efficient Channel Attention module (ECA-Net), label smoothing, the K-means++ algorithm, Focal Loss, and an additional prediction layer. The model attained a 1.9% improvement in mean average precision (mAP) and a 1.29% increase in F1-score over YOLOv5s, at the cost of 1.1 million additional parameters. Moreover, the proposed model achieved a 0.11% improvement in mAP and a 0.05% improvement in F1-score over YOLOv8s while having 3 million fewer parameters and 12 fewer giga floating-point operations per second (GFLOPs). Full article
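One of the listed techniques, K-means++ anchor estimation, is easy to illustrate. The sketch below re-clusters ground-truth box sizes into anchors with scikit-learn; the synthetic data, the nine-anchor count, and the normalization convention are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_anchors(box_wh, n_anchors=9):
    """Cluster ground-truth (width, height) pairs into anchor boxes.

    box_wh: array of shape (N, 2), box sizes normalized to the input size.
    """
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0)
    km.fit(box_wh)
    anchors = km.cluster_centers_
    # Sort anchors by area so they can be split across the detection scales
    return anchors[np.argsort(anchors.prod(axis=1))]

# Synthetic example: 500 random box sizes stand in for RDD 2022 annotations
rng = np.random.default_rng(0)
print(estimate_anchors(rng.uniform(0.02, 0.9, size=(500, 2))))
```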

18 pages, 837 KiB  
Article
HyperDE: An Adaptive Hyper-Heuristic for Global Optimization
by Alexandru-Razvan Manescu and Bogdan Dumitrescu
Algorithms 2023, 16(9), 451; https://doi.org/10.3390/a16090451 - 20 Sep 2023
Viewed by 1314
Abstract
In this paper, a novel global optimization approach in the form of an adaptive hyper-heuristic, namely HyperDE, is proposed. As the name suggests, the method is based on the Differential Evolution (DE) heuristic, a well-established optimization approach inspired by the theory of evolution. Additionally, two other similar approaches are introduced for comparison and validation: HyperSSA and HyperBES, based on the Sparrow Search Algorithm (SSA) and Bald Eagle Search (BES), respectively. The method consists of a genetic algorithm adopted as a high-level online learning mechanism that adjusts the hyper-parameters and facilitates the collaboration of a homogeneous set of low-level heuristics, with the intent of maximizing the performance of the search for global optima. Comparison with the heuristics that the proposed methodologies are based on, along with other state-of-the-art methods, is favorable. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
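The abstract describes the general mechanism (an evolutionary outer loop tuning the hyper-parameters of low-level DE searches) without the full algorithm. The rough sketch below conveys that idea under stated assumptions; the exact selection, recombination, and collaboration scheme of HyperDE is not reproduced.

```python
import numpy as np

def de_step(pop, fit, f, cr, objective):
    """One DE/rand/1/bin generation (index collisions with i ignored for brevity)."""
    n, d = pop.shape
    for i in range(n):
        a, b, c = pop[np.random.choice(n, 3, replace=False)]
        mutant = a + f * (b - c)
        cross = np.random.rand(d) < cr
        cross[np.random.randint(d)] = True          # ensure one gene crosses over
        trial = np.where(cross, mutant, pop[i])
        tf = objective(trial)
        if tf < fit[i]:
            pop[i], fit[i] = trial, tf
    return pop, fit

def hyper_de(objective, dim, pop_size=20, workers=4, generations=200):
    # Each "worker" is a DE instance with its own (F, CR) hyper-parameters.
    params = np.random.uniform([0.3, 0.5], [0.9, 1.0], size=(workers, 2))
    pops = [np.random.uniform(-5, 5, (pop_size, dim)) for _ in range(workers)]
    fits = [np.apply_along_axis(objective, 1, p) for p in pops]
    for _ in range(generations):
        scores = []
        for w in range(workers):
            pops[w], fits[w] = de_step(pops[w], fits[w], *params[w], objective)
            scores.append(fits[w].min())
        # "Genetic" online update: the worst worker inherits a perturbed copy
        # of the best worker's hyper-parameters.
        best, worst = int(np.argmin(scores)), int(np.argmax(scores))
        params[worst] = np.clip(params[best] + np.random.normal(0, 0.05, 2),
                                [0.1, 0.1], [1.0, 1.0])
    return min(f.min() for f in fits)

print(hyper_de(lambda x: float(np.sum(x ** 2)), dim=10))
```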

35 pages, 824 KiB  
Article
An Information Theoretic Approach to Privacy-Preserving Interpretable and Transferable Learning
by Mohit Kumar, Bernhard A. Moser, Lukas Fischer and Bernhard Freudenthaler
Algorithms 2023, 16(9), 450; https://doi.org/10.3390/a16090450 - 20 Sep 2023
Viewed by 1053
Abstract
In order to develop machine learning and deep learning models that take into account the guidelines and principles of trustworthy AI, a novel information theoretic approach is introduced in this article. A unified approach to privacy-preserving interpretable and transferable learning is considered for studying and optimizing the trade-offs between the privacy, interpretability, and transferability aspects of trustworthy AI. A variational membership-mapping Bayesian model is used for the analytical approximation of the defined information theoretic measures for privacy leakage, interpretability, and transferability. The approach consists of approximating the information theoretic measures by maximizing a lower-bound using variational optimization. The approach is demonstrated through numerous experiments on benchmark datasets and a real-world biomedical application concerned with the detection of mental stress in individuals using heart rate variability analysis. Full article
(This article belongs to the Special Issue Deep Learning Techniques for Computer Security Problems)

19 pages, 7962 KiB  
Article
Design of a Lower Limb Exoskeleton: Robust Control, Simulation and Experimental Results
by E. Anyuli Alvarez Salcido, Daniel Centeno-Barreda, Yukio Rosales, Ricardo Lopéz-Gutiérrez, Sergio Salazar and Rogelio Lozano
Algorithms 2023, 16(9), 449; https://doi.org/10.3390/a16090449 - 20 Sep 2023
Viewed by 1267
Abstract
This paper presents the development of a robust control algorithm applied to a knee and ankle joint exoskeleton designed for the rehabilitation of flexion/extension movements. The goal of the control law is to follow the trajectory of a straight-leg extension routine in a sitting position. This routine is commonly used to rehabilitate an injury to the Anterior Cruciate Ligament (ACL), and it is applied to the knee and ankle joints. Moreover, the paper presents the development and implementation of the robotic structure of the ankle joint and its integration into an exoskeleton for gait rehabilitation. The development of the dynamic model and the implementation of the control algorithm in simulation and experimental tests are presented, showing that the proposed control guarantees the convergence of the tracking error. Full article

14 pages, 2181 KiB  
Article
Neural Network Based Approach to Recognition of Meteor Tracks in the Mini-EUSO Telescope Data
by Mikhail Zotov, Dmitry Anzhiganov, Aleksandr Kryazhenkov, Dario Barghini, Matteo Battisti, Alexander Belov, Mario Bertaina, Marta Bianciotto, Francesca Bisconti, Carl Blaksley, Sylvie Blin, Giorgio Cambiè, Francesca Capel, Marco Casolino, Toshikazu Ebisuzaki, Johannes Eser, Francesco Fenu, Massimo Alberto Franceschi, Alessio Golzio, Philippe Gorodetzky, Fumiyoshi Kajino, Hiroshi Kasuga, Pavel Klimov, Massimiliano Manfrin, Laura Marcelli, Hiroko Miyamoto, Alexey Murashov, Tommaso Napolitano, Hiroshi Ohmori, Angela Olinto, Etienne Parizot, Piergiorgio Picozza, Lech Wiktor Piotrowski, Zbigniew Plebaniak, Guillaume Prévôt, Enzo Reali, Marco Ricci, Giulia Romoli, Naoto Sakaki, Kenji Shinozaki, Christophe De La Taille, Yoshiyuki Takizawa, Michal Vrábel and Lawrence Wiencke
Algorithms 2023, 16(9), 448; https://doi.org/10.3390/a16090448 - 19 Sep 2023
Cited by 1 | Viewed by 1065
Abstract
Mini-EUSO is a wide-angle fluorescence telescope that registers ultraviolet (UV) radiation in the nocturnal atmosphere of Earth from the International Space Station. Meteors are among multiple phenomena that manifest themselves not only in the visible range but also in the UV. We present two simple artificial neural networks that allow for recognizing meteor signals in the Mini-EUSO data with high accuracy in terms of a binary classification problem. We expect that similar architectures can be effectively used for signal recognition in other fluorescence telescopes, regardless of the nature of the signal. Due to their simplicity, the networks can be implemented in onboard electronics of future orbital or balloon experiments. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition)

17 pages, 540 KiB  
Article
Carousel Greedy Algorithms for Feature Selection in Linear Regression
by Jiaqi Wang, Bruce Golden and Carmine Cerrone
Algorithms 2023, 16(9), 447; https://doi.org/10.3390/a16090447 - 19 Sep 2023
Viewed by 1101
Abstract
The carousel greedy algorithm (CG) was proposed several years ago as a generalized greedy algorithm. In this paper, we implement CG to solve linear regression problems with a cardinality constraint on the number of features. More specifically, we introduce a default version of CG that has several novel features. We compare its performance against stepwise regression and more sophisticated approaches using integer programming, and the results are encouraging. For example, CG consistently outperforms stepwise regression (from our preliminary experiments, we see that CG improves upon stepwise regression in 10 of 12 cases), but it is still computationally inexpensive. Furthermore, we show that the approach is applicable to several more general feature selection problems. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms)
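For readers unfamiliar with carousel greedy, the generic loop (a greedy start, dropping a beta-fraction of the tail, then alpha*k rounds of removing the oldest choice and greedily re-adding) can be sketched for feature selection as below. The in-sample R^2 criterion and the alpha/beta defaults are illustrative; the paper's default CG version includes further refinements.

```python
import numpy as np

def r2(X, y, cols):
    """In-sample R^2 of least-squares regression on the chosen columns."""
    Xs = np.column_stack([np.ones(len(y)), X[:, sorted(cols)]])
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def greedy_add(X, y, chosen, banned=()):
    """Return the feature whose addition maximizes R^2."""
    cands = [j for j in range(X.shape[1]) if j not in chosen and j not in banned]
    return max(cands, key=lambda j: r2(X, y, chosen | {j}))

def carousel_greedy(X, y, k, alpha=3, beta=0.2):
    chosen = []
    while len(chosen) < k:                      # 1) plain greedy start
        chosen.append(greedy_add(X, y, set(chosen)))
    chosen = chosen[: int(k * (1 - beta))]      # 2) drop the beta-tail
    for _ in range(alpha * k):                  # 3) carousel: drop oldest, re-add
        oldest = chosen.pop(0)
        chosen.append(greedy_add(X, y, set(chosen), banned={oldest}))
    while len(chosen) < k:                      # 4) greedily complete to k features
        chosen.append(greedy_add(X, y, set(chosen)))
    return sorted(chosen)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 15))
y = X[:, 2] + 2 * X[:, 7] - X[:, 11] + 0.1 * rng.normal(size=200)
print(carousel_greedy(X, y, k=3))
```

The carousel phase is what distinguishes CG from plain greedy: early choices are re-examined in light of later ones, at only a constant-factor increase in cost.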

22 pages, 726 KiB  
Article
Implementation Aspects in Regularized Structural Equation Models
by Alexander Robitzsch
Algorithms 2023, 16(9), 446; https://doi.org/10.3390/a16090446 - 18 Sep 2023
Cited by 3 | Viewed by 1184
Abstract
This article reviews several implementation aspects of estimating regularized single-group and multiple-group structural equation models (SEM). It is demonstrated that approximate estimation approaches relying on a differentiable approximation of non-differentiable penalty functions perform similarly to the coordinate descent optimization approach for regularized SEM. Furthermore, using a fixed regularization parameter can sometimes be superior to an optimal regularization parameter selected by the Bayesian information criterion when it comes to the estimation of structural parameters. Moreover, the widespread penalty functions of regularized SEM implemented in several R packages were compared with estimation based on a recently proposed penalty function in the Mplus software. Finally, we also investigate the performance of a clever replacement of the optimization function in regularized SEM with a smoothed differentiable approximation of the Bayesian information criterion proposed by O’Neill and Burke in 2023. The findings were derived through two simulation studies and are intended to guide the practical implementation of regularized SEM in future software. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications IV)

13 pages, 4198 KiB  
Article
Automated Segmentation of Optical Coherence Tomography Images of the Human Tympanic Membrane Using Deep Learning
by Thomas P. Oghalai, Ryan Long, Wihan Kim, Brian E. Applegate and John S. Oghalai
Algorithms 2023, 16(9), 445; https://doi.org/10.3390/a16090445 - 17 Sep 2023
Viewed by 1361
Abstract
Optical Coherence Tomography (OCT) is a light-based imaging modality that is used widely in the diagnosis and management of eye disease, and it is beginning to be used to evaluate ear disease. However, manual image analysis to interpret the anatomical and pathological findings in the images it provides is complicated and time-consuming. To streamline data analysis and image processing, we applied a machine learning algorithm to identify and segment the key anatomical structure of interest for medical diagnostics, the tympanic membrane. Using 3D volumes of the human tympanic membrane, we used thresholding and contour finding to locate a series of objects. We then applied TensorFlow deep learning algorithms to identify the tympanic membrane within the objects using a convolutional neural network. Finally, we reconstructed the 3D volume to selectively display the tympanic membrane. The algorithm correctly identified the tympanic membrane with an accuracy of ~98% while removing most of the artifacts within the images caused by reflections and signal saturations. Thus, the algorithm significantly improved visualization of the tympanic membrane, which was our primary objective. Machine learning approaches such as this one will be critical to allowing OCT medical imaging to become a convenient and viable diagnostic tool within the field of otolaryngology. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition)
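The abstract spells out the pipeline: thresholding, contour finding, then a TensorFlow CNN to pick the tympanic membrane among candidate objects. A minimal sketch of that flow follows; the crop size, the network architecture, and the 8-bit grayscale input are assumptions for illustration, not the authors' exact configuration.

```python
import cv2
import numpy as np
import tensorflow as tf

def candidate_crops(bscan, size=64):
    """Return (crop, bounding_box) pairs for bright objects in one 8-bit B-scan."""
    _, mask = cv2.threshold(bscan, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 50:                       # discard speckle and tiny artifacts
            continue
        crop = cv2.resize(bscan[y:y + h, x:x + w], (size, size))
        crops.append((crop[..., None].astype("float32") / 255.0, (x, y, w, h)))
    return crops

def build_classifier(size=64):
    """Tiny binary CNN: tympanic membrane vs. reflection/saturation artifact."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(size, size, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# After training, keeping only the crops the classifier flags as membrane and
# reassembling their voxels yields the selectively displayed 3D volume.
```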

18 pages, 362 KiB  
Article
Nonsmooth Optimization-Based Hyperparameter-Free Neural Networks for Large-Scale Regression
by Napsu Karmitsa, Sona Taheri, Kaisa Joki, Pauliina Paasivirta, Adil M. Bagirov and Marko M. Mäkelä
Algorithms 2023, 16(9), 444; https://doi.org/10.3390/a16090444 - 14 Sep 2023
Cited by 1 | Viewed by 1112
Abstract
In this paper, a new nonsmooth optimization-based algorithm for solving large-scale regression problems is introduced. The regression problem is modeled as fully-connected feedforward neural networks with one hidden layer, piecewise linear activation, and the L1-loss functions. A modified version of the limited memory bundle method is applied to minimize this nonsmooth objective. In addition, a novel constructive approach for automated determination of the proper number of hidden nodes is developed. Finally, large real-world data sets are used to evaluate the proposed algorithm and to compare it with some state-of-the-art neural network algorithms for regression. The results demonstrate the superiority of the proposed algorithm as a predictive tool in most data sets used in numerical experiments. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Big Data Analysis)

34 pages, 581 KiB  
Review
Tensor-Based Approaches for Nonlinear and Multilinear Systems Modeling and Identification
by Gérard Favier and Alain Kibangou
Algorithms 2023, 16(9), 443; https://doi.org/10.3390/a16090443 - 14 Sep 2023
Viewed by 1226
Abstract
Nonlinear (NL) and multilinear (ML) systems play a fundamental role in engineering and science. Over the last two decades, active research has been carried out on exploiting the intrinsically multilinear structure of input–output signals and/or models in order to develop more efficient identification algorithms. This has been achieved using the notion of tensors, the central objects in multilinear algebra, giving rise to tensor-based approaches. The aim of this paper is to review such approaches for modeling and identifying NL and ML systems using input–output data, with a reminder of the tensor operations and decompositions needed to render the presentation as self-contained as possible. In the case of NL systems, two families of models are considered: Volterra models and block-oriented ones. Volterra models, frequently used in numerous fields of application, have the drawback of being characterized by a huge number of coefficients contained in the so-called Volterra kernels, making their identification difficult. In order to reduce this parametric complexity, we show how Volterra systems can be represented by expanding high-order kernels using the parallel factor (PARAFAC) decomposition or generalized orthogonal basis (GOB) functions, which leads to the so-called Volterra–PARAFAC and Volterra–GOB models, respectively. The extended Kalman filter (EKF) is presented to estimate the parameters of a Volterra–PARAFAC model. Another approach to reduce the parametric complexity consists of using block-oriented models such as those of Wiener, Hammerstein and Wiener–Hammerstein. With the purpose of estimating the parameters of such models, we show how the Volterra kernels associated with these models can be written in the form of structured tensor decompositions. In the last part of the paper, the notion of tensor systems is introduced using the Einstein product of tensors. Discrete-time memoryless tensor-input tensor-output (TITO) systems are defined by means of a relation between an Nth-order tensor of input signals and a Pth-order tensor of output signals via a (P+N)th-order transfer tensor. Such systems generalize the standard memoryless multi-input multi-output (MIMO) system to the case where input and output data define tensors of order higher than two. The case of a TISO system is then considered, assuming the system transfer is a rank-one Nth-order tensor viewed as a global multilinear impulse response (IR) whose parameters are estimated using the weighted least-squares (WLS) method. A closed-form solution is proposed for estimating each individual IR associated with each mode-n subsystem. Full article
(This article belongs to the Special Issue Mathematical Modelling in Engineering and Human Behaviour)
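The parametric-complexity argument behind Volterra–PARAFAC can be made concrete with two equations (standard notation, assumed here for illustration):

```latex
% Discrete-time Volterra model of order P with memory M:
y(n) = \sum_{p=1}^{P} \; \sum_{k_1=0}^{M-1} \cdots \sum_{k_p=0}^{M-1}
       h_p(k_1,\dots,k_p) \prod_{i=1}^{p} u(n-k_i)

% Rank-R PARAFAC expansion of the p-th order kernel:
h_p(k_1,\dots,k_p) = \sum_{r=1}^{R} \prod_{i=1}^{p} a_r^{(i)}(k_i)
```

The expansion reduces the p-th kernel's parameter count from M^p to roughly p*M*R, which is what makes identification tractable.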

19 pages, 14760 KiB  
Article
A Plant Disease Classification Algorithm Based on Attention MobileNet V2
by Huan Wang, Shi Qiu, Huping Ye and Xiaohan Liao
Algorithms 2023, 16(9), 442; https://doi.org/10.3390/a16090442 - 13 Sep 2023
Cited by 2 | Viewed by 1491
Abstract
Plant growth is inevitably affected by diseases, and one effective method of disease detection is the observation of leaf changes. To solve the problem of disease detection in complex backgrounds, where the distinction between plant diseases is hindered by large intra-class differences and small inter-class differences, a complete plant-disease recognition process is proposed. The process was tested through experiments and research into traditional and deep features. In the face of difficulties related to plant-disease classification in complex backgrounds, the strong interpretability of traditional features and the great robustness of deep features are fully utilized through the following components: (1) An Otsu thresholding algorithm based on the naive Bayes model is proposed to focus on where leaves are located and remove interference from complex backgrounds. (2) A multi-dimensional feature model is introduced, in an interpretable manner from the perspective of traditional features, to obtain leaf characteristics. (3) A MobileNet V2 network with a dual attention mechanism is proposed to establish a model that operates in both spatial and channel dimensions at the network level to facilitate plant-disease recognition. In tests on the open PlantVillage database, the results demonstrated an average SEN of 94%, 12.6% greater than other algorithms. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities)

16 pages, 611 KiB  
Article
Placement of IoT Microservices in Fog Computing Systems: A Comparison of Heuristics
by Claudia Canali, Caterina Gazzotti, Riccardo Lancellotti and Felice Schena
Algorithms 2023, 16(9), 441; https://doi.org/10.3390/a16090441 - 13 Sep 2023
Cited by 2 | Viewed by 1264
Abstract
In the last few years, fog computing has been recognized as a promising approach to support modern IoT applications based on microservices. The main characteristic of these applications is the presence of geographically distributed sensors or mobile end users acting as sources of data. Relying on a cloud computing approach may not be the most suitable solution in these scenarios due to the non-negligible latency between data sources and distant cloud data centers, which may be an issue for real-time and latency-sensitive IoT applications. Placing certain tasks, such as preprocessing or data aggregation, in a layer of fog nodes close to sensors or end users may help to decrease the response time of IoT applications as well as the traffic towards the cloud data centers. However, the fog scenario is characterized by a much more complex and heterogeneous infrastructure compared to a cloud data center, where the computing nodes and the inter-node connections are more homogeneous. As a consequence, the problem of efficiently placing microservices over distributed fog nodes requires novel and efficient solutions. In this paper, we address this issue by proposing and comparing different heuristics for placing the application microservices over the nodes of a fog infrastructure. We test the performance of the proposed heuristics and their ability to minimize application response times and satisfy the Service Level Agreement across a wide set of operating conditions in order to understand which approach performs best depending on the IoT application scenario. Full article
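As a point of reference only, here is one plausible latency-greedy placement heuristic of the kind being compared; the paper's actual heuristics, node model, and SLA handling are not reproduced, and the capacities, latencies, and demands below are illustrative.

```python
import numpy as np

def greedy_placement(latency, demand, capacity):
    """Assign each microservice to the feasible node with minimum latency.

    latency:  (services x nodes) expected response-time contribution
    demand:   per-service resource demand
    capacity: per-node residual capacity
    """
    placement = {}
    free = capacity.astype(float)
    # Place the most demanding services first so they can still find room
    for s in np.argsort(-demand):
        order = np.argsort(latency[s])           # nodes from fastest to slowest
        node = next((n for n in order if free[n] >= demand[s]), None)
        if node is None:
            raise RuntimeError(f"no feasible node for service {s}")
        placement[int(s)] = int(node)
        free[node] -= demand[s]
    return placement

rng = np.random.default_rng(2)
lat = rng.uniform(1, 20, size=(8, 4))     # ms, services x fog nodes
dem = rng.uniform(0.5, 2.0, size=8)       # CPU units per microservice
cap = np.full(4, 5.0)                     # CPU units per node
print(greedy_placement(lat, dem, cap))
```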

18 pages, 6534 KiB  
Article
Activation Function Dynamic Averaging as a Technique for Nonlinear 2D Data Denoising in Distributed Acoustic Sensors
by Artem T. Turov, Fedor L. Barkov, Yuri A. Konstantinov, Dmitry A. Korobko, Cesar A. Lopez-Mercado and Andrei A. Fotiadi
Algorithms 2023, 16(9), 440; https://doi.org/10.3390/a16090440 - 13 Sep 2023
Cited by 5 | Viewed by 1591
Abstract
This work studies the application of low-cost noise reduction algorithms for the data processing of distributed acoustic sensors (DAS). It presents an improvement of the previously described methodology using the activation function of neurons, which enhances the speed of data processing and the quality of event identification, as well as reducing spatial distortions. The possibility of using a cheaper radiation source in DAS setups is demonstrated. Optimal combinations of algorithms are proposed for the different types of recorded events. The criterion for evaluating the effectiveness of algorithm performance was the increase in the signal-to-noise ratio (SNR). The best combination of algorithms provided an SNR increase of 10.8 dB. The obtained results can significantly expand the application scope of DAS. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)

18 pages, 7928 KiB  
Article
Automation of Winglet Wings Geometry Generation for Its Application in TORNADO
by Ángel Antonio Rodríguez-Sevillano, Rafael Bardera-Mora, Alejandra López-Cuervo-Alcaraz, Daniel Anguita-Mazón, Juan Carlos Matías-García, Estela Barroso-Barderas and Jaime Fernández-Antón
Algorithms 2023, 16(9), 439; https://doi.org/10.3390/a16090439 - 12 Sep 2023
Viewed by 1029
Abstract
The paper outlines an algorithm for the rapid aerodynamic evaluation of winglet geometries using the TORNADO Vortex Lattice Method. It is a very useful tool to obtain a first approximation of the aerodynamic properties and for performing an optimization of the geometry design. The TORNADO tool is used to systematically calculate the aerodynamic characteristics of various wings with wingtip devices. The fast response of the aerodynamic models allows obtaining a set of results in a remarkably short time. Therefore, the development of an algorithm to generate wing geometries with great ease and complex shapes is of vital importance for the mentioned optimization process. The basic outline of the algorithm, the equations defining the wing geometries, and the results for unconventional wingtip devices, such as blended winglets and spiroid winglets, are presented. Finally, this algorithm allows designing a procedure to study the improvement of aerodynamic properties (lift, induced drag, and moment). Some examples are included to illustrate the capabilities of the algorithm. Full article

26 pages, 35873 KiB  
Article
Quantitative and Qualitative Comparison of Decision-Map Techniques for Explaining Classification Models
by Yu Wang, Alister Machado and Alexandru Telea
Algorithms 2023, 16(9), 438; https://doi.org/10.3390/a16090438 - 11 Sep 2023
Viewed by 1170
Abstract
Visualization techniques for understanding and explaining machine learning models have gained significant attention. One such technique is the decision map, which creates a 2D depiction of the decision behavior of classifiers trained on high-dimensional data. While several decision map techniques have been proposed recently, such as Decision Boundary Maps (DBMs), Supervised Decision Boundary Maps (SDBMs), and DeepView (DV), there is no framework for comprehensively evaluating and comparing these techniques. In this paper, we propose such a framework by combining quantitative metrics and qualitative assessment. We apply our framework to DBM, SDBM, and DV using a range of both synthetic and real-world classification techniques and datasets. Our results show that none of the evaluated decision-map techniques consistently outperforms the others in all measured aspects. Separately, our analysis exposes several previously unknown properties and limitations of decision-map techniques. To support practitioners, we also propose a workflow for selecting the most appropriate decision-map technique for given datasets, classifiers, and requirements of the application at hand. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms (2nd Edition))

32 pages, 1458 KiB  
Article
Design of PIDDα Controller for Robust Performance of Process Plants
by Muhammad Amir Fawwaz, Kishore Bingi, Rosdiazli Ibrahim, P. Arun Mozhi Devan and B. Rajanarayan Prusty
Algorithms 2023, 16(9), 437; https://doi.org/10.3390/a16090437 - 11 Sep 2023
Cited by 4 | Viewed by 1292
Abstract
Managing industrial processes in real time is challenging due to the nonlinearity and sensitivity of these processes. This unpredictability can cause delays in the regulation of these processes. The PID controller family is commonly used in these situations, but their performance is inadequate in systems and surroundings with varying set-points, longer dead times, external noises, and disturbances. Therefore, this research has developed a novel controller structure for PIDDα that incorporates the second derivative term from PIDD2 while exclusively using fractional order parameters for the second derivative term. The controllers’ robust performance has been evaluated on four simulation plants: first order, second order with time delay, third-order magnetic levitation systems, and fourth-order automatic voltage regulation systems. The controllers’ performance has also been evaluated on experimental models of pressure and flow processes. The proposed controller exhibits the least overshoot among all the systems tested. The overshoot for the first-order systems is 9.63%, for the third-order magnetic levitation system it is 12.82%, and for the fourth-order automatic voltage regulation system it is only 0.19%. In the pressure process plant, the overshoot is only 4.83%. For the second-order systems with time delay and for the flow process plant, none of the controllers produce any overshoot. The proposed controller demonstrates superior settling times in various systems. For first-order systems, the settling time is 14.26 s, while in the pressure process plant, the settling time is 8.9543 s. Similarly, the proposed controllers for the second-order system with a time delay and the flow process plant have the same settling time of 46.0495 s. In addition, the proposed controller results in the lowest rise time for three different systems. The rise time is only 0.0075 s for the third-order magnetic levitation system, while the fourth-order automatic voltage regulation system has a rise time of 0.0232 s. Finally, for the flow process plant, the proposed controller has the least rise time of 25.7819 s. Thus, in all the cases, the proposed controller yields a more robust controller structure that provides the desired performance of a regular PIDD2 controller, offering better dynamic responses, shorter settling times, faster rise times, and reduced overshoot. Based on the analysis, it is evident that PIDDα outperforms both PID and FOPID control techniques due to its ability to produce a more robust control signal. Full article
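Based on the abstract's description, a plausible reading of the controller structure is the following, where only the second-derivative term of a PIDD2 controller is given a fractional order; the symbols and the assumed range of α are illustrative, not taken from the paper:

```latex
% PIDD^2 controller (integer-order second derivative):
C_{\mathrm{PIDD^2}}(s) = K_p + \frac{K_i}{s} + K_{d_1} s + K_{d_2} s^{2}

% PIDD^\alpha: only the second-derivative term becomes fractional,
% with 1 < \alpha \le 2 assumed:
C_{\mathrm{PIDD^\alpha}}(s) = K_p + \frac{K_i}{s} + K_{d_1} s + K_{d_2} s^{\alpha}
```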

10 pages, 352 KiB  
Article
A Multi-Objective Degree-Based Network Anonymization Method
by Ola N. Halawi, Faisal N. Abu-Khzam and Sergio Thoumi
Algorithms 2023, 16(9), 436; https://doi.org/10.3390/a16090436 - 11 Sep 2023
Viewed by 833
Abstract
Enormous amounts of data collected from social networks or other online platforms are being published for the sake of statistics, marketing, and research, among other objectives. The consequent privacy and data security concerns have motivated the work on degree-based data anonymization. In this paper, we propose and study a new multi-objective anonymization approach that generalizes the known degree anonymization problem and attempts to improve on it as a more realistic model for data security/privacy. Our suggested model guarantees a convenient privacy level, based on modifying the degrees in a way that respects some given local restrictions, per node, such that the total modifications at the global level (in the whole graph/network) are bounded by some given value. The corresponding multi-objective graph realization approach is formulated and solved using Integer Linear Programming to obtain an optimum solution. Our thorough experimental studies provide empirical evidence of the effectiveness of the new approach, specifically showing that the introduced anonymization algorithm has a negligible effect on the way nodes are clustered, thereby preserving valuable network information while significantly improving data privacy. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
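A compact way to read the model described above, with illustrative notation (d and d' for the original and modified degree sequences, l_v for the per-node limit, B for the global budget; the objective and constraint set are an assumption, not the paper's exact formulation):

```latex
\min_{d'} \; f_{\mathrm{anon}}(d')
\quad \text{s.t.} \quad
|d'_v - d_v| \le \ell_v \;\; \forall v \in V, \qquad
\sum_{v \in V} |d'_v - d_v| \le B, \qquad
d' \text{ is graphical (realizable as a graph)}
```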

16 pages, 4794 KiB  
Article
Explainable Artificial Intelligence Method (ParaNet+) Localises Abnormal Parathyroid Glands in Scintigraphic Scans of Patients with Primary Hyperparathyroidism
by Dimitris J. Apostolopoulos, Ioannis D. Apostolopoulos, Nikolaos D. Papathanasiou, Trifon Spyridonidis and George S. Panayiotakis
Algorithms 2023, 16(9), 435; https://doi.org/10.3390/a16090435 - 11 Sep 2023
Cited by 1 | Viewed by 1126
Abstract
The pre-operative localisation of abnormal parathyroid glands (PGs) in parathyroid scintigraphy is essential for suggesting treatment and assisting surgery. Human experts examine the scintigraphic image outputs. An assisting diagnostic framework for localisation reduces the workload of physicians and can serve educational purposes. Previous studies by the authors proposed a successful deep learning model, but it produced many false positives. Between 2010 and 2020, 648 participants were enrolled in the Department of Nuclear Medicine of the University Hospital of Patras, Greece. An innovative modification of the well-known VGG19 network (ParaNet+) is proposed to classify scintigraphic images into normal and abnormal classes. The Grad-CAM++ algorithm is applied to localise the abnormal PGs. An external dataset of 100 patients imaged at the same department, who underwent parathyroidectomy in 2021 and 2022, was used for evaluation. ParaNet+ agreed with the human readers, with agreement rates of 0.9861 at the patient level and 0.8831 at the PG level under 10-fold cross-validation on the training set of 648 participants. Regarding the external dataset, the experts identified 93 of 100 abnormal patient cases and 99 of 118 surgically excised abnormal PGs. The human-reader false-positive rate (FPR) was 10% on a PG basis. ParaNet+ identified 99 of 100 abnormal cases and 103 of 118 PGs, with an 11.2% FPR. The model achieved higher sensitivity than the human readers on both patient and PG bases (99.0% vs. 93% and 87.3% vs. 83.9%, respectively), with comparable FPRs. Deep learning can assist in detecting and localising abnormal PGs in scintigraphic scans of patients with primary hyperparathyroidism and can be adapted to the everyday routine. Full article
(This article belongs to the Special Issue Artificial Intelligence for Medical Imaging)

22 pages, 719 KiB  
Article
Computing the Matrix Logarithm with the Romberg Integration Method
by Javier Ibáñez, José M. Alonso, Emilio Defez, Pedro Alonso-Jordá and Jorge Sastre
Algorithms 2023, 16(9), 434; https://doi.org/10.3390/a16090434 - 9 Sep 2023
Viewed by 1079
Abstract
The matrix logarithm function has applicability in many engineering and science fields. Improvements in its calculation, from the point of view of both accuracy and/or execution time, have a direct impact on these disciplines. This paper describes a new numerical algorithm devoted to matrix logarithm computation and using the Romberg integration method, together with the inverse scaling and squaring technique. This novel method was implemented and compared with three different state-of-the-art codes, all based on Padé approximation. The experimental results, under a heterogeneous matrix test battery, showed that the new method was numerically stable, with an elapsed time midway among the other codes, and it generally offered a higher accuracy. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms)
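The integral identity behind such quadrature-based methods is standard: log(A) = \int_0^1 (A - I)[t(A - I) + I]^{-1} dt for matrices with no eigenvalues on the closed negative real axis. The sketch below applies plain Romberg extrapolation to that integral; the paper's actual code additionally uses inverse scaling and squaring and is not reproduced here.

```python
import numpy as np
from scipy.linalg import logm   # reference implementation for checking

def matrix_log_romberg(A, levels=6):
    """Approximate log(A) via Romberg extrapolation of the trapezoidal rule
    applied to the integrand (A - I) [t (A - I) + I]^{-1} on [0, 1]."""
    n = A.shape[0]
    I = np.eye(n)
    E = A - I
    f = lambda t: E @ np.linalg.inv(t * E + I)   # matrix-valued integrand

    R = [[None] * levels for _ in range(levels)]
    h = 1.0
    R[0][0] = 0.5 * h * (f(0.0) + f(1.0))
    for i in range(1, levels):
        h /= 2.0
        # Trapezoidal refinement: add the midpoints of the previous grid
        mids = sum(f((2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * mids
        # Richardson extrapolation along the Romberg tableau row
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

A = np.array([[2.0, 1.0], [0.0, 3.0]])
print(np.max(np.abs(matrix_log_romberg(A, levels=8) - logm(A))))
```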

42 pages, 6014 KiB  
Review
A Review of Methods and Applications for a Heart Rate Variability Analysis
by Suraj Kumar Nayak, Bikash Pradhan, Biswaranjan Mohanty, Jayaraman Sivaraman, Sirsendu Sekhar Ray, Jolanta Wawrzyniak, Maciej Jarzębski and Kunal Pal
Algorithms 2023, 16(9), 433; https://doi.org/10.3390/a16090433 - 9 Sep 2023
Cited by 1 | Viewed by 2485
Abstract
Heart rate variability (HRV) has emerged as an essential non-invasive tool for understanding cardiac autonomic function over the last few decades. This can be attributed to the direct connection between the heart’s rhythm and the activity of the sympathetic and parasympathetic nervous systems. The cost-effectiveness and ease with which one may obtain HRV data also make it an exciting and potential clinical tool for evaluating and identifying various health impairments. This article comprehensively describes a range of signal decomposition techniques and time-series modeling methods recently used in HRV analyses, apart from the conventional HRV generation and feature extraction methods. Various weight-based feature selection approaches and dimensionality reduction techniques are summarized to assess the relevance of each HRV feature vector. The popular machine learning-based HRV feature classification techniques are also described. Some notable clinical applications of HRV analyses are discussed, such as the detection of diabetes, sleep apnea, myocardial infarction, cardiac arrhythmia, hypertension, renal failure, and psychiatric disorders, the assessment of autonomic nervous system (ANS) activity in patients undergoing weaning from mechanical ventilation, and the monitoring of fetal distress and neonatal critical care. The latest research on the effect of external stimuli (like consuming alcohol) on ANS activity using HRV analyses is also summarized. The HRV analysis approaches summarized in our article can help future researchers to dive deep into their potential diagnostic applications. Full article

16 pages, 235 KiB  
Perspective
Sparks of Artificial General Recommender (AGR): Experiments with ChatGPT
by Guo Lin and Yongfeng Zhang
Algorithms 2023, 16(9), 432; https://doi.org/10.3390/a16090432 - 8 Sep 2023
Cited by 2 | Viewed by 1231
Abstract
This study investigates the feasibility of developing an Artificial General Recommender (AGR), facilitated by recent advancements in Large Language Models (LLMs). An AGR comprises both conversationality and universality to engage in natural dialogues and generate recommendations across various domains. We propose ten fundamental principles that an AGR should adhere to, each with its corresponding testing protocol. We proceed to assess whether ChatGPT, a sophisticated LLM, can comply with the proposed principles by engaging in recommendation-oriented dialogues with the model while observing its behavior. Our findings demonstrate the potential for ChatGPT to serve as an AGR, though several limitations and areas for improvement are identified. Full article
(This article belongs to the Special Issue New Trends in Algorithms for Intelligent Recommendation Systems)
14 pages, 5860 KiB  
Article
Regularized Contrastive Masked Autoencoder Model for Machinery Anomaly Detection Using Diffusion-Based Data Augmentation
by Esmaeil Zahedi, Mohamad Saraee, Fatemeh Sadat Masoumi and Mohsen Yazdinejad
Algorithms 2023, 16(9), 431; https://doi.org/10.3390/a16090431 - 8 Sep 2023
Cited by 1 | Viewed by 1265
Abstract
Unsupervised anomalous sound detection, especially self-supervised methods, plays a crucial role in differentiating unknown abnormal sounds of machines from normal sounds. Self-supervised learning can be divided into two main categories: Generative and Contrastive methods. While Generative methods mainly focus on reconstructing data, Contrastive learning methods refine data representations by leveraging the contrast between each sample and its augmented version. However, existing Contrastive learning methods for anomalous sound detection often have two main problems. The first one is that they mostly rely on simple augmentation techniques, such as time or frequency masking, which may introduce biases due to the limited diversity of real-world sounds and noises encountered in practical scenarios (e.g., factory noises combined with machine sounds). The second issue is dimension collapsing, which leads to learning a feature space with limited representation. To address the first shortcoming, we suggest a diffusion-based data augmentation method that employs ChatGPT and AudioLDM. Also, to address the second concern, we put forward a two-stage self-supervised model. In the first stage, we introduce a novel approach that combines Contrastive learning and masked autoencoders to pre-train on the MIMII and ToyADMOS2 datasets. This combination allows our model to capture both global and local features, leading to a more comprehensive representation of the data. In the second stage, we refine the audio representations for each machine ID by employing supervised Contrastive learning to fine-tune the pre-trained model. This process enhances the relationship between audio features originating from the same machine ID. Experiments show that our method outperforms most of the state-of-the-art self-supervised learning methods. Our suggested model achieves an average AUC and pAUC of 94.39% and 87.93% on the DCASE 2020 Challenge Task2 dataset, respectively. Full article

21 pages, 4300 KiB  
Article
Indoor Scene Recognition: An Attention-Based Approach Using Feature Selection-Based Transfer Learning and Deep Liquid State Machine
by Ranjini Surendran, Ines Chihi, J. Anitha and D. Jude Hemanth
Algorithms 2023, 16(9), 430; https://doi.org/10.3390/a16090430 - 8 Sep 2023
Cited by 1 | Viewed by 1486
Abstract
Scene understanding is one of the most challenging areas of research in the fields of robotics and computer vision. Recognising indoor scenes is one of the research applications in the category of scene understanding that has gained attention in recent years, and recent developments in deep learning and transfer learning approaches have attracted huge attention in addressing this challenging area. In our work, we have proposed a fine-tuned deep transfer learning approach using DenseNet201 for feature extraction and a deep Liquid State Machine model as the classifier in order to develop a model for recognising and understanding indoor scenes. We have included fuzzy colour stacking techniques, colour-based segmentation, and an adaptive World Cup optimisation algorithm to improve the performance of our deep model. The proposed model is intended to assist the blind and visually impaired in navigating indoor environments and integrating fully into their day-to-day activities. Our proposed work was implemented on the NYU Depth dataset and attained an accuracy of 96% for classifying indoor scenes. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)

17 pages, 6009 KiB  
Article
Bayesian Opportunities for Brain–Computer Interfaces: Enhancement of the Existing Classification Algorithms and Out-of-Domain Detection
by Egor I. Chetkin, Sergei L. Shishkin and Bogdan L. Kozyrskiy
Algorithms 2023, 16(9), 429; https://doi.org/10.3390/a16090429 - 8 Sep 2023
Viewed by 1117
Abstract
Bayesian neural networks (BNNs) are effective tools for a variety of tasks that allow for the estimation of the uncertainty of the model. As BNNs use prior constraints on parameters, they are better regularized and less prone to overfitting, which is a serious issue for brain–computer interfaces (BCIs), where typically only small training datasets are available. Here, we tested, on the BCI Competition IV 2a motor imagery dataset, if the performance of the widely used, effective neural network classifiers EEGNet and Shallow ConvNet can be improved by turning them into BNNs. Accuracy indeed was higher, at least for a BNN based on Shallow ConvNet with two of three tested prior distributions. We also assessed if BNN-based uncertainty estimation could be used as a tool for out-of-domain (OOD) data detection. The OOD detection worked well only in certain participants; however, we expect that further development of the method may make it work sufficiently well for practical applications. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing)

16 pages, 2271 KiB  
Article
Physics-Informed Neural Networks for the Heat Equation with Source Term under Various Boundary Conditions
by Brett Bowman, Chad Oian, Jason Kurz, Taufiquar Khan, Eddie Gil and Nick Gamez
Algorithms 2023, 16(9), 428; https://doi.org/10.3390/a16090428 - 7 Sep 2023
Cited by 1 | Viewed by 1603
Abstract
Modeling of physical processes as partial differential equations (PDEs) is often carried out with computationally expensive numerical solvers. A common, and important, process to model is that of laser interaction with biological tissues. Physics-informed neural networks (PINNs) have been used to model many physical processes, though none have demonstrated an approximation involving a source term in a PDE, which modeling laser-tissue interactions requires. In this work, a numerical solver for simulating tissue interactions with lasers was surrogated using PINNs while testing various boundary conditions, one with a radiative source term involved. Models were tested using differing activation function combinations in their architectures for comparison. The best combinations of activation functions were different for cases with and without a source term, and R2 scores and average relative errors for the predictions of the best PINN models indicate that it is an accurate surrogate model for corresponding solvers. PINNs appear to be valid replacements for numerical solvers for one-dimensional tissue interactions with electromagnetic radiation. Full article
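For concreteness, the PDE being surrogated and the composite PINN loss can be written as follows (the symbols and the equal weighting of residual, initial, and boundary terms are assumed for illustration):

```latex
% Heat equation with a source term f:
u_t(x,t) - \kappa\, u_{xx}(x,t) = f(x,t)

% Composite PINN loss over residual (r), initial (0) and boundary (b) points:
\mathcal{L}(\theta) =
  \frac{1}{N_r} \sum_{i=1}^{N_r} \big| \hat{u}_t - \kappa \hat{u}_{xx} - f \big|^2_{(x_i,t_i)}
+ \frac{1}{N_0} \sum_{j=1}^{N_0} \big| \hat{u}(x_j,0) - u_0(x_j) \big|^2
+ \frac{1}{N_b} \sum_{k=1}^{N_b} \big| \mathcal{B}[\hat{u}](x_k,t_k) \big|^2
```

The source term f enters only through the residual term, which is why standard PINN training carries over once f can be evaluated at the collocation points.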

11 pages, 4847 KiB  
Article
Optimal Confidence Regions for Weibull Parameters and Quantiles under Progressive Censoring
by Arturo J. Fernández
Algorithms 2023, 16(9), 427; https://doi.org/10.3390/a16090427 - 6 Sep 2023
Viewed by 800
Abstract
Confidence regions for the Weibull parameters with minimum areas among all those based on the Conditionality Principle are constructed using an equivalent diffuse Bayesian approach. The process is valid for scenarios involving standard failure and progressive censorship, and complete data. Optimal conditional confidence sets for two Weibull quantiles are also derived. Simulation-based algorithms are provided for computing the smallest-area regions with fixed confidence levels. Importantly, the proposed confidence sets satisfy the Sufficiency, Likelihood and Conditionality Principles in contrast to the unconditional regions based on maximum likelihood estimators and other insufficient statistics. The suggested perspective can be applied to parametric estimation and hypothesis testing, as well as to the determination of minimum-size confidence sets for other invariantly estimable functions of the Weibull parameters. A dataset concerning failure times of an insulating fluid is studied for illustrative and comparative purposes. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications IV)

19 pages, 3395 KiB  
Article
A Framework for Determining Collision Likelihood Using Continuous Friction Values in a Connected Vehicle Environment
by Qian Xie and Tae J. Kwon
Algorithms 2023, 16(9), 426; https://doi.org/10.3390/a16090426 - 6 Sep 2023
Cited by 1 | Viewed by 891
Abstract
Jurisdictions currently provide information on winter road surface conditions (RSC) through qualitative descriptors like bare and fully snow-covered. Ideally, these descriptors warn drivers beforehand about hazardous roads. In practice, however, discerning between safe and unsafe roads is sometimes unclear because intermediate RSC classes cover too wide a range of conditions. This study aims to resolve this safety ambiguity by proposing a framework for predicting collision likelihood within a road segment. The proposed framework converts road surface images into friction coefficients, which are then converted into continuous measurements through an interpolator. To find the best-performing interpolator, we evaluated geostatistical, machine learning, and hybrid interpolators. Ordinary kriging was found to have the lowest estimation error and to be the least sensitive to changes in the distance between measurements. After developing an interpolator, collision likelihood models were developed for segment lengths ranging from 0.5 km to 20 km. We chose the 6.5 km model based on its accuracy and intuitiveness. This model had 76.9% accuracy and included friction and AADT as predictors. It was also estimated that if the proposed framework were implemented in an environment with connected vehicles and intelligent transportation systems, it would offer significant safety improvements. Full article
(This article belongs to the Special Issue Optimization Algorithms in Logistics, Transportation, and SCM)
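Ordinary kriging, the interpolator the study selected, estimates friction at an unsampled location as a weighted average of nearby measurements, with the weights solving the standard variogram system (notation assumed for illustration):

```latex
% Ordinary kriging estimate at an unsampled location s_0:
\hat{z}(s_0) = \sum_{i=1}^{n} \lambda_i \, z(s_i), \qquad \sum_{i=1}^{n} \lambda_i = 1

% Weights from the variogram system (\mu is a Lagrange multiplier):
\sum_{j=1}^{n} \lambda_j \, \gamma(s_i, s_j) + \mu = \gamma(s_i, s_0),
\quad i = 1,\dots,n
```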

3 pages, 160 KiB  
Editorial
“Multi-Objective and Multi-Level Optimization: Algorithms and Applications”: Foreword by the Guest Editor
by Massimiliano Caramia
Algorithms 2023, 16(9), 425; https://doi.org/10.3390/a16090425 - 5 Sep 2023
Cited by 1 | Viewed by 1107
Abstract
Decision making in real-world applications frequently calls for taking into account multiple goals to come up with viable solutions [...] Full article
21 pages, 1090 KiB  
Article
Reddit CrosspostNet—Studying Reddit Communities with Large-Scale Crosspost Graph Networks
by Jan Sawicki, Maria Ganzha, Marcin Paprzycki and Yutaka Watanobe
Algorithms 2023, 16(9), 424; https://doi.org/10.3390/a16090424 - 4 Sep 2023
Cited by 1 | Viewed by 1656
Abstract
As the largest open social medium on the Internet, Reddit is widely studied in the scientific literature. Due to its structured form and division into topical subfora (subreddits), conducted research often concerns connections and interactions between users and/or whole, subreddit-structure-based communities. Overall, the relations between communities are most often studied by applying graph networks built with various construction algorithms. In this work, a novel approach to building and understanding the structure of Reddit is proposed. It is based on crossposts: posts that appeared on one subreddit and were then crossposted to another. After capturing one year of crossposts, a directed weighted graph network was created using seven million posts from over 10,000 of the most popular subreddits. Using graph network algorithms, its characteristics are captured and compared to similar studies. We identify the information "sinks" and "sources", i.e., the most active crossposting subreddits. Moreover, the obtained graph network metrics (the degree distribution, modeled with a power law, clustering, community detection, and connected-component structure) are compared to previous studies on Reddit networks, yielding consistent but also novel results. Finally, the relations between extensively studied subreddits (e.g., r/AITA, r/Parenting, r/politics) and new ones not accounted for in previous research are summarized, opening new paths for data-driven studies. Full article
(This article belongs to the Special Issue Graph Algorithms for Social Network Analysis)
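The graph construction itself is straightforward to sketch with networkx; the record format below is an illustrative stand-in for the captured crosspost data.

```python
import networkx as nx

crossposts = [  # (origin_subreddit, destination_subreddit), one tuple per crosspost
    ("r/pics", "r/aww"), ("r/pics", "r/aww"), ("r/aww", "r/Eyebleach"),
    ("r/politics", "r/news"), ("r/news", "r/politics"),
]

G = nx.DiGraph()
for src, dst in crossposts:
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1      # one weighted edge per subreddit pair
    else:
        G.add_edge(src, dst, weight=1)

# "Sources" crosspost out a lot; "sinks" receive a lot.
sources = sorted(G.out_degree(weight="weight"), key=lambda t: -t[1])[:3]
sinks = sorted(G.in_degree(weight="weight"), key=lambda t: -t[1])[:3]
print("sources:", sources)
print("sinks:  ", sinks)
print("weakly connected components:", nx.number_weakly_connected_components(G))
```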

15 pages, 3671 KiB  
Article
Elevating Univariate Time Series Forecasting: Innovative SVR-Empowered Nonlinear Autoregressive Neural Networks
by Juan D. Borrero and Jesus Mariscal
Algorithms 2023, 16(9), 423; https://doi.org/10.3390/a16090423 - 2 Sep 2023
Cited by 1 | Viewed by 1223
Abstract
Efforts across diverse domains like economics, energy, and agronomy have focused on developing predictive models for time series data. A spectrum of techniques, spanning from elementary linear models to intricate neural networks and machine learning algorithms, has been explored to achieve accurate forecasts. The hybrid ARIMA-SVR model has garnered attention due to its fusion of a foundational linear model with error correction capabilities. However, its use is limited to stationary time series data, posing a significant challenge. To overcome these limitations and drive progress, we propose the innovative NAR–SVR hybrid method. Unlike its predecessor, this approach breaks free from stationarity and linearity constraints, leading to improved model performance solely through historical data exploitation. This advancement significantly reduces the time and computational resources needed for precise predictions, a critical factor in univariate economic time series forecasting. We apply the NAR–SVR hybrid model in three scenarios: Spanish berry daily yield data from 2018 to 2021, daily COVID-19 cases in three countries during 2020, and the daily Bitcoin price time series from 2015 to 2020. Through extensive comparative analyses with other time series prediction models, our results substantiate that our novel approach consistently outperforms its counterparts. By transcending stationarity and linearity limitations, our hybrid methodology establishes a new paradigm for univariate time series forecasting, revolutionizing the field and enhancing predictive capabilities across various domains as highlighted in this study. Full article
(This article belongs to the Special Issue Machine Learning for Time Series Analysis)
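Without the authors' exact architecture, the two-stage NAR–SVR idea can still be sketched: a nonlinear autoregressive network learns the series from its own lags, and an SVR then models and corrects the network's residuals. The lag order, kernel, and hyper-parameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

def lagged(series, p):
    """Build (X, y) with p lagged values as predictors of the next value."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    return X, series[p:]

rng = np.random.default_rng(3)
t = np.arange(400.0)
series = np.sin(0.15 * t) + 0.05 * t + 0.1 * rng.normal(size=t.size)

p = 8
X, y = lagged(series, p)
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

nar = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
nar.fit(X_tr, y_tr)                                  # stage 1: NAR network
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01)
svr.fit(X_tr, y_tr - nar.predict(X_tr))              # stage 2: SVR on the residuals

hybrid = nar.predict(X_te) + svr.predict(X_te)       # corrected one-step forecast
print("NAR RMSE:   ", np.sqrt(np.mean((nar.predict(X_te) - y_te) ** 2)))
print("Hybrid RMSE:", np.sqrt(np.mean((hybrid - y_te) ** 2)))
```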
