Algorithms, Volume 17, Issue 11 (November 2024) – 11 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 426 KiB  
Article
Local Convergence Study for an Iterative Scheme with a High Order of Convergence
by Eulalia Martínez and Arleen Ledesma
Algorithms 2024, 17(11), 481; https://doi.org/10.3390/a17110481 - 25 Oct 2024
Abstract
In this paper, we address a key issue in Numerical Functional Analysis: the local convergence analysis of an iterative method of fourth order of convergence in Banach spaces, examining conditions on the operator and its derivatives near the solution that ensure convergence. Moreover, this approach provides a local convergence ball, within which initial estimates lead to guaranteed convergence, with details about the radius of the domain of convergence and estimates of error bounds. Next, we perform a comparative study of the Computational Efficiency Index (CEI) between the analyzed scheme and some known iterative methods of fourth order of convergence. Our ultimate goal is to use these theoretical findings to address practical problems in engineering and technology. Full article
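Editor's note: as a minimal sketch of what fourth-order local convergence means in practice, the Python snippet below runs Ostrowski's classical fourth-order two-step method on a scalar equation as a stand-in (the paper's specific scheme and Banach-space setting are not reproduced) and estimates the computational order of convergence (COC) from successive errors, which should approach 4 near the root.

```python
# Illustrative only: Ostrowski's classical fourth-order scheme, not the
# method analyzed in the paper.
import math

def ostrowski(f, df, x0, tol=1e-14, max_iter=20):
    """Two-step Ostrowski iteration; returns the list of iterates."""
    xs = [x0]
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                            # Newton predictor
        fy = f(y)
        x = y - fy * fx / ((fx - 2.0 * fy) * dfx)   # Ostrowski corrector
        xs.append(x)
        if abs(f(x)) < tol:
            break
    return xs

f = lambda x: x**3 - 2.0            # root: 2**(1/3)
df = lambda x: 3.0 * x**2
xs = ostrowski(f, df, x0=3.0)
root = 2.0 ** (1.0 / 3.0)
errs = [abs(x - root) for x in xs if abs(x - root) > 0]
# COC ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}); values near 4 confirm the order.
for i in range(2, len(errs)):
    coc = math.log(errs[i] / errs[i - 1]) / math.log(errs[i - 1] / errs[i - 2])
    print(f"iter {i}: error={errs[i]:.3e}, COC~{coc:.2f}")
```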
19 pages, 1851 KiB  
Article
Nonlinear Optimization and Adaptive Heuristics for Solving Irregular Object Packing Problems
by János D. Pintér, Ignacio Castillo and Frank J. Kampas
Algorithms 2024, 17(11), 480; https://doi.org/10.3390/a17110480 - 25 Oct 2024
Abstract
We review and present several challenging model classes arising in the context of finding optimized object packings (OP). Except for the smallest and/or simplest general OP model instances, it is not possible to find their exact (closed-form) solution. Most OP problem instances become increasingly difficult to handle even numerically, as the number of packed objects increases. Specifically, here we consider classes of general OP problems that can be formulated in the framework of nonlinear optimization. Research experience demonstrates that—in addition to utilizing general-purpose nonlinear optimization solver engines—the insightful exploitation of problem-specific heuristics can improve the quality of numerical solutions. We discuss scalable OP problem classes aimed at packing general circles, spheres, ellipses, and ovals, with numerical (conjectured) solutions of non-trivial model instances. In addition to their practical relevance, these models and their various extensions can also serve as constrained global optimization test challenges. Full article
(This article belongs to the Special Issue Facility Layout Optimization: Bridging Theory and Practice)
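Editor's note: a hedged sketch of the nonlinear-optimization framing (not the authors' models, solver engines, or heuristics) follows: it packs n unit circles into the smallest enclosing circle with a general-purpose NLP solver, using random multistart as a simple problem-specific heuristic; all names and parameters are illustrative.

```python
# Illustrative circle-packing NLP: minimize enclosing radius R subject to
# containment and pairwise non-overlap constraints.
import numpy as np
from scipy.optimize import minimize

def pack_unit_circles(n, starts=20, seed=0):
    rng = np.random.default_rng(seed)
    best = None

    def objective(v):            # v = [R, x1, y1, ..., xn, yn]
        return v[0]

    def constraints(v):          # all entries must be >= 0 at a feasible point
        R, c = v[0], v[1:].reshape(n, 2)
        cons = [R - 1.0 - np.linalg.norm(ci) for ci in c]    # containment
        for i in range(n):
            for j in range(i + 1, n):                        # no overlap
                cons.append(np.linalg.norm(c[i] - c[j]) - 2.0)
        return np.array(cons)

    for _ in range(starts):      # multistart: keep the best local solution
        v0 = np.concatenate([[2.0 * n], rng.uniform(-n, n, size=2 * n)])
        res = minimize(objective, v0, method="SLSQP",
                       constraints={"type": "ineq", "fun": constraints})
        if res.success and (best is None or res.fun < best.fun):
            best = res
    return best

best = pack_unit_circles(4)
if best is not None:
    print("enclosing radius ~", round(best.fun, 4))  # known optimum: 1 + sqrt(2)
```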
18 pages, 2464 KiB  
Article
Indetermsoft-Set-Based D* Extra Lite Framework for Resource Provisioning in Cloud Computing
by Bhargavi Krishnamurthy and Sajjan G. Shiva
Algorithms 2024, 17(11), 479; https://doi.org/10.3390/a17110479 - 25 Oct 2024
Abstract
Cloud computing is an immensely complex, huge-scale, and highly diverse computing platform that allows the deployment of highly resource-constrained scientific and personal applications. Resource provisioning in cloud computing is difficult because of the uncertainty associated with it in terms of dynamic elasticity, rapid performance change, large-scale virtualization, loosely coupled applications, the elastic escalation of user demands, etc. Hence, there is a need to develop an intelligent framework that allows effective resource provisioning under uncertainty. The Indetermsoft set is a promising mathematical model, an extension of the traditional soft set, designed to handle uncertain forms of data. The D* extra lite algorithm is a dynamic heuristic algorithm that reuses knowledge from past search experience to arrive at decisions. In this paper, the D* extra lite algorithm is combined with the Indetermsoft set to perform proficient resource provisioning under uncertainty. The experimental results show that the proposed algorithm performs promisingly on metrics such as power consumption, resource utilization, total execution time, and learning rate. Expected value analysis further validates the experimental results. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
16 pages, 318 KiB  
Article
List-Based Threshold Accepting Algorithm with Improved Neighbor Operator for 0–1 Knapsack Problem
by Liangcheng Wu, Kai Lin, Xiaoyu Lin and Juan Lin
Algorithms 2024, 17(11), 478; https://doi.org/10.3390/a17110478 - 25 Oct 2024
Abstract
The list-based threshold accepting (LBTA) algorithm is a sophisticated local search method that utilizes a threshold list to streamline the parameter tuning process of the traditional threshold accepting (TA) algorithm. This paper proposes an enhanced local search version of the LBTA algorithm specifically tailored to the 0–1 knapsack problem (0–1 KP). To maintain a dynamic threshold list, a feasible threshold updating strategy is designed to accept adaptive modifications during the search process. In addition, the algorithm incorporates an improved bit-flip operator designed to generate a neighboring solution with a controlled level of disturbance, thereby fostering exploration of the solution space. Each trial solution produced by this operator undergoes a repair phase using a hybrid greedy repair operator that incorporates both density-based and value-based add operators to facilitate optimization. The LBTA algorithm's performance was evaluated against several state-of-the-art metaheuristic approaches on a series of large-scale instances. The simulation results demonstrate that the LBTA algorithm outperforms or is competitive with other leading metaheuristics in the field. Full article
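Editor's note: a simplified, hedged sketch of list-based threshold accepting for the 0–1 KP follows; the paper's exact threshold-update rule, bit-flip operator, and hybrid value/density repair are not reproduced here, and a density-only greedy repair stands in for the hybrid operator.

```python
# Simplified LBTA sketch for the 0-1 knapsack problem (illustrative only).
import random

def lbta_knapsack(values, weights, capacity, list_size=20, iters=20000, seed=1):
    rng = random.Random(seed)
    n = len(values)
    dens = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def repair_and_fill(x):
        # Drop lowest-density items until feasible, then greedily add by density.
        w = sum(weights[i] for i in range(n) if x[i])
        for i in reversed(dens):
            if w <= capacity:
                break
            if x[i]:
                x[i] = 0; w -= weights[i]
        for i in dens:
            if not x[i] and w + weights[i] <= capacity:
                x[i] = 1; w += weights[i]
        return x

    def value(x):
        return sum(values[i] for i in range(n) if x[i])

    x = repair_and_fill([rng.randint(0, 1) for _ in range(n)])
    best, fx = x[:], value(x)
    thresholds = [fx * 0.05 * rng.random() for _ in range(list_size)]
    for _ in range(iters):
        y = x[:]
        for i in rng.sample(range(n), rng.randint(1, 3)):  # small bit-flip move
            y[i] ^= 1
        y = repair_and_fill(y)
        fy, t = value(y), max(thresholds)
        if fy > fx - t:                 # accept if within the current threshold
            if fy < fx:                 # worsening move: refresh the list entry
                thresholds.remove(t)
                thresholds.append(fx - fy)
            x, fx = y, fy
            if fx > value(best):
                best = x[:]
    return best, value(best)

vals = [60, 100, 120, 80, 30]
wts  = [10, 20, 30, 25, 5]
print(lbta_knapsack(vals, wts, capacity=50))   # optimum here is value 220
```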
19 pages, 1596 KiB  
Article
Investigating Brain Responses to Transcutaneous Electroacupuncture Stimulation: A Deep Learning Approach
by Tahereh Vasei, Harshil Gediya, Maryam Ravan, Anand Santhanakrishnan, David Mayor and Tony Steffert
Algorithms 2024, 17(11), 477; https://doi.org/10.3390/a17110477 - 24 Oct 2024
Abstract
This study investigates the neurophysiological effects of transcutaneous electroacupuncture stimulation (TEAS) on brain activity using advanced machine learning techniques. This work analyzed the electroencephalograms (EEGs) of 48 study participants to characterize the brain's response to different TEAS frequencies (2.5, 10, 80, and sham at 160 pulses per second (pps)) through pre-stimulation, during-stimulation, and post-stimulation phases. Our approach introduced several novel aspects. EEGNet, a convolutional neural network specifically designed for EEG signal processing, was utilized in this work, achieving over 95% classification accuracy in detecting brain responses to the various TEAS frequencies. Additionally, the classification accuracies across the pre-stimulation, during-stimulation, and post-stimulation phases remained consistently high (above 92%), indicating that EEGNet effectively captured the different time-based brain responses across the stimulation phases. Saliency maps were applied to identify the most critical EEG electrodes, potentially reducing the number needed without sacrificing accuracy. A phase-based analysis was conducted to capture time-based brain responses throughout the different stimulation phases. The robustness of EEGNet was assessed across demographic and clinical factors, including sex, age, and psychological states. Additionally, the responsiveness of different EEG frequency bands to TEAS was investigated. The results demonstrated that EEGNet excels in classifying EEG signals with high accuracy, underscoring its effectiveness in reliably classifying EEG responses to TEAS and enhancing its applicability in clinical and therapeutic settings. Notably, gamma band activity showed the highest sensitivity to TEAS, suggesting significant effects on higher cognitive functions. Saliency mapping revealed that a subset of electrodes (Fp1, Fp2, Fz, F7, F8, T3, T4) could achieve accurate classification, indicating potential for more efficient EEG setups. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (2nd Edition))
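Editor's note: for readers unfamiliar with EEGNet, below is a hedged PyTorch sketch of an EEGNet-style classifier (after Lawhern et al., 2018); the electrode count, window length, and hyperparameters are illustrative assumptions, not the study's configuration.

```python
# EEGNet-style architecture sketch: temporal conv -> depthwise spatial
# filter -> separable conv -> linear classifier.
import torch
import torch.nn as nn

class EEGNetStyle(nn.Module):
    def __init__(self, n_channels=19, n_samples=512, n_classes=4,
                 f1=8, d=2, f2=16, dropout=0.25):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution over each electrode's time series.
            nn.Conv2d(1, f1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(f1),
            # Depthwise spatial filter across electrodes.
            nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * d),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(dropout),
            # Separable convolution: depthwise temporal + pointwise mixing.
            nn.Conv2d(f1 * d, f1 * d, (1, 16), padding=(0, 8),
                      groups=f1 * d, bias=False),
            nn.Conv2d(f1 * d, f2, 1, bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(dropout),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened feature size
            n_feats = self.features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.classifier = nn.Linear(n_feats, n_classes)

    def forward(self, x):      # x: (batch, 1, electrodes, time samples)
        return self.classifier(self.features(x))

model = EEGNetStyle()
logits = model(torch.randn(2, 1, 19, 512))   # e.g., 4 TEAS conditions
print(logits.shape)                          # torch.Size([2, 4])
```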
37 pages, 5770 KiB  
Article
A Review on Resource-Constrained Embedded Vision Systems-Based Tiny Machine Learning for Robotic Applications
by Miguel Beltrán-Escobar, Teresa E. Alarcón, Jesse Y. Rumbo-Morales, Sonia López, Gerardo Ortiz-Torres and Felipe D. J. Sorcia-Vázquez
Algorithms 2024, 17(11), 476; https://doi.org/10.3390/a17110476 - 24 Oct 2024
Abstract
Low-cost embedded systems are evolving at an exponential pace; likewise, their use in robotics applications aims to achieve critical task execution by implementing sophisticated control and computer vision algorithms. We review the state-of-the-art strategies available for Tiny Machine Learning (TinyML) implementation to provide a complete overview of various existing embedded vision and control systems. Our discussion divides the article into four critical aspects that high-cost and low-cost embedded systems must address to execute real-time control and image processing tasks using TinyML techniques: Hardware Architecture, Vision System, Power Consumption, and the Embedded Software Platform development environment. The advantages and disadvantages of the reviewed systems are presented, followed by their prospects for the next ten years. A basic TinyML implementation for an embedded vision application using three low-cost embedded systems, the Raspberry Pi Pico, ESP32, and Arduino Nano 33 BLE Sense, is presented for performance analysis. Full article
(This article belongs to the Special Issue AI and Computational Methods in Engineering and Science)
15 pages, 1321 KiB  
Article
ACT-FRCNN: Progress Towards Transformer-Based Object Detection
by Sukana Zulfqar, Zenab Elgamal, Muhammad Azam Zia, Abdul Razzaq, Sami Ullah and Hussain Dawood
Algorithms 2024, 17(11), 475; https://doi.org/10.3390/a17110475 - 23 Oct 2024
Abstract
Maintaining a high input resolution is crucial for more complex tasks like detection or segmentation to ensure that models can adequately identify and reflect fine details in the output. This study aims to reduce the computation costs associated with high-resolution input by using a variant of the transformer, known as the Adaptive Clustering Transformer (ACT). The proposed model, named ACT-FRCNN, integrates ACT with a Faster Region-Based Convolutional Neural Network (FRCNN) as the detection task head. In this paper, we propose a method to improve the detection framework, resulting in better performance on out-of-domain images, improved object identification, and reduced dependence on non-maximum suppression. ACT-FRCNN represents a significant step in the application of transformer models to challenging visual tasks like object detection, laying the foundation for future work using transformer models. The performance of ACT-FRCNN was evaluated on a variety of well-known datasets, including BSDS500, NYUDv2, and COCO. The results indicate that ACT-FRCNN reduces over-detection errors and improves the detection of large objects. The findings from this research have practical implications for object detection and other computer vision tasks. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
17 pages, 9452 KiB  
Article
GLMI: An Efficient Spatiotemporal Index Leveraging Geohash and Piecewise Linear Models for Optimized Query Performance
by Kun Chen, Gang Liu, Genshen Chen, Zhengping Weng and Qiyu Chen
Algorithms 2024, 17(11), 474; https://doi.org/10.3390/a17110474 - 22 Oct 2024
Abstract
Spatiotemporal big data contain information in multiple dimensions, such as space and time, and are characterized by large volume, intricate spatiotemporal relationships, and uneven spatiotemporal distribution. The index structure is one of the most important technologies for improving data analysis and query workloads, but traditional indexes are difficult to adjust dynamically as data density changes, resulting in increased maintenance costs and retrieval complexity. At the same time, maintaining the proximity of spatiotemporal data in the spatial and temporal dimensions is crucial for efficient spatiotemporal analysis. To address these challenges, this paper proposes a learned index method, GLMI (Geohash and piecewise linear model-based index for spatiotemporal data). GLMI uses dynamic space partitioning based on the Hilbert curve to reduce the impact of data skew on index performance. In the time dimension, a piecewise linear model is constructed using the ShrinkingCone algorithm, and a buffer is designed to support the fast writing of spatiotemporal data. On real traffic itinerary and trajectory record datasets, GLMI has smaller space consumption and shorter construction time than current mainstream traditional high-dimensional indexes, the ZM index, and high-dimensional learned indexes, while also holding an advantage in query efficiency. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
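Editor's note: the piecewise-linear-model idea behind learned indexes can be illustrated with a hedged sketch: a simplified ShrinkingCone-style segmentation over sorted keys plus an error-bounded lookup. GLMI's Geohash/Hilbert space partitioning, write buffer, and exact algorithmic details are not reproduced here.

```python
# Simplified learned-index sketch: fit error-bounded line segments over
# sorted keys, then predict-and-scan at query time.
import bisect

def build_segments(keys, eps=4):
    """Greedily fit segments so each key's predicted position is within
    eps of its true position (keys must be sorted, distinct)."""
    segs = []                        # (start_key, start_idx, slope)
    i0, k0 = 0, keys[0]
    lo, hi = 0.0, float("inf")       # feasible slope cone
    for i in range(1, len(keys)):
        dk = keys[i] - k0
        nlo = max(lo, (i - eps - i0) / dk)
        nhi = min(hi, (i + eps - i0) / dk)
        if nlo > nhi:                # cone collapsed: close the segment
            segs.append((k0, i0, (lo + hi) / 2 if hi < float("inf") else lo))
            i0, k0, lo, hi = i, keys[i], 0.0, float("inf")
        else:
            lo, hi = nlo, nhi
    segs.append((k0, i0, (lo + hi) / 2 if hi < float("inf") else lo))
    return segs

def lookup(keys, segs, q, eps=4):
    """Predict q's position from its segment, then scan a small window."""
    j = bisect.bisect_right([s[0] for s in segs], q) - 1
    k0, i0, slope = segs[max(j, 0)]
    pred = int(i0 + slope * (q - k0))
    # +/- (eps + 1): one extra slot of slack for integer truncation.
    for i in range(max(0, pred - eps - 1), min(len(keys), pred + eps + 2)):
        if keys[i] == q:
            return i
    return -1

keys = sorted({x * x % 10007 for x in range(1, 2000)})
segs = build_segments(keys)
print(len(segs), "segments;", lookup(keys, segs, keys[123]) == 123)
```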
20 pages, 11204 KiB  
Article
Estimating the Spectral Response of Eight-Band MSFA One-Shot Cameras Using Deep Learning
by Pierre Gouton, Kacoutchy Jean Ayikpa and Diarra Mamadou
Algorithms 2024, 17(11), 473; https://doi.org/10.3390/a17110473 - 22 Oct 2024
Abstract
Eight-band one-shot MSFA (multispectral filter array) cameras are innovative technologies used to capture multispectral images by acquiring multiple spectral bands simultaneously. They thus make it possible to collect detailed information on the spectral properties of observed scenes economically. These cameras are widely used for object detection, material analysis, and agronomy. The evolution of one-shot MSFA cameras from 8 to 32 bands makes it possible to obtain much more detailed spectral data, which is crucial for applications requiring delicate and precise analysis of the spectral properties of the observed scenes. Our study aims to develop deep learning models that estimate the spectral response of this type of camera and provide images close to the spectral properties of objects. First, we prepare our experimental data by projecting them to reflect the characteristics of our camera. Next, we harness the power of deep super-resolution neural networks, such as very deep super-resolution (VDSR), Laplacian pyramid super-resolution networks (LapSRN), and deeply recursive convolutional networks (DRCN), which we adapt to approximate the spectral response. These models learn the complex relationship between 8-band multispectral data from the camera and 31-band multispectral data from the multi-object database, enabling accurate and efficient conversion. Finally, we evaluate image quality using metrics such as the loss function, PSNR, and SSIM. The model evaluation revealed that DRCN outperforms the others on the crucial performance metrics: it achieved the lowest loss (0.0047) and stood out in image quality, with a PSNR of 25.5059, an SSIM of 0.8355, and a SAM of 0.13215, indicating better preservation of details and textures. Additionally, DRCN showed the lowest RMSE (0.05849) and MAE (0.0415) values, confirming its ability to minimize reconstruction errors more effectively than VDSR and LapSRN. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (2nd Edition))
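Editor's note: as a hedged, minimal stand-in for the band-expansion task (the paper's adapted VDSR, LapSRN, and DRCN models are far deeper), the sketch below maps 8-band input to 31-band output with a small CNN and computes the PSNR metric used in the evaluation; the architecture and shapes are illustrative only.

```python
# Minimal 8-band -> 31-band spectral mapping network, plus PSNR.
import torch
import torch.nn as nn

class BandExpander(nn.Module):
    def __init__(self, in_bands=8, out_bands=31, width=64, depth=4):
        super().__init__()
        layers = [nn.Conv2d(in_bands, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, out_bands, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):        # x: (batch, 8, H, W), values in [0, 1]
        return self.net(x)

def psnr(pred, target, peak=1.0):
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(peak ** 2 / mse)

model = BandExpander()
x = torch.rand(2, 8, 32, 32)     # simulated 8-band patches
y = torch.rand(2, 31, 32, 32)    # 31-band reference patches
out = model(x)
print(out.shape, f"PSNR={psnr(out, y):.2f} dB")
```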
26 pages, 3654 KiB  
Article
An Innovative Enhanced JAYA Algorithm for the Optimization of Continuous and Discrete Problems
by Jalal Jabbar Bairooz and Farhad Mardukhi
Algorithms 2024, 17(11), 472; https://doi.org/10.3390/a17110472 - 22 Oct 2024
Abstract
Metaheuristic algorithms have gained popularity in the past decade due to their remarkable ability to address various optimization challenges. Among these, the JAYA algorithm has emerged as a recent contender that demonstrates strong performance across different optimization problems, largely attributed to its simplicity. However, real-world problems have become increasingly complex, creating a demand for more robust and effective solutions to tackle these intricate challenges and achieve outstanding results. This article proposes an enhanced JAYA (EJAYA) method that addresses the algorithm's inherent shortcomings, resulting in improved convergence and search capabilities across diverse problems. The current study evaluates the performance of the proposed optimization method on both continuous and discrete problems. Initially, EJAYA is applied to solve 20 prominent test functions and is validated by comparison with other contemporary algorithms in the literature, including moth–flame optimization, particle swarm optimization, the dragonfly algorithm, and the sine–cosine algorithm. The effectiveness of the proposed approach in discrete scenarios is tested using feature selection and compared to existing optimization strategies. Evaluations across various scenarios demonstrate that the proposed enhancements significantly improve the JAYA algorithm's performance, facilitating escape from local minima, achieving faster convergence, and expanding the search capabilities. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
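Editor's note: for reference, the core (un-enhanced) JAYA update rule is simple enough to sketch in a few lines of Python; the EJAYA enhancements proposed in the paper are not reproduced here, and the test function and parameters below are illustrative.

```python
# Baseline JAYA: move each candidate toward the best solution and away
# from the worst, with greedy one-to-one replacement.
import numpy as np

def jaya(f, bounds, pop=30, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0], float), np.array(bounds[1], float)
    X = rng.uniform(lo, hi, size=(pop, lo.size))
    fX = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        best, worst = X[fX.argmin()], X[fX.argmax()]
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        # JAYA rule: X' = X + r1*(best - |X|) - r2*(worst - |X|)
        Y = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)),
                    lo, hi)
        fY = np.apply_along_axis(f, 1, Y)
        improved = fY < fX                  # keep only improving moves
        X[improved], fX[improved] = Y[improved], fY[improved]
    return X[fX.argmin()], fX.min()

sphere = lambda x: float(np.sum(x ** 2))   # minimum 0 at the origin
x_best, f_best = jaya(sphere, ([-5] * 10, [5] * 10))
print(f"best f = {f_best:.3e}")
```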
19 pages, 6376 KiB  
Article
Deep Learning Approach for Arm Fracture Detection Based on an Improved YOLOv8 Algorithm
by Gerardo Meza, Deepak Ganta and Sergio Gonzalez Torres
Algorithms 2024, 17(11), 471; https://doi.org/10.3390/a17110471 - 22 Oct 2024
Abstract
Artificial intelligence (AI)-assisted computer vision is an evolving field in medical imaging. However, accuracy and precision suffer when using the existing AI models for small, easy-to-miss objects such as bone fractures, which affects the models’ applicability and effectiveness in a clinical setting. The proposed integration of the Hybrid-Attention (HA) mechanism into the YOLOv8 architecture offers a robust solution to improve accuracy, reliability, and speed in medical imaging applications. Experimental results demonstrate that our HA-modified YOLOv8 models achieve a 20% higher Mean Average Precision (mAP 50) and improved processing speed in arm fracture detection. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
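Editor's note: the paper's Hybrid-Attention design is not detailed in the abstract; as a hedged illustration of what a hybrid (channel + spatial) attention block can look like, here is a CBAM-style PyTorch module. Its internals and any YOLOv8 insertion points are assumptions, not the authors' HA mechanism.

```python
# Generic channel + spatial attention block in the CBAM style.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per channel.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: 2-channel (avg, max) map -> 1-channel mask.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca                            # reweight channels
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa                         # reweight spatial locations

feat = torch.randn(1, 256, 40, 40)            # e.g., a neck feature map
print(HybridAttention(256)(feat).shape)       # refined map, same shape
```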