Algorithms, Volume 17, Issue 12 (December 2024) – 59 articles

Cover Story: Simulating the melting of ice layers is a complex problem, as the computational domains—represented by the red region for water and the blue region for ice—evolve step by step. Rather than employing the conventional finite element method, this work proposes an optimization-based finite difference discretization. The key advantage of this approach lies in its ability to vectorize the assembly procedure for the discretization matrix, significantly reducing computational time at each simulation step. The finite difference scheme extends the standard nine-point Laplacian approximation for rectangular meshes. Using an optimization process based on the least-squares solution of overdetermined systems, the method calculates finite difference weights for general quadrilateral meshes.
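The cover story's weight computation can be illustrated in one dimension, where requiring exactness on monomials determines the finite difference weights for a second derivative. The sketch below (function name and stencil are illustrative, stdlib only) solves the small moment system directly; on the general quadrilateral meshes described above, the analogous system becomes overdetermined and is solved in the least-squares sense.

```python
def fd_weights_second_derivative(points):
    """Solve the moment system V w = d so that sum_i w_i * u(x_i)
    reproduces u''(0) exactly for all polynomials up to degree 2.
    Small dense system solved by Gaussian elimination (stdlib only)."""
    n = len(points)
    # Row k enforces exactness on the monomial x**k; the second derivative
    # of x**k at 0 is 2 for k == 2 and 0 otherwise.
    A = [[x ** k for x in points] for k in range(n)]
    b = [0.0, 0.0, 2.0]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))
        w[r] = s / A[r][r]
    return w

h = 0.1
print(fd_weights_second_derivative([-h, 0.0, h]))  # ≈ [100.0, -200.0, 100.0]
```

On the uniform stencil {-h, 0, h} this recovers the classical weights 1/h², -2/h², 1/h².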
33 pages, 9160 KiB  
Article
Optimized Analytical–Numerical Procedure for Ultrasonic Sludge Treatment for Agricultural Use
by Filippo Laganà, Salvatore A. Pullano, Giovanni Angiulli and Mario Versaci
Algorithms 2024, 17(12), 592; https://doi.org/10.3390/a17120592 - 21 Dec 2024
Abstract
This paper presents an integrated approach based on physical–mathematical models and numerical simulations to optimize sludge treatment using ultrasound. The main objective is to improve the efficiency of the purification system by reducing the weight and moisture of the purification sludge, thereby ensuring regulatory compliance and environmental sustainability. A coupled temperature–humidity model, formulated with partial differential equations, describes the material's thermal and moisture evolution during treatment. The numerical solution, implemented by the finite element method (FEM), allows simulation of the system behavior and optimization of the operating parameters. Experimental results confirm that ultrasonic treatment reduces the moisture content of sludge by up to 20% and improves its stability, making it suitable for agricultural applications or further treatment. Functional controls of the sonication correlate with the observed reduction of water content in the sludge. Ultrasound treatment has been shown to decrease the specific weight of the sludge sample both in pretreatment and in treatment, thereby improving stabilization. Across the experimental conditions, the weight of the sludge is reduced by a maximum of about 50%. Processing the sludge transforms waste into a resource for the agricultural sector. Treatment processes have been optimized with low-energy operating principles. In addition to utilizing energy-harvesting technology, plant operating processes have been optimized, targeting the aeration of activated sludge, which accounts for approximately 55% of consumption. Finally, an extended analysis of ultrasonic wave propagation is proposed. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 3rd Edition)
25 pages, 1511 KiB  
Article
Iterative Application of UMAP-Based Algorithms for Fully Synthetic Healthcare Tabular Data Generation
by Carla Lázaro and Cecilio Angulo
Algorithms 2024, 17(12), 591; https://doi.org/10.3390/a17120591 - 21 Dec 2024
Abstract
Building on a previously developed partially synthetic data generation algorithm utilizing data visualization techniques, this study extends the novel algorithm to generate fully synthetic tabular healthcare data. In this enhanced form, the algorithm serves as an alternative to conventional methods based on Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs). By iteratively applying the original methodology, the adapted algorithm employs UMAP (Uniform Manifold Approximation and Projection), a dimensionality reduction technique, to validate generated samples through low-dimensional clustering. This approach has been successfully applied to three healthcare domains: prostate cancer, breast cancer, and cardiovascular disease. The generated synthetic data have been rigorously evaluated for fidelity and utility. Results show that the UMAP-based algorithm outperforms GAN- and VAE-based generation methods across different scenarios. In fidelity assessments, it achieved smaller maximum distances between the cumulative distribution functions of real and synthetic data for different attributes. In utility evaluations, the UMAP-based synthetic datasets enhanced machine learning model performance, particularly in classification tasks. In conclusion, this method represents a robust solution for generating secure, high-quality synthetic healthcare data, effectively addressing data scarcity challenges. Full article
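The fidelity metric mentioned above (the maximum distance between the cumulative distribution functions of a real and a synthetic attribute) is a Kolmogorov–Smirnov-style statistic. A minimal stdlib sketch, with illustrative naming:

```python
def max_cdf_distance(real, synthetic):
    """Kolmogorov-Smirnov-style statistic: the maximum vertical gap
    between the empirical CDFs of two samples (smaller = higher
    fidelity of the synthetic attribute)."""
    xs = sorted(set(real) | set(synthetic))

    def ecdf(sample, x):
        # Fraction of the sample at or below x.
        return sum(v <= x for v in sample) / len(sample)

    return max(abs(ecdf(real, x) - ecdf(synthetic, x)) for x in xs)
```

Identical samples give distance 0; fully separated samples give distance 1.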
14 pages, 274 KiB  
Article
Linear Matrix Inequality-Based Design of Structured Sparse Feedback Controllers for Sensor and Actuator Networks
by Yuta Kawano, Koichi Kobayashi and Yuh Yamashita
Algorithms 2024, 17(12), 590; https://doi.org/10.3390/a17120590 - 21 Dec 2024
Abstract
A sensor and actuator network (SAN) is a control system in which many sensors and actuators are connected through a communication network. In a SAN with redundant sensors and actuators, it is important to consider which sensors and actuators to use in control design. Depending on the application, it is also important to consider not only the choice of sensors/actuators but also that of the communication channels to which they are connected. In this paper, based on a linear matrix inequality (LMI) technique, one of the fundamental tools in systems and control theory, we propose a design method for structured sparse feedback controllers. First, the sparse reconstruction problems for vectors and matrices are summarized. Next, two design problems are formulated, and an LMI-based solution method is proposed. Finally, two numerical examples are presented to show the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Optimization Methods for Advanced Manufacturing)
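The vector case of the sparse reconstruction problems the abstract summarizes is commonly posed as l1-regularized least squares. The sketch below uses iterative soft-thresholding (ISTA) as a generic illustration of sparse recovery; it is not the paper's LMI-based formulation, and all names are illustrative.

```python
def soft_threshold(v, t):
    """Elementwise shrinkage operator of the l1 penalty."""
    return [max(abs(x) - t, 0.0) * (1 if x >= 0 else -1) for x in v]

def ista(A, y, lam=0.1, step=None, iters=500):
    """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    m, n = len(A), len(A[0])
    if step is None:
        # Crude safe step size: 1 / squared Frobenius norm bounds 1/L.
        step = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(iters):
        # Gradient of the least-squares term: A^T (Ax - y).
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x
```

The shrinkage step is what drives small coefficients exactly to zero, producing the structured sparsity the design method seeks.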
38 pages, 5759 KiB  
Article
Hybrid Arctic Puffin Algorithm for Solving Design Optimization Problems
by Hussam N. Fakhouri, Mohannad S. Alkhalaileh, Faten Hamad, Najem N. Sirhan and Sandi N. Fakhouri
Algorithms 2024, 17(12), 589; https://doi.org/10.3390/a17120589 - 20 Dec 2024
Abstract
This study presents an innovative hybrid evolutionary algorithm that combines the Arctic Puffin Optimization (APO) algorithm with the JADE dynamic differential evolution framework. The APO algorithm, inspired by the foraging patterns of Arctic puffins, suffers from certain limitations, including a tendency to converge prematurely at local minima, a slow rate of convergence, and an insufficient equilibrium between the exploration and exploitation processes. To mitigate these drawbacks, the proposed hybrid approach incorporates the dynamic features of JADE, which enhances the exploration–exploitation trade-off through adaptive parameter control and the use of an external archive. By synergizing the effective search mechanisms modeled after the foraging behavior of Arctic puffins with JADE's advanced dynamic strategies, this integration significantly improves global search efficiency and accelerates the convergence process. The effectiveness of APO-JADE is demonstrated through benchmark tests on the well-known IEEE CEC 2022 unimodal and multimodal functions, showing superior performance over 32 competing optimization algorithms. Additionally, APO-JADE is applied to complex engineering design problems, including the optimization of engineering structures and mechanisms, revealing its practical utility in navigating the challenging, multi-dimensional search spaces typically encountered in real-world engineering problems. The results confirm that APO-JADE outperformed all of the competing optimizers, effectively addressing the challenges of unknown and complex search areas in engineering design optimization. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
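JADE's adaptive parameter control, which the hybrid borrows, pulls the location parameter of the mutation-factor distribution toward the Lehmer mean of the factors that produced improved offspring. A minimal sketch of that update rule (simplified: the full JADE scheme also adapts the crossover rate and samples F from a Cauchy distribution):

```python
def jade_update_mu_f(mu_f, successful_f, c=0.1):
    """JADE-style adaptation: move mu_F toward the Lehmer mean of the
    mutation factors F that produced improved offspring this generation.
    The Lehmer mean (sum F^2 / sum F) biases toward larger, more
    exploratory factors."""
    if not successful_f:
        return mu_f  # no successes: keep the current location parameter
    lehmer = sum(f * f for f in successful_f) / sum(successful_f)
    return (1 - c) * mu_f + c * lehmer
```

With learning rate c = 0.1, a single successful factor of 1.0 nudges mu_F from 0.5 to 0.55.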
16 pages, 6457 KiB  
Article
Intelligent Fault Diagnosis for Rotating Mechanical Systems: An Improved Multiscale Fuzzy Entropy and Support Vector Machine Algorithm
by Yuxin Pan, Yinsheng Chen, Xihong Fei, Kang Wang, Tian Fang and Jing Wang
Algorithms 2024, 17(12), 588; https://doi.org/10.3390/a17120588 - 20 Dec 2024
Abstract
Rotating mechanical systems (RMSs) are widely applied in various industrial fields. Intelligent fault diagnosis technology plays a significant role in improving the reliability and safety of industrial equipment. A new algorithm based on improved multiscale fuzzy entropy and support vector machine (IMFE-SVM) is proposed for the automatic diagnosis of various fault types in elevator rotating mechanical systems. First, the empirical mode decomposition (EMD) method is utilized to construct a decomposition model of the vibration data for the extraction of relevant parameters related to the fault feature. Secondly, the improved multiscale fuzzy entropy (IMFE) model is employed, where the scale factor of the multiscale fuzzy entropy (MFE) is extended to multiple subsequences to resolve the problem of insufficient coarse granularity in the traditional MFE. Subsequently, linear discriminant analysis (LDA) is applied to reduce the dimensionality of the extracted features in order to overcome the problem of feature redundancy. Finally, a support vector machine (SVM) model is utilized to construct the optimal hyperplane for the diagnosis of fault types. Experimental results indicate that the proposed method outperforms other state-of-the-art methods in the fault diagnosis of elevator systems. Full article
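The coarse-granularity issue mentioned above is commonly addressed by building, at each scale factor, all shifted coarse-grained subsequences rather than only one. A stdlib sketch of that coarse-graining step (illustrative of the idea, not the paper's exact IMFE definition):

```python
def coarse_grain_all_offsets(series, tau):
    """At scale tau, build all tau shifted coarse-grained subsequences
    (non-overlapping window averages) instead of only the offset-0
    sequence used by traditional multiscale entropy. The entropy at
    scale tau is then computed over all these subsequences."""
    out = []
    for offset in range(tau):
        sub = series[offset:]
        out.append([sum(sub[i:i + tau]) / tau
                    for i in range(0, len(sub) - tau + 1, tau)])
    return out
```

For the series [1, 2, 3, 4, 5, 6] at scale 2, this yields the two subsequences [1.5, 3.5, 5.5] and [2.5, 4.5], so no samples are wasted to the coarse-graining offset.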
22 pages, 638 KiB  
Article
Unfolded Algorithms for Deep Phase Retrieval
by Naveed Naimipour, Shahin Khobahi, Mojtaba Soltanalian, Haleh Safavi and Harry C. Shaw
Algorithms 2024, 17(12), 587; https://doi.org/10.3390/a17120587 - 20 Dec 2024
Abstract
Exploring the idea of phase retrieval has been intriguing researchers for decades due to its appearance in a wide range of applications. The task of a phase retrieval algorithm is typically to recover a signal from linear phase-less measurements. In this paper, we approach the problem by proposing a hybrid model-based, data-driven deep architecture referred to as Unfolded Phase Retrieval (UPR), which exhibits significant potential in improving the performance of state-of-the-art data-driven and model-based phase retrieval algorithms. The proposed method benefits from the versatility and interpretability of well-established model-based algorithms while simultaneously benefiting from the expressive power of deep neural networks. In particular, our proposed model-based deep architecture is applied to the conventional phase retrieval problem (via the incremental reshaped Wirtinger flow algorithm) and the sparse phase retrieval problem (via the sparse truncated amplitude flow algorithm), showing immense promise in both cases. Furthermore, we consider a joint design of the sensing matrix and the signal processing algorithm and utilize the deep unfolding technique in the process. Our numerical results illustrate the effectiveness of such hybrid model-based and data-driven frameworks and showcase the untapped potential of data-aided methodologies to enhance existing phase retrieval algorithms. Full article
(This article belongs to the Special Issue Machine Learning for Edge Computing)
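For intuition, a Wirtinger-flow-style gradient step for a real-valued phase retrieval loss can be sketched as follows. This is a generic illustration under assumed notation (measurements y_i = (a_i . x*)^2, so only squared magnitudes are observed), not the incremental reshaped Wirtinger flow unfolded in the paper:

```python
def wirtinger_flow_step(A, y, x, lr=0.01):
    """One gradient step on the (real-valued) phase retrieval loss
    (1/m) * sum_i ((a_i . x)^2 - y_i)^2. The measurements are phaseless:
    only the squared magnitude of each projection is known."""
    m, n = len(A), len(x)
    g = [0.0] * n
    for a, yi in zip(A, y):
        inner = sum(ai * xi for ai, xi in zip(a, x))
        coef = 4 * (inner * inner - yi) * inner / m
        for j in range(n):
            g[j] += coef * a[j]
    return [xj - lr * gj for xj, gj in zip(x, g)]
```

Because the loss depends only on squared projections, the iterate can converge to either +x* or -x*; the global sign is unrecoverable, which is the defining ambiguity of phase retrieval. Deep unfolding turns a fixed number of such steps into network layers with learnable step sizes.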
20 pages, 8596 KiB  
Article
Data Assimilated Atmospheric Forecasts for Digital Twin of the Ocean Applications: A Case Study in the South Aegean, Greece
by Antonios Parasyris, Vassiliki Metheniti, George Alexandrakis, Georgios V. Kozyrakis and Nikolaos A. Kampanis
Algorithms 2024, 17(12), 586; https://doi.org/10.3390/a17120586 - 20 Dec 2024
Abstract
This study investigated advancements in atmospheric forecasting by integrating real-time observational data into the Weather Research and Forecasting (WRF) model through the WRF-Data Assimilation (WRF-DA) framework. By refining atmospheric models, we aimed to improve regional high-resolution wave and hydrodynamic forecasts essential for environmental management. Focused on southern Greece, including Crete, the study applied a 3D-Var assimilation technique within WRF, downscaling forecasting data from the Global Forecast System (GFS) to resolutions of 9 km and 3 km. The results showed a 4.7% improvement in wind speed predictions, with significant gains during forecast hours 26–72, enhancing model accuracy across METAR validation locations. These results underscore the positive impact of the integration of additional observational data on model accuracy. This study also highlights the utility of refined atmospheric models for real-world applications through their use in forcing ocean circulation and wave models and subsequent Digital Twin of the Ocean applications. Two such applications—optimal ship routing to minimize CO2 emissions and oil spill trajectory forecasting to mitigate marine pollution—demonstrate the practical utility of improved models through what-if scenarios in easily deployable, containerized formats. Full article
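The 3D-Var assimilation applied above balances a background forecast against observations, each weighted by its error variance. In the scalar case the cost-function minimizer has a closed form, sketched here (an illustrative toy, not the WRF-DA implementation):

```python
def three_dvar_scalar(xb, b_var, obs, r_var):
    """Scalar 3D-Var analysis: minimize the cost
    J(x) = (x - xb)^2 / b_var + sum_i (y_i - x)^2 / r_var,
    i.e. the precision-weighted mean of the background xb and the
    observations. More trusted (lower-variance) sources pull harder."""
    num = xb / b_var + sum(obs) / r_var
    den = 1.0 / b_var + len(obs) / r_var
    return num / den
```

With equal variances, one observation splits the difference with the background; adding observations shifts the analysis further toward the data, mirroring how assimilating METAR-validated observations corrected the downscaled GFS fields.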
18 pages, 726 KiB  
Article
A Hybrid FMEA-ROC-CoCoSo Approach for Improved Risk Assessment and Reduced Complexity in Failure Mode Prioritization
by Vitor Anes and António Abreu
Algorithms 2024, 17(12), 585; https://doi.org/10.3390/a17120585 - 19 Dec 2024
Abstract
This paper proposes a novel hybrid model that integrates failure mode and effects analysis (FMEA), Rank Order Centroid (ROC), and Combined Compromise Solution (CoCoSo) to improve risk assessment and prioritization of failure modes. A case study in the healthcare sector is conducted to validate the effectiveness of the proposed model. ROC is used to assign weights to the FMEA criteria (severity, occurrence, and detectability). CoCoSo is then applied to create a robust ranking of failure modes by considering multiple criteria simultaneously. The results of the case study show that the hybrid FMEA-ROC-CoCoSo model improves the accuracy and objectivity of risk prioritization. It effectively identifies critical failure modes, outperforming traditional FMEA. The hybrid approach not only improves risk management decision making, leading to better mitigation strategies and higher system reliability, but also reduces the complexity typically found in hybrid FMEA models. This model provides a more comprehensive risk assessment tool suitable for application in different industries. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (3rd Edition))
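The ROC weighting used for the FMEA criteria has a simple closed form: the criterion ranked i out of n (1 = most important) receives w_i = (1/n) * sum_{k=i}^{n} 1/k. A direct implementation:

```python
def roc_weights(n):
    """Rank Order Centroid weights: w_i = (1/n) * sum_{k=i}^{n} 1/k for
    the criterion ranked i out of n. Only the ordering of criteria is
    needed; the weights always sum to 1 and decrease with rank."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n
            for i in range(1, n + 1)]
```

For the three FMEA criteria (severity, occurrence, detectability in rank order), this yields approximately 0.611, 0.278, and 0.111.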
21 pages, 5152 KiB  
Article
GAGAN: Enhancing Image Generation Through Hybrid Optimization of Genetic Algorithms and Deep Convolutional Generative Adversarial Networks
by Despoina Konstantopoulou, Paraskevi Zacharia, Michail Papoutsidakis, Helen C. Leligou and Charalampos Patrikakis
Algorithms 2024, 17(12), 584; https://doi.org/10.3390/a17120584 - 19 Dec 2024
Abstract
Generative Adversarial Networks (GANs) are highly effective for generating realistic images, yet their training can be unstable due to challenges such as mode collapse and oscillatory convergence. In this paper, we propose a novel hybrid optimization method that integrates Genetic Algorithms (GAs) to improve the training process of Deep Convolutional GANs (DCGANs). Specifically, GAs are used to evolve the discriminator’s weights, complementing the gradient-based learning typically employed in GANs. The proposed GAGAN model is trained on the CelebA dataset, using 2000 images, to generate 128 × 128 images, with the generator learning to produce realistic faces from random latent vectors. The discriminator, which classifies images as real or fake, is optimized not only through standard backpropagation, but also through a GA framework that evolves its weights via crossover, mutation, and selection processes. This hybrid method aims to enhance convergence stability and boost image quality by balancing local search from gradient-based methods with the global search capabilities of GAs. Experiments show that the proposed approach reduces generator loss and improves image fidelity, demonstrating that evolutionary algorithms can effectively complement deep learning techniques. This work opens new avenues for optimizing GAN training and enhancing performance in generative models. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
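The GA loop over discriminator weights can be sketched generically as selection, crossover, and mutation on flat weight vectors. The operators below are illustrative stand-ins, not the paper's exact configuration:

```python
import random

def evolve_weights(population, fitness, mut_rate=0.05, mut_scale=0.1):
    """One GA generation over flat weight vectors: binary tournament
    selection, uniform crossover, and Gaussian mutation. In a GAGAN-style
    setup, fitness would score each candidate discriminator's
    classification performance (illustrative sketch)."""
    def tournament():
        a, b = random.sample(range(len(population)), 2)
        return population[a] if fitness[a] >= fitness[b] else population[b]

    next_pop = []
    for _ in range(len(population)):
        p1, p2 = tournament(), tournament()
        # Uniform crossover: each weight comes from either parent.
        child = [random.choice(pair) for pair in zip(p1, p2)]
        # Gaussian mutation with probability mut_rate per weight.
        child = [w + random.gauss(0, mut_scale) if random.random() < mut_rate
                 else w for w in child]
        next_pop.append(child)
    return next_pop
```

The hybrid scheme alternates such evolutionary generations with ordinary backpropagation updates, combining the global search of the GA with local gradient refinement.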
23 pages, 1520 KiB  
Article
Data Augmentation for Voiceprint Recognition Using Generative Adversarial Networks
by Yao-San Lin, Hung-Yu Chen, Mei-Ling Huang and Tsung-Yu Hsieh
Algorithms 2024, 17(12), 583; https://doi.org/10.3390/a17120583 - 18 Dec 2024
Abstract
Voiceprint recognition systems often face challenges related to limited and diverse datasets, which hinder their performance and generalization capabilities. This study proposes a novel approach that integrates generative adversarial networks (GANs) for data augmentation and convolutional neural networks (CNNs) with mel-frequency cepstral coefficients (MFCCs) for voiceprint classification. Experimental results demonstrate that the proposed methodology improves recognition accuracy by up to 15% in low-resource scenarios. The optimal ratio of real-to-GAN-generated samples was determined to be 3:2, which balanced dataset diversity and model performance. In specific cases, the model achieved an accuracy of 96.6%, showcasing its effectiveness in capturing unique voice characteristics while mitigating overfitting. These results highlight the potential of combining GAN-augmented data and CNN-based classification to enhance voiceprint recognition in diverse and resource-constrained environments. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (2nd Edition))
15 pages, 10325 KiB  
Article
Integrating Large Language Models and Optimization in Semi-Structured Decision Making: Methodology and a Case Study
by Gianpaolo Ghiani, Gianluca Solazzo and Gianluca Elia
Algorithms 2024, 17(12), 582; https://doi.org/10.3390/a17120582 - 16 Dec 2024
Abstract
Semi-structured decisions, which fall between highly structured and unstructured decision types, rely on human intuition and experience for the final choice, while using data and analytical models to generate tentative solutions. These processes are traditionally iterative and time-consuming, requiring cycles of data gathering, analysis, and option evaluation. In this study, we propose a novel framework that integrates Large Language Models (LLMs) with optimization techniques to streamline such decision-making processes. In our approach, LLMs leverage their capabilities in data interpretation, common-sense reasoning, and mathematical modeling to assist decision makers by reducing cognitive load. They achieve this by automating aspects of information processing and option evaluation, while preserving human oversight as a crucial component of the final decision-making process. Another significant strength of our framework lies in its potential to drive the evolution of a new generation of decision support systems (DSSs). Unlike traditional systems that rely on rigid and inflexible interfaces, our approach enables users to express their preferences in a more natural, intuitive, and adaptable manner, substantially enhancing both usability and accessibility. A case study on last-mile delivery system design in a smart city demonstrates the practical application of this framework. The results suggest that our approach has the potential to simplify the decision-making process and improve efficiency by reducing cognitive load, enhancing user experience, and facilitating more intuitive interactions. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
14 pages, 1758 KiB  
Article
Unsupervised Temporal Adaptation in Skeleton-Based Human Action Recognition
by Haitao Tian and Pierre Payeur
Algorithms 2024, 17(12), 581; https://doi.org/10.3390/a17120581 - 16 Dec 2024
Abstract
With deep learning approaches, the fundamental assumption of data availability can be severely compromised when a model trained on a source domain is transposed to a target application domain where data are unlabeled, making supervised fine-tuning mostly impossible. To overcome this limitation, the present work introduces an unsupervised temporal-domain adaptation framework for human action recognition from skeleton-based data that combines Contrastive Prototype Learning (CPL) and Temporal Adaptation Modeling (TAM), with the aim of transferring the knowledge learned from a source domain to an unlabeled target domain. The CPL strategy, inspired by recent success in contrastive learning applied to skeleton data, learns a compact temporal representation from the source domain, from which the TAM strategy leverages the capacity for self-training to adapt the representation to a target application domain using pseudo-labels. The research demonstrates that simultaneously solving CPL and TAM effectively enables the training of a generalizable human action recognition model that is adaptive to both domains and overcomes the requirement of a large volume of labeled skeleton data in the target domain. Experiments are conducted on multiple large-scale human action recognition datasets such as NTU RGB+D, PKU MMD, and Northwestern–UCLA to comprehensively evaluate the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Machine Learning Models and Algorithms for Image Processing)
34 pages, 3047 KiB  
Article
Stability Analysis and Experimental Validation of Standard Proportional-Integral-Derivative Control in Bilateral Teleoperators with Time-Varying Delays
by Marco A. Arteaga, Evert J. Guajardo-Benavides and Pablo Sánchez-Sánchez
Algorithms 2024, 17(12), 580; https://doi.org/10.3390/a17120580 - 16 Dec 2024
Abstract
The control of bilateral teleoperation systems with time-varying delays is a challenging problem that is frequently addressed with advanced control techniques. Widely known controllers, like Proportional-Derivative (PD) and Proportional-Integral-Derivative (PID), are seldom employed independently and are typically combined with other approaches, or at least with gravity compensation. This work aims to address a gap in the analysis of bilateral systems by demonstrating that the standard PID control law alone can achieve regulation in these systems when a human operator moves any of the robots while exchanging delayed positions. Experimental results are consistent with the theoretical analysis. Additionally, to illustrate the high degree of robustness of the standard PID, further experiments are conducted in constrained motion, both with and without force feedback. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2024)
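The standard PID law analyzed in the paper, u = K_p e + K_i ∫e dt + K_d de/dt, in a minimal discrete-time form (an illustrative implementation, not the authors' code):

```python
class PID:
    """Standard PID controller: u = Kp*e + Ki * integral(e) + Kd * de/dt,
    discretized with a rectangle-rule integral and a backward-difference
    derivative."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None  # no derivative term on the first call

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In the bilateral setting, each robot's error signal is the difference between its own position and the delayed position received from the remote robot, and the paper's contribution is proving that this unmodified law alone achieves regulation under time-varying delays.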
16 pages, 4711 KiB  
Article
A Multi-Agent Centralized Strategy Gradient Reinforcement Learning Algorithm Based on State Transition
by Lei Sheng, Honghui Chen and Xiliang Chen
Algorithms 2024, 17(12), 579; https://doi.org/10.3390/a17120579 - 15 Dec 2024
Abstract
The prevalent utilization of deterministic strategy algorithms in Multi-Agent Deep Reinforcement Learning (MADRL) for collaborative tasks has posed a significant challenge in achieving stable and high-performance cooperative behavior. Addressing the need for balanced exploration and exploitation of multi-agent ant robots within a partially observable continuous action space, this study introduces a multi-agent centralized strategy gradient algorithm grounded in a local state transition mechanism. The algorithm learns local state and local state-action representations from local observations and action values, thereby establishing a "local state transition" mechanism autonomously. As the input of the actor network, the automatically extracted local observation representation reduces the input state dimension, enhances the local state features closely related to the local state transition, and encourages the agent to use the local state features that affect the next observed state. To mitigate non-stationarity and credit assignment issues in multi-agent environments, a centralized critic network evaluates the current joint strategy. The proposed algorithm, NST-FACMAC, is evaluated alongside other multi-agent deterministic strategy algorithms in a continuous control simulation environment using a multi-agent ant robot. The experimental results indicate accelerated convergence and higher average reward values in cooperative multi-agent ant simulation environments. Notably, in four simulated environments named Ant-v2 (2 × 4), Ant-v2 (2 × 4d), Ant-v2 (4 × 2), and Manyant (2 × 3), the algorithm demonstrates performance improvements of approximately 1.9%, 4.8%, 11.9%, and 36.1%, respectively, compared to the best baseline algorithm. These findings underscore the algorithm's effectiveness in enhancing the stability of multi-agent ant robot control within dynamic environments. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
14 pages, 392 KiB  
Article
Applying Recommender Systems to Predict Personalized Film Age Ratings for Parents
by Harris Papadakis, Paraskevi Fragopoulou and Costas Panagiotakis
Algorithms 2024, 17(12), 578; https://doi.org/10.3390/a17120578 - 14 Dec 2024
Abstract
A motion picture content rating system categorizes a film based on its appropriateness for various audiences, considering factors such as portrayals of sex, violence, substance abuse, profanity, and other elements typically considered unsuitable for children or adolescents. This rating is usually coupled with a minimum desired age that the film is suitable for. In this work, we apply recommender systems to predict personalized film age ratings for parents. According to the proposed methodology, we reduce the personalized film age prediction problem to the classic item recommendation problem by applying a recommender system for each age film category. The recommender systems generate recommendations for each film age category. Finally, these recommendations are combined to provide the final age recommendation for the parent (user). The proposed methodology was applied to state-of-the-art recommender systems. In addition, we used them as baselines for comparing the direct application of a recommender system to the age prediction problem. This was achieved by treating each film as an item and assigning the given age as its rating. The experimental results highlight the efficiency of the proposed system when applied to a well-known real-world dataset. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))

15 pages, 497 KiB  
Article
A Complex Network Epidemiological Approach for Infectious Disease Spread Control with Time-Varying Connections
by Alma Y. Alanis, Gustavo Munoz-Gomez, Nancy F. Ramirez, Oscar D. Sanchez and Jesus G. Alvarez
Algorithms 2024, 17(12), 577; https://doi.org/10.3390/a17120577 - 14 Dec 2024
Abstract
This work introduces an impulsive neural control algorithm designed to mitigate the spread of epidemic diseases. The objective of this paper is the development of a vaccination strategy using a PIN-type impulsive controller driven by an online-trained neural identifier, in order to control the spread of infectious diseases under a complex network approach with time-varying connections, where each node represents a population of individuals whose dynamics are defined by the MSEIR epidemiological model. Since the model of the system is unknown, a neural identifier is designed that provides a nonlinear model of the complex network, trained through an extended Kalman filter algorithm. Simulation results are presented by applying the proposed control scheme to a complex network parameterized for infectious diseases. Full article

26 pages, 1131 KiB  
Article
Perfect Roman Domination: Aspects of Enumeration and Parameterization
by Kevin Mann and Henning Fernau
Algorithms 2024, 17(12), 576; https://doi.org/10.3390/a17120576 - 14 Dec 2024
Abstract
Perfect Roman Dominating Functions and Unique Response Roman Dominating Functions are two ways to translate perfect code into the framework of Roman Dominating Functions. We also consider the enumeration of minimal Perfect Roman Dominating Functions and show a tight relation to minimal Roman Dominating Functions. Furthermore, we consider the complexity of the underlying decision problems Perfect Roman Domination and Unique Response Roman Domination on special graph classes. For instance, split graphs are the first graph class for which Unique Response Roman Domination is polynomial-time solvable, while Perfect Roman Domination is NP-complete. Beyond this, we give polynomial-time algorithms for Perfect Roman Domination on interval graphs and for both decision problems on cobipartite graphs. However, both problems are NP-complete on chordal bipartite graphs. We show that both problems are W[1]-complete if parameterized by solution size and FPT if parameterized by the dual parameter or by clique width. Full article
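For readers unfamiliar with the terminology: a Perfect Roman Dominating Function assigns each vertex a value in {0, 1, 2} so that every vertex assigned 0 is adjacent to exactly one vertex assigned 2. A minimal checker, with an adjacency-set graph encoding assumed for this sketch:

```python
def is_perfect_roman_dominating(adj, f):
    """Check whether f: V -> {0, 1, 2} is a Perfect Roman Dominating
    Function on the graph given by adj (vertex -> set of neighbors):
    every vertex with f(v) = 0 must have exactly one neighbor with value 2."""
    return all(
        sum(1 for u in adj[v] if f[u] == 2) == 1
        for v in adj
        if f[v] == 0
    )

# Path a - b - c: placing 2 on the middle vertex gives each endpoint
# exactly one 2-neighbor, so the function is perfect Roman dominating.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(is_perfect_roman_dominating(adj, {"a": 0, "b": 2, "c": 0}))  # → True
print(is_perfect_roman_dominating(adj, {"a": 0, "b": 0, "c": 0}))  # → False
```

The paper's enumeration and parameterized-complexity results concern functions of this kind that are additionally minimal.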
(This article belongs to the Special Issue Selected Algorithmic Papers from IWOCA 2024)

28 pages, 895 KiB  
Article
Computational Analysis of Parallel Techniques for Nonlinear Biomedical Engineering Problems
by Mudassir Shams and Bruno Carpentieri
Algorithms 2024, 17(12), 575; https://doi.org/10.3390/a17120575 - 14 Dec 2024
Abstract
In this study, we develop new efficient parallel techniques for solving both distinct and multiple roots of nonlinear problems at the same time. The parallel techniques represent an innovative contribution to the discipline, with local convergence of the ninth order. Theoretical research shows the rapid convergence and effectiveness of the proposed parallel schemes. To assess the suggested scheme’s stability and consistency, we look at certain biomedical engineering applications, such as osteoporosis in Chinese women, blood rheology, and differential equations. Overall, detailed analyses of convergence behavior, memory utilization, computational time, and percentage computational efficiency show that the novel parallel techniques outperform the traditional methods. The proposed methods would be more suitable for large-scale computational problems in biomedical applications due to their advantages in memory efficiency, CPU time, and error reduction. Full article
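The paper's ninth-order parallel schemes are not reproduced here, but the underlying idea of approximating all roots of a nonlinear problem simultaneously, with per-root updates that can be evaluated in parallel, can be illustrated with the classic Weierstrass/Durand-Kerner iteration (a much simpler, lower-order method):

```python
def durand_kerner(coeffs, iterations=60):
    """Approximate all roots of a monic polynomial simultaneously using the
    Weierstrass/Durand-Kerner iteration. coeffs = [1, c1, ..., cn] encodes
    x^n + c1*x^(n-1) + ... + cn. Each root estimate is corrected using all
    the others, so the per-root updates are naturally parallelizable."""
    n = len(coeffs) - 1

    def p(x):
        # Horner evaluation of the polynomial at x
        result = 0j
        for c in coeffs:
            result = result * x + c
        return result

    # distinct, mostly non-real starting points on a spiral
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iterations):
        new_roots = []
        for i, z in enumerate(roots):
            denom = 1 + 0j
            for j, w in enumerate(roots):
                if j != i:
                    denom *= z - w
            new_roots.append(z - p(z) / denom)
        roots = new_roots
    return roots
```

For example, `durand_kerner([1, -3, 2])` recovers both roots of x² − 3x + 2 (namely 1 and 2) in the same run, which is the "all roots at the same time" behavior the parallel techniques above generalize.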
(This article belongs to the Section Parallel and Distributed Algorithms)

13 pages, 7944 KiB  
Article
Research on Intelligent Identification Method for Pantograph Positioning and Skateboard Structural Anomalies Based on Improved YOLO v8 Algorithm
by Ruihong Zhou, Baokang Xiang, Long Wu, Yanli Hu, Litong Dou and Kaifeng Huang
Algorithms 2024, 17(12), 574; https://doi.org/10.3390/a17120574 - 14 Dec 2024
Abstract
The abnormal structural state of the pantograph skateboard is a highly concerning issue with a significant impact on the safety of high-speed railway operation. To obtain real-time information on abnormal skateboard states in advance, an intelligent defect identification model suitable for use in a pantograph skateboard monitoring device was designed using computer vision-based intelligent detection technology, combining an improved YOLO v8 model with traditional image processing algorithms such as edge extraction. The results show that the anomaly detection algorithm for the pantograph sliding plate structure is robust, maintaining a recognition accuracy of 90% or above in complex scenes with an average runtime of 12.32 ms. Railway field experiments have proven that the intelligent recognition model meets the actual detection requirements of railway sites and has strong practical application value. Full article

34 pages, 5924 KiB  
Article
A Multi-Strategy Improved Honey Badger Algorithm for Engineering Design Problems
by Tao Han, Tingting Li, Quanzeng Liu, Yourui Huang and Hongping Song
Algorithms 2024, 17(12), 573; https://doi.org/10.3390/a17120573 - 13 Dec 2024
Abstract
A multi-strategy improved honey badger algorithm (MIHBA) is proposed to address the tendency of the honey badger algorithm to fall into local optima and converge prematurely on complex optimization problems. Initializing the population with Halton sequences enhances population diversity and effectively avoids premature convergence. A dynamic water-wave density factor improves the search efficiency of the algorithm in the solution space, and lens opposition-based learning, built on the principle of lens imaging, strengthens the algorithm's ability to escape local optima. MIHBA achieves the best ranking on 23 test functions and 4 engineering design problems. These improvements raise the convergence speed and accuracy of the algorithm, enhance its adaptability and solving ability on complex functions, and provide new ideas for solving complex engineering design problems. Full article
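Halton-sequence initialization, the first of the improvements listed above, can be sketched as follows; the two-dimensional bounds and the choice of bases (2, 3) are illustrative:

```python
def halton(index, base):
    """Return element `index` (1-based) of the Halton low-discrepancy
    sequence in the given base, a value in [0, 1). Consecutive elements
    fill the unit interval far more evenly than uniform random draws."""
    result, fraction = 0.0, 1.0
    while index > 0:
        fraction /= base
        result += fraction * (index % base)
        index //= base
    return result

def init_population(size, bounds, bases=(2, 3)):
    """Scale a 2-D Halton point set to the search-space bounds,
    bounds = [(lo1, hi1), (lo2, hi2)], one coprime base per dimension."""
    return [
        [lo + halton(i, b) * (hi - lo) for b, (lo, hi) in zip(bases, bounds)]
        for i in range(1, size + 1)
    ]

# first base-2 Halton values: 1/2, 1/4, 3/4, ...
print(halton(1, 2), halton(2, 2), halton(3, 2))  # → 0.5 0.25 0.75
```

An optimizer would replace uniform random initialization with `init_population`, which is what gives the diversity benefit claimed above.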

19 pages, 419 KiB  
Article
Fair and Transparent Student Admission Prediction Using Machine Learning Models
by George Raftopoulos, Gregory Davrazos and Sotiris Kotsiantis
Algorithms 2024, 17(12), 572; https://doi.org/10.3390/a17120572 - 13 Dec 2024
Abstract
Student admission prediction is a crucial aspect of academic planning, offering insights into enrollment trends, resource allocation, and institutional growth. However, traditional methods often lack the ability to address fairness and transparency, leading to potential biases and inequities in the decision-making process. This paper explores the development and evaluation of machine learning models designed to predict student admissions while prioritizing fairness and interpretability. We employ a diverse set of algorithms, including Logistic Regression, Decision Trees, and ensemble methods, to forecast admission outcomes based on academic, demographic, and extracurricular features. Experimental results on real-world datasets highlight the effectiveness of the proposed models in achieving competitive predictive performance while adhering to fairness metrics such as demographic parity and equalized odds. Our findings demonstrate that machine learning can not only enhance the accuracy of admission predictions but also support equitable access to education by promoting transparency and accountability in automated systems. Full article
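The fairness metrics named above have simple empirical estimators. This sketch assumes binary predictions and a binary group attribute; it is not tied to the paper's datasets or models:

```python
def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1;
    a gap of 0 means the classifier satisfies demographic parity exactly."""
    def rate(g):
        members = [p for p, gr in zip(y_pred, group) if gr == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute gap in true-positive rates between groups 0 and 1
    (the y_true = 1 half of the equalized-odds criterion)."""
    def tpr(g):
        preds = [
            p for t, p, gr in zip(y_true, y_pred, group)
            if gr == g and t == 1
        ]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))
```

Auditing an admission model then amounts to computing these gaps on held-out predictions and checking them against a tolerance.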
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)

16 pages, 361 KiB  
Article
Stroke Dataset Modeling: Comparative Study of Machine Learning Classification Methods
by Kalina Kitova, Ivan Ivanov and Vincent Hooper
Algorithms 2024, 17(12), 571; https://doi.org/10.3390/a17120571 - 13 Dec 2024
Abstract
Stroke prediction is a vital research area due to its significant implications for public health. This comparative study offers a detailed evaluation of algorithmic methodologies and outcomes from three recent prominent studies on stroke prediction. Ivanov et al. tackled issues of imbalanced datasets and algorithmic bias using deep learning techniques, achieving notable results with a 98% accuracy and a 97% recall rate. They utilized resampling methods to balance the classes and advanced imputation techniques to handle missing data, underscoring the critical role of data preprocessing in enhancing the performance of Support Vector Machines (SVMs). Hassan et al. addressed missing data and class imbalance using multiple imputations and the Synthetic Minority Oversampling Technique (SMOTE). They developed a Dense Stacking Ensemble (DSE) model with over 96% accuracy. Their results underscore the efficiency of ensemble learning techniques and imputation for handling imbalanced datasets in stroke prediction. Bathla et al. employed various classifiers and feature selection techniques, including SMOTE, for class balancing. Their Random Forest (RF) classifier, combined with Feature Importance (FI) selection, achieved an accuracy of 97.17%, illustrating the positive impact of RF and relevant feature selection on model performance. A comparative analysis indicated that Ivanov et al.’s method achieved the highest accuracy rate. However, the studies collectively highlight that the choice of models and techniques for stroke prediction should be tailored to the specific characteristics of the dataset used. This study emphasizes the importance of effective data management and model selection in enhancing predictive performance. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))

37 pages, 1236 KiB  
Article
A Systematic Approach to Portfolio Optimization: A Comparative Study of Reinforcement Learning Agents, Market Signals, and Investment Horizons
by Francisco Espiga-Fernández, Álvaro García-Sánchez and Joaquín Ordieres-Meré
Algorithms 2024, 17(12), 570; https://doi.org/10.3390/a17120570 - 12 Dec 2024
Abstract
This paper presents a systematic exploration of deep reinforcement learning (RL) for portfolio optimization and compares various agent architectures, such as the DQN, DDPG, PPO, and SAC. We evaluate these agents’ performance across multiple market signals, including OHLC price data and technical indicators, while incorporating different rebalancing frequencies and historical window lengths. This study uses six major financial indices and a risk-free asset as the core instruments. Our results show that CNN-based feature extractors, particularly with longer lookback periods, significantly outperform MLP models, providing superior risk-adjusted returns. DQN and DDPG agents consistently surpass market benchmarks, such as the S&P 500, in annualized returns. However, continuous rebalancing leads to higher transaction costs and slippage, making periodic rebalancing a more efficient approach to managing risk. This research offers valuable insights into the adaptability of RL agents to dynamic market conditions, proposing a robust framework for future advancements in financial machine learning. Full article
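The transaction-cost effect of frequent rebalancing noted above can be made concrete with a proportional-cost model; the cost rate and the one-way turnover convention are assumptions of this sketch, not values from the study:

```python
def turnover_cost(old_weights, new_weights, cost_rate=0.001):
    """Transaction cost of moving a portfolio from old_weights to
    new_weights, charged proportionally on one-way turnover (half the
    total absolute weight change, i.e. the fraction actually traded)."""
    turnover = sum(abs(n - o) for o, n in zip(old_weights, new_weights)) / 2
    return cost_rate * turnover

# fully swapping two assets trades 100% of the book once:
print(turnover_cost([1.0, 0.0], [0.0, 1.0]))  # → 0.001
```

Under such a model, continuous rebalancing accumulates this cost at every step, which is why the study finds periodic rebalancing more efficient.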
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

6 pages, 179 KiB  
Editorial
Recent Progress in Data-Driven Intelligent Modeling and Optimization Algorithms for Industrial Processes
by Sheng Du, Zixin Huang, Li Jin and Xiongbo Wan
Algorithms 2024, 17(12), 569; https://doi.org/10.3390/a17120569 - 12 Dec 2024
Abstract
This editorial discusses recent progress in data-driven intelligent modeling and optimization algorithms for industrial processes. With the advent of Industry 4.0, the amalgamation of sophisticated data analytics, machine learning, and artificial intelligence has become pivotal, unlocking new horizons in production efficiency, sustainability, and quality assurance. Contributions to this Special Issue highlight innovative research in advancements in work-sampling data analysis, data-driven process choreography discovery, intelligent ship scheduling for maritime rescue, process variability monitoring, hybrid optimization algorithms for economic emission dispatches, and intelligent controlled oscillations in smart structures. These studies collectively contribute to the body of knowledge on data-driven intelligent modeling and optimization, offering practical solutions and theoretical frameworks to address complex industrial challenges. Full article
20 pages, 3280 KiB  
Article
A Robust Heuristics for the Online Job Shop Scheduling Problem
by Hugo Zupan, Niko Herakovič and Janez Žerovnik
Algorithms 2024, 17(12), 568; https://doi.org/10.3390/a17120568 - 12 Dec 2024
Abstract
The job shop scheduling problem (JSSP) is a popular NP-hard problem in combinatorial optimization, owing to its theoretical appeal and its importance in applications. In practice, the online version is much closer to the needs of smart manufacturing in Industry 4.0 and 5.0. Here, the online version of the job shop scheduling problem is solved by a heuristic that governs the local queues at the machines. This enables a distributed implementation, i.e., a digital twin can be maintained by local processors, which allows high-speed real-time operation. The heuristic, defined as probabilistic rules for running the local queues, is experimentally shown to provide solutions whose quality is within acceptable approximation ratios of the best known solutions obtained by the best online algorithms. The probabilistic rule defines a model not unlike the spin glass models that are closely related to quantum computing. Major advantages of the approach are its inherent parallelism and its robustness, promising natural and likely successful application to other variations of the JSSP. Experimental results show that the heuristic, although designed for the online version, can provide near-optimal and often even optimal solutions for many benchmark instances of the offline version of the JSSP. It is also demonstrated that the best solutions of the new heuristic clearly improve on the results obtained by heuristics based on standard dispatching rules. Of course, there is a trade-off between computational time and the quality of the results in terms of the makespan criterion. Full article
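A probabilistic rule for running a machine's local queue might look like the following sketch. The softmax-over-processing-times form and the temperature parameter are assumptions for illustration, not the rule used in the paper:

```python
import math
import random

def pick_next_job(queue, temperature=1.0, rng=random):
    """Sample the next job from a machine's local queue: jobs with shorter
    processing times get exponentially higher probability (a softmax over
    -time/temperature), but every queued job keeps a nonzero chance, so the
    rule explores beyond a deterministic shortest-processing-time dispatch.
    queue is a list of (job_id, processing_time) pairs."""
    weights = [math.exp(-t / temperature) for _, t in queue]
    r = rng.random() * sum(weights)
    for (job, _), w in zip(queue, weights):
        r -= w
        if r <= 0:
            return job
    return queue[-1][0]  # numerical guard for floating-point leftovers
```

Because each machine only consults its own queue, such rules run independently on local processors, which is what makes the distributed digital-twin implementation possible.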
(This article belongs to the Special Issue Scheduling: Algorithms and Real-World Applications)

25 pages, 2552 KiB  
Systematic Review
Primary Methods and Algorithms in Artificial-Intelligence-Based Dental Image Analysis: A Systematic Review
by Talal Bonny, Wafaa Al Nassan, Khaled Obaideen, Tamer Rabie, Maryam Nooman AlMallahi and Swati Gupta
Algorithms 2024, 17(12), 567; https://doi.org/10.3390/a17120567 - 11 Dec 2024
Abstract
Artificial intelligence (AI) has garnered significant attention in recent years for its potential to revolutionize healthcare, including dentistry. However, despite the growing body of literature on AI-based dental image analysis, challenges such as the integration of AI into clinical workflows, variability in dataset quality, and the lack of standardized evaluation metrics remain largely underexplored. This systematic review aims to address these gaps by assessing the extent to which AI technologies have been integrated into dental specialties, with a specific focus on their applications in dental imaging. A comprehensive review of the literature was conducted, selecting relevant studies through electronic searches from Scopus, Google Scholar, and PubMed databases, covering publications from 2018 to 2023. A total of 52 articles were systematically analyzed to evaluate the diverse approaches of machine learning (ML) and deep learning (DL) in dental imaging. This review reveals that AI has become increasingly prevalent, with researchers predominantly employing convolutional neural networks (CNNs) for detection and diagnosis tasks. Pretrained networks demonstrate strong performance in many scenarios, while ML techniques have shown growing utility in estimation and classification. Key challenges identified include the need for larger, annotated datasets and the translation of research outcomes into clinical practice. The findings underscore AI’s potential to significantly advance diagnostic support, particularly for non-specialist dentists, improving patient care and clinical efficiency. AI-driven software can enhance diagnostic accuracy, facilitate data sharing, and support collaboration among dental professionals. Future developments are anticipated to enable patient-specific optimization of restoration designs and implant placements, leveraging personalized data such as dental history, tissue type, and bone thickness to achieve better outcomes. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (2nd Edition))

15 pages, 647 KiB  
Article
Anchor-Based Method for Inter-Domain Mobility Management in Software-Defined Networking
by Akichy Adon Jean Rodrigue Kanda, Amanvon Ferdinand Atta, Zacrada Françoise Odile Trey, Michel Babri and Ahmed Dooguy Kora
Algorithms 2024, 17(12), 566; https://doi.org/10.3390/a17120566 - 11 Dec 2024
Abstract
Recently, there has been an explosive growth in wireless devices capable of connecting to the Internet and utilizing various services anytime, anywhere, often while on the move. In the realm of the Internet, such devices are called mobile nodes. When these devices are in motion or traverse different domains while communicating, effective mobility management becomes essential to ensure the continuity of their services. Software-defined networking (SDN), a new paradigm in networking, offers numerous possibilities for addressing the challenges of mobility management. By decoupling the control and data planes, SDN enables greater flexibility and adaptability, making it a powerful framework for solving mobility-related issues. However, communication can still be momentarily disrupted due to frequent changes in IP addresses, a drop in radio signals, or configuration issues associated with gateways. Therefore, this paper introduces Routage Inter-domains in SDN (RI-SDN), a novel anchor-based routing method designed for inter-domain mobility in SDN architectures. The method identifies a suitable anchor domain, a critical intermediary domain that helps reduce delays during data transfer because it is the domain (i.e., node) closest to the destination. Once the anchor domain is identified, the best routing path is determined as the route with the smallest metric, incorporating elements such as bandwidth, flow operations, and the number of domain hops. Simulation results demonstrate significant improvements in data transfer delay and handover latency compared to existing methods. By leveraging SDN’s potential, RI-SDN presents a robust and innovative solution for real-world scenarios requiring reliable mobility management. Full article
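A composite path metric of the kind described, combining bandwidth, flow operations, and hop count, might be sketched as follows; the weighting scheme and field names are illustrative assumptions, not RI-SDN's actual formula:

```python
def path_metric(path, w_bw=1.0, w_ops=1.0, w_hops=1.0):
    """Composite routing metric over a candidate inter-domain path.
    Each hop contributes the inverse of its available bandwidth (so higher
    bandwidth is cheaper), its pending flow operations, and one hop unit;
    the path with the smallest total is preferred."""
    return sum(
        w_bw / hop["bandwidth"] + w_ops * hop["flow_ops"] + w_hops
        for hop in path
    )

def best_route(candidates):
    """Return the candidate path with the smallest composite metric."""
    return min(candidates, key=path_metric)
```

For example, a single high-bandwidth, lightly loaded hop beats two congested hops: `best_route([one_fast_hop, two_slow_hops])` returns the former.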
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

21 pages, 2687 KiB  
Article
A Random PRIM Based Algorithm for Interpretable Classification and Advanced Subgroup Discovery
by Rym Nassih and Abdelaziz Berrado
Algorithms 2024, 17(12), 565; https://doi.org/10.3390/a17120565 - 10 Dec 2024
Abstract
Machine-learning algorithms have made significant strides, achieving high accuracy in many applications. However, traditional models often need large datasets, as they typically peel substantial portions of the data in each iteration, complicating the development of a classifier without sufficient data. In critical fields like healthcare, there is a growing need to identify and analyze small yet significant subgroups within data. To address these challenges, we introduce a novel classifier based on the patient rule-induction method (PRIM), a subgroup-discovery algorithm. PRIM finds rules by peeling minimal data at each iteration, enabling the discovery of highly relevant regions. Unlike traditional classifiers, PRIM requires experts to select input spaces manually. Our innovation transforms PRIM into an interpretable classifier by starting with random input space selections for each class, then pruning rules using metarules, and finally selecting definitive rules for the classifier. Tested against popular algorithms such as random forest, logistic regression, and XG-Boost, our random PRIM-based classifier (R-PRIM-Cl) demonstrates comparable robustness, superior interpretability, and the ability to handle categorical and numeric variables. It discovers more rules in certain datasets, making it especially valuable in fields where understanding the model’s decision-making process is as important as its predictive accuracy. Full article
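PRIM's peeling idea, removing a small fraction of the data at each step so as to maximize the mean of the target inside the remaining box, can be sketched in one dimension. The peel fraction and support threshold below are illustrative defaults, and this is only the box-finding step, not the full R-PRIM-Cl classifier:

```python
def prim_peel_1d(points, alpha=0.1, min_support=5):
    """One-dimensional top-box peeling in the spirit of PRIM: repeatedly
    drop the alpha-fraction tail (low or high end of x) whose removal most
    increases the mean target inside the remaining box, stopping when no
    peel helps or support would fall below min_support.
    points is a list of (x, y) pairs; returns the (lo, hi) box on x."""
    pts = sorted(points)
    while len(pts) * (1 - alpha) >= min_support:
        k = max(1, int(len(pts) * alpha))
        mean = lambda ps: sum(y for _, y in ps) / len(ps)
        drop_low, drop_high = pts[k:], pts[:-k]
        best = max(drop_low, drop_high, key=mean)
        if mean(best) <= mean(pts):
            break  # no peel improves the box mean
        pts = best
    return pts[0][0], pts[-1][0]
```

Because only a small fraction is peeled per iteration, the surviving box can isolate small, high-response subgroups, which is exactly the property the classifier above exploits.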
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))

39 pages, 25059 KiB  
Article
Exploratory Study of a Green Function Based Solver for Nonlinear Partial Differential Equations
by Pablo Solano-López, Jorge Saavedra and Raúl Molina
Algorithms 2024, 17(12), 564; https://doi.org/10.3390/a17120564 - 10 Dec 2024
Abstract
This work explores the numerical translation of the weak or integral solution of nonlinear partial differential equations into a numerically efficient, time-evolving scheme. Specifically, we focus on partial differential equations separable into a quasilinear term and a nonlinear one, with the former defining the Green function of the problem. Utilizing the Green function under a short-time approximation, it becomes possible to derive the integral solution of the problem by breaking it into three integral terms: the propagation of initial conditions and the contributions of the nonlinear and boundary terms. Accordingly, we follow this division to describe and separately analyze the resulting algorithm. To ensure low interpolation error and accurate numerical Green functions, we adapt a piecewise interpolation collocation method to the integral scheme, optimizing the positioning of grid points near the boundary region. At the same time, we employ a second-order quadrature method in time to efficiently implement the nonlinear terms. Validation of both adapted methodologies is conducted by applying them to problems with known analytical solution, as well as to more challenging, norm-preserving problems such as the Burgers equation and the soliton solution of the nonlinear Schrödinger equation. Finally, the boundary term is derived and validated using a series of test cases that cover the range of possible scenarios for boundary problems within the introduced methodology. Full article
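The first of the three integral terms, the propagation of initial conditions by the Green function under a short-time approximation, can be illustrated for the free-space heat kernel. The grid, step sizes, and diffusivity below are illustrative choices, and the nonlinear and boundary contributions are deliberately omitted:

```python
import math

def heat_kernel_step(u, dx, dt, nu=1.0):
    """Propagate initial data one short time step with the free-space heat
    (Green) kernel: u(x, t+dt) = sum_y G(x - y, dt) u(y, t) dy, where
    G(x, dt) = exp(-x^2 / (4*nu*dt)) / sqrt(4*pi*nu*dt).
    Only the 'propagation of initial conditions' term of the integral
    solution is computed; nonlinear and boundary terms are omitted."""
    n = len(u)
    norm = 1.0 / math.sqrt(4.0 * math.pi * nu * dt)
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(n):
            x = (i - j) * dx
            acc += norm * math.exp(-x * x / (4.0 * nu * dt)) * u[j] * dx
        out.append(acc)
    return out
```

A quick sanity check of the scheme is norm preservation: starting from a discrete unit-mass spike, the total mass after one step stays (numerically) equal to one, mirroring the norm-preserving test problems used in the paper.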
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)

21 pages, 391 KiB  
Article
Issues on a 2–Dimensional Quadratic Sub–Problem and Its Applications in Nonlinear Programming: Trust–Region Methods (TRMs) and Linesearch Based Methods (LBMs)
by Giovanni Fasano, Christian Piermarini and Massimo Roma
Algorithms 2024, 17(12), 563; https://doi.org/10.3390/a17120563 - 9 Dec 2024
Abstract
This paper analyses the solution of a specific quadratic sub-problem, along with its possible applications, within both constrained and unconstrained Nonlinear Programming frameworks. We give evidence that this sub-problem may appear in a number of Linesearch Based Method (LBM) schemes, and to some extent it reveals a close analogy with the solution of trust-region sub-problems. Namely, we refer to a two-dimensional structured quadratic problem, where five linear inequality constraints are included. Finally, we detail how to compute an exact global solution of our two-dimensional quadratic sub-problem, exploiting first order Karush-Kuhn-Tucker (KKT) conditions. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
