Intelligence Optimization Algorithms and Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Computational and Applied Mathematics".

Deadline for manuscript submissions: 31 December 2024

Special Issue Editors


Dr. Haibin Ouyang
Guest Editor
School of Mechanical and Electric Engineering, Guangzhou University, Guangzhou 510006, China
Interests: swarm intelligence optimization theory and applications; large-scale complex system control and optimization; robot system navigation; path planning optimization

Dr. Kunjie Yu
Guest Editor
School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China
Interests: computational intelligence; evolutionary computation; constrained optimization; machine learning; dynamic optimization

Special Issue Information

Dear Colleagues,

The new generation of artificial intelligence is driving rapid industrial development, with advanced software and algorithms gradually integrating into many aspects of daily life. Large-scale and ultra-large-scale systems have emerged, with increasingly intricate configurations and complex processes that urgently require optimization. Exact optimization methods are often intractable for such problems, which has motivated research into more effective, practical heuristic and swarm intelligence optimization methods. These methods have attracted significant attention and have become useful tools for solving both theoretical mathematical problems and real-world problems. Because intelligence optimization algorithms make few assumptions about a problem's structure, they can obtain feasible solutions quickly. They are therefore widely used in mechanism optimization design, mechanism motion trajectory planning, workshop layout optimization, machining process planning, and related tasks.
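
To make the idea concrete, the following minimal sketch shows a basic particle swarm optimizer minimizing the sphere function. It relies only on objective-function evaluations, with no gradient or other problem-specific structure; all parameter values are illustrative rather than tuned.

```python
# Minimal particle swarm optimization (PSO) sketch: a population of candidate
# solutions is moved toward personal and global best positions using only
# objective-function evaluations.
import numpy as np

def sphere(x):
    # Simple benchmark objective: sum of squares, minimum at the origin.
    return float(np.sum(x ** 2))

def pso(objective, dim=10, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    gbest_val = pbest_val.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + attraction to personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if vals.min() < gbest_val:
            gbest_val = vals.min()
            gbest = pos[np.argmin(vals)].copy()
    return gbest, gbest_val

best_x, best_f = pso(sphere)
print(f"best objective value found: {best_f:.6f}")
```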

However, several key challenges remain for intelligence optimization algorithms. In theory, there is no rigorous proof of their approximation ability. In practice, they may fail to obtain feasible solutions because of the complexity of the problems involved. Furthermore, because problems are so diverse, it is difficult to design a single algorithm that suits all of them.

Thus, in this Special Issue, we expect articles on topics including (but not limited to):

  • Novel algorithms with advantages in convergence speed, accuracy, and robustness over existing ones, together with their applications in engineering, processing, scheduling, planning, and so on.
  • Combinations of advanced technologies with intelligence optimization algorithms, for example, determining algorithm hyperparameters with a generative adversarial network (GAN) or using such algorithms for feature extraction in large language models (LLMs).
  • Reviews that compare, analyze, and evaluate different algorithms or problems.
  • Articles that focus on deriving an algorithm's approximation ability, the relationship between algorithms and problems, and other mathematical and theoretical studies.
  • Mechanism optimization design, mechanism motion trajectory planning, workshop layout optimization, and optimization related to machining process technology planning.

Dr. Haibin Ouyang
Dr. Kunjie Yu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • optimization
  • intelligence algorithms
  • dynamic optimization
  • multimodal optimization
  • constrained optimization
  • large-scale optimization and application
  • path planning
  • distribution optimization
  • mechanism optimization design
  • network optimization
  • transfer learning
  • evolutionary optimization
  • neural architecture optimization
  • federated learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)

Research

55 pages, 9004 KiB  
Article
Improved Snake Optimizer Using Sobol Sequential Nonlinear Factors and Different Learning Strategies and Its Applications
by Wenda Zheng, Yibo Ai and Weidong Zhang
Mathematics 2024, 12(11), 1708; https://doi.org/10.3390/math12111708 - 30 May 2024
Abstract
The Snake Optimizer (SO) is an advanced metaheuristic algorithm for solving complicated real-world optimization problems. However, despite its advantages, the SO faces certain challenges, such as susceptibility to local optima and suboptimal convergence performance in cases involving discretized, high-dimensional, and multi-constraint problems. To address these problems, this paper presents an improved version of the SO, known as the Snake Optimizer using Sobol sequential nonlinear factors and different learning strategies (SNDSO). Firstly, using Sobol sequences to generate better distributed initial populations helps to locate the global optimum solution faster. Secondly, the use of nonlinear factors based on the inverse tangent function to control the exploration and exploitation phases effectively improves the exploitation capability of the algorithm. Finally, introducing learning strategies improves the population diversity and reduces the probability of the algorithm falling into the local optimum trap. The effectiveness of the proposed SNDSO in solving discretized, high-dimensional, and multi-constraint problems is validated through a series of experiments. The performance of the SNDSO in tackling high-dimensional numerical optimization problems is first confirmed by using the Congress on Evolutionary Computation (CEC) 2015 and CEC2017 test sets. Then, twelve feature selection problems are used to evaluate the effectiveness of the SNDSO in discretized scenarios. Finally, five real-world technical multi-constraint optimization problems are employed to evaluate the performance of the SNDSO in high-dimensional and multi-constraint domains. The experiments show that the SNDSO effectively overcomes the challenges of discretization, high dimensionality, and multi-constraint problems and outperforms superior algorithms.
(This article belongs to the Special Issue Intelligence Optimization Algorithms and Applications)
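
Two ingredients highlighted in the abstract, Sobol-sequence initialization and an arctangent-based nonlinear control factor, can be sketched as follows. This is a hedged illustration only: the sampler settings, bounds, and the particular arctangent schedule below are assumptions for demonstration, not the exact formulas used in the paper.

```python
# Illustrative sketch (not the paper's implementation): Sobol-sequence
# population initialization plus a hypothetical arctangent-based factor that
# shifts the search from exploration toward exploitation over iterations.
import numpy as np
from scipy.stats import qmc

def sobol_population(n_individuals, dim, lower, upper, seed=0):
    # Sobol points cover the search box more evenly than i.i.d. uniform
    # sampling, which is the motivation given for faster location of
    # promising regions.
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(n_individuals)          # points in [0, 1)^dim
    return qmc.scale(unit, lower, upper)          # rescale to the bounds

def arctan_factor(t, t_max):
    # Hypothetical nonlinear schedule: decays from 1 (at t = 0, exploration)
    # to 0 (at t = t_max, exploitation); the paper's expression may differ.
    return 1.0 - np.arctan(t / t_max) / np.arctan(1.0)

pop = sobol_population(32, dim=5, lower=[-10] * 5, upper=[10] * 5)
factors = [arctan_factor(t, 100) for t in (0, 50, 100)]
print(pop.shape, [round(f, 3) for f in factors])
```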

15 pages, 6572 KiB  
Article
Combining Autoregressive Integrated Moving Average Model and Gaussian Process Regression to Improve Stock Price Forecast
by Shiying Tu, Jiehu Huang, Huailong Mu, Juan Lu and Ying Li
Mathematics 2024, 12(8), 1187; https://doi.org/10.3390/math12081187 - 15 Apr 2024
Abstract
Stock market performance is one key indicator of the economic condition of a country, and stock price forecasting is important for investments and financial risk management. However, the inherent nonlinearity and complexity in stock price movements imply that simple conventional modeling techniques are not adequate for stock price forecasting. In this paper, we present a hybrid model (ARIMA + GPRC) which combines the autoregressive integrated moving average (ARIMA) model and Gaussian process regression (GPR) with a combined covariance function (GPRC). The proposed hybrid model can account for both the linearity and nonlinearity in stock price movements. Based on daily data on three stocks listed on the Shanghai Stock Exchange (SSE), it is found that GPRC outperforms GPR with a single covariance function. Further, the proposed hybrid model is compared with the ARIMA model, artificial neural network (ANN), and GPRC model. Based on the forecasting trend and the statistical performance of the four models, the ARIMA + GPRC model is found to be the dominant model for stock price forecasting and can significantly improve forecasting performance.
(This article belongs to the Special Issue Intelligence Optimization Algorithms and Applications)
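
The hybrid idea described in the abstract, an ARIMA model for the linear component plus Gaussian process regression with a combined covariance function for the residual nonlinearity, can be sketched as follows. The ARIMA order, kernel combination, use of a time index as the GPR input, and synthetic data below are illustrative assumptions rather than the paper's exact configuration.

```python
# Illustrative ARIMA + GPR hybrid on synthetic data (stand-in for prices):
# ARIMA models the linear structure; a GPR with a combined kernel models the
# ARIMA residuals; the final forecast is the sum of the two components.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, WhiteKernel

rng = np.random.default_rng(0)
n, horizon = 300, 20
t = np.arange(n + horizon)
series = 50 + 0.05 * t + np.sin(t / 15.0) + rng.normal(0, 0.3, t.size)
train, test = series[:n], series[n:]

# Step 1: linear component via ARIMA (order chosen for illustration only).
arima = ARIMA(train, order=(2, 1, 2)).fit()
arima_forecast = arima.forecast(steps=horizon)
residuals = train - arima.fittedvalues

# Step 2: nonlinear residual component via GPR with a combined kernel.
kernel = RBF(length_scale=20.0) + RationalQuadratic() + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(np.arange(n).reshape(-1, 1), residuals)
resid_forecast = gpr.predict(np.arange(n, n + horizon).reshape(-1, 1))

# Step 3: hybrid forecast = linear part + predicted residual part.
hybrid = arima_forecast + resid_forecast
rmse = np.sqrt(np.mean((hybrid - test) ** 2))
print(f"hybrid RMSE on synthetic hold-out: {rmse:.3f}")
```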
