Article

An Improved Particle Swarm Optimization Algorithm Based on Variable Neighborhood Search

1 College of Systems Engineering, National University of Defense Technology, Changsha 410073, China
2 Cainiao Network, Hangzhou 311100, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2024, 12(17), 2708; https://doi.org/10.3390/math12172708
Submission received: 7 August 2024 / Revised: 23 August 2024 / Accepted: 27 August 2024 / Published: 30 August 2024

Abstract

Various metaheuristic algorithms inspired by nature have been designed to deal with a variety of practical optimization problems. As an excellent metaheuristic algorithm, the improved particle swarm optimization algorithm based on grouping (IPSO) has strong global search capabilities. However, it lacks a strong local search ability and cannot solve constrained discrete optimization problems. This paper improves the IPSO algorithm in both respects. Building on IPSO, we propose an improved particle swarm optimization algorithm based on variable neighborhood search (VN-IPSO) and design a solution scheme for constrained 0-1 integer programming. In the experiments, the performance of the VN-IPSO algorithm is fully tested and analyzed on 23 classic benchmark functions (continuous optimization), 6 knapsack problems (discrete optimization), and 10 CEC2017 composite functions (complex functions). The results show that the VN-IPSO algorithm takes 18 first places on the classic benchmark function test set, including 6 first places on the seven unimodal test functions, indicating a good local search ability. In solving the six knapsack problems, it takes four first places, demonstrating the effectiveness of the 0-1 integer programming scheme and the strength of VN-IPSO on discrete optimization problems. On the 10 composite functions, VN-IPSO takes first place four times and ranks first overall, demonstrating its excellent ability to solve complex functions.

1. Introduction

As a random approximate optimization technology, metaheuristic algorithms use search strategies inspired by natural laws and human society to solve optimization problems [1,2]. Research on metaheuristic algorithms can be traced back to 1957, when Fraser first proposed concepts underlying the genetic algorithm (GA) based on the theory of evolution [3]. The term metaheuristic was introduced by Fred Glover in 1986 [4]. Subsequently, metaheuristic algorithms developed rapidly, and scholars continued to propose metaheuristic algorithms with various characteristics. Inspired by the foraging behavior of ants, Colorni et al. proposed the ant colony optimization algorithm in 1991 and successfully applied it to the TSP [5]. The ant colony optimization idea, combined with fuzzy C-means clustering, has been applied to the collaborative multi-task reallocation of heterogeneous UAVs [6]. In 1995, Kennedy and Eberhart, inspired by the foraging behavior of bird flocks, jointly proposed particle swarm optimization (PSO) [7]. Abbass et al., inspired by the reproductive behavior of bees, proposed a bee colony optimization algorithm in 2001 [8]. Karaboga et al., inspired by the honey-collecting mechanism of bees, proposed the artificial bee colony optimization algorithm [9]. Inspired by the phototaxis of bacteria, Pan et al. proposed a bacterial phototaxis optimization algorithm [10]. Tang et al. proposed an improved artificial electric field algorithm (I-AEFA) and applied it to robot three-dimensional path planning [11].
The particle swarm optimization algorithm is one of the most well-known metaheuristic algorithms and has received widespread attention in academia and industry. Improvements to the particle swarm optimization algorithm mainly concern three aspects: population initialization, parameter adaptation, and hybrid optimization. Haupt et al. pointed out that the solution accuracy and convergence speed of metaheuristics are affected by the initial population: the higher the diversity of the population, the stronger the global optimization ability of the particle swarm optimization algorithm [12]. To improve population diversity, scholars have designed many population initialization strategies, such as strategies based on opposition-based learning [13] and chaos mapping [14]. The opposition-based learning strategy is mainly used to expand the algorithm's search area and has been successfully applied in swarm intelligence optimization algorithms such as the whale optimization algorithm [15] and the gray wolf optimization algorithm [16]. Qin et al. introduced adaptive inertia weights to improve solution accuracy and convergence speed [17]. Chauhan et al. proposed three nonlinear strategies for selecting inertia weights to improve solution quality [18]. Sekyere et al. applied nonlinear adaptive dynamic inertia weights to improve the exploration capability of the algorithm [19]. Hybrid optimization combines other optimization methods with the particle swarm optimization algorithm to overcome the limitations of the individual algorithms and improve overall performance. Li et al. designed a simulated annealing search mechanism within the particle swarm optimization algorithm, which effectively improved its exploitation capability [20]. Blas et al. applied differential evolution to improve the optimization mechanism of the particle swarm optimization algorithm and enhanced its optimization ability [21]. Zhang et al. proposed an improved hybrid algorithm based on particle swarm optimization and gray wolf optimization to combine their advantages and applied it to clustering problems [22]. To improve the diversity of the initial population, this paper adopts a population initialization strategy based on opposition-based learning. Parameter adaptation mainly improves the velocity update operator of the particle swarm optimization algorithm, coordinating the algorithm's exploration and exploitation capabilities by adjusting the parameters of the velocity update operator.
The application of the particle swarm optimization algorithm is very extensive. Li et al. proposed a particle swarm optimization algorithm based on fast density peak clustering and successfully applied it to dynamic optimization problems [23]. Lu et al. designed a multi-level particle swarm optimization algorithm and successfully applied it to market-driven workflow scheduling problems on heterogeneous cloud resources with deadline constraints [24]. Aljohani et al. combined random forest and particle swarm optimization algorithms, used the particle swarm optimization algorithm to eliminate redundant features, and successfully and efficiently realized pothole detection on the road [25].
In order to further improve the performance of the particle swarm optimization algorithm, we previously focused on enhancing its exploration so as to improve its global optimization ability and proposed an improved particle swarm optimization algorithm based on grouping (IPSO) in 2023 [26]. However, the algorithm's local exploitation ability still has room for improvement, and it cannot handle constrained or discrete optimization. To further improve the performance and application value of the IPSO algorithm, this paper focuses on improving its local exploitation ability and its ability to solve discrete problems with constraints. The contribution of this study is twofold. First, on the basis of the IPSO algorithm, we propose a local exploitation strategy based on variable neighborhood search to obtain a new algorithm, VN-IPSO, which improves the optimization ability on multimodal and composite functions. Second, we design a constrained 0-1 integer programming solution scheme, which enables VN-IPSO to solve constrained discrete optimization problems. This scheme is also suitable for other metaheuristic algorithms, expanding their ability to solve discrete problems with constraints.
The rest of this paper is organized as follows. Section 2 introduces the optimization and neighborhood definitions. In Section 3, we describe the improved particle swarm optimization algorithm based on variable neighborhood search. Section 4 introduces the experiment results and analysis. Finally, Section 5 summarizes this paper and describes future research.

2. Definitions and Preliminaries

Optimization is generally understood as the process of exploring all potential values of a problem to find the best solution [27]. Specifically, it means maximizing or minimizing a multidimensional function within a given set of constraints, which can be expressed as follows [28]:

$\mathrm{minimize}\ f(X), \quad \mathrm{s.t.}\ g(X) \le 0,\ X \in \mathbb{R}^n \qquad (1)$

where $X = (x_1, x_2, \ldots, x_n)$ is an n-dimensional solution, $\mathbb{R}^n$ is the domain, $f(X)$ is the objective function, and $g(X) \le 0$ is the constraint condition.
A neighborhood is a set of candidate solutions defined by an operator; it contains possible solutions in a specific problem space that are relatively close to the current solution. Operators are strategies or methods applied to a given solution to generate new solutions in its neighborhood. These operators can be realized by various means, such as modifying some parameters, exchanging elements, or applying specific rules, so as to explore possible improvement directions and provide broader choices for solving optimization problems. By using different operators, the search can effectively traverse the whole solution space and improve the chances of finding high-quality solutions.
Definition 1
(Neighborhood [29]). A neighborhood is a set of candidate solutions defined by an operator.
Definition 2
(Operator [29]). An operator is an operation or method that generates a neighbor solution from a given solution. For a solution $X_0$, define an operator $f$; the set of all $f(X_0)$ is a neighborhood, as shown in Figure 1.
The variable neighborhood search algorithm is based on a set of different neighborhood structures and alternates between them during the search; its core idea is to use a small neighborhood for rapid improvement and a large neighborhood for deep optimization. The process of the variable neighborhood search algorithm is as follows: (1) when the current neighborhood contains no solution better than the current solution, search the next, larger neighborhood; and (2) when the current neighborhood contains a solution better than the current solution, update the current solution immediately and restart the search from the first neighborhood based on the updated current solution, as shown in Figure 2. A minimal code sketch of this loop follows.
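The snippet below is a minimal Python sketch of this generic loop, assuming minimization; the two illustrative operators, the sphere objective, and the max_rounds budget are placeholder assumptions rather than the operators defined in Section 3.

```python
import random

def vns(x0, objf, operators, max_rounds=100):
    """Variable neighborhood search (minimization). `operators` is ordered
    from the smallest to the largest neighborhood; an improvement restarts
    the search from the first operator, a failure moves to the next one."""
    x, fx = list(x0), objf(x0)
    for _ in range(max_rounds):
        k = 0
        while k < len(operators):
            y = operators[k](x)   # sample a neighbor of x with operator k
            fy = objf(y)
            if fy < fx:           # better neighbor found: accept, restart at k = 0
                x, fx = y, fy
                k = 0
            else:                 # no improvement: move to the next, larger neighborhood
                k += 1
    return x, fx

# Illustrative usage: two neighborhood sizes on the sphere function.
small = lambda x: [xi + random.uniform(-0.01, 0.01) for xi in x]
large = lambda x: [xi + random.uniform(-1.0, 1.0) for xi in x]
best, best_f = vns([5.0, -3.0], lambda x: sum(v * v for v in x), [small, large])
```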
The knapsack problem is as follows: given a set of items, each with a price and a volume, and a limited total volume, choose items so that the total price of the items in the bag is maximized [30]. The knapsack problem is NP-complete: the correctness of a candidate solution can be verified by a deterministic Turing machine in polynomial time, but no polynomial-time solution algorithm is known. The knapsack problem is widely used in real-world scenarios, such as packing problems, portfolio investment problems with limited funds, and warehousing problems. The knapsack problem can be expressed as shown in Equation (2).
$\max \sum_{j \in J} w_j x_j, \quad \mathrm{s.t.}\ \sum_{j \in J} v_{ij} x_j \le V_i,\ \forall i \in I \qquad (2)$

where $J$ is the set of items and $I$ is the set of knapsacks; $w_j$ is the price of the $j$th item; $V_i$ is the capacity of the $i$th knapsack; $v_{ij}$ is the volume occupied by the $j$th item when it is put into the $i$th knapsack; and $x_j \in \{0, 1\}$ indicates whether the $j$th item is selected (1 if selected, 0 otherwise). The sketch below shows this objective together with constraint checking.
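As a concrete reading of Equation (2), the following sketch evaluates a 0-1 selection vector against the multidimensional knapsack model; rejecting infeasible selections with negative infinity is one simple constraint-handling choice for illustration here (the scheme actually used with VN-IPSO is described in Section 3.2.2).

```python
def knapsack_value(x, w, v, V):
    """Value of selection vector x under Equation (2).
    w[j]: price of item j; v[i][j]: volume of item j in knapsack i;
    V[i]: capacity of knapsack i. Infeasible selections score -inf."""
    for i in range(len(V)):  # every knapsack constraint must hold
        if sum(v[i][j] * x[j] for j in range(len(x))) > V[i]:
            return float("-inf")
    return sum(w[j] * x[j] for j in range(len(x)))

# Tiny example with 2 knapsacks and 3 items.
w = [10, 7, 12]
v = [[3, 2, 4],   # volumes in knapsack 0
     [1, 2, 2]]   # volumes in knapsack 1
V = [6, 4]
print(knapsack_value([1, 1, 0], w, v, V))  # feasible, prints 17
```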

3. Methods

We propose an improved particle swarm optimization algorithm based on variable neighborhood search on the basis of the IPSO algorithm and design a constrained 0-1 integer programming solution scheme. The flow diagram for the complete work is shown in Figure 3. We introduce the idea of variable neighborhood search and design a local exploitation strategy based on it, which aims to deeply exploit the current optimal particle in each iteration. We design small and large neighborhood search operators, Operator1 and Operator2, to improve the local search capability of the IPSO algorithm. We combine IPSO with this local exploitation strategy to form VN-IPSO, which has strong exploration and exploitation capabilities for continuous optimization problems. Additionally, we propose a coding scheme based on 0-1 integer programming, using the sigmoid function as the mapping scheme, to extend the algorithm to constrained discrete problems.

3.1. Local Exploitation Strategy Based on Variable Neighborhood Search

In [26], a large number of experiments were carried out to verify and analyze the performance of the IPSO algorithm. The IPSO algorithm has extremely strong global exploration and global optimization capabilities, but its local exploitation capability still has room for improvement. It is therefore worthwhile to improve the local search capability of the IPSO algorithm and, in turn, its convergence accuracy and comprehensive solution performance. To this end, this paper introduces the idea of variable neighborhood search and designs a local exploitation strategy based on it, as shown in Algorithm 1.
Algorithm 1 Local exploitation strategy based on variable neighborhood search.
Input: N, X, f, dim, objf(), ub, lb, steps
Define two neighborhood operators, operator1() and operator2()
Define the IPSO method
for i = 1 to N do
  Use the IPSO algorithm to obtain the current global optimal particle gbest and its fitness value f
  X1, f1 = operator1(gbest)
  if f1 < f then
    gbest = X1; f = f1
    continue
  end if
  X2, f2 = operator2(gbest)
  if f2 < f then
    gbest = X2; f = f2
  end if
end for
Output: gbest, f
The local exploitation strategy based on variable neighborhood search constructs a variable neighborhood search procedure from several purpose-built neighborhood operators. After each iteration, the variable neighborhood search starts from the current global optimal particle. Once a particle better than the current global optimal particle is found, the search stops immediately and the information is returned to update the current global optimal particle before the next iteration begins; if no better particle is found after the entire neighborhood space has been searched, the next iteration begins directly. This adds more local exploitation operations to each iteration, which can theoretically improve the local exploitation capability of the PSO algorithm.
We design two neighborhood operators to improve the local exploitation capability of the IPSO algorithm: the small neighborhood quick exploitation operator (Operator1) and the large neighborhood deep exploration operator (Operator2). Operator1 rapidly improves within a small neighborhood, performing a refined local search that mines useful information around the current particle; Operator2 performs deep optimization in a large neighborhood, performing a coarse local search that complements Operator1 and prevents the whole search process from falling into a local optimum, thereby improving the overall search capability.
(1) Small neighborhood quick exploitation operator
Operator1 conducts a refined search around the current global optimal particle to mine the useful information around it, as shown in Algorithm 2; its movement amplitude is therefore very small. First, a list of step sizes is defined, steps = [0.1, 0.01, 0.001, 0.0001]. Second, each dimension of the current solution is traversed and moved, in turn, by each step size in the list to obtain a new candidate solution, whose fitness value $f_1$ is immediately calculated. If $f_1 > f$, the search continues; if $f_1 < f$, the search stops and the new candidate solution is recorded. Finally, if an improved candidate solution was found, its information is returned.
Algorithm 2 Small neighborhood quick exploitation operator.
Input: X, f, dim, objf(), ub, lb, steps
for i = 1 to dim do
  for j = 1 to len(steps) do
    X′ = X
    X′(i) = X(i) + steps[j]
    X′ = numpy.clip(X′, lb, ub)
    f1 = objf(X′)
    X″ = X
    X″(i) = X(i) − steps[j]
    X″ = numpy.clip(X″, lb, ub)
    f2 = objf(X″)
    Select the first improved neighbor solution X* that appears in the neighborhood
  end for
end for
Output: X*
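One way to realize Algorithm 2 in Python is sketched below; it assumes minimization, a NumPy array x, scalar bounds lb and ub, and the first-improvement rule, returning the input unchanged when no neighbor improves. The step list mirrors the one defined above.

```python
import numpy as np

def operator1(x, f, objf, lb, ub, steps=(0.1, 0.01, 0.001, 0.0001)):
    """Small neighborhood quick exploitation operator (Algorithm 2):
    nudge each dimension by +/- each step size and return the first
    improving neighbor, or (x, f) when none improves."""
    for i in range(x.size):
        for step in steps:
            for delta in (step, -step):
                y = x.copy()
                y[i] += delta
                y = np.clip(y, lb, ub)  # keep the neighbor inside the domain
                fy = objf(y)
                if fy < f:              # first improvement: stop immediately
                    return y, fy
    return x, f
```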
(2) Large neighborhood deep exploration operator
Operator2 conducts a larger search around the current global optimal particle, performing deep optimization that prevents the whole search process from falling into a local optimum and improves the algorithm's ability to escape local optima, as shown in Algorithm 3; its search range is therefore larger and more random. First, we define the number of search attempts, nums. Second, we define the large neighborhood deep exploration operator, which contains two functions: a selection function, which selects the mutation position of the current solution, and a mutation function, which mutates the selected position to generate a neighbor solution. The search terminates and the neighbor solution is returned as soon as an improved neighbor solution is found; otherwise, the search continues.
Algorithm 3 Large neighborhood deep exploration operator.
Input: X, f, dim, objf(), ub, lb, nums
for i = 1 to nums do
  X′ = X
  index = randint(1, dim)
  X′[index] = rand × (ub − lb) + lb
  X′ = numpy.clip(X′, lb, ub)
  f′ = objf(X′)
  Select the first improved neighbor solution X′ that appears
end for
Output: X′
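Algorithm 3 and the per-iteration body of Algorithm 1 admit a similarly small sketch, reusing operator1 from the previous block; the trial budget nums = 50 is an illustrative assumption.

```python
import numpy as np

def operator2(x, f, objf, lb, ub, nums=50, rng=np.random.default_rng()):
    """Large neighborhood deep exploration operator (Algorithm 3): redraw one
    randomly chosen dimension uniformly in [lb, ub] and return the first
    improving neighbor found within `nums` trials."""
    for _ in range(nums):
        y = x.copy()
        index = rng.integers(0, x.size)           # mutation position
        y[index] = rng.random() * (ub - lb) + lb  # mutate that dimension
        y = np.clip(y, lb, ub)
        fy = objf(y)
        if fy < f:
            return y, fy
    return x, f

def local_exploitation(gbest, f, objf, lb, ub):
    """Per-iteration body of Algorithm 1: try the small neighborhood first
    and fall back to the large one only when Operator1 fails to improve."""
    x1, f1 = operator1(gbest, f, objf, lb, ub)
    if f1 < f:
        return x1, f1
    return operator2(gbest, f, objf, lb, ub)
```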

3.2. The Improved Particle Swarm Optimization Algorithm Based on Variable Neighborhood Search

In Section 3.2.1, we describe the complete process and key steps of the VN-IPSO algorithm. Then, in Section 3.2.2, we give a coding scheme that expands the application scope of the VN-IPSO algorithm from continuous problems to discrete problems.

3.2.1. The Flow of the VN-IPSO Algorithm

As in the IPSO algorithm, each particle has two attributes: velocity and position. In each iteration, on the one hand, the local exploitation strategy based on variable neighborhood search deeply optimizes the current global optimal solution; on the other hand, each particle updates its velocity and position by Equations (5) and (6) to constantly approach the optimal position. Figure 4 shows the process of the VN-IPSO algorithm. The concrete implementation steps of the improved particle swarm optimization algorithm based on variable neighborhood search are as follows.
1. Population initialization
First, the initial velocities and positions of N particles are randomly generated in the solution space. $V_i(0)$ denotes the initial velocity of the $i$th particle, a D-dimensional vector, and $X1_i(0)$ denotes the initial position of the $i$th particle in population X1.

$V_i(0) = (v_1(0), v_2(0), \ldots, v_D(0)), \quad i = 1, 2, \ldots, N \qquad (3)$

$X1_i(0) = (x_1(0), x_2(0), \ldots, x_D(0)), \quad i = 1, 2, \ldots, N \qquad (4)$
Second, the N opposite points X2 of the initial positions X1 are computed. Finally, from the candidate set composed of the N particles X1 and their N opposite points X2, the N particles with the best fitness values are selected as the initial population, denoted X.
2. Grouping
We take $X = (X_1(0), X_2(0), \ldots, X_N(0))$ as the data set (of dimension $N \times dim$), use the K-Means algorithm to group the N particles in the initial population, and set the number of groups $K = \max(\mathrm{int}(N/40), 1)$. We then obtain $X = (X_{k1}(0), X_{k2}(0), \ldots, X_{ki}(0), \ldots, X_{kN}(0))$, where $X_{ki}(0)$ means that particle $i$ belongs to the $k$th group.
3. Choosing
First, we find the best position $P_{ki}(p_{i1}, p_{i2}, \ldots, p_{in})$ found so far by each particle; second, the best position $P_{kg}(p_{g1}, p_{g2}, \ldots, p_{gn})$ found so far by each group; and finally, the best position $P_g(t) \in \{P_{k1}(t), P_{k2}(t), \ldots, P_{kN}(t)\}$ found so far by the current population.
4. Local exploitation search
The purpose of this step is to locally exploit the current optimal particle $P_g(t)$ of the population obtained in Step (3), thereby improving the overall exploitation capability and convergence speed. Following the variable neighborhood search process shown in Figure 2, the two operators Operator1 and Operator2 are executed in sequence on the current optimal particle $P_g(t)$. If a better particle $P_g'(t)$ is found during the process, the search stops immediately, and the current optimal particle of the population is updated to $P_g(t) = P_g'(t)$.
5. Update operator
For each particle, we update the velocity and position:

$V_i(t+1) = \omega V_i(t) + c_1 r_1(t)(P_{ki}(t) - X_i(t)) + c_2 r_2(t)(P_{kg}(t) - X_i(t)) + c_3 r_3(t)(P_g(t) - X_i(t)) \qquad (5)$

$X_i(t+1) = X_i(t) + \alpha V_i(t+1) \qquad (6)$

where $\omega$ is the inertia factor; $c_1$, $c_2$, and $c_3$ are acceleration constants; $r_1(t)$, $r_2(t)$, and $r_3(t)$ are independent random numbers between 0 and 1 at time $t$; and $\alpha$ scales the velocity in the position update. A consolidated code sketch of Steps 1, 2, and 5 is given after Step 6.
6. Termination check
If $X(t+1)$ contains an approximate solution that meets the required accuracy, or the number of iterations reaches $T$, we stop the calculation and output the current optimal position of the population:

$X(t+1) = (X_1(t+1), X_2(t+1), \ldots, X_N(t+1)) \qquad (7)$

Otherwise, we set $t = t + 1$ and return to Step (3). The process of the improved particle swarm optimization algorithm based on variable neighborhood search is shown in Figure 4.
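To make Steps 1, 2, and 5 concrete, the following sketch runs one initialization and one velocity-position update, assuming scikit-learn's KMeans for the grouping step; the sphere objective and the specific $\omega$, $c_1$ to $c_3$, and $\alpha$ values are illustrative assumptions, not the tuned settings of Section 4.1.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
N, dim, lb, ub = 200, 30, -100.0, 100.0
objf = lambda P: np.sum(P * P, axis=1)   # placeholder objective (sphere)

# Step 1: opposition-based initialization -- draw N random particles, form
# their opposite points, and keep the N fittest of the 2N candidates.
X1 = lb + (ub - lb) * rng.random((N, dim))
X2 = lb + ub - X1                         # opposite points of X1
pool = np.vstack([X1, X2])
X = pool[np.argsort(objf(pool))[:N]]
V = np.zeros((N, dim))

# Step 2: group the particles with K-Means, K = max(int(N / 40), 1).
K = max(int(N / 40), 1)
labels = KMeans(n_clusters=K, n_init=10).fit_predict(X)

# Step 5: velocity and position update, Equations (5) and (6).
omega, c1, c2, c3, alpha = 0.6, 1.5, 2.0, 2.5, 1.0
P_ki = X.copy()                           # per-particle best positions
P_g = X[np.argmin(objf(X))]               # population best position
P_kg = np.empty_like(X)                   # per-group best, broadcast per particle
for k in range(K):
    members = labels == k
    P_kg[members] = X[members][np.argmin(objf(X[members]))]
r1, r2, r3 = (rng.random((N, dim)) for _ in range(3))
V = omega * V + c1 * r1 * (P_ki - X) + c2 * r2 * (P_kg - X) + c3 * r3 * (P_g - X)
X = np.clip(X + alpha * V, lb, ub)
```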

3.2.2. Coding Scheme for Discrete Optimization

As described so far, VN-IPSO can only solve continuous optimization problems, not discrete ones. In this section, a coding scheme is studied to establish a mapping between the particle's element value in each dimension and {0, 1}, so that VN-IPSO can be applied to 0-1 integer programming problems. In this paper, a modified sigmoid function performs the mapping, as shown in Equation (8) [31].
$x_i^{binary} = \begin{cases} 1, & x_i \ge ub \\ 1, & lb < x_i < ub \ \text{and} \ rand \le \frac{1}{1+e^{-x_i}} \\ 0, & lb < x_i < ub \ \text{and} \ rand > \frac{1}{1+e^{-x_i}} \\ 0, & x_i \le lb \end{cases} \qquad (8)$
where $x_i$ is the element value of the particle's $i$th dimension, $rand$ is a random number within [0, 1], and $ub$ and $lb$ are real numbers within [0, 1]. Simulation experiments on different pairs of $ub$ and $lb$ show that the mapping works best with $ub = 0.8$ and $lb = 0.2$. The correspondence between $x_i^{binary}$ and $x_i$ is shown in Figure 5, where a red particle indicates that the value in that dimension maps to 1 and a black particle indicates that it maps to 0. When $x_i \le 0.2$, the value in the dimension is 0. When $0.2 < x_i < 0.8$, the value is 1 with probability $p_1 = \frac{1}{1+e^{-x_i}}$ and 0 with probability $p_2 = 1 - \frac{1}{1+e^{-x_i}}$. When $x_i \ge 0.8$, the value is 1. Equation (8) guarantees that the probability of taking 1 in a dimension increases with $x_i$, while retaining a degree of randomness.
In the process of solving the 0-1 integer programming problem, the mapping between $x_i^{binary}$ and $x_i$ occurs only during particle evaluation and the output of the optimal solution. When evaluating a particle, the values of each dimension are mapped and the corresponding evaluation function value is calculated. A minimal sketch of this mapping follows.
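The sketch below transcribes Equation (8) with the $ub = 0.8$ and $lb = 0.2$ values reported above; the vectorized NumPy form is an implementation choice for illustration.

```python
import numpy as np

def to_binary(x, lb=0.2, ub=0.8, rng=np.random.default_rng()):
    """Map a continuous particle x to a 0-1 vector following Equation (8):
    dimensions at or above ub map to 1, at or below lb map to 0, and those
    in between map to 1 with probability 1 / (1 + exp(-x_i))."""
    p = 1.0 / (1.0 + np.exp(-x))                 # sigmoid, increasing in x
    b = (rng.random(x.shape) <= p).astype(int)   # stochastic middle band
    b[x >= ub] = 1                               # deterministic upper band
    b[x <= lb] = 0                               # deterministic lower band
    return b

print(to_binary(np.array([0.1, 0.5, 0.9])))  # middle entry is random
```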

4. Results

To evaluate the performance of the VN-IPSO algorithm and the effectiveness of the CBPS scheme, we compare VN-IPSO with several excellent comparison algorithms on 23 classic benchmark functions [32], 6 knapsack problem benchmark sets [33], and 10 CEC2017 composite functions [34]. Section 4.1 describes the benchmark functions, the knapsack benchmark sets, the composite functions, and the comparison algorithms. Section 4.2 qualitatively analyzes the solution process of the VN-IPSO algorithm from the perspectives of the search history (population position changes), the trajectory of the first particle, the average fitness of the population, and convergence. Section 4.3 analyzes in detail the result data of each algorithm on the 23 benchmark functions and the 6 knapsack benchmark sets. The population size and maximum number of iterations of all optimization algorithms are set to 200 and 1500, respectively; other parameter settings follow the sources of each algorithm. Each algorithm is run independently 10 times on each benchmark problem, and the average performance is reported.

4.1. Benchmark Functions and Comparison Algorithms

The benchmark functions used in this section are divided into two parts. The first includes the 23 classic benchmark functions and the 6 knapsack problem benchmark sets; the former are continuous optimization problems, and the latter are constrained discrete optimization problems. The second part includes the CEC2017 composite functions, which are complex function-solving problems. Together, the two parts comprehensively evaluate the performance of the VN-IPSO algorithm. The Friedman test [35], a non-parametric test, can detect significant differences in the performances of several algorithms and yields the average ranking of each algorithm over the benchmark functions. Among the 23 benchmark functions, F1–F7 are unimodal optimization problems and F8–F23 are multimodal optimization problems. Unimodal functions are generally used to test the exploitation ability of an algorithm, while multimodal functions are generally used to test its exploration ability. Detailed information on the six knapsack problem benchmark sets is shown in Table 1, including the name, number of knapsacks, number of items, item prices, knapsack capacities, constraint coefficient matrix, and optimal value of each set. The detailed data on item prices and constraint matrix coefficients are given in the attachment. In addition, the mathematical model of each knapsack problem can be obtained by substituting the parameters of the corresponding benchmark set into Equation (2). Among the CEC2017 functions, C20–C29 are composite functions, which can be used to test an algorithm's ability to solve complex problems.
When solving the first part of the benchmark functions, we select six representative algorithms that simulate biological behaviors, including Harris hawks optimization (HHO) [36], the osprey optimization algorithm (OOA) [37], the sparrow search algorithm (SSA) [38], and the dung beetle optimizer (DBO) [39]. When solving the second part, we choose PSO, PSO with comprehensive learning and a modified dynamic multi-swarm strategy (CLDMSL-PSO) [40], guided adaptive search-based PSO (GuASPSO) [41], and success-history-based parameter adaptation for differential evolution (SHADE) [42]. CLDMSL-PSO and GuASPSO are improved PSO algorithms with an excellent performance in recent years. SHADE is a winning algorithm of the CEC competition that is widely used to compare algorithms on single-objective functions. Comparison algorithms from these different sources show the performance of the VN-IPSO algorithm more comprehensively.
In the experiment, the parameters of VN-IPSO are set as follows: $\mathrm{Max}(V) = 6$; $\mathrm{Max}(\omega) = 0.8$; $\mathrm{Min}(\omega) = 0.2$; $\mathrm{Min}(c_1) = 0.1$; $\mathrm{Min}(c_2) = 0.5$; $\mathrm{Min}(c_3) = 0.2$; $\mathrm{Max}(c_1) = 2$; $\mathrm{Max}(c_2) = 2.5$; $\mathrm{Max}(c_3) = 3$. On the one hand, these values and their upper and lower limits are based on the original PSO settings and several improved PSO parameter settings; on the other hand, the value ranges were confirmed in many experiments to ensure the excellent performance of the algorithm.

4.2. Qualitative Analysis

In order to conduct a qualitative analysis of the solving performance of VN-IPSO, we analyze four well-known indicators: the search history (population position changes), the trajectory of the first particle, the average fitness of the population, and the best fitness value of the population. By recording the initial, intermediate, and final states of the population, the search history shows the position changes of all particles during the iterative process and reflects the movement trend of the population. The trajectory of the first particle is monitored by recording the change in the first variable of the first particle during the iterative process. The average fitness of the population is recorded in each generation to reflect the change in overall quality and population stability. The best fitness value of the population, recorded in each generation, reflects the convergence of the algorithm.
One benchmark function is selected from each class of the 23 benchmark functions for display and analysis, and two benchmark test sets, WEING1 and WEING2, are selected from the 6 knapsack benchmark sets. Each complete qualitative analysis consists of four indexes: the search history (population position changes), the trajectory of the first particle, the average fitness of the population, and the convergence curve. In the search history, green, blue, and red dots represent the initial, intermediate, and final states of the population, respectively. In the first-particle trajectory diagram, the x axis is the number of iterations and the y axis is the value of the first variable of the first particle. In the population average fitness graph, the x axis is the number of iterations and the y axis is the average fitness value of the population. In the best fitness diagram, the x axis is the number of iterations and the y axis is the best fitness value of the population. Figure 6 shows the qualitative analysis results of VN-IPSO on a unimodal optimization problem (F1) and multimodal optimization problems (F9, F18). Figure 7 shows the qualitative analysis results of VN-IPSO when solving constrained 0-1 integer programming problems (WEING1 and WEING2).
As can be seen from the search histories in Figure 6a–c and Figure 7a,b, the population gradually gathers from the initial green, randomly dispersed state into the intermediate blue state during the iterative process and finally converges to the red points. This reflects that the population in the initial stage of the VN-IPSO algorithm has a good diversity and that the convergence of VN-IPSO gradually appears as the iterations advance. In addition, comparing the search history of the VN-IPSO algorithm with that of IPSO, the intermediate state is more concentrated, indicating that the convergence speed of VN-IPSO is improved over the IPSO algorithm.
It can be seen from the first-particle trajectories in Figure 6a–c and Figure 7a,b that the first particle moves with a high frequency and large amplitude in the beginning phase; as the number of iterations increases, its position stabilizes and finally settles at one point. This shows that the VN-IPSO algorithm tends to converge and can find a stable point. In addition, comparing the first-particle trajectory of the VN-IPSO algorithm with that of the IPSO algorithm, the VN-IPSO trajectory is more stable, indicating a higher stability and faster convergence speed than the IPSO algorithm.
The population average fitness curves in Figure 6a–c and Figure 7a,b (the average fitness of all particles) show that the population average fitness declines with fluctuations, slowly at first and then rapidly. This reflects that the overall quality of the population in the VN-IPSO algorithm improves with slight fluctuations and finally stabilizes.
According to the convergence curves in Figure 6a–c and Figure 7a,b, the best fitness value of the population decreases rapidly at first and then remains stable as the number of iterations increases, indicating that the VN-IPSO algorithm converges. In addition, comparing the best fitness curves of the VN-IPSO and IPSO algorithms, VN-IPSO converges faster on benchmark function F9, while on the other benchmark functions there is little difference between the two, indicating that the overall convergence ability of the VN-IPSO algorithm is stronger than that of the IPSO algorithm.
By analyzing the search history, the trajectory of the first particle, the average fitness value of the population, and the best fitness value of the population, it can be seen that the VN-IPSO algorithm performs well in terms of population diversity, convergence speed, optimization ability, and robustness, on both the continuous optimization problems and the six knapsack benchmark sets. The results also show that the 0-1 integer programming scheme designed in this paper helps VN-IPSO solve constrained discrete optimization problems.

4.3. Quantitative Analysis

In this section, the VN-IPSO algorithm and the comparison algorithms are each run 10 times on the 23 benchmark test functions and the 6 knapsack problem benchmark sets, yielding 10 experimental results per algorithm on each benchmark problem. The experimental results are then further processed and analyzed. The population size and maximum number of iterations of all optimization algorithms are set to 200 and 1500, respectively; other parameter settings follow the sources of each algorithm.
In the experiment, the average error, standard deviation, and Friedman ranking are used as quantitative indicators to evaluate the performance of the VN-IPSO algorithm. Table 2 records the average error and standard deviation of each algorithm on the 23 benchmark functions, marks the ranking of the eight algorithms on each function, and marks the first-ranked algorithm in bold. It also counts the number of first places each algorithm takes over all 23 benchmark functions, over the unimodal functions, and over the multimodal functions, with the winner in each of the three cases marked in bold. At the end of Table 2, the Friedman rankings of the eight algorithms are recorded: VN-IPSO ranks first, IPSO second, SSA third, HHO fourth, PSO fifth, OOA sixth, DBO seventh, and GA eighth. The results show that the VN-IPSO algorithm is highly competitive against IPSO and the other classical algorithms, and its comprehensive solving performance is very high. At the same time, this shows that the improvement based on the IPSO algorithm is effective.
In Table 2, among the experiments on the 23 benchmark functions, the VN-IPSO algorithm took 18 first places, the IPSO algorithm 13, the SSA algorithm 5, the PSO algorithm 3, the HHO and DBO algorithms 2 each, and the OOA algorithm 1, while the GA algorithm took no first places. The VN-IPSO algorithm performed best on most of the benchmark functions, which fully demonstrates its excellent performance. We also counted the number of first places per algorithm separately for the unimodal and multimodal functions, as shown at the end of Table 2.
Among the seven unimodal functions, the VN-IPSO algorithm won first place six times, the IPSO algorithm twice, and the SSA algorithm once. This indicates that VN-IPSO has a better exploitation ability than IPSO; the local exploitation strategy effectively improves the exploitation ability.
Among the 16 multimodal functions, the VN-IPSO algorithm won first place 12 times, the IPSO algorithm 11 times, the SSA algorithm 4 times, the PSO algorithm 3 times, the HHO and DBO algorithms twice each, and the OOA algorithm once, showing that VN-IPSO's exploration ability exceeds that of the other comparison algorithms.
Therefore, compared with the IPSO algorithm and the other six comparison algorithms, the VN-IPSO algorithm surpasses the others on both unimodal and multimodal functions. Since unimodal functions verify an algorithm's exploitation ability and multimodal functions verify its exploration ability, this indicates that the exploration and exploitation capabilities of VN-IPSO are both excellent and well balanced. It also indicates that the local exploitation strategy based on variable neighborhood search designed in this paper is an effective improvement to the IPSO algorithm: it not only improves the local exploitation ability of IPSO, but also balances the exploration and exploitation of the algorithm, further improving its performance.
In addition, the detailed experimental results of each algorithm on the six knapsack problem benchmark test sets are recorded in Table 3. It can be seen that VN-IPSO's results on most data sets are better than those of the other seven algorithms: on the six discrete benchmark sets, VN-IPSO won four first places, surpassing all of them. According to the Friedman ranking, VN-IPSO ranks first, GA second, IPSO third, OOA fourth, HHO fifth, PSO sixth, SSA seventh, and DBO eighth. The experimental results not only prove the high solution performance of the VN-IPSO algorithm, but also verify the effectiveness of the 0-1 integer programming scheme. That is, the comprehensive solution performance of VN-IPSO is better than that of the comparison algorithms, and, combined with the 0-1 integer programming scheme, it is also applicable to 0-1 integer programming problems with constraints. From the above results, VN-IPSO is superior to IPSO, GA, DBO, PSO, SSA, HHO, and OOA on most problems; it has a high computational efficiency as well as strong optimization, convergence, global optimization, and exploration and exploitation abilities. Moreover, when solving constrained 0-1 integer programming problems, VN-IPSO also shows strong competitiveness, indicating that it can efficiently solve unconstrained continuous optimization problems as well as constrained 0-1 integer programming problems.
For the composite function experiments, each comparison algorithm was run independently 30 times on each test function, with 500 iterations per run and a population size of 200. To further demonstrate the performance of the proposed VN-IPSO algorithm on composite functions, we compared VN-IPSO with the particle swarm algorithm, three improved particle swarm algorithms proposed in recent years, and SHADE, a winning algorithm of the CEC competition, on the composite test functions of CEC2017. The results are shown in Table 4, which records the average error and standard deviation of each algorithm on the 10 composite functions and marks the ranking of the six algorithms on each benchmark function. The first-ranked algorithm is marked in bold, and the number of times each algorithm won first place across the 10 composite test functions is counted. Table 4 also gives the Friedman scores and rankings of the six algorithms; according to the Friedman ranking, VN-IPSO ranks first. The results show that the VN-IPSO algorithm is very competitive against SHADE and multiple improved particle swarm algorithms and performs excellently on composite functions.
We compare and analyze the convergence of VN-IPSO and the other five algorithms; Figure 8 plots the convergence curves on the 10 composite test functions from an independent run. There are large differences in search speed, solution accuracy, and convergence speed among the algorithms. On the C21, C22, C24, C25, C26, and C28 functions, VN-IPSO converges quickly with high accuracy and has obvious advantages among the six algorithms. On the C20 and C29 test functions, SHADE converges quickly with the highest accuracy, and VN-IPSO has the second-highest accuracy. On the C23 test function, SHADE has the fastest convergence speed and the highest accuracy, CLDMSL-PSO ranks second, PSO third, VN-IPSO fourth, and IPSO and GuASPSO fifth. On the C27 test function, SHADE has the fastest convergence speed and the highest accuracy, CLDMSL-PSO is second, PSO is third, IPSO and GuASPSO are both fourth, and VN-IPSO is sixth. Overall, VN-IPSO has a better convergence performance than the other algorithms on most composite functions, and its convergence ability is very stable.
We compare and analyze the robustness of VN-IPSO and the five other algorithms; Figure 9 shows box plots of the six algorithms on the 10 composite functions. There are large differences in solving accuracy and robustness among the algorithms. On the C26 and C28 functions, VN-IPSO has the highest robustness and precision. On the C20, C22, and C27 functions, SHADE has the highest robustness and precision, and VN-IPSO ranks second. On the C23 and C29 functions, the data distribution of VN-IPSO is relatively concentrated, and its performance is average. On the C21, C24, and C25 functions, the data distribution of VN-IPSO is wide, and its robustness is weak. VN-IPSO and SHADE perform best on most test functions, with relatively concentrated data distributions. CLDMSL-PSO performs mediocrely on some test functions, with a wide data distribution and a small number of outliers. GuASPSO, PSO, and IPSO perform averagely on most test functions, with wide data distributions and some outliers.
We perform a computational complexity analysis of VN-IPSO. First, we analyze the time complexity. The initialization stage involves population initialization, initial fitness calculation, and K-Means clustering, with a time complexity of O(N × D). The main loop includes fitness evaluation, updates of the individual and population optima, velocity and position updates, and other operations, which iterate over the population several times, with a time complexity of O(T × K × N × D). Therefore, the time complexity of the overall algorithm is approximately O(T × K × N × D). Next, we analyze the space complexity. Storing the positions, velocities, and opposite points of the initial population requires O(2 × N × D) space. Storing the sample counts, labels, and related information of each cluster requires O(K × N × D) space. Storing parameter variables, nearest neighbors, and optimal values requires O(N × D) space. Therefore, the overall space complexity of the algorithm is approximately O(N × D + K × N × D). The time and space complexity of the original particle swarm optimization algorithm can be approximated as O(T × N × D) and O(N × D), respectively. Thus, the difference in time and space complexity between our algorithm and the original PSO is mainly due to the number of clusters, and the difference is small when the number of clusters is small.

5. Conclusions

The local exploitation ability of the IPSO algorithm still has room for improvement, and it lacks constrained and discrete optimization capabilities. Therefore, we mainly improve the local exploitation ability of the IPSO algorithm and its ability to solve constrained discrete problems. We propose an improved particle swarm optimization algorithm based on variable neighborhood search and design a constrained 0-1 integer programming solution scheme. In terms of improvement strategies, the VN-IPSO algorithm retains the three improvement strategies of the IPSO algorithm (the population initialization strategy, the parameter adaptation strategy, and the grouping strategy) and adds a local exploitation strategy based on variable neighborhood search. This strategy improves the exploitation ability by deeply exploiting the current optimal particle of the population in each iteration.
In addition, we conducted a large number of experiments on the 23 benchmark test functions, the 6 knapsack problem benchmark sets, and the 10 composite functions, and analyzed the experimental data both qualitatively and quantitatively. The experimental results show that the VN-IPSO algorithm not only has a better local exploitation ability than the IPSO algorithm, but also ranks first in the Friedman rankings on the 23 benchmark functions, the 6 knapsack problem benchmark sets, and the 10 composite functions. Therefore, the VN-IPSO algorithm proposed in this paper is very competitive, with strong exploration and exploitation capabilities, convergence capability, global optimization capability, and robustness. Moreover, the VN-IPSO algorithm can efficiently solve unconstrained continuous optimization problems and, in combination with the CBPS scheme, effectively solve constrained 0-1 integer programming problems. It also performs well on complex functions. In future research, we will consider applying VN-IPSO to multi-objective optimization problems to further expand its scope of application.

Author Contributions

Conceptualization, H.L. and J.Z.; methodology, H.L., J.Z., Z.Z. and H.W.; software, H.L.; validation, H.L., J.Z., Z.Z. and H.W.; formal analysis, H.L. and J.Z.; investigation, J.Z.; resources, Z.Z. and H.W.; data curation, H.L.; writing—original draft preparation, H.L. and J.Z.; writing—review and editing, J.Z. and Z.Z.; visualization, H.L.; supervision, J.Z., Z.Z. and H.W.; project administration, H.L. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original data and analytical data can be obtained from the author Hao Li ([email protected]).

Conflicts of Interest

Author Jianjun Zhan was employed by the company Cainiao Network. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Osaba, E.; Villar-Rodriguez, E.; Del Ser, J.; Nebro, A.J.; Molina, D.; LaTorre, A.; Suganthan, P.N.; Coello, C.A.C.; Herrera, F. A Tutorial on the Design, Experimentation and Application of Metaheuristic Algorithms to Real-World Optimization Problems. Swarm Evol. Comput. 2021, 64, 100888. [Google Scholar] [CrossRef]
  2. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems: Applications and Trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [Google Scholar] [CrossRef]
  3. Fraser, A.S. Simulation of Genetic Systems by Automatic Digital Computers II. Effects of Linkage on Rates of Advance under Selection. Aust. J. Biol. Sci. 1957, 10, 492–500. [Google Scholar] [CrossRef]
  4. Glover, F. Future Paths for Integer Programming and Links to Artificial Intelligence. Comput. Oper. Res. 1986, 13, 533–549. [Google Scholar] [CrossRef]
  5. Colorni, A.; Dorigo, M.; Maniezzo, V. Distributed Optimization by Ant Colonies. In Proceedings of the First European Conference on Artificial Life, Paris, France, 11–13 December 1991; Volume 142, pp. 134–142. [Google Scholar]
  6. Tang, J.; Chen, X.; Zhu, X.; Zhu, F. Dynamic Reallocation Model of Multiple Unmanned Aerial Vehicle Tasks in Emergent Adjustment Scenarios. IEEE Trans. Aerosp. Electron. Syst. 2022, 59, 1139–1155. [Google Scholar] [CrossRef]
  7. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN′95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  8. Abbass, H.A. MBO: Marriage in Honey Bees Optimization-A Haplometrosis Polygynous Swarming Approach. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No. 01TH8546), Seoul, Republic of Korea, 27–30 May 2001; Volume 1, pp. 207–214. [Google Scholar]
  9. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report-tr06; Erciyes University, Faculty of Engineering, Computer Engineering Department: Talas, Turkey, 2005. [Google Scholar]
  10. Pan, Q.; Tang, J.; Zhan, J.; Li, H. Bacteria Phototaxis Optimizer. Neural Comput. Appl. 2023, 35, 13433–13464. [Google Scholar] [CrossRef]
  11. Tang, J.; Pan, Q.; Chen, Z.; Liu, G.; Yang, G.; Zhu, F.; Lao, S. An Improved Artificial Electric Field Algorithm for Robot Path Planning. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 2292–2304. [Google Scholar] [CrossRef]
  12. Haupt, R.L.; Haupt, S.E. Practical Genetic Algorithms; John Wiley & Sons Inc.: Hoboken, NJ, USA, 2004. [Google Scholar]
  13. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC′06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701. [Google Scholar]
  14. Arora, S.; Anand, P. Chaotic Grasshopper Optimization Algorithm for Global Optimization. Neural Comput. Appl. 2019, 31, 4385–4405. [Google Scholar] [CrossRef]
  15. Ewees, A.A.; Abd Elaziz, M.; Oliva, D. A New Multi-Objective Optimization Algorithm Combined with Opposition-Based Learning. Expert Syst. Appl. 2021, 165, 113844. [Google Scholar] [CrossRef]
  16. Yu, X.; Xu, W.; Li, C. Opposition-Based Learning Grey Wolf Optimizer for Global Optimization. Knowl.-Based Syst. 2021, 226, 107139. [Google Scholar] [CrossRef]
  17. Qin, Z.; Yu, F.; Shi, Z.; Wang, Y. Adaptive Inertia Weight Particle Swarm Optimization. In Artificial Intelligence and Soft Computing–ICAISC 2006: Proceedings of the 8th International Conference, Zakopane, Poland, 25–29 June 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 450–459. [Google Scholar]
  18. Chauhan, P.; Deep, K.; Pant, M. Novel Inertia Weight Strategies for Particle Swarm Optimization. Memetic Comput. 2013, 5, 229–251. [Google Scholar] [CrossRef]
  19. Sekyere, Y.O.; Effah, F.B.; Okyere, P.Y. An Enhanced Particle Swarm Optimization Algorithm via Adaptive Dynamic Inertia Weight and Acceleration Coefficients. J. Electron. Electr. Eng. 2024, 3, 50–64. [Google Scholar] [CrossRef]
  20. Li, C.; You, F.; Yao, T.; Wang, J.; Shi, W.; Peng, J.; He, S. Simulated Annealing Particle Swarm Optimization for High-Efficiency Power Amplifier Design. IEEE Trans. Microw. Theory Tech. 2021, 69, 2494–2505. [Google Scholar] [CrossRef]
  21. Gómez Blas, N.; Arteta Albert, A.; de Mingo López, L.F. Differential Evolution-Particle Swarm Optimization. Int. J. Inf. Technol. Knowl. 2011, 5, 77–84. [Google Scholar]
  22. Zhang, X.; Lin, Q.; Mao, W.; Liu, S.; Dou, Z.; Liu, G. Hybrid Particle Swarm and Grey Wolf Optimizer and Its Application to Clustering Optimization. Appl. Soft Comput. 2021, 101, 107061. [Google Scholar] [CrossRef]
  23. Li, F.; Yue, Q.; Liu, Y.C.; Ouyang, H.B.; Gu, F.Q. A Fast Density Peak Clustering Based Particle Swarm Optimizer for Dynamic Optimization. Expert Syst. Appl. 2024, 236, 121254. [Google Scholar] [CrossRef]
  24. Lu, C.; Zhu, J.; Huang, H.; Sun, Y. A Multi-Hierarchy Particle Swarm Optimization-Based Algorithm for Cloud Workflow Scheduling. Future Gener. Comput. Syst. 2024, 153, 125–138. [Google Scholar] [CrossRef]
  25. Aljohani, A. Optimized Convolutional Forest by Particle Swarm Optimizer for Pothole Detection. Int. J. Comput. Intell. Syst. 2024, 17, 7. [Google Scholar] [CrossRef]
  26. Zhan, J.; Tang, J.; Pan, Q.; Li, H. Improved Particle Swarm Optimization Algorithm Based on Grouping and Its Application in Hyperparameter Optimization. Soft Comput. 2023, 27, 8807–8819. [Google Scholar] [CrossRef]
  27. Eiben, A.E.; Smith, J. From Evolutionary Computation to the Evolution of Things. Nature 2015, 521, 476–482. [Google Scholar] [CrossRef]
  28. Panigrahy, D.; Samal, P. Modified Lightning Search Algorithm for Optimization. Eng. Appl. Artif. Intell. 2021, 105, 104419. [Google Scholar] [CrossRef]
  29. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Metaheuristic Algorithms: A Comprehensive Review. Comput. Intell. Multimed. Big Data Cloud Eng. Appl. 2018, 185–231. [Google Scholar] [CrossRef]
  30. Cacchiani, V.; Iori, M.; Locatelli, A.; Martello, S. Knapsack Problems—An Overview of Recent Advances. Part II: Multiple, Multidimensional, and Quadratic Knapsack Problems. Comput. Oper. Res. 2022, 143, 105693. [Google Scholar] [CrossRef]
  31. Abdel-Basset, M.; El-Shahat, D.; Sangaiah, A.K. A Modified Nature Inspired Meta-Heuristic Whale Optimization Algorithm for Solving 0–1 Knapsack Problem. Int. J. Mach. Learn. Cybern. 2019, 10, 495–514. [Google Scholar] [CrossRef]
  32. Yao, X.; Liu, Y.; Lin, G. Evolutionary Programming Made Faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
  33. Khuri, S.; Bäck, T.; Heitkötter, J. An Evolutionary Approach to Combinatorial Optimization Problems. In Proceedings of the ACM Conference on Computer Science, Phoenix, AZ, USA, 8–10 March 1994; pp. 66–73. [Google Scholar]
  34. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017. [Google Scholar]
  35. Friedman, M. The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
  36. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris Hawks Optimization: Algorithm and Applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  37. Dehghani, M.; Trojovskỳ, P. Osprey Optimization Algorithm: A New Bio-Inspired Metaheuristic Algorithm for Solving Engineering Optimization Problems. Front. Mech. Eng. 2023, 8, 1126450. [Google Scholar] [CrossRef]
  38. Xue, J.; Shen, B. A Novel Swarm Intelligence Optimization Approach: Sparrow Search Algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  39. Xue, J.; Shen, B. Dung Beetle Optimizer: A New Meta-Heuristic Algorithm for Global Optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  40. Wang, R.; Hao, K.; Chen, L.; Liu, X.; Zhu, X.; Zhao, C. A Modified Hybrid Particle Swarm Optimization Based on Comprehensive Learning and Dynamic Multi-Swarm Strategy. Soft Comput. 2024, 28, 3879–3903. [Google Scholar] [CrossRef]
  41. Rezaei, F.; Safavi, H.R. GuASPSO: A New Approach to Hold a Better Exploration–Exploitation Balance in PSO Algorithm. Soft Comput. 2020, 24, 4855–4875. [Google Scholar] [CrossRef]
  42. Tanabe, R.; Fukunaga, A. Success-History Based Parameter Adaptation for Differential Evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78. [Google Scholar]
Figure 1. Diagram of neighborhood and operator.
Figure 2. Variable neighborhood search algorithm process.
Figure 3. The flow diagram for the complete work.
Figure 4. The process of the VN-IPSO algorithm.
Figure 5. Mapping relationship between the integer variable x_i^binary and the continuous variable x_i.
Figure 6. Qualitative analysis results of VN-IPSO on some benchmark functions.
Figure 7. Qualitative analysis results of VN-IPSO on some benchmark test sets of knapsack problems.
Figure 8. Iteration curves of VN-IPSO and comparison algorithms on the CEC2017 composition functions.
Figure 9. Box plots of VN-IPSO and its comparison algorithms on the CEC2017 composition functions.
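Figure 5 underpins the constrained 0-1 integer programming scheme: each continuous particle position x_i is coupled to a binary decision variable x_i^binary. The paper's exact mapping is defined alongside Figure 5 in the body text; the sketch below shows only a generic midpoint-threshold rule for illustration — the threshold and the bounds are assumptions, not the authors' definition.

```python
# Illustrative only: one common way to derive a 0-1 decision vector from a
# continuous PSO position. The midpoint threshold is an assumption, not the
# mapping actually defined with Figure 5 in the paper.
def to_binary(x, lower=0.0, upper=1.0):
    """Map each continuous component to 0 or 1 by thresholding at the midpoint."""
    mid = (lower + upper) / 2.0
    return [1 if xi > mid else 0 for xi in x]

print(to_binary([0.12, 0.93, 0.55, 0.07]))  # -> [0, 1, 1, 0]
```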
Table 1. Knapsack problem benchmark set.

| Dataset | Backpack Quantity (n) | Item Quantity (m) | Commodity Price (m × 1) | Backpack Capacity (n × 1) | Reduced Matrix Coefficient (n × m) | Optimal Value |
| --- | --- | --- | --- | --- | --- | --- |
| WEING1 | 2 | 28 | P1 | (600, 600) | A1 | 141,278 |
| WEING2 | 2 | 28 | P2 | (500, 500) | A2 | 130,883 |
| WEING3 | 2 | 28 | P3 | (300, 300) | A3 | 95,677 |
| WEING4 | 2 | 28 | P4 | (300, 600) | A4 | 119,337 |
| WEISH01 | 5 | 30 | P5 | (400, 500, 500, 600, 600) | A5 | 4554 |
| WEISH02 | 5 | 30 | P6 | (370, 650, 460, 980, 870) | A6 | 4536 |
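To make the structure of Table 1 concrete: each instance asks to maximize the total commodity price p·x over binary selections x, subject to one capacity constraint per knapsack row of the reduced coefficient matrix A. The following minimal sketch (assumed, not taken from the paper) evaluates one candidate selection; the toy numbers are invented and are not the WEING/WEISH data.

```python
# Minimal sketch (assumed, not the authors' code): evaluating one candidate
# solution of a 0-1 multidimensional knapsack instance like those in Table 1.
# Maximize p.x subject to A x <= b with x in {0,1}^m and n knapsack constraints.

def knapsack_value(x, prices, A, capacities):
    """Return the total value of selection x, or None if a capacity is violated."""
    for row, cap in zip(A, capacities):          # one constraint per knapsack
        if sum(a * xi for a, xi in zip(row, x)) > cap:
            return None                          # infeasible selection
    return sum(p * xi for p, xi in zip(prices, x))

# Toy instance (illustrative only): n = 2 knapsacks, m = 4 items.
prices = [10, 7, 4, 9]
A = [[3, 2, 4, 1],      # resource use in knapsack 1
     [2, 5, 1, 3]]      # resource use in knapsack 2
capacities = [6, 7]
print(knapsack_value([1, 0, 0, 1], prices, A, capacities))  # -> 19 (feasible)
print(knapsack_value([1, 1, 1, 1], prices, A, capacities))  # -> None (infeasible)
```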
Table 2. Optimization results of VN-IPSO and comparison algorithms on 23 benchmark functions.

| Function | Index | GA | DBO | PSO | SSA | HHO | OOA | IPSO | VN-IPSO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | Mean | 442.790359 | 10.1385099 | 2.8216e−28 | 7.14e−242 | 1.566e−221 | 0.00619342 | 7.7598e−66 | 1.13e−66 |
| | Std | 137.182308 | 16.9797411 | 7.0889e−28 | 0 | 0 | 0.00092278 | 2.0045e−65 | 2.7073e−66 |
| | Rank | 8 | 7 | 5 | 1 | 2 | 6 | 4 | 3 |
| F2 | Mean | 12.8168562 | 1.18515852 | 7.3222e−11 | 1.076e−13 | 0 | 0.03226565 | 5.434e−116 | 5.703e−141 |
| | Std | 1.8351542 | 0.79065973 | 4.21637021 | 2.2433e−10 | 3.401e−130 | 0.00147625 | 1.599e−115 | 1.197e−131 |
| | Rank | 8 | 6 | 7 | 4 | 2 | 5 | 3 | 1 |
| F3 | Mean | 4384.54913 | 2347.923 | 0.09788436 | 4.588e−196 | 8.3933e−7 | 0.01034093 | 3.705e−174 | 6.501e−223 |
| | Std | 1638.95376 | 3006.00198 | 0.06384631 | 0 | 1.8972e−6 | 0.00211017 | 0 | 0 |
| | Rank | 8 | 7 | 6 | 2 | 4 | 5 | 3 | 1 |
| F4 | Mean | 22.2240848 | 0.56392872 | 0.03420377 | 7.636e−225 | 1.5528e−5 | 0.03162702 | 8.254e−106 | 3.202e−246 |
| | Std | 4.13359783 | 0.66019147 | 0.0185348 | 0 | 1.5743e−5 | 0.00184941 | 2.491e−105 | 0 |
| | Rank | 8 | 7 | 6 | 2 | 4 | 5 | 3 | 1 |
| F5 | Mean | 84531.9336 | 406.338099 | 58.1867974 | 6.0048e−5 | 24.5223222 | 28.6349398 | 0.00030569 | 1.092e−10 |
| | Std | 61559.4176 | 947.446188 | 30.8338529 | 0.00010472 | 18.4911938 | 0.06766764 | 0.000602 | 3.563e−5 |
| | Rank | 8 | 7 | 6 | 2 | 4 | 5 | 3 | 1 |
| F6 | Mean | 521.620021 | 23.2563521 | 1.3476e−27 | 7.8357e−7 | 1.4713e−7 | 0.0939151 | 0 | 0 |
| | Std | 139.335016 | 11.8244259 | 2.1736e−27 | 4.1079e−7 | 1.2245e−7 | 0.02678423 | 0 | 0 |
| | Rank | 8 | 7 | 3 | 5 | 4 | 6 | 1 | 1 |
| F7 | Mean | 0.2614798 | 0.03384663 | 0.54813195 | 6.3318e−5 | 9.1721e−6 | 0.00224541 | 1.8619e−6 | 1.8619e−6 |
| | Std | 0.07496412 | 0.02673147 | 1.12939895 | 4.3605e−5 | 7.3833e−6 | 0.00079055 | 1.2622e−6 | 1.2622e−6 |
| | Rank | 7 | 6 | 8 | 4 | 3 | 5 | 1 | 1 |
| F8 | Mean | 543.761125 | 3318.55481 | 5282.89406 | 4409.98554 | 6404.45814 | 8341.22323 | 0.00127062 | 0.00127062 |
| | Std | 152.557438 | 1470.59194 | 870.566862 | 1171.64832 | 705.745341 | 622.572248 | 0.00194463 | 0.00194463 |
| | Rank | 3 | 4 | 6 | 5 | 7 | 8 | 1 | 1 |
| F9 | Mean | 33.6295793 | 55.6864159 | 48.6044005 | 0 | 0.00309443 | 29.7381907 | 0 | 0 |
| | Std | 10.3483976 | 6.95787723 | 4.9259932 | 0 | 0.00027821 | 4.14773298 | 0 | 0 |
| | Rank | 6 | 8 | 7 | 1 | 4 | 5 | 1 | 1 |
| F10 | Mean | 6.34186004 | 2.82354519 | 2.2471e−14 | 4.4409e−16 | 1.1102e−14 | 0.01911995 | 4.4409e−16 | 4.4409e−16 |
| | Std | 0.80583363 | 0.80136395 | 8.67e−15 | 0 | 3.7449e−15 | 0.00147602 | 0 | 0 |
| | Rank | 8 | 7 | 5 | 1 | 4 | 6 | 1 | 1 |
| F11 | Mean | 4.9866637 | 1.05190314 | 0.00984062 | 0 | 0.00935129 | 0.01004718 | 0 | 0 |
| | Std | 0.83576603 | 0.77290385 | 0.01285125 | 0 | 0.01320962 | 0.00195946 | 0 | 0 |
| | Rank | 8 | 7 | 5 | 1 | 4 | 6 | 1 | 1 |
| F12 | Mean | 8.30049403 | 0.152749 | 1.2445e−30 | 3.6485e−6 | 2.0575e−8 | 0.00772934 | 1.5705e−32 | 1.5705e−32 |
| | Std | 3.09958946 | 0.18841611 | 1.4798e−30 | 3.4859e−6 | 2.3272e−8 | 0.00234762 | 2.885e−48 | 2.885e−48 |
| | Rank | 8 | 7 | 3 | 5 | 4 | 6 | 1 | 1 |
| F13 | Mean | 1640.51486 | 1.96310445 | 0.00109874 | 2.4381e−5 | 4.8619e−7 | 0.26095617 | 1.3498e−32 | 1.3498e−32 |
| | Std | 1940.80195 | 2.32048283 | 0.00347451 | 1.5846e−5 | 5.2673e−7 | 0.06978139 | 2.885e−48 | 2.885e−48 |
| | Rank | 8 | 7 | 5 | 4 | 3 | 6 | 1 | 1 |
| F14 | Mean | 0.00199616 | 0.01426632 | 0.00199616 | 1.37134178 | 0.00199616 | 0.19680925 | 0.00199616 | 0.0018897 |
| | Std | 2.7696e−10 | 0.05142648 | 0 | 3.06179728 | 1.4478e−13 | 0.41911861 | 0 | 0.00033665 |
| | Rank | 4 | 6 | 1 | 8 | 5 | 7 | 1 | 1 |
| F15 | Mean | 0.00039297 | 0.00042169 | 0.00024161 | 1.4617e−5 | 7.5099e−6 | 8.5978e−6 | 9.9055e−5 | 0.00031928 |
| | Std | 0.00010013 | 0.00034143 | 0.00032791 | 6.0474e−6 | 2.1452e−8 | 1.028e−6 | 0.00028957 | 0.00051208 |
| | Rank | 7 | 8 | 5 | 3 | 1 | 2 | 4 | 6 |
| F16 | Mean | 1.4095e−5 | 6.6738e−6 | 2.8453e−5 | 2.8453e−5 | 2.8453e−5 | 2.8434e−5 | 2.8453e−5 | 2.8453e−5 |
| | Std | 1.3187e−5 | 0.00011108 | 0 | 6.4355e−15 | 1.1322e−15 | 2.5584e−8 | 0 | 0 |
| | Rank | 8 | 1 | 2 | 6 | 5 | 7 | 2 | 2 |
| F17 | Mean | 9.4618e−5 | 0.00011264 | 0.00011264 | 0.00011264 | 0.00011264 | 0.00011217 | 0.00011264 | 0.00011264 |
| | Std | 1.8407e−5 | 0 | 0 | 1.4103e−10 | 1.4998e−11 | 4.871e−7 | 0 | 0 |
| | Rank | 8 | 1 | 1 | 6 | 5 | 7 | 1 | 1 |
| F18 | Mean | 0.00034627 | 0.00016835 | 7.816e−14 | 5.4 | 7.1498e−14 | 2.3473e−7 | 7.816e−14 | 7.816e−14 |
| | Std | 0.00073058 | 0.00053238 | 0 | 11.3841996 | 6.8126e−15 | 3.0929e−7 | 0 | 0 |
| | Rank | 7 | 6 | 1 | 8 | 4 | 5 | 1 | 1 |
| F19 | Mean | 0.00277949 | 0.00278215 | 0.00278215 | 0.00133492 | 0.00278215 | 0.00278191 | 0.00278215 | 0.00278215 |
| | Std | 2.0463e−6 | 7.4015e−16 | 9.3622e−16 | 0.00090259 | 4.018e−10 | 1.8426e−7 | 9.3622e−16 | 9.3622e−16 |
| | Rank | 8 | 3 | 4 | 1 | 7 | 2 | 4 | 4 |
| F20 | Mean | 0.05826583 | 0.10609445 | 0.08836169 | 0.0118129 | 0.05748756 | 0.05936371 | 0.02181996 | 0.0099352 |
| | Std | 0.06251318 | 0.0803198 | 0.06544614 | 0.04352062 | 0.06263712 | 0.07964247 | 0.05011052 | 0.03758289 |
| | Rank | 5 | 8 | 7 | 2 | 4 | 6 | 3 | 1 |
| F21 | Mean | 3.75694836 | 2.93954421 | 0.50524307 | 1.50454592 | 3.05881205 | 0.00137885 | 1.01048583 | 0.74703425 |
| | Std | 2.30963523 | 2.93079437 | 1.59771787 | 3.17186063 | 2.6325841 | 0.00076302 | 2.1302905 | 2.3623287 |
| | Rank | 8 | 6 | 2 | 5 | 7 | 1 | 4 | 3 |
| F22 | Mean | 0.04700383 | 2.13817714 | 0.53138631 | 0.66772346 | 4.25209492 | 0.532823 | 0.00014057 | 0.00014057 |
| | Std | 0.04106891 | 2.69727141 | 1.68083556 | 2.11197148 | 2.24107285 | 1.68035026 | 1.8724e−15 | 1.8724e−15 |
| | Rank | 3 | 7 | 4 | 6 | 8 | 5 | 1 | 1 |
| F23 | Mean | 1.80819855 | 3.49275691 | 5.40782013 | 3.83015478 | 0.00010982 | 0.00168152 | 0.00010982 | 0.00010982 |
| | Std | 2.81784607 | 2.12754648 | 7.086e−7 | 4.08236367 | 1.8724e−15 | 0.00111246 | 1.8724e−15 | 1.0256e−15 |
| | Rank | 5 | 6 | 8 | 7 | 1 | 4 | 1 | 1 |
| Number of first places | | 0 | 2 | 3 | 5 | 2 | 1 | 13 | 18 |
| First places on unimodal functions | | 0 | 0 | 0 | 1 | 0 | 0 | 2 | 6 |
| First places on multimodal functions | | 0 | 2 | 3 | 4 | 2 | 1 | 11 | 12 |
| Average ranking | | 6.9130 | 6.0434 | 4.6521 | 3.8695 | 4.1739 | 5.2173 | 2.0 | 1.5652 |
| Friedman ranking | | 8 | 7 | 5 | 3 | 4 | 6 | 2 | 1 |
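The "Average ranking" and "Friedman ranking" rows in Tables 2–4 follow Friedman's rank-based comparison [35]: each algorithm is ranked per function (primarily by its mean result), the ranks are averaged over all functions, and the averages are then ordered. A minimal sketch of this bookkeeping is given below (assumed, not the authors' code); it uses competition-style ties, where tied means share the best rank, as ties appear to be handled in the tables.

```python
# Minimal sketch (assumed, not the authors' code): derive per-function ranks
# and the "Average ranking" row from a table of mean objective values.

def competition_ranks(values):
    """Rank ascending (lower mean = better); tied values share the best rank."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

def average_ranking(mean_table):
    """mean_table: one row per function, one mean per algorithm."""
    n_funcs = len(mean_table)
    totals = [0] * len(mean_table[0])
    for row in mean_table:
        for i, r in enumerate(competition_ranks(row)):
            totals[i] += r
    return [t / n_funcs for t in totals]

# Toy example, three algorithms on two functions (illustrative numbers only):
means = [
    [442.79, 0.0062, 1.13e-66],
    [12.82, 0.0323, 5.70e-141],
]
print(average_ranking(means))  # -> [3.0, 2.0, 1.0]
```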
Table 3. Optimization results of VN-IPSO and comparison algorithms on 6 knapsack problem benchmark test sets.

| Problem | Index | GA | DBO | PSO | SSA | HHO | OOA | IPSO | VN-IPSO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| WEING1 | Mean | 200.1000 | 9082.3000 | 3728.9000 | 7004.1000 | 7259.7000 | 3337.1000 | 696.5000 | 356.6000 |
| | Std | 287.0310 | 4770.0680 | 2805.2030 | 2245.2070 | 6549.3000 | 3338.3440 | 480.9738 | 317.0052 |
| | Rank | 1 | 8 | 5 | 6 | 7 | 4 | 3 | 2 |
| WEING2 | Mean | 178.6000 | 12,774.8000 | 10,804.8000 | 13,278.8000 | 9922.6000 | 3539.3000 | 3508.3000 | 1038.4000 |
| | Std | 291.2209 | 6020.9120 | 2676.6370 | 2885.7830 | 7062.7600 | 2307.0160 | 3072.1480 | 2202.4610 |
| | Rank | 1 | 7 | 6 | 8 | 5 | 4 | 3 | 2 |
| WEING3 | Mean | 8043.1000 | 22,077.1000 | 8975.3000 | 20,667.7000 | 5011.5000 | 3988.0000 | 6816.9000 | 814.5000 |
| | Std | 5651.3250 | 11,888.0200 | 6745.8950 | 3678.2950 | 5117.2880 | 2206.0130 | 7075.3860 | 860.7260 |
| | Rank | 5 | 8 | 6 | 7 | 3 | 2 | 4 | 1 |
| WEING4 | Mean | 400.4000 | 10,854.5000 | 8884.5000 | 12,168.9000 | 6920.5000 | 5598.1000 | 4713.8000 | 40.9000 |
| | Std | 1096.0960 | 4093.4210 | 3606.9780 | 3607.7080 | 2532.0730 | 2870.2640 | 2459.3220 | 129.3372 |
| | Rank | 2 | 7 | 6 | 8 | 5 | 4 | 3 | 1 |
| WEISH01 | Mean | 74.2000 | 749.5000 | 496.0000 | 766.9000 | 405.9000 | 226.9000 | 93.2000 | 0.0000 |
| | Std | 64.5149 | 727.3459 | 221.0354 | 302.2238 | 201.6022 | 94.4651 | 84.7792 | 0.0000 |
| | Rank | 2 | 7 | 6 | 8 | 5 | 4 | 3 | 1 |
| WEISH02 | Mean | 6.9000 | 815.9000 | 299.8000 | 557.1000 | 216.4000 | 181.4000 | 128.9000 | 0.0000 |
| | Std | 11.0900 | 503.2822 | 145.0738 | 236.1254 | 99.2373 | 116.2241 | 90.4967 | 0.0000 |
| | Rank | 2 | 8 | 6 | 7 | 5 | 4 | 3 | 1 |
| Number of first places | | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
| Average ranking | | 2.1667 | 7.5000 | 5.8333 | 7.3333 | 5.0000 | 3.6667 | 3.1667 | 1.3333 |
| Friedman ranking | | 2 | 8 | 6 | 7 | 5 | 4 | 3 | 1 |
Table 4. Optimization results of VN-IPSO and comparison algorithms on 10 CEC2017 composition functions.

| Function | Index | SHADE | CLDMSL-PSO | GuASPSO | PSO | IPSO | VN-IPSO |
| --- | --- | --- | --- | --- | --- | --- | --- |
| C20 | Mean | 0 | 21.15365761 | 33.29898954 | 63.20034247 | 33.29898954 | 5.657683372 |
| | Std | 0 | 41.73793238 | 15.11951136 | 53.90405061 | 15.11951136 | 6.103723013 |
| | Rank | 1 | 3 | 4 | 6 | 5 | 2 |
| C21 | Mean | 133.6615744 | 162.9370387 | 132.2818833 | 146.7459032 | 132.2818833 | 134.845604 |
| | Std | 44.33402988 | 53.86230956 | 54.48766648 | 57.94062949 | 54.48766648 | 54.19161521 |
| | Rank | 3 | 6 | 1 | 5 | 1 | 4 |
| C22 | Mean | 100 | 101.9906485 | 93.98976272 | 96.92634814 | 93.98976272 | 93.24087653 |
| | Std | 0 | 1.998406051 | 21.18663031 | 9.85974134 | 21.18663032 | 3.02059206 |
| | Rank | 5 | 6 | 2 | 4 | 3 | 1 |
| C23 | Mean | 304.0516673 | 313.4525772 | 335.5633963 | 322.8265642 | 335.5633963 | 333.8308895 |
| | Std | 1.429478872 | 6.628703125 | 9.534433986 | 10.16884169 | 9.534433986 | 9.04433487 |
| | Rank | 1 | 2 | 5 | 3 | 6 | 4 |
| C24 | Mean | 299.6394768 | 309.1760025 | 257.6516403 | 284.6512984 | 257.6516403 | 254.3574406 |
| | Std | 72.84316848 | 83.59799634 | 123.2355665 | 107.1925837 | 123.2355665 | 120.0454233 |
| | Rank | 5 | 6 | 2 | 4 | 3 | 1 |
| C25 | Mean | 416.6076647 | 442.4497437 | 413.1410394 | 418.3482144 | 413.1410394 | 411.6593503 |
| | Std | 23.22847765 | 9.633182774 | 21.82754355 | 22.95072834 | 21.82754355 | 21.1496311 |
| | Rank | 4 | 6 | 2 | 5 | 3 | 1 |
| C26 | Mean | 300 | 306.6918764 | 276.6666667 | 341.4100252 | 276.6666667 | 278.2383118 |
| | Std | 0 | 74.36461414 | 77.38543627 | 189.9198848 | 77.38543627 | 78.34845454 |
| | Rank | 4 | 5 | 1 | 6 | 1 | 3 |
| C27 | Mean | 389.483826 | 398.3006532 | 399.5479121 | 403.5229496 | 399.5479121 | 396.7648786 |
| | Std | 0.130016213 | 5.4671525 | 6.954158072 | 19.56179933 | 6.954158072 | 6.785615744 |
| | Rank | 1 | 3 | 4 | 6 | 5 | 2 |
| C28 | Mean | 469.6364644 | 537.3506923 | 344.5934027 | 439.4083219 | 344.5934027 | 332.2310026 |
| | Std | 149.5091467 | 112.8156237 | 91.22837107 | 141.101867 | 91.22837107 | 87.83661403 |
| | Rank | 5 | 6 | 2 | 4 | 3 | 1 |
| C29 | Mean | 231.6811918 | 266.2303566 | 288.8946768 | 292.2126569 | 288.8946768 | 279.9698013 |
| | Std | 1.704362748 | 21.51989583 | 22.47162985 | 46.9460656 | 22.47162985 | 18.93510266 |
| | Rank | 1 | 2 | 4 | 6 | 5 | 3 |
| Number of first places | | 4 | 0 | 2 | 0 | 2 | 4 |
| Average ranking | | 3 | 4.5 | 3.2 | 4.9 | 3.2 | 2.2 |
| Friedman ranking | | 2 | 5 | 3 | 6 | 3 | 1 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
