Article

A New Gaining-Sharing Knowledge Based Algorithm with Parallel Opposition-Based Learning for Internet of Vehicles

1
College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
2
Department of Information Management, Chaoyang University of Technology, 168, Jifeng E. Rd., Wufeng District, Taichung 41349, Taiwan
3
College of Science and Engineering, Flinders University, Bedford Park, SA 5042, Australia
4
College of Computer and Data Science, Fuzhou University, Xueyuan Road No.2, Fuzhou 350116, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(13), 2953; https://doi.org/10.3390/math11132953
Submission received: 3 June 2023 / Revised: 28 June 2023 / Accepted: 29 June 2023 / Published: 2 July 2023

Abstract

Heuristic optimization algorithms have proven powerful in solving nonlinear and complex optimization problems; consequently, many effective optimization algorithms have been applied to optimization problems in real-world scenarios. This paper presents a modification of the recently proposed Gaining–Sharing Knowledge-based (GSK) algorithm and applies it to optimize resource scheduling in the Internet of Vehicles (IoV). The GSK algorithm simulates the gaining and sharing of knowledge during different phases of human life, divided mainly into a junior phase and a senior phase. An individual starts in the junior phase in all dimensions and gradually moves into the senior phase as it interacts with its surrounding environment. The main idea of the improvement is to divide the initial population into different groups, each searching independently and communicating according to two main strategies. Opposition-based learning is introduced to correct the direction of convergence and increase its speed. The improved algorithm is named the parallel opposition-based Gaining–Sharing Knowledge-based algorithm (POGSK). It is tested against the original algorithm and several classical algorithms on the CEC2017 test suite. The results show that the improved algorithm significantly outperforms the original. When POGSK was applied to resource scheduling in the IoV, the results also showed that it is more competitive.

1. Introduction

Heuristic optimization algorithms have been studied and developed over the past three decades and have proven capable of solving a variety of complex, nonlinear optimization problems. These methods are user-friendly and do not require a mathematical analysis of the optimization problem. Compared with traditional methods, they offer flexibility, need no gradient information and avoid being trapped in local optima [1]. These features have drawn a large number of researchers to algorithm design. Most heuristic algorithms are inspired by their authors' observations of animal and plant phenomena in nature; to solve optimization problems, the algorithms simulate growth and evolution. Heuristics are also widely used in real scenarios, such as path planning [2], the wireless sensor network localization problem [3], the wireless sensor network routing problem [4], airport gate assignment [5] and cloud computing workflow scheduling [6].
Heuristic algorithms can be divided into four categories [7]. The first category is algorithms based on swarm intelligence techniques. Much of the inspiration for such algorithms comes from observations of social animals: in a population, each individual exhibits a certain degree of independence while still interacting with the entire group. The main representative algorithms are particle swarm optimization (PSO) [8], the phasmatodea population evolution algorithm (PPE) [9], the Gannet optimization algorithm (GOA) [10], the grey wolf optimizer (GWO) [11], cat swarm optimization (CSO) [12], etc. The second category is algorithms based on evolutionary techniques. Such algorithms are inspired by developments in biology: an initial random population is gradually iterated toward the final optimization goal through crossover, mutation, selection and other operations. The main representative algorithms are the genetic algorithm (GA) [13], the differential evolution algorithm (DE) [14], the quantum evolutionary algorithm (QEA) [15], etc. The third category is algorithms based on physical phenomena, which simulate the laws governing certain natural phenomena. The main representative algorithms are the Archimedes optimization algorithm (AOA) [16], the simulated annealing algorithm (SAA) [17], the sine cosine algorithm (SCA) [18], etc. The fourth category is algorithms based on human-related behavior: each person, as an independent, intelligent and rational individual, exhibits unique physical and psychological behavior. The main representative algorithms are the Teaching–Learning-Based Optimization algorithm (TLBO) [19], the Gaining–Sharing Knowledge-based algorithm (GSK) [7], etc.
The original GSK simulates the behavior of acquiring and sharing knowledge throughout a person's life, culminating in the maturation of the individual [20]. The authors divide human life into two distinct phases: the junior phase, which corresponds to childhood, and the senior phase, which corresponds to adulthood. The strategies for knowledge acquisition and sharing differ between these two phases. At the beginning of the algorithm, individuals tend to use a relatively naive method to acquire and share knowledge; however, not all disciplines (i.e., not all dimensions of the solution) use this naive method, and in some disciplines individuals use relatively advanced methods. As individuals grow, the algorithm enters its middle stage and knowledge learning relies more on the advanced method, while a few disciplines still use the naive one. Individuals thus pass through the two stages, alternating between the naive and advanced strategies to update their knowledge in each discipline, and eventually reach maturity, which is when they find their optimal position. Ali Wagdy Mohamed demonstrated the algorithm's powerful optimization capabilities on the CEC2017 test suite when presenting GSK. Although the GSK algorithm shows excellent convergence in solving optimization problems, there is room for improvement in avoiding locally optimal solutions and in convergence speed. To further improve its performance, we propose several approaches to incorporate into the GSK algorithm, which are described next in turn. Experiments have been conducted to demonstrate that these approaches are effective in improving the performance of the GSK algorithm.
Parallel processing is concerned with producing the same results using multiple processors with the goal of reducing the running time [21]. Because physical parallel processing cannot be used in the optimization algorithm, we adopt an alternative approach. The main idea of the parallel mechanism is to divide the initial population into several different groups. Each group performs iterative updates independently and communicates regularly between groups. The parallel mechanism has been applied widely, including to the parallel particle swarm algorithm (Chu S C 2005) [21] and parallel genetic algorithms [22]. In addition, parallel strategy is also used in multi-objective optimization algorithms. Cai D proposed an evolutionary algorithm based on uniform and contraction for many-objective optimization [23], which uses a parallel mechanism to enhance the local search ability.
The communication strategies between groups can vary when optimizing different algorithms. This paper presents a communication strategy using the Taguchi method. The main idea is to use a pre-designed orthogonal table for crossover experiments; compared with the traditional experimental method, it achieves almost the same effect while markedly reducing the number of experiments. The Taguchi method thus reduces the number of experiments, reduces their cost and improves their efficiency [24]. It has been successfully applied to improve the genetic algorithm (Jinn-Tsong Tsai 2004) [25], the Archimedes optimization algorithm (Shi-Jie Jiang 2022) [26], the cat swarm optimization algorithm (Tsai P W 2012) [24], etc. In addition, opposition-based learning (OBL) was also incorporated into GSK. The concept of OBL was proposed by Tizhoosh (2005) [27]; since then, a number of classical optimization algorithms have adopted this idea. It has been successfully applied to improve grey wolf optimization (Souvik Dhargupta 2020) [28] (Dhargupta S 2020) [29], the differential evolution algorithm (Rahnamayan S 2008) [30] (Wu Deng 2021) [31], particle swarm optimization (Wang H 2011) [32], the grasshopper optimization algorithm (Ahmed A. Ewees 2018) [33], etc.
To reach a convincing conclusion, the performance of any optimization or evolutionary algorithm can only be judged via extensive benchmark function tests [34]. Diverse and difficult test problems are required for this purpose and the CEC2017 test suite [35] is a widely accepted set of such problems. Before an optimization algorithm can be applied to complex real-world optimization problems, it must be able to solve single-objective optimization problems effectively. The CEC2017 test suite contains 30 single-objective real-parameter numerical optimization problems. Compared with CEC2013 and CEC2014, CEC2017 introduces several test problems with new characteristics, such as new basic problems, test problems composed by extracting features dimension-wise from several problems, graded levels of linkage, rotated trap problems and so on. In this paper, the CEC2017 test suite is used to evaluate the proposed algorithm (POGSK), the original algorithm and several classical algorithms.
The IoV enables vehicles on the road to exchange information with roadside units (RSUs) [36]. Users can thus expect quick, comprehensive and convenient services, such as road condition information, traffic jam information, traffic status information and city entertainment information. However, traditional resource-allocation strategies may not be able to provide satisfactory Quality of Service (QoS) due to several factors, including resource constraints, network transmission delays and the deployment of RSUs. Scheduling problems can be solved using various methods, roughly grouped into three categories: exact, approximate and heuristic [37]. An exact method evaluates every solution in the whole search space to find the optimum, which is obviously only suitable for small-scale problems. An approximate method uses mathematical rules to find the optimal solution, which requires a different analysis for each problem; in most cases, however, a mathematical analysis of the problem is difficult. Therefore, the heuristic approach is a decent option. The main objective of scheduling algorithms is to find the best resources in the cloud for the applications (tasks) of the end user, improving the QoS parameters and resource utilization [38]. To solve this optimization problem, this paper proposes using the heuristic algorithm POGSK for resource scheduling.
The main contributions of this paper are as follows:
1. An improved Gaining–Sharing Knowledge-based algorithm (POGSK) is proposed, which uses parallel strategy and OBL strategy. The use of parallel strategy increases the diversity of the population so that the algorithm can effectively avoid local optimal solutions. The OBL strategy can correct the convergence direction and improve the convergence accuracy.
2. A new inter-group communication strategy is designed. Specifically, the Taguchi communication strategy and the population-merger communication strategy were used. This enables efficient exchange of information between subpopulations and avoids the weakening of algorithm performance caused by the reduction of the number of individuals in subpopulations.
3. POGSK is used in the resource-scheduling problems of the IoV to improve QoS, which can reflect the performance of the algorithm in real scenarios. Simulation results show that POGSK is more competitive than other algorithms.

2. Related Works

2.1. GSK Algorithm

GSK is a human-based heuristic algorithm that gradually updates knowledge (corresponding to the solution of the algorithm) by simulating the process of knowledge sharing and acquisition in human life. The algorithm mainly consists of two phases, the junior phase and the senior phase; knowledge gaining and sharing follow different processes in each phase [7].
When individuals are young, they prefer to interact with individuals similar to themselves. Despite their immature ability to distinguish right from wrong, they are willing to communicate with unfamiliar people, which represents curiosity in the junior phase. People similar to themselves correspond to relatives, friends and other small surrounding groups; unfamiliar people correspond to strangers. The above scheme is the junior gaining–sharing scheme. In the junior phase, more dimensions are updated using this scheme than using the other (senior gaining–sharing) scheme.
After updating and iteration, individuals reach middle age gradually. As the capacity to distinguish between right and wrong gradually increases, individuals are more willing to divide the crowd into three different populations: the advantaged, the disadvantaged and the general population. Individuals improve themselves by interacting with these three groups. The above scheme is the senior gaining–sharing scheme. In the senior phase, more dimensions are updated using the senior scheme than using the junior scheme. In the following, we describe in detail the dimensions in which the two schemes are utilized and their respective processes.
Let x_i, i = 1, 2, 3, …, N, where N is the population size and x_i represents an individual member of the population. x_i = (x_i1, x_i2, x_i3, …, x_iD), where D is the dimensionality of the problem and x_ij represents the value of individual i in dimension j. Before each generation's update, we need to determine, for each individual, which dimensions use the senior scheme and which use the junior scheme. Based on the concept of continual human growth, these dimensions are determined by the following nonlinear increasing and decreasing empirical equations [7]:
D(junior phase) = (problem size) × (1 − G/GEN)^K   (1)
D(senior phase) = (problem size) − D(junior phase)   (2)
where K is the knowledge rate, a real number with K > 0, G is the current generation number and GEN is the maximum number of generations.
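The split of the dimensions between the two phases follows directly from the empirical equations above; a minimal sketch (function name illustrative, rounding to the nearest integer assumed):

```python
def phase_dimensions(problem_size, G, GEN, K):
    """Split the problem dimensions between the two schemes at generation G.

    K is the knowledge rate (K > 0); the junior share shrinks nonlinearly
    as G approaches the maximum generation count GEN.
    """
    d_junior = round(problem_size * (1 - G / GEN) ** K)
    d_senior = problem_size - d_junior
    return d_junior, d_senior
```

At G = 0 every dimension uses the junior scheme; at G = GEN every dimension uses the senior scheme, with a smooth nonlinear transition controlled by K in between.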
The steps for the junior scheme are as follows:
1. The fitness of all individuals is calculated and the individuals are ranked from best to worst fitness: (x_best, x_2, x_3, …, x_{N−1}, x_worst). Each individual chooses three individuals to communicate with according to step 2.
2. For each individual x_i that is neither the best nor the worst, select x_{i−1} and x_{i+1}. For the best individual x_best, select x_{best+1} and x_{best+2}. For the worst individual x_worst, select x_{worst−1} and x_{worst−2}. In addition, an individual x_r is selected at random. These three individuals are the sources of information. The pseudo-code for the junior scheme is presented in Algorithm 1:
Algorithm 1: Junior scheme pseudo-code
[Pseudo-code presented as an image in the original article.]
Here, k_f represents the knowledge factor (k_f > 0).
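Since Algorithm 1 appears only as an image in the original, the following sketch reconstructs the junior update from the gaining–sharing equations published with GSK [7]. Minimization is assumed, all names are illustrative and details may differ from the original pseudo-code:

```python
import random

def neighbours(i, n):
    """Indices of the two nearest individuals in the fitness-sorted population;
    the best and worst individuals borrow their two nearest ranked neighbours."""
    if i == 0:
        return 1, 2
    if i == n - 1:
        return n - 2, n - 3
    return i - 1, i + 1

def junior_update(pop, fit, i, kf, kr, rng=random):
    """One junior-phase update of individual i (pop sorted best to worst).

    kf is the knowledge factor; kr is the knowledge ratio, i.e. the
    per-dimension probability of replacing current experience.
    """
    n, d = len(pop), len(pop[0])
    p, q = neighbours(i, n)
    r = rng.choice([j for j in range(n) if j != i])  # the random 'stranger'
    new = list(pop[i])
    for j in range(d):
        if rng.random() > kr:
            continue  # keep the current experience in this dimension
        if fit[r] < fit[i]:   # stranger is better: move toward it
            new[j] = pop[i][j] + kf * ((pop[p][j] - pop[q][j]) + (pop[r][j] - pop[i][j]))
        else:                 # stranger is worse: move away from it
            new[j] = pop[i][j] + kf * ((pop[p][j] - pop[q][j]) + (pop[i][j] - pop[r][j]))
    return new
```

The sign of the (x_i − x_r) term flips depending on whether the random individual is better or worse, so knowledge always flows from the better toward the worse solution.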
The steps for the senior scheme are as follows:
1. The fitness of all individuals is calculated and the individuals are ranked from best to worst fitness: (x_best, x_2, x_3, …, x_{N−1}, x_worst).
2. The population is divided into three parts: the top 100p% with the best fitness, the bottom 100p% with the worst fitness and the middle NP − 2(100p%) individuals, where p is the proportion of the population division. For example, if p = 0.1 and NP = 100, the best group is the top 10 individuals, the worst group is the bottom 10 individuals and the middle group is the remaining 80 individuals. From these three groups, x_p_best, x_p_worst and x_p_middle are selected as sources of information. The pseudo-code for the senior scheme is presented in Algorithm 2:
Algorithm 2: Senior scheme pseudo-code
[Pseudo-code presented as an image in the original article.]
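As with the junior scheme, Algorithm 2 is only available as an image; the sketch below reconstructs the senior update from the published GSK equations [7] (minimization assumed, names illustrative, details may differ from the original pseudo-code):

```python
import random

def senior_update(pop, fit, i, kf, kr, p=0.1, rng=random):
    """One senior-phase update of individual i (pop sorted best to worst).

    The top 100p% of the population forms the best group, the bottom 100p%
    the worst group and the rest the middle group; one representative is
    drawn from each group as a source of information.
    """
    n, d = len(pop), len(pop[0])
    top = max(1, int(p * n))
    b = rng.randrange(0, top)          # x_p_best
    w = rng.randrange(n - top, n)      # x_p_worst
    m = rng.randrange(top, n - top)    # x_p_middle
    new = list(pop[i])
    for j in range(d):
        if rng.random() > kr:
            continue  # keep the current experience in this dimension
        if fit[m] < fit[i]:   # middle individual is better: move toward it
            new[j] = pop[i][j] + kf * ((pop[b][j] - pop[w][j]) + (pop[m][j] - pop[i][j]))
        else:                 # middle individual is worse: move away from it
            new[j] = pop[i][j] + kf * ((pop[b][j] - pop[w][j]) + (pop[i][j] - pop[m][j]))
    return new
```

The (x_p_best − x_p_worst) difference supplies the dominant direction of improvement, while the middle-group term plays the same attract-or-repel role as the stranger in the junior scheme.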
There are several important parameters in GSK: the knowledge rate K, which controls the proportion of junior and senior schemes in an individual's update; the knowledge factor k_f, which controls the total amount of knowledge the individual currently gains from others; and the knowledge ratio k_r, which controls the ratio between current and acquired experience. The pseudo-code is presented in Algorithm 3:
Algorithm 3: GSK pseudo-code
[Pseudo-code presented as an image in the original article.]

2.2. Taguchi Method and Parallel Mechanism

Every engineer wants to design a satisfactory product at minimum cost and in minimum time. However, many factors can affect the quality of the product and testing them one by one takes a long time. The Taguchi method was developed by Dr. Genichi Taguchi in Japan after World War II [25]. When this method was applied in practical production, it greatly promoted Japan's economic recovery. The orthogonal matrix experiment is one of the important tools of the Taguchi method. Suppose a product has K influencing factors, each of which has Q levels. Testing the influence of every factor-level combination on product quality one by one requires Q^K experiments [24]. This consumes a lot of time and makes production costs difficult to control. The orthogonal matrix experiment uses a pre-designed orthogonal matrix to conduct a small number of experiments on each factor, achieving almost the same effect while greatly reducing the number of experiments. Assume that there are seven factors affecting the product, each with two levels; then we can use the L8(2^7) table shown in Equation (3). Each row in the table represents an experiment and the values give the level adopted for each factor. Observe that in each column both levels occur the same number of times, which guarantees the fairness of the experiment.
L8(2^7) =
1 1 1 1 1 1 1
1 1 1 2 2 2 2
1 2 2 1 1 2 2
1 2 2 2 2 1 1
2 1 2 1 2 1 2
2 1 2 2 1 2 1
2 2 1 1 2 2 1
2 2 1 2 1 1 2
(3)
This paper mainly uses two-level orthogonal matrices: the 10-dimensional experiments use L12(2^11) and the 30-dimensional experiments use L32(2^31).
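The fairness (balance) property of the orthogonal table is easy to verify programmatically; the sketch below transcribes the L8(2^7) table of Equation (3) as a row-per-experiment list and checks that every column contains each level equally often:

```python
# L8(2^7) from Equation (3): 8 experiments (rows) x 7 two-level factors (columns)
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

def is_balanced(table):
    """Fairness property: in every column each level appears equally often."""
    return all(col.count(1) == col.count(2) for col in zip(*table))
```

Eight experiments suffice here, against the 2^7 = 128 required by a full factorial test.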
The parallel mechanism, also known as the multi-population strategy, is a popular algorithm-optimization method for increasing population diversity. The main idea is to divide the initial population into several subpopulations. After initialization, the subpopulations search independently and communicate with other subpopulations under certain conditions. For parallel strategies, the inter-group communication strategy is critical: because the number of individuals in each subpopulation is reduced, an independent search easily falls into a local optimum. A well-designed inter-group communication strategy gives subpopulations the ability to escape local optima, allowing the populations to exchange a small amount of information while improving their search ability. Chai proposed the tribal annexation communication strategy and the herd mentality communication strategy to improve the searching ability of the whale optimization algorithm [3]. Tsai used the Taguchi communication strategy to enhance the search capability of the cat swarm optimization algorithm [24]. These schemes demonstrate the diversity of communication strategies across algorithms. In this paper, a suitable communication strategy is proposed according to the characteristics of the GSK algorithm; specifically, the Taguchi communication strategy and the population-merger communication strategy are used.

2.3. Opposition-Based Learning

OBL was proposed by Tizhoosh (2005) [27] and is fundamentally based on estimates and counter-estimates. OBL can correct the convergence direction of an algorithm and improve its search accuracy. To approach the optimal position quickly, we generally want the population to start near it. However, the initial population is generated randomly and may lie far from the optimal position, or even in exactly the opposite direction. As a result, the algorithm may converge so slowly that it never reaches the neighborhood of the optimal solution within the given budget. The main idea of OBL is to compute the opposite position of each randomly initialized individual, evaluate it and keep the better of the two positions as the initial population. Furthermore, during the population-updating process, the position of the current individual and its opposite are both evaluated and the better individual is retained.
Definition 1
(Opposite Number [27]). Let x be a real number defined in the interval [a, b]; then the opposite number x^op of x is defined by the following equation:
x^op = a + b − x
Definition 2
(Opposite Point [27]). Let P(x_1, x_2, …, x_n) be a point in an n-dimensional coordinate system with x_1, …, x_n real numbers, where each x_j is defined in the interval [a_j, b_j]. Then, the opposite point P^op(x_1^op, x_2^op, …, x_n^op) is defined by the following equation:
x_j^op = a_j + b_j − x_j
In the actual search process, the search space usually changes dynamically. The opposite point P_i^op = (x_i,1^op, x_i,2^op, …, x_i,n^op) of P_i, where P_i belongs to the population P = (P_1, P_2, …, P_n), is defined by the following equations:
a_j^min = min_i(P_i,j), i = 1, 2, …, n
b_j^max = max_i(P_i,j), i = 1, 2, …, n
x_i,j^op = a_j^min + b_j^max − x_i,j
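The dynamic-bounds definition above can be sketched directly (names illustrative):

```python
def opposite_population(P):
    """Opposite of every point, with bounds taken from the current population:
    a_j^min = min_i P[i][j] and b_j^max = max_i P[i][j] define the dynamic
    search space used in place of the fixed interval [a_j, b_j]."""
    d = len(P[0])
    a = [min(p[j] for p in P) for j in range(d)]
    b = [max(p[j] for p in P) for j in range(d)]
    return [[a[j] + b[j] - p[j] for j in range(d)] for p in P]
```

Using the population's own extremes keeps the opposite points inside the region the search currently occupies rather than the original, possibly much larger, bounds.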

2.4. Resource-Scheduling Problem of the IoV

With the development of intelligent transportation and smart cities, the IoV has received more and more attention. A working IoV architecture is built on the information interaction between on-board units and roadside units. In this process, satisfactory QoS for users may not be guaranteed because of roadside unit deployment, the units' own resource limitations and network delay. In this case, the most likely bottleneck is the proper scheduling of resources [39]. Therefore, we need a scheduling algorithm to distribute the user workload across the RSUs. The algorithm must be based on resource capacity and solve the problems of over- and underutilization [6]. The algorithm should take the available resources into account and work to improve QoS. Based on practical considerations, the performance of the algorithm can be evaluated by resource utilization, load balancing, maximum completion time, execution cost, power consumption, reliability and other indicators.
To improve the computing capabilities of mobile devices, edge computing transfers computation-intensive applications from resource-constrained smart mobile devices to nearby edge servers with computational capabilities [40]. Bin Cao proposed a space–air–ground-integrated network (SAGIN)-IoV edge-cloud architecture based on software-defined networking (SDN) and network function virtualization (NFV) [36] that takes into consideration that the actual user needs to establish an optimization model. Yao proposed a big data-based heterogeneous Internet of Vehicles engineering cloud system resource allocation optimization algorithm [39]. Filip proposed a new model for scheduling microservices over heterogeneous cloud-edge environments [41]. This model aims to improve the resource utilization of edge computing equipment and reduce cost. Farid M proposed a new multi-objective scheduling algorithm with fuzzy resource utilization (FR-MOS) for scheduling scientific workflow based on the PSO method [6]. This algorithm’s primary objective is to minimize cost and makespan in consideration of reliability constraints, where the constraint coefficient is determined by cloud resource utilization.
Existing resource-scheduling algorithms generally account for a variety of usage scenarios. In our proposed case study, we recognize that it is practical to employ these existing methods. In this article, service delay, resource utilization, load balancing and security are considered simultaneously and an optimization model is constructed.

3. Proposed Algorithm and Its Application

3.1. Parallel Communication Strategy

Choosing a suitable communication strategy is critical in parallel strategy since it facilitates information exchange between two groups, thereby enhancing their search capabilities. In this paper, two primary communication strategies are used. The first primary communication strategy is controlled by the communication control factor R. If the random number generated in the communication is greater than R, all groups will be matched in pairs and the following Taguchi communication will be performed:
1. Select the optimal solution of the two groups and select the appropriate orthogonal table based on the number of levels and factors. For example, the experiment is two-level if it contains two candidates and is seven-factor if each candidate in the experiment contains seven influencing factors. Conduct the experiment according to the orthogonal table and calculate the fitness value for each new individual.
2. In each dimension, calculate the fitness sum of the two levels separately. For each dimension, select the level with better fitness as a candidate. Combine the candidates to produce an optimal individual.
When the random number is less than R, the optimal solutions within all groups are compared with the global optimal solution using the following steps:
1. If it is worse than the global optimal solution, the intra-group optimal solution is replaced by the global optimal solution.
2. If it is better than or equal to the global optimal solution, a random mutation operation is performed.
Every individual has the potential to excel in some dimensions. The Taguchi method can efficiently identify these excellent dimensions and then combine them. The communication process described above is illustrated with an example. The fitness function is assumed to be:
f(x) = ∑_{i=1}^{n} x_i^2
Suppose the search goal is to find the smallest fitness value. The two candidate individuals are shown in Table 1. The Taguchi orthogonal experiment is a two-level, seven-factor experiment employing the L8(2^7) orthogonal table shown in Equation (3). Table 2 depicts the specific operation process. The cumulative fitness value of each candidate solution in each dimension is computed according to where that solution is selected in the orthogonal table. In the first dimension of Table 2, the candidate solution x_2 is used in experiments 5 to 8, so the cumulative fitness value for the first dimension of x_2 is the sum of the fitness values of experiments 5 to 8.
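The combination step can be sketched in code. Since the contents of Tables 1 and 2 are not reproduced here, the example below uses a hypothetical pair of three-dimensional candidates and a small L4(2^3) orthogonal table; the paper itself uses L8, L12 and L32 tables:

```python
def taguchi_combine(x1, x2, table, f):
    """Dimension-wise combination of two candidates via a two-level orthogonal
    table (minimization). table[e][j] == 1 picks x1[j] in experiment e,
    otherwise x2[j]; per dimension, the level with the smaller cumulative
    fitness over the experiments wins."""
    runs = [[x1[j] if row[j] == 1 else x2[j] for j in range(len(x1))] for row in table]
    fits = [f(r) for r in runs]
    best = []
    for j in range(len(x1)):
        s1 = sum(fits[e] for e, row in enumerate(table) if row[j] == 1)
        s2 = sum(fits[e] for e, row in enumerate(table) if row[j] == 2)
        best.append(x1[j] if s1 <= s2 else x2[j])
    return best

# A small two-level, three-factor orthogonal table (L4(2^3)) for illustration
L4 = [[1, 1, 1], [1, 2, 2], [2, 1, 2], [2, 2, 1]]
```

With f(x) = Σ x_i^2, x1 = [3, 0, 2] and x2 = [0, 4, 1], four experiments (instead of the 2^3 = 8 of a full factorial) recover the dimension-wise best combination [0, 0, 1].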
In addition, this paper employs the population-merger communication strategy as the second primary communication strategy. In a swarm intelligence algorithm with parallel strategies, multiple subpopulations search independently and communicate with each other at intervals. The parallel strategy increases the diversity of the algorithm but reduces the number of individuals in each subpopulation, while some algorithms need more individuals to search effectively. This conflict weakens the performance of the algorithm: in our tests, the search performance of the original GSK algorithm dropped significantly when the population was divided into several subpopulations. To solve this problem, the population-merger communication strategy was adopted. Each subpopulation searches independently in the early stage of the algorithm. Once a specific condition is met, adjacent subpopulations merge pairwise into one population, and the newly formed population incorporates all the information from both subpopulations. Finally, before the end of the algorithm, all individuals are merged into one population, which contains all the information of the original subpopulations. In the first stage of this paper, the initial GSK population is divided into four groups; in the second stage, the four groups are merged into two; in the third stage, the two groups are merged into one. The merger condition is based on the number of fitness function evaluations.

3.2. Incorporate OBL into GSK

There are two primary steps involved in adding OBL to GSK. The first step uses OBL to optimize the initial population. In the second step, the opposite population is generated to correct the convergence direction.
For the original algorithm (GSK), the initial population is randomly generated within a defined range. The initial individuals thus generated may be too far away from the global optimal position. The utilization of the OBL strategy enables the generation of a population closer to the optimal position, thereby facilitating more effective algorithm optimization. The steps in detail are as follows:
1. Initialize the population X = {x_1, x_2, …, x_n} randomly within the defined range, where n denotes the number of individuals in the population. Generate the opposite population X^op = {x_1^op, x_2^op, …, x_n^op} with the following formula:
x_i,j^op = a_j + b_j − x_i,j, i = 1, 2, …, n; j = 1, 2, …, D;
where a_j represents the lower bound of dimension j and b_j represents the upper bound.
2. From each pair {x_i, x_i^op}, select the individual with the better fitness; the NP selected individuals form the initial population.
In the process of population updating, by using similar methods to generate the opposite population and for evaluation, the current population can be guaranteed to be closer to the global optimal position. The probability of generating an opposing population can be controlled by adjusting the jump rate r. The steps in detail are as follows:
1. After each update of the population, a random number is generated and compared with the jump rate r. If the random number is less than r, the opposite population of the current population is generated as follows:
x_i,j^op = max_j + min_j − x_i,j, i = 1, 2, …, n; j = 1, 2, …, D;
where max_j represents the maximum value of dimension j in the current population and min_j represents the minimum value.
2. The current and opposite populations are combined and their fitness is evaluated; the fitter individual of each pair {x_i, x_i^op} is retained.
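The jump-rate step can be sketched as a single function (minimization assumed, names illustrative):

```python
import random

def obl_jump(pop, f, r, rng=random):
    """With probability r (the jump rate), replace each individual by the
    better of itself and its opposite, taken w.r.t. the per-dimension
    min/max of the current population."""
    if rng.random() >= r:
        return pop  # no jump this generation
    d = len(pop[0])
    lo = [min(p[j] for p in pop) for j in range(d)]   # min_j of the population
    hi = [max(p[j] for p in pop) for j in range(d)]   # max_j of the population
    # keep the fitter of each {x_i, x_i^op} pair (ties keep the current one)
    return [min(p, [lo[j] + hi[j] - p[j] for j in range(d)], key=f) for p in pop]
```

Because the bounds shrink with the population, the opposite points stay relevant even late in the run, when the population occupies a small region of the original search space.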
In this paper, several different approaches are considered for integration with GSK and Figure 1 illustrates the process. In POGSK, the initial population is divided into four subpopulations after OBL optimization; the four subpopulations run GSK independently and communicate in each generation according to the Taguchi method. After each update of the current population, the OBL operation is performed. The pseudocode for POGSK is shown in Algorithm 4. Moreover, in order to demonstrate the POGSK process more visually, Figure 2 shows its main processes.
Algorithm 4: POGSK pseudo-code
[Pseudo-code presented as an image in the original article.]

3.3. Apply the POGSK to Solve the Resource-Scheduling Problem in IoV

In order to reasonably allocate the resources of RSUs in the IoV and enhance the QoS, this paper proposes the following mathematical model. Assume that multiple vehicles on the road submit a total of $n$ tasks simultaneously, and that each task has four attributes: (1) the size of the task; (2) the deadline of the task; (3) the type and quantity of resources required; and (4) the transfer time of the submitted task. The $n$ tasks are represented as follows [36]:
$T = \{T_i\} \quad (i = 1, 2, \ldots, n)$
Suppose that there are $m$ processing nodes in the current scenario. Each processing unit has two attributes: (1) the type and quantity of resources it owns and (2) its processing capacity. The $m$ processing nodes are represented as follows [36]:
$P = \{P_j\} \quad (j = 1, 2, \ldots, m)$
  • Service delay
    In order to provide users with faster services, the service latency should be as short as possible. A processing node can handle multiple tasks simultaneously, and each node has its own processing capability. The processing time of a task on a processing node is [36]
    $DOP_{ij} = \frac{S_i}{H_j}$
    where $S_i$ represents the size of task $T_i$ and $H_j$ represents the processing capability of processing node $P_j$. The time required for a processing node to complete all the tasks assigned to it is then
    $PT_j = \sum_{i=1}^{n} DOP_{ij} \cdot C(i,j)$
    where $C(i,j)$ is a binary value indicating whether task $T_i$ is assigned to node $P_j$:
    $C(i,j) = \begin{cases} 1, & T_i \text{ is assigned to } P_j \\ 0, & \text{otherwise} \end{cases}$
    The sum of the processing delays of all nodes is
    $FT = \sum_{j=1}^{m} PT_j$
  • Resource utilization
    According to research, the energy consumed by a server in the idle state can exceed 60% of its full-load consumption [42], so a large amount of energy is wasted on idle servers. Hence, the roadside units should use their resources as efficiently as possible. A service request in the IoV requires the support of four kinds of computing resources, namely CPU, memory, disk and bandwidth, and all four must be taken into account. The total resource utilization is then
    $FU = \frac{\sum_{i=1}^{n} \sum_{k=1}^{4} CR(i,k)}{\sum_{j=1}^{m} \sum_{k=1}^{4} (PN_j \cdot PR(j,k))}$
    where $CR(i,k)$ represents the amount of resource $k$ required by $T_i$, $PR(j,k)$ represents the amount of resource $k$ owned by $P_j$ and $PN_j$ is a binary value indicating whether $P_j$ is turned on ($k = 1, 2, 3, 4$ indexes the four resources):
    $PN_j = \begin{cases} 1, & P_j \text{ is running} \\ 0, & \text{otherwise} \end{cases}$
  • Load balancing
    Service requests in the IoV place different demands on different resources, which easily unbalances the load across resource types. A good load-balancing technique can enhance the accuracy and efficiency of cloud computing performance [43], so the processing units should be kept as load-balanced as possible. The utilization of resource $k$ in $P_j$ is
    $u_{jk} = \frac{\sum_{i=1}^{n} (CR(i,k) \cdot C(i,j))}{PR(j,k)}$
    where $C(i,j)$ is calculated using Equation (16). The mean resource utilization of $P_j$ is
    $MU_j = \frac{\sum_{k=1}^{4} u_{jk}}{4}$
    The variance of the resource utilization of $P_j$ is
    $VU_j = \frac{\sum_{k=1}^{4} (u_{jk} - MU_j)^2}{4}$
    The average resource-utilization variance over all opened processing units is
    $FN = \frac{\sum_{j=1}^{m} (VU_j \cdot PN_j)}{z}$
    where $PN_j$ is calculated by Equation (19) and $z$ represents the number of processing units that are turned on.
  • Security
    In the IoV, service requests come from vehicles moving at high speed, so tasks must be completed on schedule to ensure safety. In real scenarios, both the network latency and the security of the task are important [6,44]. Task deadlines are sent along with task submissions, and as many tasks as possible should be completed on time. The actual time required to complete a task is
    $ps_i = DOP_{ij} + TL(i,j)$
    where $DOP_{ij}$ is calculated with Equation (14) and $TL(i,j)$ represents the transmission buffer time from $T_i$ to $P_j$. Whether the task is completed on time is expressed as a binary value:
    $S_i = \begin{cases} 1, & cs_i \ge ps_i \\ 0, & cs_i < ps_i \end{cases}$
    where $cs_i$ indicates the deadline of $T_i$, which is uploaded when the task is submitted. The degree of security is then expressed as the rate of successfully executed tasks:
    $FS = \frac{\sum_{i=1}^{n} S_i}{n}$
Considering the above four objectives, we propose the following fitness function:
$fitness = a \cdot FT + b \cdot FU + c \cdot FN + d \cdot FS$
For processing unit $P_j$, the amount of each resource required by all the tasks running on it must not exceed the amount of that resource owned by the unit. The workflow is shown in Figure 3. The constraint conditions are:
$\sum_{i=1}^{n} (C(i,j) \cdot CR(i,k)) < PR(j,k), \quad j = 1, 2, \ldots, m; \; k = 1, 2, 3, 4$
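The four objectives and their weighted combination can be sketched as follows. This is a minimal illustration under assumed data structures, not the authors' implementation: the container and parameter names are placeholders, and the weights a–d (including their signs, which decide whether an objective is rewarded or penalized) are left to the caller.

```python
from collections import namedtuple

# Illustrative containers for a task T_i and a processing node P_j.
Task = namedtuple("Task", "size deadline demand transfer_time")  # S_i, cs_i, CR(i,k), TL(i,.)
Node = namedtuple("Node", "capacity speed")                      # PR(j,k), H_j

def fitness_iov(assign, tasks, nodes, a=1.0, b=1.0, c=1.0, d=1.0):
    """Evaluate one schedule: assign[i] is the node index task i runs on.
    Returns the weighted sum a*FT + b*FU + c*FN + d*FS from the model above."""
    n, K = len(tasks), 4
    used = sorted(set(assign))                         # opened nodes (PN_j = 1)
    z = len(used)
    dop = [tasks[i].size / nodes[assign[i]].speed for i in range(n)]
    FT = sum(dop)                                      # total processing delay
    FU = (sum(sum(t.demand) for t in tasks)            # demanded / available resources
          / sum(sum(nodes[j].capacity) for j in used))
    FN = 0.0                                           # mean utilization variance
    for j in used:
        u = [sum(tasks[i].demand[k] for i in range(n) if assign[i] == j)
             / nodes[j].capacity[k] for k in range(K)]
        mu = sum(u) / K
        FN += sum((uk - mu) ** 2 for uk in u) / K
    FN /= z
    FS = sum(1 for i in range(n)                       # on-time completion rate
             if tasks[i].deadline >= dop[i] + tasks[i].transfer_time[assign[i]]) / n
    return a * FT + b * FU + c * FN + d * FS
```

For two tasks on one opened node, FT is the sum of the two processing times, FU compares demanded against opened capacity, FN is zero when all four resources are equally utilized and FS counts the on-time fraction.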

4. Results

4.1. Simulation Results on CEC2017

Single-objective optimization algorithms are the basis of complex optimization algorithms, and testing with classical mathematical benchmark functions is considered effective. CEC2017 contains 30 benchmark functions for testing the optimization ability of an algorithm. The F2 function was excluded because of a dimension-setting problem. F1 and F3 are unimodal functions, F4–F10 are simple multimodal functions, F11–F20 are hybrid functions and F21–F30 are composition functions. The objective is the error $(f_i - f_i^*)$, where $f_i$ is the value obtained on the $i$th test function and $f_i^*$ is its known minimum; the optimization goal is to make this error as small as possible. Errors and standard deviations less than $10^{-8}$ are treated as zero [35].
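The error metric and its zero threshold can be written as a small illustrative helper (the threshold follows the CEC2017 reporting convention cited above):

```python
def cec_error(f_found, f_star, tol=1e-8):
    """Benchmark error f_i - f_i*; magnitudes below 1e-8 are reported as zero."""
    err = f_found - f_star
    return 0.0 if abs(err) < tol else err
```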
In this paper, POGSK is compared with the original GSK algorithm, PSO, DE and GWO. The Taguchi strategy in POGSK incurs additional fitness-function calls in each generation, so for fairness the termination condition of every algorithm is the maximum number of function evaluations (NFES), set to 10,000 × problem size. For example, the NFES in a test with 30 variables is 300,000. The population size is set to 100 and the range of all test-function solutions is set to [−100, 100]. Each configuration is run 31 times independently to avoid special circumstances. The best results are marked in bold for all problems. The basic parameter settings of each algorithm are shown in Table 3.
Table 4 shows the experimental results of POGSK, GSK and DE over 31 independent runs on the 29 test functions of 10 variables under CEC2017, and Table 5 shows those of POGSK, GWO and PSO. Compared with the original GSK algorithm, POGSK obtains better results on 26 test functions, five of which reach the minimum value of the test function. Compared with DE, POGSK obtains better results on 23 functions; compared with GWO, on 25 functions; and compared with PSO, on 25 functions. In addition, it is worth noting that POGSK obtained better results on 9 of the 10 composition functions (F21–F30), which shows its excellent search ability on this class of functions.
Table 6 shows the experimental results of POGSK, GSK and DE over 31 independent runs on the 29 test functions of 30 variables under CEC2017, and Table 7 shows those of POGSK, GWO and PSO. Compared with the original GSK algorithm, POGSK obtains better results on 20 test functions, five of which reach the minimum value of the test function. Compared with DE, POGSK obtains better results on 26 functions; compared with GWO, on 28 functions; and compared with PSO, on 25 functions. Furthermore, it is worth noting that POGSK obtained better results on 7 of the 10 composition functions (F21–F30), which again validates its excellent search ability on composition functions.
To examine algorithm performance on each CEC2017 test problem under multiple objective-function evaluation budgets, we conducted further experiments setting the termination condition to 0.1×, 0.3× and 0.5× the maximum NFES. The basic parameters of each algorithm, shown in Table 3, remain unchanged. Table 8 shows the results of POGSK, GSK and PSO over 31 independent runs on 10 variables for these budgets, and Table 9 shows those of POGSK, GWO and DE. For presentation purposes, only the mean fitness values of the 31 independent runs are shown in Tables 8 and 9. It can be observed that, compared with the 1× max NFES termination condition, POGSK still shows strong optimization performance when the NFES budget is reduced, although its advantage over the PSO algorithm narrows slightly: when NFES = 0.1 × max NFES, POGSK outperforms PSO on only 18 functions. We believe this is reasonable, because the optimization capability of POGSK is not fully exploited under a reduced NFES budget.
To better visualize the performance of POGSK, the convergence curves of nine benchmark functions on 10 variables are shown in Figure 4 and those on 30 variables in Figure 5. The convergence curves of POGSK on functions F1–F10 of 10 variables are largely omitted, because unimodal and simple multimodal functions are too simple to distinguish search capability with few variables. In the middle and late stages, POGSK shows a powerful search ability that effectively avoids local optima, reflecting that the OBL and parallel strategies significantly enhance the search capability of the original algorithm. The above comparison shows that POGSK performs better on the CEC2017 test suite than GSK, PSO, DE and GWO.

4.2. Simulation Results on Resource-Scheduling Problems

In this section, we use the fitness function proposed in Section 3.3 to test the optimization performance of POGSK in realistic scenarios. We consider an edge processing system consisting of ten processing units. The sizes of the tasks to be processed are randomly distributed in the interval (0, 5] × 10^6 instructions. The maximum number of evaluations was set to 300,000 and the population size to 100. To avoid constraint violations, a constraint test is carried out whenever a solution of the algorithm enters the fitness function. The detailed constraint-testing steps are as follows:
1. Each individual is represented as $X_i = \{x_{i,1}, x_{i,2}, \ldots, x_{i,m}\}$; the assignment list $\{k_1, k_2, \ldots, k_m\}$ is obtained by rounding each dimension of the individual, where $k_i = b$ indicates that task $i$ is assigned to node $b$. All nodes are traversed to find the idle nodes and the nodes whose resource utilization is below 50%, which form the idle queue F and the low-utilization queue L, respectively. Over-allocated nodes are also identified during the traversal, and for each such node a queue $E_j$ of its tasks is established.
2. The tasks in queue $E_j$ are redistributed to the nodes in queue L, or to F if L is empty. After each redistribution, the current node is checked to see whether it is still over-allocated; if not, the current operation stops and the process moves on to the next node, until all nodes have been traversed. The individual is then adjusted according to the result. For example, suppose $x_{i,3} = 2.1$ and $k_3 = 2$; if the constraint test adjusts $k_3$ to 6, then $x_{i,3}$ is updated randomly in the range [5.6, 6.4].
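The repair steps above can be sketched as follows. This is a simplified, hypothetical implementation: a scalar demand and capacity stand in for the four per-resource checks, the target node is simply the least-utilized node with room (approximating the paper's queues L and F), and total capacity is assumed sufficient for all tasks.

```python
import random

def repair(position, demand, capacity):
    """Round each dimension to a node index, move tasks off over-allocated
    nodes onto the least-loaded nodes with room, then re-randomize each moved
    dimension within +/-0.4 of its new index (as in the example above, where
    k_3 = 6 yields x in [5.6, 6.4]). Mutates and returns `position`."""
    assign = [int(round(x)) for x in position]
    load = [0.0] * len(capacity)
    for t, j in enumerate(assign):
        load[j] += demand[t]
    for j in range(len(capacity)):
        while load[j] > capacity[j]:                 # node j is over-allocated
            t = next(i for i, a in enumerate(assign) if a == j)
            # candidate targets, least-utilized first
            targets = sorted(range(len(capacity)), key=lambda q: load[q] / capacity[q])
            q = next(q for q in targets if q != j and load[q] + demand[t] <= capacity[q])
            assign[t] = q
            load[j] -= demand[t]
            load[q] += demand[t]
            position[t] = random.uniform(q - 0.4, q + 0.4)
    return assign, position
```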
In this paper, we randomly generated 11 independent scenarios, each submitting 30 tasks to the processing units simultaneously. The best results are marked in bold for all scenarios. Table 10 shows the other experimental parameters, which control the scene settings. Each experiment was run 20 times independently. Table 11 compares the performance of POGSK with GSK, GWO and PSO. Out of the 11 scenarios, POGSK achieved the best result nine times: it outperformed GSK in 9 scenarios, PSO in 9 scenarios and GWO in all 11 scenarios. This is due to the parallel and OBL strategies. The Taguchi communication strategy allows the original GSK to effectively avoid falling into local optima; the population-merging communication strategy prevents POGSK's performance from degrading as the number of individuals in each subpopulation shrinks; and the OBL strategy corrects the direction of convergence and increases its speed.
Figure 6 shows the fitness value as the number of fitness-function calls increases. POGSK again performs well in avoiding local optima, which shows that it also performs well on constrained, realistic optimization problems.

5. Conclusions

In this paper, the POGSK algorithm is proposed to solve the resource-scheduling problem in the IoV. Building on the original algorithm, POGSK adds OBL and a parallel strategy, and the subpopulations exchange information through the Taguchi strategy and the population-merging strategy. Tests against the original algorithm and several classical algorithms on CEC2017 show that the new algorithm has stronger search ability. We then applied POGSK to the resource-scheduling problem and carried out simulation tests, which likewise showed better results.
In the future, we can continue to improve the inter-group communication strategy and enhance the search capability of the algorithm. We can also study the application of POGSK in multi-objective problems, engineering optimization problems and binary optimization problems. We believe the new algorithm can also achieve better results.

Author Contributions

Conceptualization, J.-S.P.; methodology, J.-S.P. and L.-F.L.; software, J.-S.P. and L.-F.L.; validation, L.-F.L. and S.-C.C.; investigation, L.-F.L.; resources, J.-S.P. and P.-C.S.; data curation, G.-G.L. and P.-C.S.; writing—original draft preparation, L.-F.L.; writing—review and editing, J.-S.P. and P.-C.S.; supervision, G.-G.L.; project administration, S.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets generated for this study are available on request to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, H.; Wang, W.; Cui, Z.; Zhou, X.; Zhao, J.; Li, Y. A new dynamic firefly algorithm for demand estimation of water resources. Inf. Sci. 2018, 438, 95–106.
  2. Song, B.; Wang, Z.; Zou, L. An improved PSO algorithm for smooth path planning of mobile robots using continuous high-degree Bezier curve. Appl. Soft Comput. 2021, 100, 106960.
  3. Chai, Q.W.; Chu, S.C.; Pan, J.S.; Hu, P.; Zheng, W.M. A parallel WOA with two communication strategies applied in DV-Hop localization method. EURASIP J. Wirel. Commun. Netw. 2020, 2020, 50.
  4. Wu, J.; Xu, M.; Liu, F.F.; Huang, M.; Ma, L.; Lu, Z.M. Solar Wireless Sensor Network Routing Algorithm Based on Multi-Objective Particle Swarm Optimization. J. Inf. Hiding Multim. Signal Process. 2021, 12, 1–11.
  5. Deng, W.; Xu, J.; Song, Y.; Zhao, H. Differential evolution algorithm with wavelet basis function and optimal mutation strategy for complex optimization problem. Appl. Soft Comput. 2021, 100, 106724.
  6. Farid, M.; Latip, R.; Hussin, M.; Hamid, N.A.W.A. Scheduling scientific workflow using multi-objective algorithm with fuzzy resource utilization in multi-cloud environment. IEEE Access 2020, 8, 24309–24322.
  7. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Gaining–sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529.
  8. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the MHS'95, Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43.
  9. Song, P.C.; Chu, S.C.; Pan, J.S.; Yang, H. Simplified Phasmatodea population evolution algorithm for optimization. Complex Intell. Syst. 2022, 8, 2749–2767.
  10. Pan, J.S.; Zhang, L.G.; Wang, R.B.; Snášel, V.; Chu, S.C. Gannet optimization algorithm: A new metaheuristic algorithm for solving engineering optimization problems. Math. Comput. Simul. 2022, 202, 343–373.
  11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  12. Chu, S.C.; Tsai, P.W.; Pan, J.S. Cat swarm optimization. In Proceedings of the PRICAI 2006: Trends in Artificial Intelligence: 9th Pacific Rim International Conference on Artificial Intelligence, Guilin, China, 7–11 August 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 854–858.
  13. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
  14. Storn, R.; Price, K. Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341.
  15. Han, K.H.; Kim, J.H. Quantum-inspired evolutionary algorithm for a class of combinatorial optimization. IEEE Trans. Evol. Comput. 2002, 6, 580–593.
  16. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551.
  17. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
  18. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
  19. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315.
  20. Mohamed, A.W.; Abutarboush, H.F.; Hadi, A.A.; Mohamed, A.K. Gaining–sharing knowledge based algorithm with adaptive parameters for engineering optimization. IEEE Access 2021, 9, 65934–65946.
  21. Chu, S.C.; Roddick, J.F.; Pan, J.S. A parallel particle swarm optimization algorithm with communication strategies. J. Inf. Sci. Eng. 2005, 21, 809–818.
  22. Harada, T.; Alba, E. Parallel genetic algorithms: A useful survey. ACM Comput. Surv. 2020, 53, 1–39.
  23. Cai, D.; Lei, X. A New Evolutionary Algorithm Based on Uniform and Contraction for Many-objective Optimization. J. Netw. Intell. 2017, 2, 171–185.
  24. Tsai, P.W.; Pan, J.S.; Chen, S.M.; Liao, B.Y. Enhanced parallel cat swarm optimization based on the Taguchi method. Expert Syst. Appl. 2012, 39, 6309–6319.
  25. Tsai, J.T.; Liu, T.K.; Chou, J.H. Hybrid Taguchi-genetic algorithm for global numerical optimization. IEEE Trans. Evol. Comput. 2004, 8, 365–377.
  26. Jiang, S.J.; Chu, S.C.; Zou, F.M.; Shan, J.; Zheng, S.G.; Pan, J.S. A parallel Archimedes optimization algorithm based on Taguchi method for application in the control of variable pitch wind turbine. Math. Comput. Simul. 2023, 203, 306–327.
  27. Mahdavi, S.; Rahnamayan, S.; Deb, K. Opposition based learning: A literature review. Swarm Evol. Comput. 2018, 39, 1–23.
  28. Yu, X.; Xu, W.; Li, C. Opposition-based learning grey wolf optimizer for global optimization. Knowl.-Based Syst. 2021, 226, 107139.
  29. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective opposition based grey wolf optimization. Expert Syst. Appl. 2020, 151, 113389.
  30. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79.
  31. Deng, W.; Shang, S.; Cai, X.; Zhao, H.; Song, Y.; Xu, J. An improved differential evolution algorithm and its application in optimization problem. Soft Comput. 2021, 25, 5277–5298.
  32. Wang, H.; Wu, Z.; Rahnamayan, S.; Liu, Y.; Ventresca, M. Enhancing particle swarm optimization using generalized opposition-based learning. Inf. Sci. 2011, 181, 4699–4714.
  33. Ewees, A.A.; Abd Elaziz, M.; Houssein, E.H. Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 2018, 112, 156–172.
  34. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K.; Awad, N.H. Evaluating the performance of adaptive gaining–sharing knowledge based algorithm on CEC 2020 benchmark problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
  35. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016.
  36. Cao, B.; Zhang, J.; Liu, X.; Sun, Z.; Cao, W.; Nowak, R.M.; Lv, Z. Edge–Cloud Resource Scheduling in Space–Air–Ground-Integrated Networks for Internet of Vehicles. IEEE Internet Things J. 2022, 9, 5765–5772.
  37. Ðurasević, M.; Jakobović, D. Heuristic and metaheuristic methods for the parallel unrelated machines scheduling problem: A survey. Artif. Intell. Rev. 2022, 56, 3181–3289.
  38. Singh, S.; Chana, I. QoS-aware autonomic resource management in cloud computing: A systematic review. ACM Comput. Surv. 2015, 48, 1–46.
  39. Yao, J. Research on Optimization Algorithm for Resource Allocation of Heterogeneous Car Networking Engineering Cloud System Based on Big Data. Math. Probl. Eng. 2022, 2022, 1079750.
  40. Wang, Q.; Guo, S.; Liu, J.; Yang, Y. Energy-efficient computation offloading and resource allocation for delay-sensitive mobile edge computing. Sustain. Comput. Inform. Syst. 2019, 21, 154–164.
  41. Filip, I.D.; Pop, F.; Serbanescu, C.; Choi, C. Microservices scheduling model over heterogeneous cloud-edge environments as support for IoT applications. IEEE Internet Things J. 2018, 5, 2672–2681.
  42. Guo, M.; Li, L.; Guan, Q. Energy-efficient and delay-guaranteed workload allocation in IoT-edge-cloud computing systems. IEEE Access 2019, 7, 78685–78697.
  43. Ullah, A.; Nawi, N.M.; Ouhame, S. Recent advancement in VM task allocation system for cloud computing: Review from 2015 to 2021. Artif. Intell. Rev. 2022, 55, 2529–2573.
  44. Cao, B.; Sun, Z.; Zhang, J.; Gu, Y. Resource allocation in 5G IoV architecture based on SDN and fog-cloud computing. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3832–3840.
Figure 1. The approaches of POGSK.
Figure 2. The flowchart of POGSK.
Figure 3. Scheduling model.
Figure 4. Convergence curves of 9 functions on 10 variables.
Figure 5. Convergence curves of 9 functions on 30 variables.
Figure 6. Convergence curves of four situations.
Table 1. The position of the candidate individuals.

Position   Dim 1  Dim 2  Dim 3  Dim 4  Dim 5  Dim 6  Dim 7   Fitness value
x1           0      1      1      0      0      1      0          3
x2           1      0      0      1      1      0      1          4

Table 2. The Taguchi method is used to produce better individuals.

Experiment number             Dim 1  Dim 2  Dim 3  Dim 4  Dim 5  Dim 6  Dim 7   Fitness value
1                               0      1      1      0      0      1      0          3
2                               0      1      1      1      1      0      1          5
3                               0      0      0      0      0      0      1          1
4                               0      0      0      1      1      1      0          3
5                               1      1      0      0      1      1      1          5
6                               1      1      0      1      0      0      0          3
7                               1      0      1      0      1      0      0          3
8                               1      0      1      1      0      1      1          5
x1 cumulative fitness value    12     16     16     12     12     16     12          -
x2 cumulative fitness value    16     12     12     16     16     12     16          -
Selected dimension source      x1     x2     x2     x1     x1     x2     x1          -
Position of the new individual  0      0      0      0      0      0      0          0
Table 3. Parameter settings of each algorithm.

Algorithm   Parameter settings
POGSK       G = 4, R = 0.5, L = 1, r = 0.1, K_f = 0.5, K_r = 0.9, K = 1
GSK         K_f = 0.5, K_r = 0.9, K = 1
PSO         V_max = 6, V_min = −6, wMax = 0.9, wMin = 0.2, c1 = c2 = 2
DE          beta_min = 0.2, beta_max = 0.8, pCR = 0.2
GWO         d = 2 (linearly decreased over iterations)
Table 4. Experimental results of POGSK, GSK and DE over 31 independent runs on 29 test functions of 10 variables under CEC2017.

Function   POGSK Mean   POGSK Std   GSK Mean   GSK Std   DE Mean   DE Std
100001243.046917.0439
300001521.115622.8467
400006.0372440.307901
517.485514.65399920.507913.0413519.7105141.981759
6000000
729.316114.91491530.332683.70621620.852211.880978
817.001344.474419.455663.8710049.7940021.752965
9000000
10930.7502187.40531022.488101.7212492.0772107.547
110.2888590.459088003.2547680.650172
12102.701687.5021480.938762.50609162,272.884,043.21
134.399382.9028286.4898741.6227161247.311044.067
140.873960.8830045.9821042.98679326.2390220.65982
150.151390.276890.3139740.2966528.1715424.51716
160.997092.0363692.9170964.2990843.3142341.910263
171.822953.7020299.2502516.7964551.896040.899154
181.671424.2067371.6735155.02943967.1823585.524
190.04970.0566230.1049050.11455427.6027933.09174
200.4465230.3128350.4147590.21180700
21176.301257.57345193.007351.14463161.433636.74808
2295.850616.53658100.38940.81402797.409939.366758
23306.1222.840924317.76653.307184312.05152.093624
24306.69781.28064341.434321.0113316.243839.04365
25399.4648.161678427.174321.27511410.43919.406591
26296.77417.960533002.99E−13300.504252.62727
27389.2170.239349389.50440.052422389.55750.2579
283002.67E−13303.115117.34415430.371469.26502
29243.9265.174707248.34455.043812263.92798.011281
30445.73638.268417363.99838,494.2413,511.917149.584
Win   -   -   26   15   23   18
Lose  -   -   3    14   6    11
Table 5. Experimental results of POGSK, PSO and GWO over 31 independent runs on 29 test functions of 10 variables under CEC2017.

Function   POGSK Mean   POGSK Std   GWO Mean   GWO Std   PSO Mean   PSO Std
1004,497,09111,601,7351159.3971605.775
300259.1001389.852400
4009.2011565.9590953.9160312.5333
517.485514.65399913.252377.43672128.500689.743601
6000.4646120.8817534.0770673.713828
729.316114.91491526.102988.27084221.024746.645744
817.001344.474411.492564.36992815.469996.8877
9003.3441288.09429100
10930.7502187.4053455.7428250.51793.989304.649
110.2888590.45908819.5961425.720123.7679411.81411
12102.701687.50214480,552.9684,130.612,243.211,168.19
134.399382.9028288697.5864995.0977018.6555919.406
140.873960.8830041108.5311650.03678.09055101.3466
150.151390.276891291.431574.817238.5524322.1582
160.997092.03636977.8844369.76017212.4675118.4518
171.822953.70202944.0184819.8894443.6978625.65489
181.671424.20673728,538.614,703.868380.5265920.313
190.04970.0566234300.0295439.223619.0984767.2034
200.4465230.31283551.8376639.5646363.1195346.19121
21176.301257.57345208.084819.82006173.336363.70194
2295.850616.53658103.386617.3914102.34620.964004
23306.1222.840924315.72597.263185361.604972.55207
24306.69781.28064337.376143.27372348.4116106.5231
25399.4648.161678439.604213.4442425.048223.00597
26296.77417.96053355.5426169.6211367.0571174.7731
27389.2170.239349397.286517.07097432.78842.47196
283002.67E−13537.155299.1675377.729848.39093
29243.9265.174707275.936834.50598308.047838.2051
30445.73638.26841601,218.1684,207.64893.0863573.952
Win   -   -   25   26   25   28
Lose  -   -   4    3    4    1
Table 6. Experimental results of POGSK, GSK and DE over 31 independent runs on 29 test functions of 30 variables under CEC2017.

Function   POGSK Mean   POGSK Std   GSK Mean   GSK Std   DE Mean   DE Std
10000565.1619467.5469
20.17102250.9519441.27E−073.59E−0778,485.4311,969.56
32.38962.003828.068090615.7257688.873051.39142
432.700729.603168157.0618610.58822129.98849.275313
54.78E−057.06E−056.48E−071.44E−0600
6110.929359.25905184.538410.80525163.757610.61515
731.709029.949111157.843768.553038130.76758.930799
81.39611110.9644450000
96640.146352.15476773.1382280.21395270.445300.3687
1015.35278.17278432.01021437.36949110.257610.49883
117473.15144191.4885872.0333954.4884,064,7321,326,017
1268.1024633.2956297.60321138.84956146,977.170,134.12
1347.5859113.2648856.5803424.59412459,985.9229,927.38
1432.79513116.2724516.7878110.9250820,442.6611,240.16
15225.2992208.1208795.64371166.3631588.2006138.6957
1647.1360516.82453198.9859392.83664162.909738.25502
17104.5658764.0095136.676588.700766424,098.9151,960.3
1822.2423918.2820210.977384.41614719,255.239427.15
1962.5611150.5338768.73920665.88684173.898449.83044
20234.230420.45764349.177748.751155332.63748.650852
21100.079350.4417871004.52E−131202.919888.6275
22374.7816.959946463.5223259.16068480.75367.607155
23444.19576.710317567.0692729.46917582.8558.472587
24385.917721.780967386.920290.19508387.32660.082705
25685.0133500.74731035.9259330.02922326.13582.41891
26496.611347.400898492.53017.147451509.68582.319662
27303.337318.54975321.0596843.75981427.041413.69322
28460.231641.83653563.47372111.3137729.161176.50307
292045.67480.23482080.6714121.125320,973.936734.785
Win   -   -   20   12   26   15
Lose  -   -   9    17   3    14
Table 7. Experimental results of POGSK, GWO and PSO over 31 independent runs on 29 test functions of 30 variables under CEC2017.

Function   POGSK Mean   POGSK Std   GWO Mean   GWO Std   PSO Mean   PSO Std
1001.065E+099.63E+082399.5133445.498
30.17102250.95194428,199.90210,867.380.0592750.066189
42.38962.00382160.7121547.2004460.072128.92923
532.700729.60316878.08440418.62267152.966225.34413
64.78E−057.06E−054.50108382.50856531.188787.532825
7110.929359.25905136.8524523.06388117.773523.32769
831.709029.94911173.03770618.75822115.382625.07128
91.39611110.964445524.43328280.14442084.344424.3998
106640.146352.15472822.749522.92343384.133730.3928
1115.35278.172784296.75347137.967790.0239817.27491
127473.15144191.48828,314,82440,351,46249,597.6929,308.71
1368.1024633.295625,377,176.923,990,31311,223.1312,434.89
1447.5859113.26488109,516.46312,614.97271.9575769.247
1532.79513116.27245309,301.92786,4556660.2357995.343
16225.2992208.1208673.1389239.95961021.81208.1095
1747.1360516.82453224.34632107.809518.3412162.6259
18104.5658764.00951449,259.69471,076.9113,207.184,886.55
1922.2423918.28202680,051.231,796,56210,046.0814,864.67
2062.5611150.53387316.20714113.958425.1631129.9895
21234.230420.45764269.7957915.67731324.648426.97333
22100.079350.4417872207.28821473.0691253.8741862.064
23374.7816.959946430.2778338.0415708.1221116.4942
24444.19576.710317514.8041250.32647778.176872.14411
25385.917721.780967455.4090824.09939379.6216.204345
26685.0133500.74731924.7685304.04713503.611534.257
27496.611347.400898533.2358812.11012494.105771.15496
28303.337318.54975562.2531362.77035371.515661.83288
29460.231641.83653781.81747149.2681917.2247292.7391
302045.67480.23484,740,432.94,505,1333024.2574562.477
Win--28262526
Lose--1343
Table 8. Experimental results of POGSK, GSK and PSO over 31 independent runs on 29 test functions of 10 variables at several objective function evaluation budgets (0.1, 0.3 and 0.5 of the maximum).
Function | POGSK 0.1*Max | POGSK 0.3*Max | POGSK 0.5*Max | GSK 0.1*Max | GSK 0.3*Max | GSK 0.5*Max | PSO 0.1*Max | PSO 0.3*Max | PSO 0.5*Max
1 | 65,168.64 | 0.008729 | 9.00E−08 | 198,621.8 | 0.148295 | 1.24E−07 | 1507.631 | 1841.959 | 1481.528
3 | 1072.874 | 0.020955 | 1.37E−09 | 1258.366 | 0.150523 | 3.43E−08 | 3.622942 | 1.32E−06 | 0
4 | 4.46869 | 0.156832 | 0.000642 | 4.782125 | 0.0252 | 1.26E−05 | 6.788089 | 9.600704 | 4.619372
5 | 35.91597 | 28.17189 | 22.91988 | 35.00533 | 29.01222 | 23.6099 | 31.58184 | 30.49059 | 29.62402
6 | 0.67712 | 0.000231 | 3.46E−07 | 1.131835 | 0.001475 | 9.12E−06 | 9.346549 | 4.068655 | 5.220518
7 | 46.40205 | 36.92188 | 34.41898 | 50.00724 | 37.96985 | 34.8906 | 28.87082 | 24.42772 | 23.73282
8 | 34.3193 | 26.88238 | 22.69993 | 36.83458 | 28.55074 | 26.63988 | 17.84513 | 15.91932 | 14.53922
9 | 0.335181 | 0 | 0 | 1.190911 | 0 | 0 | 17.39665 | 6.587552 | 6.853679
10 | 1532.095 | 1315.312 | 1128.588 | 1525.435 | 1311.749 | 1192.91 | 2837.2588 | 806.2876 | 2.448
11 | 11.17021 | 5.209862 | 1.532807 | 12.14115 | 5.79704 | 3.519122 | 6.166132 | 8.794522 | 5.05342
12 | 91,453.64 | 86.73771 | 99.4322 | 164,573.9 | 617.8236 | 179.6806 | 30,604.58 | 11,899.81 | 16,792.79
13 | 47.41327 | 12.20971 | 8.403044 | 60.55603 | 11.71106 | 9.488802 | 6398.473 | 9347.321 | 6564.624
14 | 27.46006 | 19.86859 | 11.84031 | 28.9843 | 18.29269 | 15.3611 | 203.128 | 558.986 | 266.1244
15 | 9.511309 | 2.529362 | 0.612543 | 10.67048 | 2.489827 | 0.667059 | 2532.754 | 1061.556 | 548.5125
16 | 83.47293 | 19.34939 | 4.684275 | 91.10372 | 36.56091 | 3.88171 | 216.4358 | 208.4391 | 229.0174
17 | 71.5188 | 30.70673 | 18.47544 | 85.25013 | 44.12263 | 29.57548 | 49.02151 | 50.02125 | 45.54325
18 | 74.44131 | 14.6736 | 15.338491 | 162.1569 | 15.29351 | 4.686504 | 13,029.71 | 8474.698 | 8372.767
19 | 6.425225 | 1.749615 | 0.682905 | 7.216083 | 1.944351 | 0.892702 | 3491.462 | 2753.934 | 2199.245
20 | 68.77453 | 11.38535 | 1.397734 | 81.44863 | 21.28738 | 3.679158 | 97.71285 | 83.91263 | 75.89307
21 | 188.5297 | 184.346 | 189.9476 | 209.0319 | 202.3022 | 178.1039 | 171.7462 | 178.0085 | 189.2755
22 | 103.8606 | 100.5495 | 100.2226 | 105.9777 | 102.3459 | 101.2098 | 131.0425 | 96.96854 | 135.8181
23 | 335.3295 | 325.8999 | 317.8954 | 335.7887 | 327.2087 | 323.999 | 377.3523 | 369.7106 | 371.4575
24 | 357.042 | 341.9053 | 316.7562 | 361.736 | 347.599 | 348.096 | 330.8915 | 320.122 | 347.5205
25 | 403.4286 | 409.5803 | 404.0327 | 420.6563 | 426.9033 | 422.8455 | 415.486 | 420.797 | 423.8288
26 | 302.3367 | 296.7742 | 296.7742 | 301.0423 | 300 | 300 | 395.6739 | 392.4169 | 469.6308
27 | 391.4686 | 389.2144 | 389.2638 | 391.1973 | 389.4444 | 389.4419 | 446.6022 | 436.6223 | 439.4359
28 | 383.6587 | 300.0077 | 303.8159 | 432.3408 | 312.2833 | 300 | 405.3951 | 389.2501 | 350.7108
29 | 310.7228 | 273.0201 | 257.9113 | 311.5873 | 277.9013 | 266.0836 | 319.5175 | 320.2717 | 321.4858
30 | 116,124.7 | 735.7863 | 508.8163 | 169,420.9 | 26,998.33 | 474.5937 | 73,193.34 | 14,707.45 | 9126.257
Win | - | - | - | 25 | 24 | 23 | 18 | 22 | 24
Lose | - | - | - | 4 | 5 | 6 | 11 | 7 | 5
Table 9. Experimental results of POGSK, GWO and DE over 31 independent runs on 29 test functions of 10 variables at several objective function evaluation budgets (0.1, 0.3 and 0.5 of the maximum).
Function | POGSK 0.1*Max | POGSK 0.3*Max | POGSK 0.5*Max | GWO 0.1*Max | GWO 0.3*Max | GWO 0.5*Max | DE 0.1*Max | DE 0.3*Max | DE 0.5*Max
1 | 65,168.64 | 0.008729 | 9.00E−08 | 2,346,406 | 1,417,856 | 9.565626 | 16,135,020 | 20,544.91 | 4898.193
3 | 1072.874 | 0.020955 | 1.37E−09 | 2051.369 | 948.2263 | 0.752611 | 14,530.99 | 8845.123 | 5727.654
4 | 4.46869 | 0.156832 | 0.000642 | 22.33487 | 11.35453 | 29.34763 | 12.85429 | 6.929505 | 6.505718
5 | 35.91597 | 28.17189 | 22.91988 | 18.45571 | 12.63573 | 13.74093 | 30.60479 | 18.81837 | 14.40469
6 | 0.67712 | 0.000231 | 3.46E−07 | 1.228019 | 0.823463 | 3.763687 | 1.810073 | 0.001324 | 6.07E−07
7 | 46.40205 | 36.92188 | 34.41898 | 37.21266 | 29.32247 | 506.0761 | 44.70667 | 30.20338 | 25.16062
8 | 34.3193 | 26.88238 | 22.69993 | 15.91906 | 15.30446 | 21.20127 | 31.77074 | 20.53481 | 14.54278
9 | 0.335181 | 0 | 0 | 6.672216 | 8.604824 | 522,798.1 | 41.7464 | 0.017794 | 2.17E−07
10 | 1532.095 | 1315.312 | 1128.588 | 677.9308 | 573.4273 | 11,452.42 | 1210.891 | 851.5756 | 702.4145
11 | 11.17021 | 5.209862 | 1.532807 | 30.60828 | 31.73361 | 1246.452 | 47.17272 | 8.057127 | 5.097437
12 | 91,453.64 | 86.73771 | 99.4322 | 1,407,448 | 643,050.9 | 2140.969 | 6,603,038 | 1,351,308 | 594,344.9
13 | 47.41327 | 12.20971 | 8.403044 | 11,065.19 | 12,549.47 | 90.00417 | 16,654.35 | 4276.951 | 1909.666
14 | 27.46006 | 19.86859 | 11.84031 | 2450.41 | 269.145 | 52.02613 | 961.5601 | 184.9314 | 110.3554
15 | 9.511309 | 2.529362 | 0.612543 | 5154.414 | 3513.667 | 25,350.52 | 1426.507 | 236.2673 | 121.9112
16 | 83.47293 | 19.34939 | 4.684275 | 118.3694 | 120.0137 | 4220.433 | 96.57352 | 22.84995 | 9.823819
17 | 71.5188 | 30.70673 | 18.47544 | 68.63565 | 61.1111 | 60.51609 | 53.16403 | 29.56227 | 14.18701
18 | 74.44131 | 14.6736 | 15.338491 | 24,579.32 | 25,987.05 | 201.3137 | 41,806.47 | 6779.277 | 2723.725
19 | 6.425225 | 1.749615 | 0.682905 | 9683.897 | 409.389 | 106.6633 | 2030.003 | 296.4361 | 199.3466
20 | 68.77453 | 11.38535 | 1.397734 | 88.42235 | 71.2303 | 317.6138 | 37.92879 | 4.11068 | 0.002431
21 | 188.5297 | 184.346 | 189.9476 | 211.797 | 205.8708 | 342.6138 | 209.201 | 179.1673 | 172.5472
22 | 103.8606 | 100.5495 | 100.2226 | 111.9155 | 108.5442 | 436.9162 | 126.1442 | 102.526 | 101.7367
23 | 335.3295 | 325.8999 | 317.8954 | 323.4811 | 317.4113 | 369.5282 | 332.2919 | 319.7871 | 316.0538
24 | 357.042 | 341.9053 | 316.7562 | 354.3611 | 335.1408 | 398.7922 | 361.1658 | 352.334 | 343.8135
25 | 403.4286 | 409.5803 | 404.0327 | 439.5899 | 433.9101 | 552.888 | 450.7069 | 431.3259 | 420.7792
26 | 302.3367 | 296.7742 | 296.7742 | 370.8195 | 374.8314 | 275.9333 | 541.4581 | 399.4515 | 350.68
27 | 391.4686 | 389.2144 | 389.2638 | 396.9011 | 394.4558 | 852,804.2 | 397.4228 | 391.6064 | 390.2805
28 | 383.6587 | 300.0077 | 303.8159 | 551.6053 | 539.1753 | 3358.777 | 549.7038 | 511.7699 | 488.9492
29 | 310.7228 | 273.0201 | 257.9113 | 299.4216 | 291.2249 | 3191.632 | 345.5497 | 296.2041 | 280.3842
30 | 116,124.7 | 735.7863 | 508.8163 | 491,702 | 795,289.2 | 255,507.2 | 285,465.2 | 70,612.38 | 32,384.37
Win | - | - | - | 21 | 23 | 26 | 22 | 21 | 21
Lose | - | - | - | 8 | 6 | 3 | 7 | 8 | 8
Table 10. Experimental parameter settings.
Symbols | Descriptions | Values
S_i | Size of the ith task | (0, 5] × 10^6 instr
H_j | Processing capacity of the jth processing unit | [0.5, 2] × 10^6 instr/ms
CR(i,k) | The number of resources_k required by T_i | [0, 5]
PR(i,k) | The number of resources_k owned by P_j | [5, 25]
TL(i,j) | The transmission buffer time from T_i to P_j | [0, 3] ms
cs_i | Deadtime of T_i | S_i + [0, 3] ms
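To make the parameter settings concrete, the sketch below draws one random scheduling instance from the ranges in Table 10 and evaluates the time a processing unit needs to finish a task (transmission buffer time plus task size divided by processing capacity). All function and variable names are illustrative assumptions; the paper's exact instance generator and objective function are not reproduced here.

```python
import random

def generate_instance(n_tasks=30, n_units=10, n_resources=3, seed=1):
    # Draw one instance from the ranges in Table 10 (names are illustrative).
    rng = random.Random(seed)
    S = [rng.uniform(0.001, 5) * 1e6 for _ in range(n_tasks)]   # task size, (0, 5] * 10^6 instr
    H = [rng.uniform(0.5, 2) * 1e6 for _ in range(n_units)]     # capacity, [0.5, 2] * 10^6 instr/ms
    CR = [[rng.randint(0, 5) for _ in range(n_resources)]       # resources required by task T_i
          for _ in range(n_tasks)]
    PR = [[rng.randint(5, 25) for _ in range(n_resources)]      # resources owned by unit P_j
          for _ in range(n_units)]
    TL = [[rng.uniform(0, 3) for _ in range(n_units)]           # transmission buffer time, ms
          for _ in range(n_tasks)]
    return S, H, CR, PR, TL

def completion_time(i, j, S, H, TL):
    # Time for unit P_j to finish task T_i: buffer time plus processing time, in ms.
    return TL[i][j] + S[i] / H[j]

def feasible(i, j, CR, PR):
    # Unit P_j can host task T_i only if it owns every resource type T_i requires.
    return all(PR[j][k] >= CR[i][k] for k in range(len(CR[i])))
```

For example, a task of S_i = 5 × 10^6 instructions on a unit with H_j = 10^6 instr/ms takes 5 ms of pure processing before the transmission buffer time is added, which is the scale of the mean values reported in Table 11.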
Table 11. Experimental results of assigning 30 tasks to 10 processing units.
Scenario | POGSK Mean | POGSK Std | GSK Mean | GSK Std | PSO Mean | PSO Std | GWO Mean | GWO Std
1 | 5.718209 | 0.144952 | 5.855939 | 0.135115 | 6.053149 | 0.248256 | 8.991358 | 0.330173
2 | 6.329143 | 0.340846 | 6.485126 | 0.241881 | 6.265463 | 0.270268 | 9.997938 | 0.317971
3 | 5.981696 | 0.163071 | 6.003088 | 0.158414 | 6.339609 | 0.287226 | 9.344461 | 0.246512
4 | 5.212316 | 0.204313 | 5.36799 | 0.227842 | 5.917752 | 0.393545 | 8.974101 | 0.311377
5 | 5.245472 | 0.162959 | 5.326294 | 0.247776 | 5.428665 | 0.832814 | 9.157856 | 0.251572
6 | 5.624188 | 0.115448 | 5.789302 | 0.231226 | 6.21364 | 0.559447 | 9.555559 | 0.421648
7 | 6.661226 | 0.325418 | 6.793765 | 0.249867 | 6.975366 | 0.409818 | 10.00448 | 0.244958
8 | 5.158754 | 0.278578 | 4.992622 | 0.156462 | 5.53684 | 0.151473 | 8.233507 | 0.457459
9 | 5.567888 | 0.096427 | 5.70977 | 0.105788 | 5.874641 | 0.139983 | 8.946471 | 0.32081
10 | 5.828361 | 0.208058 | 5.929272 | 0.293122 | 6.330107 | 0.47567 | 9.343315 | 0.307779
11 | 5.677546 | 0.008842 | 5.558177 | 0.175736 | 5.274045 | 0.673208 | 9.222462 | 0.412783
Win | - | - | 9 | 6 | 9 | 9 | 11 | 9
Lose | - | - | 2 | 5 | 2 | 2 | 0 | 2
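The Win/Lose rows in the tables above count, column by column, on how many functions or scenarios POGSK's value is lower (better) or higher (worse) than the competitor's. A minimal sketch of this tally, assuming lower-is-better values (how the paper breaks exact ties is not stated):

```python
def win_lose(pogsk, competitor):
    # Count entries where POGSK's value is strictly lower (win) or
    # strictly higher (lose) than the competitor's; ties count for neither.
    wins = sum(1 for a, b in zip(pogsk, competitor) if a < b)
    losses = sum(1 for a, b in zip(pogsk, competitor) if a > b)
    return wins, losses
```

Applied to the GWO mean column of Table 11, for instance, this yields (11, 0), matching the table's Win/Lose rows.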

Pan, J.-S.; Liu, L.-F.; Chu, S.-C.; Song, P.-C.; Liu, G.-G. A New Gaining-Sharing Knowledge Based Algorithm with Parallel Opposition-Based Learning for Internet of Vehicles. Mathematics 2023, 11, 2953. https://doi.org/10.3390/math11132953

