Article

A Discrete Particle Swarm Optimization Algorithm for Dynamic Scheduling of Transmission Tasks

1 College of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 Radio and Television & New Media Intelligent Monitoring Key Laboratory of NRTA (Radio, Film & Television Design & Research Institute), Beijing 100045, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(7), 4353; https://doi.org/10.3390/app13074353
Submission received: 13 March 2023 / Revised: 27 March 2023 / Accepted: 28 March 2023 / Published: 29 March 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

The dynamic-scheduling problem of transmission tasks (DSTT) is an important problem in the daily work of radio and television transmission stations. The transmission effect obtained by the greedy algorithm for task allocation is poor, and when there are many tasks and equipment and the time divisions are small, exact algorithms cannot complete the calculation within an acceptable timeframe. To solve this problem, this paper proposes a discrete particle swarm optimization algorithm (DPSO). We build a DSTT mathematical model suitable for the DPSO, solve the problem that particle swarm operations are difficult to describe for discrete problems, redefine the particle motion strategy, and add a random-disturbance operation to its probabilistic-selection model to ensure the effectiveness of the algorithm. In the comparison experiment, the DPSO achieved much higher success rates than the greedy algorithm (GR) and the improved genetic algorithm (IGA). Finally, in the simulation experiment, the result data show that the accuracy of the DPSO outperforms that of the GR and IGA by up to 3.012295% and 0.11115%, respectively, and the efficiency of the DPSO outperforms that of the IGA by up to 69.246%.

1. Introduction

Radio- and television-transmitting stations usually deploy multiple equipment to complete multiple transmission tasks every day. Different equipment can be selected for each transmission task, but the effect of executing a task differs across equipment. On-duty personnel usually draw up the transmission plan based on experience and control the equipment to complete the tasks according to the plan [1,2]. However, this does not guarantee that the prepared plan achieves the optimal transmission effect. In addition, when a temporary task is added and the transmission plan needs to be adjusted, a new plan with the best effect cannot be derived in time according to the transmission effect. To solve these problems and improve radio and television coverage, it is urgent to complete the dynamic scheduling of transmission tasks (DSTT) through intelligent algorithms and thereby improve the transmission effect.
In previous research, we completed the static allocation of transmission tasks; that is, at a single time point, we assigned multiple tasks to multiple equipment with the goal of achieving an optimal comprehensive transmission effect [3]. The static-task-allocation problem can be summarized as a combinatorial optimization problem (COP), which is NP-hard. When there are few tasks and equipment, the enumeration algorithm can accurately calculate the global-optimal solution; as the number of tasks and equipment increases, the computing time of the enumeration algorithm grows exponentially. By introducing intelligent algorithms, the optimal solution can be calculated within an acceptable timeframe. On this basis, to calculate the task allocation over multiple time periods, the problem could be broken down into independent allocations for each period. However, in practice, once a transmission task starts, its equipment cannot be interrupted and replaced before the task ends. The independent-allocation method gives priority to the optimal selection of earlier tasks, which reduces the room for optimizing the allocation of subsequent tasks; viewed globally, the global-optimal solution cannot be obtained. The DSTT therefore addresses the scheduling of multi-period tasks on multiple equipment based on a global comprehensive evaluation.

1.1. Related Works

An earlier research paper by Chinese radio and television researchers addressed the preparation of transmission plans. In the plan-preparation process, the greedy algorithm (GR) is used: at each step, the task whose priority allocation yields the best result is allocated first, and this principle is applied until all tasks are scheduled, so the number of algorithm iterations is related to the task period; a backtracking algorithm is used to resolve scheduling conflicts. Although the algorithm has been recognized by the industry and represents considerable progress over manual compilation based on experience, there is still much room for improving the transmission effect [4].
In recent years, related research has made some progress in solving dynamic-task-scheduling problems.
Zhou et al. summarize the application of an immune-optimization algorithm to the scheduling problem of unmanned aerial vehicles (UAVs). In the optimization process, the immune algorithm introduces an affinity-evaluation operator, an individual-concentration-evaluation operator, and an incentive-evaluation operator before searching for the global-optimal solution using the mechanisms of population-diversity maintenance and parallel-distributed search [5]. Nazarov et al. use queuing theory to deal with the access-task-scheduling problems of database node services [6]. Liu et al. combine the theories of individual concentration and individual incentive degree from the immune algorithm with the fitness function of the genetic algorithm (GA) to guide the algorithm's search process and achieve multi-objective task-scheduling optimization [7]. Liu et al. use the evolutionary algorithm (EA) as a search engine for large-scale global optimization (LSGO) technology to find the global-optimal solution in complex high-dimensional spaces [8]. Jia et al. propose distributed cooperative co-evolution (DCC) algorithms to solve the optimization-evaluation problem, evaluating the contribution of each subgroup to the global fitness and allocating the subpopulations' computing resources according to their degree of contribution [9]. Sun et al. propose a threshold-based grouping strategy, grouping variables by pre-setting the relevant threshold of subgroups [10]. Li et al. solve the overlapping LSGO problem by setting a threshold to specify the size of subgroups; the decision variables are grouped by clustering to avoid uneven groupings [11]. For multi-objective problems (MOP), the multi-objective evolutionary algorithm (MOEA) is adopted and improved, providing many ideas for studying the dynamic-task-scheduling problem [12,13,14,15,16,17,18,19]. Kuppusamy et al. introduce a reinforced-strategy dynamic-opposition learning based on social-spider-optimization algorithms to enhance individual superiority and schedule workflows in fog computing [20]. Tang et al. analyze the causes of workload imbalance and the feasibility of reallocating computing resources, design an application- and workload-aware scheduling algorithm (AWAS) by combining previously designed workload-prediction models, and propose a parallel-job scheduling method based on computing-node workload predictions [21]. Jia et al. study single-objective flexible job-shop scheduling problems (JSP) by combining genetic algorithms and whale-swarm algorithms, reorganizing whale individuals and genetic codes to improve the local-search ability; comparisons with standard examples show that the optimal solution can be obtained [22].
Among the above research using EAs, GAs, and other intelligent algorithms to solve dynamic-task-scheduling problems, we found that the DSTT is most similar to the JSP. Algorithms for solving the JSP therefore have reference significance for the DSTT.
The classical job-shop-scheduling problem (JSP) is one of the most well-known scheduling problems. It consists of m machines and n jobs. A job contains several operations to be processed in a fixed order. In the JSP, each operation can be processed by one specific machine [23,24]. The flexible job shop scheduling problem (FJSP) is an extension of the JSP, which allows an operation to be processed by one of two or more machines. In other words, an operation is processed by one of the alternative machines in the FJSP [25]. Since a machine is predetermined for a specific operation, the JSP can be solved by specifying the priority of given operations such that a high-priority operation precedes others with a lower priority in the queue for given machines. Thus, the FJSP can be regarded as a sequencing problem, suitable for intelligent algorithms [26,27]. The genetic algorithm based on candidate sequence (COGA) is adopted to solve the FJSP [28].
Developed by Eberhart and Kennedy in 1995, particle-swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the swarming behavior characteristic of bird flocks or fish schools [29]. PSO has recently been applied to COPs, such as shop scheduling [30,31], the traveling-salesman problem [32], quality-of-service multicast routing [33], and vehicle routing [34,35].
PSO is usually used to solve continuous problems, and discretization is required when solving COPs. Zheng et al. use PSO to solve the assembly JSP, designing a discrete particle swarm optimization algorithm (DPSO) that realizes the discretization of the position-update process through operations such as insertion and exchange; the Metropolis criterion from the simulated annealing algorithm (SA) is used to set the acceptance probability of the location update [36]. Chen et al. used GAs and DPSOs to manage the complexity of the problem, compute feasible and quasi-optimal trajectories for mobile sensors, and determine the demand for movement among nodes [37]. Fan et al. used DPSOs combined with the genetic operators of GAs to compute feasible and quasi-optimal schedules for directional sensors and to determine the sensing orientations among the directional sensors [38]. The above studies hybridize PSO with other intelligent algorithms such as the SA and GA, replacing particle updates with those algorithms' operations. We instead make targeted improvements to the DPSO for the specific problem.

1.2. Contributions

The following problems need to be solved when applying a DPSO to the DSTT:
  • It is necessary to establish a mathematical model of the DSTT suitable for the DPSO;
  • It is necessary to solve the DPSO's mathematical description of particle position, direction, and velocity;
  • It is necessary to design specific particle-update methods.
Based on the above problems, this paper conducts research on the DSTT, and improves the DPSO. The contributions of this paper are mainly reflected in the following aspects:
  • We build a mathematical model of the DSTT, including a task model, an evaluation model, an evaluation function, and the output Schedule (the mathematical model of the transmission plan), and propose a one-dimensional code to describe the Schedule, making it suitable for DPSO calculation;
  • We propose a DPSO, defining the particle position, particle-update direction, and target, and solve the mathematical description of the DPSO for the specific problem;
  • Based on the basic idea of the DPSO, and taking into account inertia retention, particle best, and global best, we use a probability-selection model to realize the particle update, and we propose random perturbation to improve the diversity of the particle population.
In Section 2, we build a mathematical model of the DSTT suitable for intelligent-algorithm processing. In Section 3, we outline the specific operation and implementation of the DPSO used in this paper and test the DPSO's optimal parameter combination. In Section 4, the experimental results are presented, analyzed, and discussed. In Section 5, we conclude and point out future avenues of work.

2. Preliminaries

The task-scheduling plan of a radio- and television-transmission station for the daily execution of transmission tasks is called the Schedule. According to the Chinese radio and television industry standard [39,40], the Schedule is defined as follows:
Definition 1. 
Schedule: a table that specifies the broadcasting tasks undertaken by the transmitter in a day according to the sequence of program broadcasting time.
The mathematical model of the Schedule is a two-dimensional table; the horizontal axis represents the time, and the vertical axis represents the equipment code. The DSTT model can be described as the problem of filling the transmission task into the Schedule and maximizing the comprehensive-evaluation function. The DSTT model framework is shown in Figure 1.
The transmission-task sequence and the transmission-effect evaluation value matrix are the input data, and the maximum value of the evaluation function is taken as the fitness. Using intelligent-algorithm calculations, a Schedule meeting the requirements is obtained.

2.1. Task Queue

The task queue is the set of tasks to be executed each day. The parameters of the task model include the code of the working frequency band, the code of the task's start time, and the delay time of the task, which can be expressed as a triple:
Task = \langle FreqBand,\ StartTime,\ DelayTime \rangle
The task queue can be described as:
TaskQueue = \{ Task_1, Task_2, \ldots, Task_j, \ldots, Task_p \}
where p is the number of tasks, Task_j is the j-th task, Task_j.FreqBand is the frequency-band code of the task's working frequency, Task_j.StartTime is the code of the start time of the task, and Task_j.DelayTime is the delay time of the task.
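As an illustration, the task triple and task queue translate directly into code. This is a minimal sketch; the class and field names are ours, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Task:
    freq_band: int   # FreqBand: code of the working frequency band
    start_time: int  # StartTime: code of the task's start time
    delay_time: int  # DelayTime: number of time divisions the task occupies

# TaskQueue: the ordered set of tasks to be executed in a day.
task_queue = [
    Task(freq_band=3, start_time=0, delay_time=3),
    Task(freq_band=1, start_time=3, delay_time=2),
]
```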

2.2. Value Matrix

In order to evaluate the transmission effect of the equipment in each operating frequency band, a value matrix is set up; it is obtained by polling the transmission frequency bands during the trial operation of the station's transmission equipment. The value matrix can be described as follows:
ValueMatrix = \begin{pmatrix} Va_{11} & Va_{12} & \cdots & Va_{1j} & \cdots & Va_{1m} \\ Va_{21} & Va_{22} & \cdots & Va_{2j} & \cdots & Va_{2m} \\ \vdots & \vdots & & \vdots & & \vdots \\ Va_{i1} & Va_{i2} & \cdots & Va_{ij} & \cdots & Va_{im} \\ \vdots & \vdots & & \vdots & & \vdots \\ Va_{n1} & Va_{n2} & \cdots & Va_{nj} & \cdots & Va_{nm} \end{pmatrix}
where n is the number of equipment, m is the number of frequency-band divisions, and Va_{ij} is the evaluation value of the transmitting effect of the i-th equipment working in the j-th frequency band.
Based on supervised-learning theory, the matrix values are weighted and adjusted using the results collected from each daily transmission, ensuring that the matrix is dynamically updated to meet the practical needs of the system.
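One simple way to realize such a weighted adjustment is an exponential moving average toward each day's observed values. The update rule and the smoothing factor `alpha` below are illustrative assumptions, since the paper does not specify the weighting scheme:

```python
def update_value_matrix(value_matrix, observed, alpha=0.1):
    """Blend each Va_ij with the value observed in the latest transmission.

    alpha is a hypothetical smoothing factor: 0 keeps the old matrix,
    1 replaces it entirely with the new observations.
    """
    return [
        [(1 - alpha) * old + alpha * new for old, new in zip(old_row, new_row)]
        for old_row, new_row in zip(value_matrix, observed)
    ]

# Example: one equipment, two frequency bands.
updated = update_value_matrix([[4.0, 2.0]], [[6.0, 2.0]], alpha=0.5)
```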

2.3. Task-Scheduling Result

In the model design, it is assumed that the transmission tasks keep all the equipment fully loaded; that is, by design, the tasks' start and end times cover all the equipment. To adapt to the definition of population in intelligent algorithms and the requirements of intelligent-algorithm operations, a one-dimensional sequence is used to represent the optimization-result data. Define the scheduling-result data as a two-tuple:
TaskResult = \langle Task,\ TransNo \rangle
where TaskResult.Task is a task from the TaskQueue and TaskResult.TransNo is the code of the equipment that carries the task. The output-result sequence of the algorithm can be expressed as:
ScheduleResult = \{ TaskResult_1.TransNo, \ldots, TaskResult_k.TransNo, \ldots, TaskResult_p.TransNo \}
where p is the number of tasks, TaskResult_k represents the scheduling result of the k-th task, and TaskResult_k.TransNo is the code of the equipment that carries the k-th task of the TaskQueue.

2.4. Schedule

Define the Schedule as a two-dimensional table:
Schedule = \begin{pmatrix} FreqBand_{11} & FreqBand_{12} & \cdots & FreqBand_{1j} & \cdots & FreqBand_{1m} \\ FreqBand_{21} & FreqBand_{22} & \cdots & FreqBand_{2j} & \cdots & FreqBand_{2m} \\ \vdots & \vdots & & \vdots & & \vdots \\ FreqBand_{i1} & FreqBand_{i2} & \cdots & FreqBand_{ij} & \cdots & FreqBand_{im} \\ \vdots & \vdots & & \vdots & & \vdots \\ FreqBand_{n1} & FreqBand_{n2} & \cdots & FreqBand_{nj} & \cdots & FreqBand_{nm} \end{pmatrix}
where n is the number of equipment, m is the number of Schedule time divisions, and FreqBand_{ij} is the transmission-frequency-band code of the i-th equipment in the j-th operation period. The value-taking formula is:
FreqBand_{ij} = TaskResult_k.Task.FreqBand, \quad k \in [1, p],\ i = TaskResult_k.TransNo,\ j \in [TaskResult_k.Task.StartTime,\ TaskResult_k.Task.StartTime + TaskResult_k.Task.DelayTime)
where k is the task-scheduling-result code, ranging from 1 to p, and TaskResult_k.Task.FreqBand is the frequency-band code of the Schedule task represented by the result. The abscissa i of the Schedule takes the equipment code assigned by the result, and the ordinate j runs from the start time of the Schedule task represented by the result until the end of the task.
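The value-taking formula above can be sketched as a routine that fills the n × m Schedule table from a one-dimensional result sequence. The names are ours; using −1 to mark an idle cell is an assumption for illustration, since the paper presumes fully loaded equipment:

```python
from collections import namedtuple

Task = namedtuple("Task", ["freq_band", "start_time", "delay_time"])

def build_schedule(task_queue, schedule_result, n_equipment, n_periods):
    """Fill Schedule[i][j] with the frequency-band code of the task that
    equipment i carries during period j, following the value-taking formula."""
    schedule = [[-1] * n_periods for _ in range(n_equipment)]
    for task, trans_no in zip(task_queue, schedule_result):
        for j in range(task.start_time, task.start_time + task.delay_time):
            schedule[trans_no][j] = task.freq_band
    return schedule

# task0 (band 3, start 0, delay 3) assigned to equipment 1, as in the Figure 2 example.
schedule = build_schedule([Task(3, 0, 3)], [1], n_equipment=2, n_periods=3)
```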

2.5. Fitness Function

Set the fitness function of the Schedule as the maximum value of the transmission-effect evaluation, described as follows:
Max:\ Value(ScheduleResult) = \frac{\sum_{k=1}^{p} Va_{TaskResult_k.TransNo,\ TaskResult_k.Task.FreqBand} \times TaskResult_k.Task.DelayTime}{m \times n}
The subscripts of Va are TaskResult_k.TransNo and TaskResult_k.Task.FreqBand: TaskResult_k.TransNo is the equipment code given by the k-th value of the result sequence, and TaskResult_k.Task.FreqBand is the frequency-band code of the Schedule task represented by the k-th value of the result sequence. Value(ScheduleResult) is the comprehensive evaluation value of the Schedule.
According to the above model framework and the data model description, the data-structure parameters are outlined in Table 1.
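The fitness function translates directly into code. The sketch below assumes a `value_matrix` indexed as `value_matrix[equipment][freq_band]` and our own task naming:

```python
from collections import namedtuple

Task = namedtuple("Task", ["freq_band", "start_time", "delay_time"])

def fitness(task_queue, schedule_result, value_matrix, n_equipment, n_periods):
    """Comprehensive evaluation value of a ScheduleResult: the sum of
    Va[TransNo][FreqBand] * DelayTime over all tasks, divided by m * n."""
    total = sum(
        value_matrix[trans_no][task.freq_band] * task.delay_time
        for task, trans_no in zip(task_queue, schedule_result)
    )
    return total / (n_periods * n_equipment)

# Two tasks on two equipment over two periods.
value_matrix = [[5.0, 1.0], [2.0, 4.0]]
tasks = [Task(0, 0, 2), Task(1, 0, 1)]
score = fitness(tasks, [0, 1], value_matrix, n_equipment=2, n_periods=2)
```

Maximizing this quantity over all feasible result sequences is the DSTT objective.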

2.6. Example

Combined with the description of the mathematical model, an example of a DSTT is shown in Figure 2.
According to the value matrix, under the condition that the fitness function of the Schedule attains its maximum value, the task-dynamic-scheduling result is obtained through the intelligent algorithm. The result data are the equipment-code sequence. The two-dimensional table of the Schedule is shown in Figure 2: the abscissa is the sequence code of the period, the ordinate is the sequence code of the equipment, and the data are the frequency band transmitted by the equipment during the period. For example, the number at position 0 of the result is 1, indicating that task0 is assigned to equipment1. The frequency-band code of task0 is 3, the start time is 0, and the delay time is 3, so for times 0 to 2 on equipment1 in the Schedule, the frequency-band code is 3.

3. Methodologies

3.1. PSO

PSO is a population-based search-optimization algorithm. The motion of each particle is evaluated by the fitness function, and the "direction" and "target" of its motion are determined by the particle's "velocity". The particles then iterate in the solution space, following the direction of the best particle.
In PSO, x represents the position of a particle, v represents its velocity, and Pbest represents the best position the particle has found. The PSO initializes a group of random particles and searches for the best solution through iteration. In each iteration, a particle updates its position by tracking two best values. One is the best solution the particle itself has found so far, called the particle best. The other is the best solution found by the whole population so far, called the global best. Suppose a population composed of K particles searches a D-dimensional solution space, where the position of the i-th particle is expressed as a D-dimensional vector:
X_i = (x_{i1}, x_{i2}, \ldots, x_{iD}), \quad i = 1, 2, \ldots, K
The motion velocity of the i-th particle is also a vector of the D-dimension:
V_i = (v_{i1}, v_{i2}, \ldots, v_{iD}), \quad i = 1, 2, \ldots, K
The best position searched by the i-th particle, namely, the particle best, is expressed as:
Pbest_i = (p_{i1}, p_{i2}, \ldots, p_{iD}), \quad i = 1, 2, \ldots, K
The best position searched by the whole population, namely, the global best, is expressed as:
Gbest = (g_1, g_2, \ldots, g_D)
The updated formula of velocity and position is as follows:
v_{id}^{t+1} = \omega v_{id}^{t} + c_1 r_1 \left( p_{id}^{t} - x_{id}^{t} \right) + c_2 r_2 \left( g_{d}^{t} - x_{id}^{t} \right), \quad d = 1, 2, \ldots, D
x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}, \quad d = 1, 2, \ldots, D
where c_1 and c_2 are acceleration constants, r_1 and r_2 are uniform random numbers, and ω is the inertia constant.
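The velocity and position updates above correspond to the following single step of standard continuous PSO. The coefficient values in the sketch are conventional defaults, not values from the paper:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One continuous PSO update: a new velocity from the inertia,
    particle-best and global-best pulls, then the position moved by it."""
    r1, r2 = random.random(), random.random()
    v_new = [
        w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
        for xi, vi, pi, gi in zip(x, v, pbest, gbest)
    ]
    x_new = [xi + vni for xi, vni in zip(x, v_new)]
    return x_new, v_new
```

Setting c1 = c2 = 0 and w = 1 makes the particle drift along its current velocity, which isolates the inertia term for checking.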
According to the above description, PSO is applicable to continuous-function calculation, and the velocity and position updates use continuous-vector calculations. Based on the discrete data characteristics of the DSTT, and combined with previous research, we design a targeted DPSO.

3.2. Algorithm Description

1. Definition of particle position: in combination with the DSTT and the example, see Formula (15) for the definition of particle position.
X_i = (x_{i1}, x_{i2}, \ldots, x_{ij}, \ldots, x_{in}), \quad i = 1, 2, \ldots, K, \quad x_{ij} = Task_j.TransNo, \quad n = TaskNumber
where K represents the population size, TaskNumber represents the number of tasks, and Task_j.TransNo represents the equipment code assigned to the j-th task. In Figure 2, the particle position is the allocation result of 8 tasks: the queue is {1,3,0,2,3,3,1,2}, and each number is the equipment code assigned to the corresponding task.
The definitions of particle best and global best have the same form as the definition of particle position, recorded as:
Pbest_i = (p_{i1}, p_{i2}, \ldots, p_{ij}, \ldots, p_{in}), \quad i = 1, 2, \ldots, K, \quad p_{ij} = Task_j.TransNo
Gbest = (g_1, g_2, \ldots, g_j, \ldots, g_n), \quad g_j = Task_j.TransNo
2. Definition of particle-motion direction: in combination with the characteristics of data discretization in the DSTT, there is no direct association between the tasks. Binary processing is adopted when defining the particle-motion direction; that is, each motion direction is a task whose equipment can be changed, recorded as:
VD_i = (vd_{i1}, vd_{i2}, \ldots, vd_{ij}, \ldots, vd_{in}), \quad i = 1, 2, \ldots, K, \quad vd_{ij} \in \{0, 1\}
In each VD_i, only one of the sequence values vd_{ij} is 1, and the others are 0. The position of the 1 represents the task whose node is replaced when the particle updates its position in the Schedule, that is, the motion direction of the particle in the Schedule. For example, VD_i = (0, 0, 0, 1, 0, 0, 0, 0) indicates that the i-th particle in the population needs to replace the equipment working on task3.
3. Definition of particle-motion target: the velocity displacement is defined as the serial number of the equipment that replaces the node representing the particle's motion direction in the Schedule. The particle-motion target suitable for the operation of the DSTT is defined as VT_i, recorded as:
VT_i = (vt_{i1}, vt_{i2}, \ldots, vt_{ij}, \ldots, vt_{in}), \quad i = 1, 2, \ldots, K, \quad vt_{ij} \in \{-1\} \cup \{TransNo\}
In each VT_i, only one sequence value vt_{ij} is recorded as a TransNo, and the others are recorded as −1. The TransNo value indicates the transmitting equipment that replaces the node when the particle updates its position in the Schedule; that is, the replacement target encoded at the particle's motion direction is defined as the particle-motion target in the Schedule. For example, VT_i = (−1, −1, −1, 3, −1, −1, −1, −1) indicates that the i-th particle in the population wants to replace equipment2, currently working on task3, with equipment3. An example of particle motion is shown in Figure 3.
Combining the problem characteristics of the DSTT, a particle motion has only one direction and one target. To ensure that the particle update of the DPSO retains the characteristics of the basic PSO's direction-vector calculation (inertia preservation, particle best, and global best), the DPSO uses a probabilistic-selection model to handle particle updates and determines, by parameter proportions, which of the three directions a particle chooses for its motion operation. If the result of the probabilistic-selection model is inertia retention, or if the particle-motion target coincides with the current position, random-perturbation processing is introduced to increase the diversity of the particle population and avoid falling into a local optimum.
4. Definition of particle-position update: according to the characteristics of the DSTT, the particle-velocity update consists of calculating the particle's motion direction and motion target. The evaluation value of each particle's tasks is calculated from the particle position and recorded as XV_i:
XV_i = (xv_{i1}, xv_{i2}, \ldots, xv_{ij}, \ldots, xv_{in}), \quad i = 1, 2, \ldots, K, \quad xv_{ij} = Va_{x_{ij},\ Task_j.FreqBand}
where x_{ij} is the equipment code currently assigned to the j-th task in the i-th particle's position, Va_{x_{ij}, Task_j.FreqBand} represents the evaluation value in the ValueMatrix when the j-th task is executed by equipment x_{ij}, and Task_j.FreqBand represents the frequency-band code of the task. Similarly, the evaluation-value sequences of the particle best and the global best are recorded as PbestV_i and GbestV:
PbestV_i = (pv_{i1}, pv_{i2}, \ldots, pv_{ij}, \ldots, pv_{in}), \quad i = 1, 2, \ldots, K, \quad pv_{ij} = Va_{p_{ij},\ Task_j.FreqBand}
GbestV = (gv_1, gv_2, \ldots, gv_j, \ldots, gv_n), \quad gv_j = Va_{g_j,\ Task_j.FreqBand}
where p_{ij} is the equipment code assigned to the j-th task in the particle-best position of the i-th particle, Va_{p_{ij}, Task_j.FreqBand} is the evaluation value in the ValueMatrix when the j-th task is executed on equipment p_{ij}, and Task_j.FreqBand is the frequency-band code of the task. The definition for the global best is analogous.
The evaluation-value gap between the particle position and the particle best is calculated as the difference between the two evaluation-value sequences above, and the gap between the current particle and the global best is calculated in the same way. The dimension with the maximum difference in evaluation values is selected as the particle's motion-direction option.
Whether the particle moves towards the particle-best direction or the global-best direction depends on the output of the probability-selection model, and the corresponding equipment code from the particle best or global best is used as the motion target. For example, suppose the output of the probability-selection model is the global-best direction. Compare XV_i and GbestV: if Va_{g_j, Task_j.FreqBand} − Va_{x_{ij}, Task_j.FreqBand} is maximal at j = L, then L is the motion direction of the particle. The motion direction is VD_i = (0, 0, \ldots, 1, \ldots, 0), where only the L-th value is 1 and the others are 0. The motion target is VT_i = (−1, −1, \ldots, Task_L.TransNo, \ldots, −1), where only the L-th value is the Task_L.TransNo of Gbest and the others are −1. In DSTT terms, Task_L.TransNo is the equipment code assigned to the L-th task, the task with the largest gap between the particle position and the global best.
In the DSTT, the goal of the fitness function is the maximum evaluation value. The difference between the evaluation values calculated before and after a particle iteration can be understood as the motion distance of the particle.
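The particle-update rules of this section can be summarized in a short sketch: the probabilistic-selection model picks one of the three motion modes using the IRF/PBF/GBF proportions, the motion direction L is the task with the largest evaluation-value gap toward the chosen best position, and inertia retention triggers a random perturbation. All names and parameter values are illustrative, not the authors' reference implementation:

```python
import random

def choose_mode(irf, pbf, gbf, rng=random):
    """Probabilistic-selection model: inertia / particle best / global best.
    irf + pbf + gbf is assumed to be 1."""
    r = rng.random()
    if r < irf:
        return "inertia"
    if r < irf + pbf:
        return "particle_best"
    return "global_best"

def motion_toward(position, best, freq_bands, value_matrix):
    """Direction L = the task with the largest evaluation-value gap to `best`;
    target = the equipment code that `best` assigns to task L."""
    gaps = [
        value_matrix[best[j]][freq_bands[j]] - value_matrix[position[j]][freq_bands[j]]
        for j in range(len(position))
    ]
    L = max(range(len(gaps)), key=gaps.__getitem__)
    vd = [1 if j == L else 0 for j in range(len(position))]
    vt = [best[L] if j == L else -1 for j in range(len(position))]
    return vd, vt

def random_perturbation(position, n_equipment, rng=random):
    """Reassign one randomly chosen task to a random equipment code."""
    new_pos = list(position)
    j = rng.randrange(len(new_pos))
    new_pos[j] = rng.randrange(n_equipment)
    return new_pos
```

Applying `motion_toward` to the current position and the global best yields a VD/VT pair of the form described above; when the returned target equals the current assignment, `random_perturbation` would be applied instead.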

3.3. Parameter

The DPSO is controlled by three parameters: the inertia-retention factor (IRF), the particle-best factor (PBF), and the global-best factor (GBF). The three parameters correspond to ω, c_1 r_1, and c_2 r_2 in Formula (13), and their sum is set to 1. In each iteration, a proportion ω of particles performs inertia retention, a proportion c_1 r_1 performs particle-best motion, and a proportion c_2 r_2 performs global-best motion. In the parameter experiment, we set the change-step size of the parameters to 0.1. The experiment sets the number of equipment equal to the number of time periods, and the enumeration algorithm is used for verification in the interval [4, 7]; with more than 8 pieces of equipment, the enumeration algorithm's computing time makes testing infeasible. For each task number, 100 task sequences are randomly generated for testing. The experimental data are shown in Table 2.
For the numbers of equipment and time periods in the interval [4, 7], the enumeration algorithm is used to obtain the optimal values for the comparison tests. The iteration numbers in the parameter-test table are obtained as follows: to better compare the change in iteration number across parameter groups, the iteration number for each equipment count is normalized by dividing it by the average iteration number over all parameter groups. The cumulative-average proportion is shown in Table 3.
It can be seen from Table 3 that the parameter groups that obtain all global-best solutions in Table 2 also require few iterations, indicating that the algorithm's success rate and efficiency are unified within the same parameter group.
For the parts of the comparison experiments where enumeration is impossible, it cannot be verified whether the evaluation values obtained under different parameter groups reach the global maximum. There, the number of iterations of the algorithm is fixed to the iteration number derived from the greedy algorithm. The formula is as follows:
IterationNumber = \sum_{i=1}^{n} i^2
The numbers of experimental tasks include the data from the previous [4, 7] interval. According to the computing power of the current experimental environment, the maximum number of equipment and time periods in the experiment is 12. The data can be found in Table 4.
Analysis of parameters’ test results:
  • According to Table 2, when the number of equipment and time periods is [4, 7], the influence of the parameters on the algorithm results is small. When the PBF is small and the GBF is large, the algorithm’s success rate is high. Taking the IRF of 0.1 as an example, as the PBF increases, the GBF decreases, and the success rate of the algorithm decreases gradually.
  • According to Table 3, when the number of equipment and time periods is small ([4, 7]), algorithm performance is evaluated by the weighted number of iterations required to reach the global-best solution. The parameters have a large impact on this iteration count. Among the parameter groups with a 100% success rate, the group (0.2, 0.2, 0.6) has the lowest number of iterations.
  • According to Table 4, considering the whole interval [4, 12] of equipment and time-period counts in the fixed-iteration test, the DPSO achieves the maximum value under many parameter groups only up to 8 pieces of equipment and time periods. In [9, 12], only a few parameter groups still achieve maximum values, including (0.3, 0.1, 0.6), (0.5, 0.1, 0.4), (0.2, 0.1, 0.7), and (0.4, 0.1, 0.5).
  • The group (0.3, 0.1, 0.6) performs well in the iteration-weighting calculations of the previous two comparison tables, but its success rate is slightly lower. The group (0.2, 0.2, 0.6) has the smallest weighted-average number of iterations. To ensure the comprehensiveness of the subsequent multi-algorithm experiments, five parameter groups are used in the subsequent comparison experiments of the DPSO; they are listed in Table 5.

4. Experiments

4.1. Comparison Algorithm and Method

The iteration number of each algorithm in this part is limited by Formula (23). The comparison algorithms can be found in Table 6.
According to the characteristics of the DSTT, the algorithm-comparison test uses the enumeration algorithm as the reference in the range [4, 7] of equipment and time-period counts. The enumeration algorithm takes a long time to run, but it reliably obtains the global-optimal solution. With this optimal solution as the reference, the performance of each algorithm is evaluated by recording the number of iterations it requires to reach the optimal solution.
In the interval where the number of equipment and time periods exceeds seven, there is no optimal solution available for reference, so the comparison method uses two calculations. First, a calculation is completed with each parameter group; if the results of more than two parameter groups are completely consistent, that result is taken as the global-best solution. A second calculation then records the number of iterations each parameter group needs to reach this global-best solution, which is used to evaluate the algorithm's performance.
To ensure fairness, all algorithms in this section's tests use the same initialization task queues. For each task number, 100 task sequences are randomly generated for the best-scheduling calculations; we compare the success rate of each algorithm in finding the global-best solution under these conditions, record the number of iterations needed to reach it, and compute the proportional cumulative evaluation value, i.e., the average of the comprehensive evaluation values of the best allocations. Because the global-best-solution comparison is applied over the whole interval, the interval is no longer subdivided, and the data are compared uniformly over the whole interval.
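The consensus step used when no enumerated optimum exists can be sketched as follows (the function name and the exact tie handling are our assumptions):

```python
from collections import Counter

def consensus_best(results):
    """Given the best evaluation value computed by each parameter group,
    accept a value as the global-best solution only if more than two
    groups agree on it exactly; otherwise return None (no consensus)."""
    value, votes = Counter(results).most_common(1)[0]
    return value if votes > 2 else None
```

With five parameter groups, three or more identical results are treated as the global-best solution; two or fewer matching results yield no consensus.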

4.2. Algorithm Comparison Experiment

This section compares the effectiveness, accuracy, and efficiency of several algorithms in the experiments.
The number of successes in obtaining the global-best solution in the experimental calculations is shown in Table 7.
When the number of equipment is greater than 7, the ENU is not calculated because its execution time is too long. The DPSO uses the five sets of parameters in Table 5 to calculate the success numbers, and the maximum, minimum, average, and standard deviation of the five sets of results are computed. The DPSO's results are stable, although stability decreases as the number of equipment increases. Comparing the algorithms, the DPSO is superior to the IGA, and the results of the two intelligent algorithms are far superior to those of the GR.
The average comparison of the evaluation values calculated by the algorithms is shown in Table 8.
According to the data in Table 8, the evaluation values calculated by the DPSO are stable, indicating high algorithmic stability. Compared with the IGA, when the number of equipment is small, the accuracy of the two intelligent algorithms is equivalent; as the number of equipment increases, the accuracy advantage of the DPSO grows significantly. The results of the GR differ greatly from those of the two intelligent algorithms.
The comparison results of the number of iterations to obtain the best solution are shown in Table 9.
This research uses the number of iterations to evaluate the efficiency of the algorithms, because intelligent algorithms must evaluate the fitness function in each iteration, and the algorithm's own bookkeeping time is generally much lower than the fitness-calculation time. Table 9 shows the statistical analysis of the iterations of the two intelligent algorithms; the DPSO requires far fewer iterations than the IGA. Because the DPSO is a swarm search and its iteration count involves randomness, there is a large spread in the DPSO's statistics.

4.3. Summary of Experimental Analysis

Within the range of equipment and time-period counts that the enumeration algorithm can verify, the success-rate analysis shows that the DPSO and the IGA are effective and can compute the global-optimal solution in almost all cases, while the GR is much less effective. Within the range that the enumeration algorithm cannot verify, the analysis based on the consensus of multiple parameter groups indicates that the DPSO obtains a global-optimal solution in most cases, while the effectiveness of the IGA decreases significantly as the number of equipment increases.
Based on the analysis of the evaluation values, the values calculated by the DPSO and the IGA are close, while the GR's results differ greatly from those of the two intelligent algorithms. Across the equipment-count tests, the DPSO's average evaluation values matched or exceeded those of the IGA, indicating that the accuracy of the DPSO is at least as high as that of the IGA and higher overall.
Analyzing the efficiency of the two intelligent algorithms through the number of iterations, the execution efficiency of the IGA is far lower than that of the DPSO.
The advantages and disadvantages of various algorithms and comparison algorithms proposed in this paper are shown in Table 10.
In comparative analyses, the DPSO algorithm is comprehensively evaluated to be the best.

4.4. Simulation Experiment

This section compares the advantages and disadvantages of several algorithms by simulating the actual situation of the transmitting station.
The experimental programs were written in C++ with the MFC framework using Visual Studio 2012. The hardware environment was a Microsoft Surface X1 portable computer with an Intel Core i7 3.60 GHz CPU and 16 GB of memory, running Windows 10.
In the actual working environment of a transmission station, the number of equipment is usually fixed, while the number of time periods varies with the program settings. For this simulation experiment, the best-performing parameters of each algorithm are selected. The enumeration algorithm is abandoned because its execution time becomes too long as the number of tasks increases, and the GR has no parameters. The IGA uses parameters (0.8, 0.1, 0.1) and the DPSO uses parameters (0.3, 0.1, 0.6). The number of equipment is fixed at 10, and the day's 24 h are divided into one time period every half hour, giving the interval [4, 48] for the number of time periods. The two intelligent algorithms are tested with the same number of iterations. Because there is no optimal value for comparison, success-rate data cannot be compared; the simulation experiment only compares the algorithms' accuracy using the average comprehensive evaluation value over 100 runs. The data obtained are in Table 11.
According to Table 11, the DPSO obtained evaluation values superior to those of the IGA, which shows that the two intelligent algorithms operate stably and that the DPSO has a clear accuracy advantage when handling DSTTs. See Figure 4 for a comparison diagram.
From the simulation experiments, we select the 100 runs with the largest number of time periods (48) and observe each algorithm's evaluation value and calculation time. The data are presented in Figure 5 and Figure 6.
According to Figure 4, Figure 5 and Figure 6, the following statistics can be obtained:
  • In the simulation experiment over multiple time periods, averaging the transmission-effect evaluation values over all time periods, the DPSO improved the evaluation value by 3.012295% compared with the GR and by 0.1111146% compared with the IGA.
  • In the simulation experiment with the maximum number of periods (48), the results of the DPSO were better than those of the IGA. In about 60% of the 100 runs, the IGA achieved the same result as the DPSO, while the remaining results were lower than the DPSO's. The DPSO improved the evaluation value by 0.11115% compared to the IGA.
  • In the simulation experiment with the maximum number of periods (48), over 100 runs the average execution time of the DPSO was 3.195 s and that of the IGA was 10.388 s; the DPSO improved the execution efficiency by 69.246%.
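The reported efficiency gain can be checked from the average execution times above (a quick arithmetic sketch; the small residual difference comes from rounding of the averaged times):

```python
# Relative reduction of the DPSO's average execution time (3.195 s)
# versus the IGA's (10.388 s), expressed as a percentage.
dpso_time, iga_time = 3.195, 10.388
improvement = (iga_time - dpso_time) / iga_time * 100
print(f"{improvement:.3f}%")  # about 69.24%, in line with the reported 69.246%
```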
According to the simulation experiments, the proposed intelligent algorithms and mathematical model can solve the DSTT problem of radio- and television-transmission stations. The algorithms are stable across many simulation runs, and the DPSO has the highest accuracy and the shortest execution time.

5. Conclusions

This paper studies the DSTT and designs a DSTT mathematical model suitable for operation by intelligent algorithms. We propose a fitness function whose goal is the highest transmission-effectiveness evaluation value. Based on PSO and recent research on discrete PSO variants, a DPSO is proposed specifically for the DSTT.
The DPSO redefines the particle-motion direction and particle-motion target based on the two-dimensional Schedule. Based on the characteristics of discrete problems, it proposes a probability-selection model that specifies how particle positions are updated in discontinuous problems. It sets three parameters, namely the inertia-retention factor, the particle-best factor, and the global-best factor, to control the particle-position update in keeping with the ideas of PSO, and adopts a random-perturbation operation that prevents particle motion from becoming trapped in local-optimal solutions. In the parameter tests, a traversal over parameter values determined the parameter group with the best computational results.
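For illustration, a random-perturbation step of the kind described above could look like the following (our sketch under assumed names; the paper's actual operator may differ):

```python
import random

def random_perturbation(schedule, prob=0.05, rng=random):
    """With a small probability, swap the equipment assignments of two
    randomly chosen tasks so the swarm can escape local-optimal solutions.
    `schedule` holds one equipment index per task."""
    schedule = list(schedule)
    if len(schedule) >= 2 and rng.random() < prob:
        i, j = rng.sample(range(len(schedule)), 2)
        schedule[i], schedule[j] = schedule[j], schedule[i]
    return schedule
```

Keeping `prob` small preserves the swarm's convergence behavior while still injecting occasional diversity.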
Finally, this paper presents comparison experiments. In comparisons with the GR and the IGA, the effectiveness of the algorithm is verified by the success rate, the computational accuracy by the evaluation values, and the efficiency by the number of iterations. The results show that the DPSO has significant advantages in all three aspects.
The research results of this paper have reference significance for task scheduling and combinatorial optimization problems in other industries.
The next research direction is multi-objective optimization based on these results, ensuring the transmission effect while also increasing the equipment-utilization rate.

Author Contributions

Conceptualization, W.Y. and X.W.; methodology, W.Y. and X.W.; software, X.W.; validation, X.W.; formal analysis, W.Y.; investigation, W.Y. and X.W.; data curation, X.W.; writing—original draft preparation, X.W.; writing—review and editing, W.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, M.; Xing, G.; Pang, J. Design and Implementation of Software Architecture of Intelligent Scheduling System for SW Broadcasting. Radio TV Broadcast Eng. 2007, 4, 112–116.
  2. Hao, W. Application of Artificial Intelligence in Radio and Television Monitoring and Supervision. Radio TV Broadcast Eng. 2019, 46, 126–128.
  3. Wang, X.; Yao, W. Research on Transmission Task Static Allocation Based on Intelligence Algorithm. Appl. Sci. 2023, 13, 4058.
  4. Zhou, D.; Song, J.; Lin, C.; Wang, X. Research on Transmission Selection Optimized Evaluation Algorithm of Multi-frequency Transmitter. In 2015 International Conference on Automation, Mechanical Control and Computational Engineering; Atlantis Press: Amsterdam, The Netherlands, 2015; pp. 323–326.
  5. Zhou, Z.; Luo, D.; Shao, J. Immune genetic algorithm based multi-UAV cooperative target search with event-triggered mechanism. Phys. Commun. 2020, 41, 101103.
  6. Nazarov, A.; Sztrik, J.; Kvach, A. Asymptotic analysis of finite-source M/M/1 retrial queueing system with collisions and server subject to breakdowns and repairs. Ann. Oper. Res. 2019, 277, 213–229.
  7. Zhao, X.; Xia, X.; Wang, L. A fuzzy multi-objective immune genetic algorithm for the strategic location planning problem. Clust. Comput. 2019, 22, 3621–3641.
  8. Liu, W.; Zhou, Y.; Li, B. Cooperative Co-evolution with soft grouping for large scale global optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation, Wellington, New Zealand, 10–13 June 2019; pp. 318–325.
  9. Jia, Y. Distributed cooperative co-evolution with adaptive computing resource allocation for large scale optimization. IEEE Trans. Evol. Comput. 2019, 23, 188–202.
  10. Sun, Y.; Li, X.; Ernst, A.; Omidvar, M.N. Decomposition for large-scale optimization problems with overlapping components. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation, Wellington, New Zealand, 10–13 June 2019; pp. 326–333.
  11. Li, L.; Fang, W.; Wang, Q.; Sun, J. Differential grouping with spectral clustering for large scale global optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation, Wellington, New Zealand, 10–13 June 2019; pp. 334–341.
  12. Ismayilov, G.; Topcuoglu, H.R. Neural network-based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing. Future Gener. Comput. Syst. 2020, 102, 307–322.
  13. Cabrera, A.; Acosta, A.; Almieida, F. A dynamic multi-objective approach for dynamic load balancing in heterogeneous systems. IEEE Trans. Parallel Distrib. Syst. 2020, 31, 2421–2434.
  14. Zhang, Q.; Yang, S.; Jiang, S. Novel prediction strategies for dynamic multi-objective optimization. IEEE Trans. Evol. Comput. 2019, 24, 260–274.
  15. Cao, L.; Xu, L.; Goodman, E.D. Evolutionary dynamic multi-objective optimization assisted by a support vector regression predictor. IEEE Trans. Evol. Comput. 2019, 24, 305–319.
  16. Qu, B.; Li, G.; Guo, Q. A niching multi-objective harmony search algorithm for multimodal multi-objective problems. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation, Wellington, New Zealand, 10–13 June 2019; pp. 1267–1274.
  17. Liang, J.; Xu, W.; Yue, C. Multimodal multi-objective optimization with differential evolution. Swarm Evol. Comput. 2019, 44, 1028–1059.
  18. Qu, B.; Li, C.; Liang, J. A self-organized speciation-based multi-objective particle swarm optimizer for multimodal multi-objective problems. Appl. Soft Comput. 2020, 86, 105886.
  19. Ishibuchi, H.; Peng, Y.; Shang, K. A scalable multimodal multi-objective test problem. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation, Wellington, New Zealand, 10–13 June 2019; pp. 310–317.
  20. Kuppusamy, P.; Kumari, N.M.J.; Alghamdi, W.Y.; Alyami, H.; Ramalingam, R.; Javed, A.R.; Rashid, M. Job scheduling problem in fog-cloud-based environment using reinforced social spider optimization. J. Cloud Comput. 2022, 11, 99.
  21. Tang, X.; Liu, Y.; Deng, T.; Zeng, Z.; Huang, H.; Wei, Q.; Li, X.; Yang, L. A job scheduling algorithm based on parallel workload prediction on computational grid. J. Parallel Distrib. Comput. 2023, 171, 88–97.
  22. Jia, P.; Wu, T. A hybrid genetic algorithm for flexible job-shop scheduling problem. J. Xi'an Polytech. Univ. 2020, 10, 80–86.
  23. Cheng, R.; Gen, M.; Tsujimura, Y. A Tutorial Survey of Job-Shop Scheduling Problems using Genetic Algorithms-I. Representation. Comput. Ind. Eng. 1996, 30, 983–997.
  24. Cheng, R.; Gen, M.; Tsujimura, Y. A tutorial survey of job-shop scheduling problems using genetic algorithms, part II: Hybrid genetic search strategies. Comput. Ind. Eng. 1999, 36, 343–364.
  25. Chaudhry, I.; Khan, A. A research survey: Review of flexible job shop scheduling techniques. Int. Trans. Oper. Res. 2016, 23, 551–591.
  26. Kim, J. Developing a job shop scheduling system through integration of graphic user interface and genetic algorithm. Multimed. Tools Appl. 2015, 74, 3329–3343.
  27. Kim, J. Candidate Order based Genetic Algorithm (COGA) for Constrained Sequencing Problems. Int. J. Ind. Eng. Theory Appl. Pract. 2016, 23, 1–12.
  28. Park, J.; Ng, H.; Chua, T.; Ng, Y.; Kim, J. Unified Genetic Algorithm Approach for Solving Flexible Job-Shop Scheduling Problem. Appl. Sci. 2021, 11, 6454.
  29. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
  30. Pan, Q.; Tasgetiren, M.F.; Liang, Y.C. A discrete particle swarm optimization algorithm for the permutation flowshop sequencing problem with makespan criterion. In Research and Development in Intelligent Systems XXIII, SGAI 2006; Bramer, M., Coenen, F., Tuson, A., Eds.; Springer: London, UK, 2006; pp. 19–31.
  31. Lian, Z.; Gi, X.; Jiao, B. A novel particle swarm optimization algorithm for permutation flow-shop scheduling to minimize makespan. Chaos Solitons Fractals 2008, 35, 851–861.
  32. Shi, X.; Liang, Y.; Lee, H.; Lu, C.; Wang, Q. Particle swarm optimization-based algorithms for TSP and generalized TSP. Inf. Process. Lett. 2007, 103, 169–176.
  33. Abdel-Kader, R.F. Hybrid discrete PSO with GA operators for efficient QoS-multicast routing. Ain Shams Eng. J. 2011, 2, 21–31.
  34. Roberge, V.; Tarbouchi, M.; Labonte, G. Comparison of parallel genetic algorithm and particle swarm optimization for real-time UAV path planning. IEEE Trans. Ind. Inf. 2013, 9, 132–141.
  35. Xu, S.H.; Liu, J.P.; Zhang, F.H.; Wang, L.; Sun, L.J. A combination of genetic algorithm and particle swarm optimization for vehicle routing problem with time windows. Sensors 2015, 15, 21033–21053.
  36. Zheng, P.; Zhang, P.; Wang, M.; Zhang, J. A Data-Driven Robust Scheduling Method Integrating Particle Swarm Optimization Algorithm with Kernel-Based Estimation. Appl. Sci. 2021, 11, 5333.
  37. Chen, H.-W.; Liang, C.-K. Genetic Algorithm versus Discrete Particle Swarm Optimization Algorithm for Energy-Efficient Moving Object Coverage Using Mobile Sensors. Appl. Sci. 2022, 12, 3340.
  38. Fan, Y.-A.; Liang, C.-K. Hybrid Discrete Particle Swarm Optimization Algorithm with Genetic Operators for Target Coverage Problem in Directional Wireless Sensor Networks. Appl. Sci. 2022, 12, 8503.
  39. GY/T 280-2014; Specifications of Interface Data for Transmitting Station Operation Management System. State Administration of Press, Publication, Radio, Film and Television: Beijing, China, 2 November 2014.
  40. GY/T 290-2015; Specification of Code for Data Communication Interface of Radio and Television Transmitter. State Administration of Press, Publication, Radio, Film and Television: Beijing, China, 3 March 2015.
Figure 1. Model framework of DSTT.
Figure 2. Example of DSTT.
Figure 3. Example of particle motion.
Figure 4. Multi-algorithm evaluation value comparison chart.
Figure 5. Two-algorithm evaluation-value comparison chart of 100 tests.
Figure 6. Two-algorithm calculation-time comparison chart of 100 tests.
Table 1. Parameter table of DSTT mathematical model.

Parameter | Explanation
m | number of Schedule definition periods
n | number of equipment defined in Schedule
p | number of Schedule tasks
Task_k | task of the k-th Schedule, including <TaFr, Start, Span>
TaskResult_k | data description of the k-th result, including <Task, TransNo>
Va_ij | evaluation value of the transmitting effect of the i-th equipment in the j-th frequency band
FreqBand_ij | Schedule node data: frequency code of the i-th equipment working in the j-th time period
ScheduleResult | output result of the Schedule, composed of p TransNo code values
Table 2. Comparison of the number of successful tests of DPSO parameters.

IRF | PBF | GBF | 4 | 5 | 6 | 7 | Total
0.8 | 0.1 | 0.1 | 100 | 100 | 100 | 99 | 399
0.7 | 0.1 | 0.2 | 100 | 100 | 100 | 100 | 400
0.6 | 0.2 | 0.2 | 100 | 100 | 100 | 99 | 399
0.6 | 0.1 | 0.3 | 100 | 100 | 100 | 100 | 400
0.6 | 0.2 | 0.2 | 100 | 100 | 100 | 99 | 399
0.6 | 0.3 | 0.1 | 100 | 100 | 99 | 97 | 396
0.5 | 0.1 | 0.4 | 100 | 100 | 100 | 100 | 400
0.5 | 0.2 | 0.3 | 100 | 100 | 100 | 100 | 400
0.5 | 0.3 | 0.2 | 100 | 100 | 98 | 98 | 396
0.5 | 0.4 | 0.1 | 100 | 100 | 97 | 93 | 390
0.4 | 0.1 | 0.5 | 100 | 100 | 100 | 100 | 400
0.4 | 0.2 | 0.4 | 100 | 100 | 100 | 99 | 399
0.4 | 0.3 | 0.3 | 100 | 100 | 100 | 99 | 399
0.4 | 0.4 | 0.2 | 100 | 99 | 96 | 90 | 385
0.4 | 0.5 | 0.1 | 100 | 99 | 96 | 96 | 391
0.3 | 0.1 | 0.6 | 100 | 100 | 100 | 99 | 399
0.3 | 0.2 | 0.5 | 100 | 100 | 100 | 97 | 397
0.3 | 0.3 | 0.4 | 100 | 99 | 99 | 97 | 395
0.3 | 0.4 | 0.3 | 100 | 100 | 97 | 96 | 393
0.3 | 0.5 | 0.2 | 100 | 99 | 98 | 94 | 391
0.3 | 0.6 | 0.1 | 100 | 99 | 97 | 93 | 389
0.2 | 0.1 | 0.7 | 100 | 100 | 100 | 100 | 400
0.2 | 0.2 | 0.6 | 100 | 100 | 100 | 100 | 400
0.2 | 0.3 | 0.5 | 100 | 99 | 99 | 95 | 393
0.2 | 0.4 | 0.4 | 100 | 98 | 98 | 97 | 393
0.2 | 0.5 | 0.3 | 100 | 99 | 94 | 92 | 385
0.2 | 0.6 | 0.2 | 100 | 95 | 97 | 89 | 381
0.2 | 0.7 | 0.1 | 99 | 98 | 89 | 87 | 373
0.1 | 0.1 | 0.8 | 100 | 100 | 100 | 99 | 399
0.1 | 0.2 | 0.7 | 100 | 100 | 100 | 99 | 399
0.1 | 0.3 | 0.6 | 100 | 100 | 98 | 99 | 397
0.1 | 0.4 | 0.5 | 100 | 96 | 98 | 94 | 388
0.1 | 0.5 | 0.4 | 100 | 96 | 99 | 90 | 385
0.1 | 0.6 | 0.3 | 100 | 97 | 95 | 93 | 385
0.1 | 0.7 | 0.2 | 99 | 98 | 90 | 89 | 376
0.1 | 0.8 | 0.1 | 99 | 96 | 89 | 87 | 371
Table 3. Comparison table of iterations of DPSO.

IRF | PBF | GBF | 4 | 5 | 6 | 7 | Weighted Average
0.8 | 0.1 | 0.1 | 224 | 871 | 2655 | 21,824 | 3.9881
0.7 | 0.1 | 0.2 | 112 | 599 | 2217 | 10,548 | 2.3223
0.6 | 0.2 | 0.2 | 172 | 557 | 1650 | 8722 | 2.3816
0.6 | 0.1 | 0.3 | 121 | 591 | 1836 | 5129 | 1.9570
0.6 | 0.2 | 0.2 | 113 | 439 | 3923 | 9776 | 2.4696
0.6 | 0.3 | 0.1 | 178 | 452 | 3309 | 16,446 | 3.1171
0.5 | 0.1 | 0.4 | 149 | 512 | 1569 | 8560 | 2.1828
0.5 | 0.2 | 0.3 | 132 | 367 | 1469 | 5336 | 1.7257
0.5 | 0.3 | 0.2 | 118 | 432 | 2423 | 11,878 | 2.3078
0.5 | 0.4 | 0.1 | 112 | 307 | 6163 | 25,490 | 3.7425
0.4 | 0.1 | 0.5 | 122 | 522 | 1749 | 3791 | 1.7931
0.4 | 0.2 | 0.4 | 122 | 415 | 1270 | 7374 | 1.8020
0.4 | 0.3 | 0.3 | 124 | 334 | 975 | 8476 | 1.7363
0.4 | 0.4 | 0.2 | 119 | 780 | 8287 | 35,345 | 5.2982
0.4 | 0.5 | 0.1 | 109 | 1033 | 7520 | 21,687 | 4.5165
0.3 | 0.1 | 0.6 | 149 | 443 | 1619 | 7191 | 2.0399
0.3 | 0.2 | 0.5 | 110 | 453 | 1363 | 10,859 | 2.0050
0.3 | 0.3 | 0.4 | 127 | 843 | 3195 | 12,709 | 2.9862
0.3 | 0.4 | 0.3 | 104 | 351 | 6735 | 17,436 | 3.3739
0.3 | 0.5 | 0.2 | 754 | 628 | 4604 | 19,818 | 6.9327
0.3 | 0.6 | 0.1 | 103 | 1049 | 7460 | 22,095 | 4.5122
0.2 | 0.1 | 0.7 | 139 | 567 | 1450 | 4326 | 1.9032
0.2 | 0.2 | 0.6 | 109 | 415 | 1283 | 3175 | 1.4791
0.2 | 0.3 | 0.5 | 120 | 725 | 3347 | 14,749 | 2.9821
0.2 | 0.4 | 0.4 | 119 | 1037 | 4929 | 15,402 | 3.6608
0.2 | 0.5 | 0.3 | 109 | 554 | 9827 | 28,083 | 4.8906
0.2 | 0.6 | 0.2 | 107 | 4385 | 6990 | 37,551 | 8.7769
0.2 | 0.7 | 0.1 | 498 | 1632 | 16,425 | 46,448 | 10.6003
0.1 | 0.1 | 0.8 | 133 | 455 | 1978 | 5732 | 1.9499
0.1 | 0.2 | 0.7 | 115 | 378 | 1600 | 5762 | 1.6963
0.1 | 0.3 | 0.6 | 126 | 352 | 4289 | 4313 | 2.1971
0.1 | 0.4 | 0.5 | 146 | 2829 | 5560 | 13,826 | 5.6724
0.1 | 0.5 | 0.4 | 118 | 1822 | 2081 | 29,446 | 4.7190
0.1 | 0.6 | 0.3 | 116 | 3604 | 9047 | 19,192 | 7.3423
0.1 | 0.7 | 0.2 | 614 | 2266 | 15,314 | 39,792 | 11.2526
0.1 | 0.8 | 0.1 | 612 | 2278 | 18,466 | 36,460 | 11.7022
Table 4. Comparison of times required to obtain the best value.

IRF | PBF | GBF | Total (equipment counts 4–12)
0.8 | 0.1 | 0.1 | 3
0.7 | 0.1 | 0.2 | 4
0.6 | 0.2 | 0.2 | 3
0.6 | 0.1 | 0.3 | 4
0.6 | 0.2 | 0.2 | 3
0.6 | 0.3 | 0.1 | 2
0.5 | 0.1 | 0.4 | 6
0.5 | 0.2 | 0.3 | 4
0.5 | 0.3 | 0.2 | 2
0.5 | 0.4 | 0.1 | 2
0.4 | 0.1 | 0.5 | 6
0.4 | 0.2 | 0.4 | 3
0.4 | 0.3 | 0.3 | 3
0.4 | 0.4 | 0.2 | 1
0.4 | 0.5 | 0.1 | 1
0.3 | 0.1 | 0.6 | 5
0.3 | 0.2 | 0.5 | 3
0.3 | 0.3 | 0.4 | 1
0.3 | 0.4 | 0.3 | 2
0.3 | 0.5 | 0.2 | 1
0.3 | 0.6 | 0.1 | 1
0.2 | 0.1 | 0.7 | 5
0.2 | 0.2 | 0.6 | 4
0.2 | 0.3 | 0.5 | 1
0.2 | 0.4 | 0.4 | 1
0.2 | 0.5 | 0.3 | 1
0.2 | 0.6 | 0.2 | 1
0.2 | 0.7 | 0.1 | 0
0.1 | 0.1 | 0.8 | 3
0.1 | 0.2 | 0.7 | 3
0.1 | 0.3 | 0.6 | 2
0.1 | 0.4 | 0.5 | 1
0.1 | 0.5 | 0.4 | 1
0.1 | 0.6 | 0.3 | 1
0.1 | 0.7 | 0.2 | 0
0.1 | 0.8 | 0.1 | 0
Table 5. Comprehensive comparison table of parameter tests of DPSO.

 | IRF | PBF | GBF | Success Rate ([4, 7]) | Weighted Iteration ([4, 7]) | Maximum Number of Times ([4, 12])
DPSO1 | 0.5 | 0.1 | 0.4 | 100% | 2.1828 | 6
DPSO2 | 0.4 | 0.1 | 0.5 | 100% | 1.7931 | 6
DPSO3 | 0.3 | 0.1 | 0.6 | 99.75% | 2.0399 | 5
DPSO4 | 0.2 | 0.1 | 0.7 | 100% | 1.9032 | 5
DPSO5 | 0.2 | 0.2 | 0.6 | 100% | 1.4791 | 4
Table 6. DSTT comparison algorithm list.

Algorithm | Explanation
ENU | Enumeration algorithm: obtains the global-optimal solution by traversing all task running charts and calculating their evaluation values. As the number of equipment and time periods increases, the iteration number grows rapidly and the execution time becomes very long.
GR | Greedy algorithm: repeatedly selects, among the remaining tasks, the task that can obtain the best allocation, until all tasks are allocated. Its execution time is stable, its iteration number is related to the task periods, and conflicts encountered during allocation are resolved by backtracking [4].
IGA | Improved genetic algorithm: uses the elitist-retention strategy for selection, the discontinuous cyclic replacement group crossover strategy for crossover, and the overall equipment-task switching strategy for mutation. The three parameters are set to the algorithm's optimal parameter array: selection factor = 0.8, crossover factor = 0.1, mutation factor = 0.1, following previous research [3].
DPSO | Discrete particle swarm optimization algorithm: calculated with the five parameter sets in Table 5, with the corresponding statistical analyses.
Table 7. Multi-algorithm success number comparison table.

Eq. | ENU | GR | IGA | DPSO [min, max] | DPSO Avg. | DPSO Stdev.
4 | 100 | 25 | 100 | [100, 100] | 100 | 0.000000
5 | 100 | 8 | 100 | [100, 100] | 100 | 0.000000
6 | 100 | 5 | 100 | [99, 100] | 99.8 | 0.447214
7 | 100 | 2 | 99 | [98, 100] | 99.6 | 0.894427
8 | N/A | 1 | 89 | [99, 100] | 99.8 | 0.447214
9 | N/A | 0 | 70 | [92, 99] | 97.2 | 2.949576
10 | N/A | 0 | 62 | [93, 100] | 97.8 | 2.774887
11 | N/A | 0 | 35 | [93, 98] | 96.8 | 2.167948
12 | N/A | 0 | 39 | [83, 97] | 93.2 | 5.932959
13 | N/A | 0 | 20 | [90, 100] | 94.8 | 3.701351
14 | N/A | 0 | 4 | [77, 97] | 88.6 | 8.049845
15 | N/A | 0 | 10 | [67, 94] | 87.4 | 11.436783
Total | 400 | 41 | 728 | [1124, 1182] | 1155 | 26.870058
Table 8. Comparison table of evaluation values of multi-algorithms.

Eq. | GR | IGA | DPSO [min, max] | DPSO Avg. | DPSO Stdev.
4 | 0.815214 | 0.838134 | [0.838134, 0.838134] | 0.838134 | 0.000000
5 | 0.874183 | 0.895642 | [0.895642, 0.895642] | 0.895642 | 0.000000
6 | 0.856763 | 0.883459 | [0.883458, 0.883459] | 0.8834588 | 0.000000
7 | 0.899497 | 0.916141 | [0.916114, 0.916156] | 0.9161528 | 0.000007
8 | 0.892468 | 0.913266 | [0.91355, 0.913556] | 0.9135548 | 0.000003
9 | 0.858319 | 0.881545 | [0.882189, 0.882289] | 0.8822652 | 0.000043
10 | 0.874637 | 0.898784 | [0.89988, 0.899956] | 0.8999352 | 0.000031
11 | 0.873344 | 0.896935 | [0.898354, 0.898388] | 0.898379 | 0.000014
12 | 0.878253 | 0.902085 | [0.903259, 0.90338] | 0.9033514 | 0.000052
13 | 0.878391 | 0.914041 | [0.916594, 0.916635] | 0.9166176 | 0.000018
14 | 0.904408 | 0.931707 | [0.934612, 0.934725] | 0.9346716 | 0.000048
15 | 0.891888 | 0.917394 | [0.920402, 0.920621] | 0.920569 | 0.000094
Avg. | 0.874780 | 0.899094 | [0.900205, 0.900242] | 0.9002276 | 0.000018
Table 9. Multi-algorithm iteration number comparison table.

Eq. | IGA | DPSO [min, max] | DPSO Avg. | DPSO Stdev.
4 | 643 | [105, 132] | 116.4 | 10.0
5 | 2692 | [340, 552] | 446.8 | 88.4
6 | 7452 | [1674, 2088] | 1812.4 | 179.8
7 | 15,095 | [4355, 9186] | 5893.8 | 1963.8
8 | 74,431 | [7719, 11,447] | 9395.8 | 1465.2
9 | 266,112 | [36,169, 69,736] | 48,312.8 | 13,134.6
10 | 421,878 | [42,925, 88,540] | 63,079 | 16,922.1
11 | 899,103 | [108,956, 208,574] | 132,676.2 | 42,597.5
12 | 1,210,932 | [209,287, 407,190] | 309,613.4 | 90,217.6
13 | 2,211,539 | [362,240, 765,968] | 498,789 | 166,816.1
14 | 3,241,904 | [493,569, 1,381,226] | 841,163.8 | 328,465.6
15 | 4,141,849 | [735,817, 2,595,418] | 1,297,816.8 | 746,327.9
Table 10. Comparison of advantages and disadvantages of multiple algorithms.

Algorithm | Availability | Accuracy | Efficiency | Evaluation
ENU | high | N/A | N/A | bad
GR | low | low | N/A | bad
IGA | middle | high | low | better
DPSO | high | high | high | best
Table 11. Accuracy comparison data table of the three algorithms' simulation experiments.

Periods | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
GR | 0.893972 | 0.901704 | 0.913778 | 0.881862 | 0.879343 | 0.889435 | 0.906143 | 0.884089 | 0.896052
IGA | 0.917416 | 0.92164 | 0.934022 | 0.90367 | 0.904581 | 0.907794 | 0.932187 | 0.911542 | 0.920281
DPSO | 0.918031 | 0.922333 | 0.935052 | 0.904655 | 0.905095 | 0.909001 | 0.933586 | 0.912695 | 0.921244

Periods | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21
GR | 0.893406 | 0.881768 | 0.89654 | 0.907105 | 0.89874 | 0.860929 | 0.864331 | 0.894956 | 0.8873
IGA | 0.921019 | 0.910778 | 0.916936 | 0.93331 | 0.920762 | 0.88272 | 0.890827 | 0.921293 | 0.914671
DPSO | 0.922169 | 0.911839 | 0.91786 | 0.93453 | 0.921716 | 0.883748 | 0.891933 | 0.922291 | 0.915707

Periods | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30
GR | 0.885542 | 0.885767 | 0.885219 | 0.87728 | 0.880179 | 0.87955 | 0.852032 | 0.883815 | 0.863596
IGA | 0.910287 | 0.912008 | 0.910358 | 0.904637 | 0.912072 | 0.902095 | 0.884231 | 0.909096 | 0.888886
DPSO | 0.911239 | 0.913564 | 0.911396 | 0.906042 | 0.912925 | 0.90334 | 0.885226 | 0.910536 | 0.889882

Periods | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39
GR | 0.890629 | 0.89359 | 0.857689 | 0.866087 | 0.873645 | 0.883004 | 0.867741 | 0.88413 | 0.867158
IGA | 0.915545 | 0.91956 | 0.885552 | 0.889054 | 0.899454 | 0.910442 | 0.890941 | 0.914558 | 0.897373
DPSO | 0.916702 | 0.920654 | 0.886297 | 0.89005 | 0.900186 | 0.911575 | 0.891782 | 0.915296 | 0.898685

Periods | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48
GR | 0.862168 | 0.867621 | 0.86952 | 0.848084 | 0.876947 | 0.863869 | 0.875981 | 0.905513 | 0.858482
IGA | 0.884312 | 0.896044 | 0.895931 | 0.872597 | 0.907236 | 0.887478 | 0.899854 | 0.930317 | 0.889568
DPSO | 0.884925 | 0.896671 | 0.897014 | 0.873367 | 0.908403 | 0.888566 | 0.900728 | 0.931161 | 0.890556