
Progressive Archive in Adaptive jSO Algorithm

Department of Informatics and Computers, University of Ostrava, 30. Dubna 22, 70103 Ostrava, Czech Republic
Mathematics 2024, 12(16), 2534; https://doi.org/10.3390/math12162534
Submission received: 17 June 2024 / Revised: 5 August 2024 / Accepted: 14 August 2024 / Published: 16 August 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract
A common problem of optimisation methods is the stagnation of the population P, which results in a local solution of the task. This problem can be mitigated by employing an archive of good historical solutions outperformed by new, better offspring. The archive A was introduced with an adaptive variant of differential evolution (DE), and it has been successfully applied in many adaptive DE variants, including the efficient jSO algorithm. In the original jSO, historical good individuals replace randomly selected positions in A. Consequently, an outperformed historical solution from P with lower quality may replace a stored solution in A with better quality. In this paper, a new, more progressive approach to replacing individuals in archive A is proposed. Outperformed individuals from P replace solutions in the worse part of A, determined by the function value. The portion of A selected for replacement is controlled by an input parameter, and its setting is studied in this experiment. The proposed progressive archive is employed in the original jSO. Moreover, the Eigenvector transformation of the individuals for crossover is applied to increase efficiency on rotated optimisation problems. The efficiency of the proposed progressive archive and the Eigen crossover is evaluated using the set of 29 optimisation problems for CEC 2024 at various dimensionalities. All the experiments were performed on a standard PC, and the results were compared using standard statistical methods. The newly proposed algorithm with the progressive archive performs substantially better than the original jSO, especially when 20% or 40% of the worse individuals of A are set for replacement. The Eigen crossover further increases the performance of the proposed jSO algorithm with the progressive archive. The estimated time complexity illustrates the low computational demands of the proposed archive approach.

1. Introduction

Global optimisation research covers many fields of science, industry, and medicine, where maximal or minimal settings of an objective function are required. The objective function $f(x)$, $x = (x_1, x_2, \ldots, x_D) \in \mathbb{R}^D$, is defined on the search domain $\Omega$ bounded by box constraints, $\Omega = \prod_{j=1}^{D} [a_j, b_j]$, $a_j < b_j$, $j = 1, 2, \ldots, D$. The required solution of the problem is the global minimum point $x^*$, which fulfils the condition $f(x^*) \le f(x)$, $\forall x \in \Omega$.
The global optimisation problem can be solved by various optimisation techniques. In addition to standard analytical methods, the methods most widely used in practice are evolutionary algorithms (EAs). A popular representative of the EAs is the differential evolution (DE) algorithm, introduced by Storn and Price in 1995 [1,2]. DE is a very simple population-based optimiser that evolves a set of individuals to locate the solution of the task.
DE performs well when the problems are rather simple. For more complex optimisation problems, DE is often not able to provide the true solution. Many successful adaptive variants of DE have been introduced for various optimisation problems [3,4], but no single one can solve all possible problems best [5]. Similarly, no single DE setting outperforms all other settings across various optimisation problems [6].
Despite the generally good performance of the DE algorithm, there are many optimisation problems where this optimiser is not able to achieve even promising results. The reason is mostly stagnation of the population in the area of a local solution of the problem. In these cases, a return to good historical solutions can help the population leave the local-solution area. Thus, an archive of historical solutions was introduced into the DE algorithm. In this paper, a new progressive approach to storing good historical parent individuals in an archive is proposed for the adaptive jSO algorithm. This approach promises to keep and prioritise better individuals in the archive and thus provide better results. The rest of the paper is organised as follows. A brief description of the original jSO with applications is presented in Section 2. The newly proposed jSO algorithm is described in Section 3. The experimental settings, the results obtained, and their analysis are presented and discussed in Section 4 and Section 5. The conclusions are drawn in Section 6.

2. Adaptive Variant of jSO

In 2017, Brest et al. proposed a very successful adaptive variant of the DE algorithm called jSO [7]. This algorithm was the best-performing DE variant at the Congress on Evolutionary Computation (CEC) 2017 competition, where it placed second overall. The jSO algorithm is derived from the JADE, SHADE, L-SHADE, and iL-SHADE variants. Note that the first DE variant that used an archive of old successful individuals was JADE [8]. More details on the evolution of this algorithm, the jSO control parameters, and their settings are provided in [7]. The most important features of the original jSO algorithm are discussed below.
The jSO algorithm employs a newly developed weighted version of the popular current-to-pbest mutation strategy (current-to-pbest-w). In addition, an archive A is used to store outperformed parent individuals. Circular memories for adapting the main DE control parameters F and CR are inherited from the L-SHADE variant (initialised there with $\mu_{CR} = 0.8$ and $\mu_F = 0.5$). jSO uses the same initial value for $\mu_{CR}$, whereas F is adapted with a smaller initial $\mu_F = 0.3$. Furthermore, both mean values located at the last, Hth positions of the circular memories are fixed at $\mu_{CR} = \mu_F = 0.9$. In addition, a linear reduction in population size is used. The original paper [7] describes the differences between the jSO algorithm and the preceding variants. The jSO variant self-adapts its parameters during the search process; therefore, it can be regarded as parameter-free.
The steps of the original jSO are described in Algorithm 1. Initially, a population of N candidate solutions is generated randomly and evaluated by the objective function, and the circular memories for the F and CR parameters are allocated. For each solution, the mean values of the F and CR parameters, $M_F$ and $M_{CR}$, are selected using a roulette wheel. The control parameters of jSO are then generated from the Gaussian (CR) and Cauchy (F) distributions using these mean values. In addition, the values of the control parameters F and CR are truncated to certain limits based on the current step of the search process.
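The sampling and truncation of the control parameters can be sketched as follows (illustrative Python mirroring the corresponding lines of Algorithm 1; the experiments themselves were performed in MATLAB, and the handling of non-positive Cauchy samples follows the usual SHADE convention and is an assumption here):

```python
import numpy as np

rng = np.random.default_rng()

def sample_cr_f(m_cr, m_f, g, g_max):
    """Sample CR (Gaussian) and F (Cauchy) around the selected memory means,
    with the stage-dependent truncations used by jSO (see Algorithm 1)."""
    cr = 0.0 if m_cr < 0 else float(np.clip(rng.normal(m_cr, 0.1), 0.0, 1.0))
    if g < 0.25 * g_max:          # early generations enforce a high CR
        cr = max(cr, 0.7)
    elif g < 0.5 * g_max:
        cr = max(cr, 0.6)
    f = -1.0
    while f <= 0.0:               # Cauchy sample, resampled while non-positive
        f = m_f + 0.1 * rng.standard_cauchy()
    f = min(f, 1.0)
    if g < 0.6 * g_max and f > 0.7:
        f = 0.7
    return cr, f
```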
After setting the jSO control parameters, a mutant point is generated using the novel weighted mutation variant (1):

$u_i = x_i + F_w \, (x_{pBest} - x_i) + F \, (x_{r1} - x_{r2}),$

where $x_i$ is the current point; $x_{pBest}$ is a randomly selected point from the $p \times N$ best points of P; $x_{r1}$ is selected randomly from P; and $x_{r2}$ is selected randomly from the union of the population and the archive, $P \cup A$. The newly introduced element is the weight $F_w$, computed using the recommended rule (2):

$F_w = \begin{cases} 0.7 \times F, & \text{FES} < 0.2 \times \text{maxFES},\\ 0.8 \times F, & \text{FES} < 0.4 \times \text{maxFES},\\ 1.2 \times F, & \text{otherwise}. \end{cases}$
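A compact sketch of this mutation in Python is given below; it is illustrative only, and the distinctness checks ($r_1 \ne r_2 \ne i$) required by DE are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng()

def current_to_pbest_w(P, A, f_vals, i, F, p, FES, maxFES):
    """current-to-pbest-w/1 mutation of jSO, Equations (1) and (2)."""
    if FES < 0.2 * maxFES:        # F_w schedule of Equation (2)
        Fw = 0.7 * F
    elif FES < 0.4 * maxFES:
        Fw = 0.8 * F
    else:
        Fw = 1.2 * F
    N = len(P)
    n_best = max(2, int(round(p * N)))                 # p*N best points of P
    x_pbest = P[rng.choice(np.argsort(f_vals)[:n_best])]
    x_r1 = P[rng.integers(N)]                          # random point from P
    pool = P if len(A) == 0 else np.vstack([P, np.asarray(A)])
    x_r2 = pool[rng.integers(len(pool))]               # random point from P U A
    return P[i] + Fw * (x_pbest - P[i]) + F * (x_r1 - x_r2)
```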
Algorithm 1 jSO algorithm
  • archive $A \leftarrow \emptyset$
  • initialise population $P = \{x_1, x_2, \ldots, x_N\}$
  • set all values of $M_F$ to 0.5
  • set all values of $M_{CR}$ to 0.8
  • while stopping condition not reached do
  •     $S_{CR} \leftarrow \emptyset$, $S_F \leftarrow \emptyset$
  •    for $i = 1, 2, \ldots, N$ do
  •       $r_i \leftarrow$ select from $[1, H]$ randomly
  •      if $r_i = H$ then
  •          $M_{F,r_i} \leftarrow 0.9$
  •          $M_{CR,r_i} \leftarrow 0.9$
  •      end if
  •      if $M_{CR,r_i} < 0$ then
  •          $CR_i \leftarrow 0$
  •      else
  •          $CR_i \leftarrow N_i(M_{CR,r_i}, 0.1)$
  •      end if
  •      if $g < 0.25\,G_{max}$ then
  •          $CR_i \leftarrow \max(CR_i, 0.7)$
  •      else if $g < 0.5\,G_{max}$ then
  •          $CR_i \leftarrow \max(CR_i, 0.6)$
  •      end if
  •       $F_i \leftarrow C_i(M_{F,r_i}, 0.1)$
  •      if $g < 0.6\,G_{max}$ and $F_i > 0.7$ then
  •          $F_i \leftarrow 0.7$
  •      end if
  •       $y_i \leftarrow$ current-to-pbest-w/1/bin
  •      compute $f(y_i)$
  •    end for
  •    for $i = 1, 2, \ldots, N$ do
  •      if $f(y_i) \le f(x_i)$ then
  •          $x_i \leftarrow y_i$
  •      end if
  •      if $f(y_i) < f(x_i)$ then
  •          $x_i \rightarrow A$, $CR_i \rightarrow S_{CR}$, $F_i \rightarrow S_F$
  •      end if
  •      update $M_{CR}$ and $M_F$
  •      update population size
  •      update $p$
  •    end for
  • end while
The parameter p, which controls the portion of the best individuals from which the $x_{pBest}$ point is selected, is adapted during the search using Formula (3):

$p = \dfrac{p_{max} - p_{min}}{\text{maxFES}} \times \text{FES} + p_{min},$

where $p_{max}$ and $p_{min}$ are input parameters, maxFES is the total number of function evaluations per run, and FES is the current number of function evaluations. The authors of jSO recommend $p_{max} = 0.25$ and $p_{min} = 0.125$.
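As a quick illustration, Formula (3) translates directly into code (a sketch; the parameter names follow the text above):

```python
def adapt_p(FES, maxFES, p_min=0.125, p_max=0.25):
    """Formula (3): p grows linearly from p_min to p_max during the run."""
    return (p_max - p_min) / maxFES * FES + p_min
```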
After mutation, the new trial point $y_i$ is generated from the current point $x_i$ and the mutant point $u_i$ by the binomial crossover. The new trial point is evaluated and replaces the old parent solution $x_i$ only if $f(y_i) \le f(x_i)$. In the original DE algorithm, the outperformed parent solution is simply lost. The jSO variant instead uses an archive A of good old solutions, where the parent solutions replaced by trial solutions are stored. When the archive is full, newly outperformed parent individuals are placed at randomly selected positions.
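For contrast with the progressive variant proposed in Section 3, this original random-replacement policy can be sketched as follows (illustrative Python; the names are not taken from the author's MATLAB code):

```python
import numpy as np

rng = np.random.default_rng()

def select_and_archive(x_i, f_x, y_i, f_y, archive, archive_size):
    """One jSO selection step: the trial replaces the parent if not worse;
    a strictly better trial sends the old parent into archive A, overwriting
    a randomly chosen member when A is full."""
    if f_y < f_x:
        if len(archive) < archive_size:
            archive.append(x_i.copy())
        else:
            archive[rng.integers(len(archive))] = x_i.copy()
    return (y_i, f_y) if f_y <= f_x else (x_i, f_x)
```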

2.1. Eigen Crossover

The original jSO uses the standard binomial crossover, which provides promising results on various optimisation problems. However, a study in which several state-of-the-art DE variants were run with binomial or exponential crossover on real-world optimisation problems illustrates the high performance of the exponential crossover [9].
The use of the Eigen crossover in DE is inspired by the original papers in which the main idea of such a transformation was discussed. The Eigen transformation approach was first introduced in the CoBiDE variant, and this mechanism was an inspiration for this paper [10]. In CoBiDE, the Eigen transformation is used during the crossover operation to increase efficiency on problems defined by rotated objective functions. Similarly, Guo et al. proposed an Eigenvector-based crossover for DE [11].
In the original CoBiDE, at the beginning of each generation, the Eigenvalues (matrix D) and Eigenvectors (matrix B) are computed from the covariance matrix C, which is estimated from the better ps portion of the individuals in population P:

$C = B D^2 B^T.$

The current point $x_i$ and the mutant point $u_i$ are then transformed into the Eigen coordinate system:

$x_i' = B^{-1} x_i = B^T x_i,$
$u_i' = B^{-1} u_i = B^T u_i.$

In the Eigen coordinates, the binomial crossover is applied to produce a new solution $y_i'$, which is finally transformed back into the standard coordinate system:

$y_i = B y_i'.$
Note that the Eigen approach is applied for a whole generation only if a randomly generated number rand is lower than the second input parameter peig, which controls the proportion of generations using the Eigen transformation.
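The mechanism can be sketched as follows (illustrative Python with numpy; the ps and peig values are placeholders rather than the settings used in this paper):

```python
import numpy as np

rng = np.random.default_rng()

def binomial(X, U, CR):
    """Standard binomial crossover with one guaranteed mutant coordinate."""
    N, D = X.shape
    mask = rng.random((N, D)) < CR
    mask[np.arange(N), rng.integers(D, size=N)] = True
    return np.where(mask, U, X)

def eigen_binomial_crossover(P, f_vals, U, CR, ps=0.5, peig=0.4):
    """CoBiDE-style crossover: with probability peig, the whole generation is
    crossed over in the Eigen coordinate system of the better ps portion of P."""
    if rng.random() >= peig:
        return binomial(P, U, CR)
    best = P[np.argsort(f_vals)[: max(2, int(ps * len(P)))]]
    C = np.cov(best, rowvar=False)      # covariance of the better part of P
    _, B = np.linalg.eigh(C)            # eigenvectors as columns of B
    Pt, Ut = P @ B, U @ B               # rows transformed: x' = B^T x
    Yt = binomial(Pt, Ut, CR)           # crossover in Eigen coordinates
    return Yt @ B.T                     # back-transform: y = B y'
```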
Moreover, the results of the experimental study [12] illustrate a substantially higher performance of the successful jSO algorithm when the Eigen crossover approach (inspired by CoBiDE) is applied. The Eigen transformation applied to the crossover operation makes it possible to solve optimisation problems with highly correlated coordinates (i.e., rotated functions).

2.2. Motivation to Enhance the jSO Variant

Although the original jSO algorithm uses a fine-tuned setting of its control parameters, a parallel model of eight cooperating jSO algorithms enables substantially better performance when solving real-world optimisation problems [13].
Furthermore, the results of the comprehensive study [14], in which more than 20 JADE variants were applied to two sets of problems (CEC 2014 and CEC 2011), illustrate very promising conclusions. JADE-based variants (including jSO) are very robust optimisers in the case of artificial benchmark problems and also real-world problems.
In addition, the application of the archive for successful old individuals in the popular Harris Hawks algorithm allows significantly better results compared to the original variant [15]. The results were achieved on a set of 22 real-world problems; therefore, the practical performance of the archived individuals is undisputed.
The results of the aforementioned studies were the main motivation to enhance the archive of old successful individuals used in the jSO algorithm and thus increase its performance. Little prior effort has been devoted to the archive update mechanism itself; therefore, the newly proposed jSO employs a more progressive approach to updating the archive.

3. New jSO Variant with Progressive Archive

The results of the DE variants with an archive of historically successful individuals illustrate high performance on various optimisation problems. Random replacement of individuals in A is the approach most commonly used by the DE variants. In this case, the old successful individuals in the archive are replaced randomly, without any priority or limits. This enables a higher diversity of individuals in the archive and keeps more recent individuals for mutation. In contrast, in some situations, older or better historical individuals in the archive can help when the population is stuck in the area of a local solution. Therefore, a new progressive replacement scheme for historically successful individuals in archive A is proposed and described below.

3.1. Related Works

First, several existing approaches to storing old successful individuals in an archive are mentioned and discussed.
In 2010, Wang et al. proposed the idea of an elitist archive in DE to solve multi-objective optimisation problems [16]. The old solutions outperformed by better new offspring are moved into the archive, where three conditions govern the insertion of a new solution. If the solution is worse than all the members of the archive, it is rejected. If the solution is better than a part of the archive members, these members are removed from the archive and the solution is inserted. Finally, if the solution is not worse than any archive member, it is inserted into the archive.
In 2020, Li et al. proposed an elite archive for the DE algorithm [17]. The idea is to divide the archive into two parts, superior (better) and inferior (worse) based on the function value of the individuals. Then, the individuals from the superior or inferior part of the archive are selected to be used in the newly proposed mutation scheme. Unfortunately, there is no information on how the archive is divided, and the parts of the archive are not used to insert individuals.
In 2024, Cui et al. introduced a new adaptive DE with an archive reuse mechanism [18]. The algorithm employs elitism for the selection in the reproduction process, as in standard DE. Old outperformed parent individuals are then moved into the archive, where they are stored based on tournament selection. The proposed approach avoids random updates of individuals in the archive, which promises greater robustness.

3.2. Progressive Update of A

In the original jSO variant, a parent individual $x_i$ outperformed by a better offspring individual $y_i$ is inserted into the archive A. At the beginning of the search process, the archive is empty, and successful old individuals fill the empty positions. When the archive is full, newly incoming individuals are placed at randomly selected positions to keep the approach highly stochastic. This means that better solutions in A are replaced with the same probability as worse ones. The idea of the newly proposed archive approach is to employ a control mechanism that replaces rather the worse solutions in A (see Algorithm 2).
Algorithm 2 Progressive Archive
  • if $f(y_i) < f(x_i)$ then
  •    if $A$ is full then
  •       $sort(A)$
  •       $A \rightarrow A_{better} \cup A_{worse}$
  •       $A_{worse} \leftarrow x_i$
  •    else
  •       $A \leftarrow x_i$
  •    end if
  • end if
The proposed progressive approach to inserting and storing outperformed successful parent individuals in the archive of the jSO algorithm promises to maintain better historical solutions. When an old successful solution has to be stored after being replaced by its offspring and archive A is full, the archived individuals are ordered according to their function values, from better to worse. The individuals are then divided into two parts: better individuals ($A_{better}$) and worse individuals ($A_{worse}$). This means that the better part of A contains only historical solutions with lower function values than the worse part of A. The newly incoming successful individual $x_i$ is then inserted into a randomly selected position in the worse part, $A_{worse}$. In other words, the better solutions of A are never replaced during the replacement process. This approach makes it possible to maintain the best historical solutions while refreshing the worst individuals in archive A.
The portion of the archive reserved for worse individuals is controlled by an input parameter $A_p$, and its value is studied in this paper. The value of $A_p$ represents the relative part of archive A that contains the worst solutions. For example, when $A_p = 0.2$, then 20% of the worst individuals in A are candidates for replacement by the newly incoming old successful solution $x_i$ (the remaining 80% of better solutions in A remain unchanged). The scheme illustrating the dependency between $A_p$ and $A_{worse}$ is depicted in Figure 1. Compared with the original archive (jSO and jSOeig), where all individuals can be replaced, in the proposed approach only the part of A containing worse individuals is delineated for replacement.
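A minimal Python sketch of Algorithm 2 follows (illustrative only; the published implementation is in MATLAB). Instead of physically sorting A, it keeps the archived function values and picks a random slot among the worst $A_p$ portion, which is equivalent to the sort-and-split description above:

```python
import numpy as np

rng = np.random.default_rng()

def progressive_archive_insert(archive, f_archive, x_old, f_old,
                               archive_size, a_p=0.2):
    """Progressive update of A (Algorithm 2): a replaced parent overwrites a
    random member of the worse A_p portion; the best part of A is preserved."""
    if len(archive) < archive_size:       # archive not full: plain insert
        archive.append(x_old.copy())
        f_archive.append(f_old)
        return
    order = np.argsort(f_archive)         # indices from better to worse
    n_worse = max(1, int(round(a_p * archive_size)))
    j = rng.choice(order[-n_worse:])      # random slot inside A_worse
    archive[j] = x_old.copy()
    f_archive[j] = f_old
```

Note that the same routine with a_p = 1.0 reduces to the original random replacement, which makes the two policies easy to compare.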
Although the proposed approach is very simple and does not increase the time complexity of the original jSO algorithm, its estimated time complexity is also reported (Section 5.2). In the following text, the newly proposed method is called jSOa, or jSOaE when the Eigen crossover is used, because a new approach to the archive ('a') is employed.

4. Experiments

The test suite of 29 problems was proposed for the special session and competition on single-objective bound-constrained real-parameter numerical optimisation at the Congress on Evolutionary Computation (CEC) 2024. This session was intended as a competition of optimisation algorithms in which new variants of algorithms are introduced. The functions are described in [19], including the experimental settings required for the competition. The source code of the functions is also available on the website given in the report [19]. The CEC 2024 test functions are divided into four categories based on their difficulty:
  • unimodal functions, simple problems: $F_1$, $F_3$,
  • multimodal functions, with many local minima: $F_4$–$F_{10}$,
  • hybrid functions, difficult, considered close to real-world problems: $F_{11}$–$F_{20}$,
  • composition functions, very difficult, composed of several different functions: $F_{21}$–$F_{30}$.
All the algorithms are implemented in Matlab 2020b, and this environment was also used for the experiments. All the computations were carried out on a standard PC with Windows 11, an Intel(R) Core(TM) i7-4790 CPU at 3.6 GHz, and 16 GB RAM. The experimental settings follow the requirements given in the report [19], where the 29 minimisation problems are also defined. The search range (domain) for all test functions is $[-100, 100]^D$.
The experiments were carried out at four dimension levels, $D = 10, 30, 50, 100$, with 25 independent runs per test function. Each point in the population is evaluated by the cost function. The function-error value is computed as the difference between the function value of the best point found and the known optimal function value of the test problem. The algorithm is stopped when the prescribed number of function evaluations, MaxFES $= D \times 10^4$, is reached. The initial population size of all the compared algorithms is set to the same value, $N_{init} = 25 \times \log(D) \times \sqrt{D}$, as recommended by the authors of jSO. The size of archive A is set to $2.6 \times N$, where N is the current population size in jSO.
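For orientation, the following sketch computes these settings for the four dimensions (assuming the natural logarithm in the $N_{init}$ formula; this detail is adopted from the jSO recommendation and is not restated in this paper):

```python
import numpy as np

# Experimental constants per dimension (a sketch; see the setup above).
for D in (10, 30, 50, 100):
    n_init = int(round(25 * np.log(D) * np.sqrt(D)))   # initial population size
    max_fes = 10_000 * D                               # evaluation budget per run
    print(f"D={D:>3}: N_init={n_init}, archive size=2.6*N, MaxFES={max_fes}")
```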
The proposed approach uses the control parameter $A_p$, and its setting is studied experimentally. Four settings are chosen equidistantly, $A_p \in \{0.2, 0.4, 0.6, 0.8\}$. A lower value of $A_p$ means that a smaller part of the worse individuals in A can be replaced, and vice versa. The variants of the proposed algorithm are therefore abbreviated by combining the original name jSO, the symbol 'a' (new archive), the presence of the Eigen crossover ('E'), and a number indicating the parameter $A_p$ (e.g., jSOaE02 for the Eigen variant with $A_p = 0.2$).
The newly proposed progressive-archive approach in the jSO algorithm is evaluated on the aforementioned optimisation problems, where better efficiency is represented by a lower function value achieved in the search process. In addition, the computational complexity of the newly designed approach is estimated using the standard time-complexity tests proposed by the authors of the benchmark problems.

5. Results

Eight newly proposed variants of the jSOa algorithm were evaluated on 29 test problems at 4 dimension levels (that is, 116 test problems for each algorithm). Four variants employ the Eigen crossover and four the standard binomial crossover. The results achieved are compared with the original jSO and jSOeig algorithms to illustrate the performance of the newly proposed archive mechanism and its setting ($A_p$).
The experiment provides a huge amount of result data to analyse. To give an overview of 10 algorithms optimising 29 problems at 4 dimensions, an advanced statistical comparison is performed instead of standard descriptive values and plots. First, the absolute mean ranks from the Friedman tests of the ten jSO variants, at ten equidistant stages of the run, are provided for each dimension separately in Figure 2, Figure 3, Figure 4 and Figure 5. The medians (over the independent runs) of the minimum function values achieved are used.
The lines of the original jSO variants are printed in grey, and each $A_p$ setting has a different colour to distinguish the results. The algorithm lines employing the Eigen crossover are dashed, and the variants employing the binomial crossover are printed with solid lines. The values on the horizontal axis are marked with an asterisk (*) if the null hypothesis of the Friedman test is rejected (the significance level of the test is lower than $1 \times 10^{-2}$).
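The stage-wise comparison can be reproduced along the following lines (an illustrative Python sketch using scipy in place of the MATLAB statistics toolbox; results holds the medians for one stage):

```python
import numpy as np
from scipy import stats

def friedman_mean_ranks(results):
    """results: (n_problems, n_algorithms) medians of the achieved minima.
    Returns the Friedman p-value and the mean rank of each algorithm."""
    stat, p = stats.friedmanchisquare(*results.T)
    ranks = np.apply_along_axis(stats.rankdata, 1, results)  # rank per problem
    return p, ranks.mean(axis=0)      # lower mean rank = better algorithm
```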
It is obvious that at all dimensions, both original jSO variants achieved rather worse (often the worst) results, represented by the biggest mean rank over all 29 problems. In addition, the variability of the mean ranks of the compared algorithms decreases during the search process (with increasing stage). This results in the null hypothesis of the Friedman tests not being rejected in the later stages; it occurs because the population size of the algorithms is reduced linearly to a small value and the populations lose diversity and stagnate.
The purple dashed lines mostly show the least mean ranks, obtained by the proposed jSO variant with the Eigen crossover and $A_p = 0.2$ (20% of the worst points of A are replaced). When the portion of the worst individuals of A is increased (higher $A_p$), the mean ranks of the corresponding algorithms are higher (lower performance). Finally, in most stages and dimensions, the jSO variants using the Eigen crossover perform better than the variants with the binomial crossover.
The numerical representation of the mean ranks from the Friedman tests is presented in Table 1, Table 2, Table 3 and Table 4. The algorithms are ordered from the best to the worst according to the mean ranks achieved in the final stage of the search process. The least mean ranks for each dimension and stage are highlighted for better readability. Summarising these tables, several interesting facts are obvious. First, lower $A_p$ values (i.e., $A_p = 0.2$ or $A_p = 0.4$) mostly achieve better results than higher settings. This means that a stricter (more progressive) replacement of poorer individuals in A is preferable to the original approach, where all points are replaced with the same probability. Keeping the better solutions and replacing the worse ones leads to better results.
For $D = 10$, the original jSO achieves the worst rank of the ten methods, and the original jSOeig performs better than the proposed approach only for $A_p = 0.6$ and $0.8$. For $D = 30$, the jSOaE variant with $A_p = 0.6$ outperforms the variants with $A_p = 0.2$ and $0.4$, but only in the final stage. In addition, both original jSO variants achieved the worst results over all 29 problems.
For $D = 50$, the worst results in the final stage are achieved by the original jSO variants, along with the proposed versions that employ a wide replacement portion of A ($A_p = 0.8$). The Eigen crossover combined with a small replaceable portion of the worse individuals in A achieves the best results in most of the search stages.
The mean ranks for $D = 100$ also illustrate the high efficiency of the proposed progressive approach when only 40% (or 20%) of the worst individuals in A can be randomly replaced. The original jSOeig outperforms the proposed approach only when $A_p = 0.6$ or $0.8$ (that is, a smaller replaceable portion of the poorer individuals in A performs better).
The set of CEC 2024 problems is divided into four categories based on the complexity of the problems. Therefore, the best and worst mean ranks from the Kruskal–Wallis tests for each problem and dimension are provided in Table 5, Table 6, Table 7 and Table 8. The results are distinguished for the four categories of problems, and the null hypothesis is rejected when the computed significance level is less than $5 \times 10^{-2}$ (see the highlighted values in the column 'Sig.'). The number of problems in which the null hypothesis of equal performance was rejected is 2/4/8/9 for $D = 10, 30, 50, 100$. This means that the proposed approach increases the performance of the archive especially at higher dimensions.
An overview of these results is provided in Table 9, where the total numbers of first, second, third, and last positions of the algorithms are given for each dimension independently. Note that only separated positions are taken into account; shared positions are eliminated. The algorithms are ordered from the most frequently winning to the most frequently losing. Interestingly, the best optimisers are the proposed jSO algorithms with the Eigen crossover and a small $A_p$ setting, followed by the proposed variants using the binomial crossover. The worst results were achieved by the original jSO, which did not win even one optimisation problem and was the worst method in 30 out of 116 problems (26%). Finally, the proposed approach provides significantly better results in all four categories of optimisation problems. Note that in some problems, two or more algorithms share the same position; in such cases, the two methods are listed, or 'many' (more than two) or 'all' is stated.
The search process is divided into ten equidistant stages at which the best solution achieved is stored to illustrate progress via convergence curves. Several representative convergence curves for problems in which the algorithms produce different results are depicted in Figure 6, Figure 7, Figure 8 and Figure 9. A logarithmic scale is used to better illustrate the differences. The original jSO and jSOeig variants are highlighted by solid lines with symbols. The plots can be divided into two groups: those where some methods achieve the same best solution but at an earlier stage ($F_3$ at $D = 10$, $F_3$ at $D = 30$, and $F_{22}$ at $D = 50$), and those where some method achieves better results measured by the function value (the other cases). It is obvious that in some problems, the proposed approach provides substantially better results than the original jSO variants ($F_{19}$, $D = 10$ or $F_{21}$, $D = 10$).

5.1. Crossover Comparison

The aforementioned results illustrate the superiority of the jSO variants with the Eigen crossover. These algorithms use the mechanism introduced by the CoBiDE algorithm, where the binomial crossover in the standard coordinate system alternates with the binomial crossover in the Eigen coordinate system. Therefore, the relative success of the Eigen approach in producing new successful individuals is presented in Figure 10.
The Eigen crossover achieved very similar relative success in all compared jSOeig variants (the original and the four newly proposed versions). A value above 0.5 (that is, more than 50% of the successful individuals generated by the Eigen crossover rather than the binomial one) was achieved only in several problems and dimensions. The Eigen crossover performs well on the unimodal functions at $D = 30, 50$, on some hybrid functions at all dimensions, and on some composition functions at all dimensions. In addition, the Eigen approach performs substantially better than the binomial crossover at $D = 100$, where most parts of the success curves are above the 0.5 success ratio.

5.2. Time-Complexity

The authors of the CEC 2024 benchmark suite proposed an approach to estimating the time complexity of the algorithms [19]. Initially, the performance of the computer is estimated by evaluating 10,000 random solutions for each test problem and dimension, and the average time $T_1$ is computed over all problems for each dimension independently. Then, each algorithm solves each test problem at each dimension with a budget of FES = 10,000 evaluations, and the time $T_2$ is measured; the average $T_2$ is computed for each algorithm and dimension. Finally, the time complexity is computed as $(T_2 - T_1)/T_1$ and presented in Table 10, where the algorithms are ordered by the average complexity over all dimensions.
The compared algorithms achieved similar time-complexity values, and the least complex method is highlighted for each dimension. The proposed jSO variants with the progressive archive achieved similar or lower time complexity than the original jSO. Algorithms employing the Eigen crossover are slightly more complex because two transformations are performed for each solution.

5.3. Real-World Engineering Problem

The proposed mechanism was also applied to a real-world engineering problem, the tension/compression spring design (see Figure 11). The search space of this problem is limited by several inequality constraints; more detail is provided in study [20]. The minimal (optimal) function value of this problem is $f(x^*) = 0.012665232788$. The maximum number of function evaluations for this problem was FES = 9000, and the best solution was recorded at ten equidistant stages of the search process (that is, after $900, 1800, 2700, \ldots, 9000$ evaluations). Each algorithm performed 31 independent runs to obtain statistically robust results.
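For completeness, a sketch of one common formulation of this objective is given below (adapted from the general literature on the spring design benchmark, not transcribed from [20]; the bounds and constraint handling are left to the optimiser):

```python
import numpy as np

def spring_design(x):
    """Tension/compression spring design, a common formulation.
    x = (d, D, N): wire diameter, mean coil diameter, number of active coils."""
    d, D, N = x
    f = (N + 2.0) * D * d**2                       # spring weight to minimise
    g = np.array([                                 # feasible when all g <= 0
        1.0 - D**3 * N / (71785.0 * d**4),
        (4.0*D**2 - d*D) / (12566.0*(D*d**3 - d**4)) + 1.0/(5108.0*d**2) - 1.0,
        1.0 - 140.45 * d / (D**2 * N),
        (D + d) / 1.5 - 1.0,
    ])
    return f, g
```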
The results achieved by the algorithms on this problem are very similar, especially in the later stages of the search process. Therefore, the mean ranks of the Kruskal–Wallis test for all compared algorithms and each search stage independently are shown in Table 11. The standard significance symbols are applied, where '*' stands for $p < 0.05$, '**' for $p < 0.01$, and '***' for $p < 0.001$. The algorithms give significantly different results in most stages (except the second stage) and are ordered by the mean rank in the last stage. From the third stage onward, the best-performing optimiser is the proposed variant of jSO with the progressive archive approach and the Eigen crossover ($A_p = 0.2$).

6. Conclusions

This paper proposed a new progressive approach to replacing the individuals in archive A of the adaptive jSO algorithm. In this approach, only a portion of the worst individuals in A may be replaced by newly incoming outperformed successful solutions. The approach is controlled by the parameter $A_p$, and its setting was studied in this paper.
The original jSO algorithm and its variant using the Eigen crossover were compared with four new variants of jSO and four new variants of jSOeig based on four different values of $A_p$, $A_p \in \{0.2, 0.4, 0.6, 0.8\}$. The ten algorithms were experimentally applied to the 29 test problems of the CEC 2024 competition at 4 dimension levels. The results were analysed using standard statistical methods.
The Friedman tests were used to compare the algorithms over all 29 problems for each dimension independently. In summary, the proposed variants performed substantially better than the original jSO. The best results were achieved especially by the algorithms with $A_p = 0.2$ and $0.4$, where the Eigen crossover mostly provided better efficiency. For $D = 10$, the original jSO performed the worst, and jSOeig took the third position out of the 10 algorithms. For $D = 30$, both original jSO variants achieved the worst performance, and for $D = 50$, these variants were better only than the proposed mechanism with $A_p = 0.8$. Finally, for $D = 100$, jSO took the last position, and jSOeig was outperformed by the proposed method using $A_p = 0.2$ and $0.4$.
The comparison of the methods for each problem and dimension separately using the Kruskal–Wallis tests provides similar conclusions. The highest number of first positions of all methods was achieved by the proposed jSOeig employing A p = 0.2 , and the second best optimiser was the proposed jSOeig using A p = 0.4 .
Studying the convergence curves, there are a lot of problems where the proposed progressive archive approach enables us to achieve better accuracy or faster convergence compared to the original jSO variants.
Finally, the success of the employed Eigen crossover was studied for the original jSOeig and four derived proposed variants. The relative success of the Eigen crossover (compared to the binomial one) was very similar for all five jSOeig variants. Regarding the dimensionality of the problems, the highest performance was achieved for D = 100 where the Eigen crossover was successful for more than 50 % of the generated individuals in most of the test problems.
The proposed progressive mechanism for replacing the individuals in the archive of the jSO algorithm achieved very promising results without substantially higher complexity. In addition, it is possible to include this approach in any population-based evolutionary algorithm with an elitism strategy. Therefore, this approach will be studied and analysed in future research to illustrate its efficiency in various optimisation algorithms. Furthermore, a sensitivity analysis of the F and CR parameters will be performed in subsequent studies. The Matlab source code of the algorithm is available on GitHub at https://github.com/PetBuj/jSOa (accessed on 13 August 2024).

Funding

This research received no external funding.

Data Availability Statement

All the data were measured in MATLAB during the experiments.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Storn, R.; Price, K.V. Differential Evolution—A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces; Technical report; International Computer Science Institute: Berkeley, CA, USA, 1995; Available online: https://cse.engineering.nyu.edu/~mleung/CS909/s04/Storn95-012.pdf (accessed on 13 August 2024).
  2. Storn, R.; Price, K.V. Differential evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  3. Das, S.; Mullick, S.; Suganthan, P. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30. [Google Scholar] [CrossRef]
  4. Das, S.; Suganthan, P.N. Differential Evolution: A Survey of the State-of-the-Art. IEEE Trans. Evol. Comput. 2011, 15, 27–54. [Google Scholar] [CrossRef]
  5. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  6. Bujok, P.; Tvrdík, J. A comparison of various strategies in differential evolution. In Proceedings of the MENDEL, 17th International Conference on Soft Computing, Brno, Czech Republic, 10–12 July 2011; Matoušek, R., Ed.; Brno University of Technology: Brno, Czech Republic, 2011; pp. 48–55. [Google Scholar]
  7. Brest, J.; Maučec, M.S.; Bošković, B. Single objective real-parameter optimization: Algorithm jSO. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; pp. 1311–1318. [Google Scholar]
  8. Zhang, J.; Sanderson, A.C. JADE: Adaptive Differential Evolution with Optional External Archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  9. Bujok, P.; Tvrdík, J.; Poláková, R. Differential evolution with exponential crossover revisited. In Proceedings of the MENDEL, 22nd International Conference on Soft Computing, Brno, Czech Republic, 8–10 June 2016; Matoušek, R., Ed.; Brno University of Technology: Brno, Czech Republic, 2016; pp. 17–24. [Google Scholar]
  10. Wang, Y.; Li, H.X.; Huang, T.; Li, L. Differential evolution based on covariance matrix learning and bimodal distribution parameter setting. Appl. Soft Comput. 2014, 18, 232–247. [Google Scholar] [CrossRef]
  11. Guo, S.M.; Tsai, J.S.H.; Yang, C.C.; Hsu, P.H. A self-optimization approach for L-SHADE incorporated with eigenvector-based crossover and successful-parent-selecting framework on CEC 2015 benchmark set. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1003–1010. [Google Scholar]
  12. Bujok, P.; Poláková, R. Eigenvector Crossover in the Efficient jSO Algorithm. MENDEL-Soft Comput. J. 2019, 25, 65–72. [Google Scholar] [CrossRef]
  13. Bujok, P.; Poláková, R. Migration model of jSO algorithm. In Proceedings of the 2018 25th International Conference on Systems Signals and Image Processing (IWSSIP), Maribor, Slovenia, 20–22 June 2018. [Google Scholar]
  14. Piotrowski, A.P.; Napiorkowski, J.J. Step-by-step improvement of JADE and SHADE-based algorithms: Success or failure? Swarm Evol. Comput. 2018, 43, 88–108. [Google Scholar] [CrossRef]
  15. Bujok, P. Harris Hawks optimisation: Using of an archive. In Proceedings of the Artificial Intelligence and Soft Computing, Virtual, 21–23 June 2021; Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J.M., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 415–423. [Google Scholar]
  16. Wang, Y.N.; Wu, L.H.; Yuan, X.F. Multi-objective self-adaptive differential evolution with elitist archive and crowding entropy-based diversity measure. Soft Comput. 2010, 14, 193–209. [Google Scholar] [CrossRef]
  17. Li, Y.; Wang, S. Differential evolution algorithm with elite archive and mutation strategies collaboration. Artif. Intell. Rev. 2020, 53, 4005–4050. [Google Scholar] [CrossRef]
  18. Cui, Z.; Zhao, B.; Zhao, T.; Cai, X.; Chen, J. An adaptive differential evolution algorithm based on archive reuse. Inf. Sci. 2024, 668, 120524. [Google Scholar] [CrossRef]
  19. Qiao, K.; Wen, X.; Ban, X.; Chen, P.; Price, K.V.; Suganthan, P.N.; Liang, J.; Wu, G.; Yue, C. Evaluation Criteria for CEC 2024 Competition and Special Session on Numerical Optimization Considering Accuracy and Speed; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2023. [Google Scholar]
  20. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.P. Social Network Search for Solving Engineering Optimization Problems. Comput. Intell. Neurosci. 2021. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Scheme of dividing the individuals in archive A.
Figure 2. Mean ranks from the Friedman tests for all methods, ten stages, D = 10.
Figure 3. Mean ranks from the Friedman tests for all methods, ten stages, D = 30.
Figure 4. Mean ranks from the Friedman tests for all methods, ten stages, D = 50.
Figure 5. Mean ranks from the Friedman tests for all methods, ten stages, D = 100.
Figure 6. Convergence plots of the compared jSO variants.
Figure 7. Convergence plots of the compared jSO variants.
Figure 8. Convergence plots of the compared jSO variants.
Figure 9. Convergence plots of the compared jSO variants.
Figure 10. Relative success of the Eigen crossover in the jSOeig variants.
Figure 11. Tension/compression spring design problem.
Table 1. Mean ranks from the Friedman tests for each stage, D = 10. The mean rank of the best-performing method in each stage is highlighted.
Alg. | mr1 | mr2 | mr3 | mr4 | mr5 | mr6 | mr7 | mr8 | mr9 | mr10
jSOaE02 | 2.90 | 2.76 | 3.57 | 3.72 | 3.24 | 3.97 | 4.86 | 4.41 | 4.48 | 4.79
jSOa02 | 4.62 | 4.93 | 3.60 | 4.22 | 4.48 | 4.69 | 4.45 | 4.81 | 5.41 | 4.97
jSOeig | 7.31 | 6.97 | 7.76 | 7.40 | 6.83 | 6.97 | 6.34 | 5.88 | 5.66 | 5.28
jSOaE04 | 4.90 | 4.07 | 4.48 | 4.16 | 4.71 | 4.59 | 5.59 | 5.78 | 5.38 | 5.28
jSOa04 | 5.86 | 5.34 | 5.00 | 4.81 | 5.52 | 5.79 | 5.43 | 5.02 | 5.05 | 5.38
jSOaE06 | 3.59 | 3.55 | 4.72 | 4.24 | 4.50 | 4.31 | 4.52 | 5.12 | 5.19 | 5.45
jSOaE08 | 4.55 | 6.10 | 5.22 | 5.97 | 5.14 | 5.59 | 6.00 | 5.79 | 5.95 | 5.69
jSOa08 | 7.38 | 6.86 | 7.50 | 6.79 | 7.26 | 5.93 | 5.72 | 5.95 | 5.59 | 5.72
jSOa06 | 5.83 | 6.34 | 5.72 | 5.95 | 6.28 | 6.14 | 5.72 | 5.71 | 5.66 | 5.97
jSO | 8.07 | 8.07 | 7.41 | 7.74 | 7.05 | 7.03 | 6.36 | 6.53 | 6.64 | 6.48
Table 2. Mean ranks from the Friedman tests for each stage, D = 30.
Alg. | mr1 | mr2 | mr3 | mr4 | mr5 | mr6 | mr7 | mr8 | mr9 | mr10
jSOa02 | 5.59 | 4.17 | 4.86 | 5.12 | 4.43 | 5.07 | 4.69 | 4.62 | 4.17 | 4.29
jSOaE06 | 4.00 | 4.55 | 4.83 | 4.55 | 4.31 | 4.33 | 4.53 | 4.53 | 5.22 | 4.43
jSOaE02 | 3.66 | 3.48 | 3.45 | 3.21 | 4.26 | 4.86 | 4.45 | 4.72 | 5.24 | 4.84
jSOa04 | 5.90 | 5.66 | 5.79 | 5.86 | 5.71 | 6.02 | 6.07 | 6.21 | 6.33 | 5.36
jSOaE04 | 3.55 | 4.72 | 3.79 | 3.59 | 4.55 | 4.36 | 4.67 | 5.29 | 5.02 | 5.41
jSOaE08 | 5.34 | 4.86 | 5.07 | 4.93 | 5.50 | 5.07 | 5.21 | 4.84 | 5.07 | 5.52
jSOa08 | 6.79 | 6.76 | 6.48 | 6.28 | 6.26 | 5.74 | 5.71 | 5.67 | 5.67 | 5.97
jSOa06 | 5.86 | 6.14 | 6.07 | 6.22 | 6.38 | 6.78 | 6.81 | 6.67 | 5.40 | 6.12
jSO | 7.93 | 8.24 | 7.97 | 8.33 | 6.71 | 6.74 | 6.91 | 6.33 | 6.29 | 6.43
jSOeig | 6.38 | 6.41 | 6.69 | 6.91 | 6.90 | 6.03 | 5.95 | 6.10 | 6.59 | 6.62
Table 3. Mean ranks from the Friedman tests for each stage, D = 50.
Alg. | mr1 | mr2 | mr3 | mr4 | mr5 | mr6 | mr7 | mr8 | mr9 | mr10
jSOaE02 | 3.86 | 3.72 | 3.86 | 3.10 | 3.98 | 3.79 | 4.24 | 4.59 | 4.17 | 4.31
jSOaE04 | 4.00 | 4.38 | 3.45 | 4.12 | 4.71 | 4.66 | 4.50 | 4.33 | 4.98 | 4.81
jSOa02 | 6.41 | 5.21 | 4.69 | 5.52 | 5.26 | 5.64 | 5.09 | 5.10 | 5.34 | 5.19
jSOa04 | 4.97 | 5.00 | 5.48 | 5.59 | 5.34 | 5.52 | 5.36 | 5.26 | 4.84 | 5.21
jSOa06 | 5.66 | 6.41 | 6.24 | 5.91 | 5.48 | 6.21 | 5.91 | 6.22 | 5.76 | 5.29
jSOaE06 | 4.52 | 4.76 | 4.00 | 4.31 | 4.47 | 4.83 | 5.78 | 5.74 | 5.78 | 5.72
jSO | 7.59 | 7.76 | 7.97 | 7.47 | 7.71 | 7.10 | 6.17 | 6.41 | 6.31 | 5.83
jSOaE08 | 4.41 | 5.24 | 6.03 | 4.95 | 5.19 | 4.90 | 5.45 | 5.69 | 5.97 | 6.00
jSOeig | 7.10 | 6.31 | 6.55 | 6.91 | 6.16 | 5.84 | 5.72 | 5.55 | 5.98 | 6.02
jSOa08 | 6.48 | 6.21 | 6.72 | 7.12 | 6.71 | 6.52 | 6.78 | 6.10 | 5.86 | 6.62
Table 4. Mean ranks from the Friedman tests for each stage, D = 100.
Alg. | mr1 | mr2 | mr3 | mr4 | mr5 | mr6 | mr7 | mr8 | mr9 | mr10
jSOaE04 | 3.76 | 4.24 | 3.93 | 3.97 | 3.79 | 4.98 | 4.26 | 4.50 | 4.07 | 4.16
jSOaE02 | 4.79 | 4.79 | 4.31 | 4.72 | 4.93 | 4.29 | 5.05 | 5.22 | 4.19 | 4.33
jSOa04 | 5.59 | 5.79 | 5.59 | 5.34 | 5.14 | 5.26 | 5.14 | 4.98 | 5.19 | 4.79
jSOeig | 5.72 | 5.59 | 5.41 | 5.90 | 5.97 | 6.05 | 5.31 | 5.47 | 6.00 | 5.17
jSOaE06 | 4.31 | 3.59 | 4.41 | 4.97 | 4.91 | 5.19 | 5.12 | 5.50 | 6.33 | 5.57
jSOaE08 | 5.69 | 5.34 | 5.41 | 4.90 | 6.41 | 5.28 | 6.24 | 5.66 | 5.50 | 5.67
jSOa02 | 5.66 | 5.14 | 5.14 | 5.52 | 4.69 | 4.62 | 4.93 | 5.28 | 4.69 | 5.76
jSOa06 | 5.86 | 5.93 | 6.21 | 5.79 | 5.66 | 5.67 | 6.40 | 6.03 | 5.66 | 6.26
jSOa08 | 6.00 | 6.79 | 7.03 | 6.76 | 6.91 | 6.69 | 6.02 | 5.64 | 6.22 | 6.50
jSO | 7.62 | 7.79 | 7.55 | 7.14 | 6.59 | 6.97 | 6.53 | 6.72 | 7.16 | 6.79
Table 5. Number of separated wins, second, third, and last positions from the Kruskal–Wallis tests, D = 10. Bold highlights the significant results.
F | Sig. | 1st | 2nd | 3rd | Last
1 | ≈1 | all
3 | ≈1 | all
4 | ≈1 | all
5 | 0.686681 | jSOa04, jSOa08 | jSOa06, jSOaE02 | - | jSOaE08
6 | 0.107235 | jSOa08 | jSO | jSOeig | jSOaE08
7 | 0.115037 | jSOa04 | jSOaE02 | jSOaE06 | jSOaE08
8 | 0.17034 | jSOa04 | jSOa04 | jSOa04 | jSOaE04
9 | ≈1 | jSO | jSO | jSO | jSO
10 | 0.848352 | jSOa04 | jSOa08 | jSO | jSOa02
11 | 0.437274 | many | - | - | jSO
12 | 0.388367 | jSOa02 | jSOeig | jSOaE04 | jSOa06
13 | 0.094096 | jSOeig | jSOaE02 | jSOa08 | jSOa06
14 | 0.200429 | jSOeig, jSOa08, jSOaE04, jSOaE06 | - | - | jSOaE08
15 | 0.549917 | jSOa04 | jSOeig | jSOa08 | jSOaE04
16 | 0.035464 | jSOa02 | jSOa04 | jSOa06 | jSOaE04
17 | 0.630946 | jSOaE02 | jSOeig | jSOa04 | jSOa06
18 | 0.032977 | jSOaE02 | jSOaE06 | jSOa08 | jSOa04
19 | 0.452916 | jSOa02 | jSOaE02 | jSOaE08 | jSOaE06
20 | 0.360579 | many | - | - | jSOaE04
21 | 0.143888 | jSOaE08 | jSOa06 | jSOa02 | jSOeig
22 | ≈1 | all
23 | 0.07724 | jSOaE02 | jSOaE04 | jSOaE06 | jSOa06
24 | 0.347575 | jSOa02 | jSOeig | jSOaE02 | jSOa06
25 | 0.479006 | jSOaE08 | jSOa02 | jSOa08 | jSOeig
26 | ≈1 | all
27 | 0.107235 | many | - | - | jSOa04
28 | 0.303762 | jSOaE02 | jSOaE02 | jSOa06 | jSOa08
29 | 0.925272 | jSOa04 | jSO | jSOaE02 | jSOeig
30 | 0.927393 | jSOaE06, jSOaE08 | jSOeig | - | jSOa06
Table 6. Number of separated wins, second, third, and last positions from the Kruskal–Wallis tests, D = 30.
F | Sig. | 1st | 2nd | 3rd | Last
1 | 0.150729 | jSOa08, jSOaE06 | jSOa06, jSOaE02 | - | jSOeig
3 | 4.59 × 10^{-2} | jSOaE02 | jSOa02 | jSOaE04, jSOaE06 | jSO
4 | ≈1 | all
5 | 0.014887 | jSOa04 | jSOaE06 | jSOa06 | jSOaE02
6 | 0.180602 | jSOa02 | jSOa04, jSOaE06, jSOaE08 | - | jSOeig
7 | 0.190544 | jSOa02 | jSOaE04 | jSOa06 | jSO
8 | 0.180116 | jSOaE02 | jSOa02 | jSOa06 | jSOa08
9 | 0.517442 | jSO | jSO | jSO | jSOa08
10 | 0.981578 | jSOaE04 | jSOa04 | jSO | jSOa06
11 | 0.753728 | jSOa08 | jSOaE06 | jSO | jSOaE02
12 | 1.15 × 10^{-1} | jSOaE08 | jSOaE06 | jSOaE02 | jSOa02
13 | 0.702548 | jSOa02 | jSOaE06 | jSOa06 | jSO
14 | 0.705056 | jSOaE02 | jSOaE04 | jSOa02 | jSOa08
15 | 0.021141 | jSOa08 | jSOa02 | jSOa04 | jSOaE04
16 | 0.137264 | jSOaE02 | jSOeig | jSOaE06 | jSOaE04
17 | 0.14336 | jSOaE06 | jSO | jSO | jSOa08
18 | 1.60 × 10^{-3} | jSOaE06 | jSOaE04 | jSOeig | jSOa08
19 | 0.579603 | jSOa04 | jSOa06 | jSOaE04 | jSOeig
20 | 0.967382 | jSOa08 | jSOaE06 | jSOa02 | jSOa06
21 | 0.925725 | jSOa02 | jSOa08 | jSOaE04 | jSO
22 | ≈1 | all
23 | 0.110236 | jSOaE04 | jSOa02 | jSOaE02 | jSOeig
24 | 6.29 × 10^{-2} | jSOaE02 | jSOaE08 | jSOa04 | jSOeig
25 | 0.857949 | jSO | jSOa02 | jSOaE04 | jSOeig
26 | 0.145442 | jSOaE02 | jSOa06 | jSOa02 | jSOa08
27 | 2.92 × 10^{-1} | jSOaE02 | jSOa02 | jSOa04 | jSOeig
28 | 0.375086 | many | - | - | jSOeig
29 | 0.056887 | jSOa04 | jSOaE04 | jSOa08 | jSOeig
30 | 0.68587 | jSOaE06 | jSOaE08 | jSOa08 | jSOa06
Table 7. Number of separated wins, second, third, and last positions from the Kruskal–Wallis tests, D = 50. The significant results (p < 0.05) are highlighted for each problem.
F | Sig. | 1st | 2nd | 3rd | Last
1 | 0.008621 | jSOaE02 | jSOaE06 | jSOaE04 | jSO
3 | 4.59 × 10^{-2} | jSOaE04 | jSOaE02, jSOaE08 | - | jSOa06
4 | 0.168605 | jSOa06 | jSOa08 | jSOeig | jSOaE08
5 | 0.864667 | jSOeig | jSOaE04 | jSO | jSOa06
6 | 0.0697 | jSOa04 | jSO | jSOa06 | jSOaE06
7 | 0.296987 | jSOa04 | jSO | jSOa08 | jSOaE08
8 | 0.503089 | jSOa06 | jSOaE06 | jSOa02 | jSOa04
9 | 0.255206 | jSO, jSOeig, jSOa08 | - | - | jSOa06
10 | 0.374339 | jSOa06 | jSOa04 | jSO | jSOaE04
11 | 0.454657 | jSOaE02 | jSOaE04 | jSOa02 | jSOaE06
12 | 5.29 × 10^{-3} | jSOaE02 | jSOaE06 | jSOaE04 | jSO
13 | 0.765838 | jSOa06, jSOaE06 | jSOaE04 | - | jSOaE08
14 | 0.595938 | jSOa06 | jSOaE02 | jSOa04 | jSOaE06
15 | 0.043031 | jSOaE02 | jSOaE08 | jSOaE06 | jSOa06
16 | 0.032626 | jSOa06 | jSO | jSOaE04 | jSOa04
17 | 0.957322 | jSOa06 | jSOaE02 | jSOaE04 | jSOa04
18 | 7.91 × 10^{-2} | jSOaE04 | jSOaE08 | jSOaE02 | jSOa04
19 | 0.457847 | jSOa08 | jSO | jSOaE04 | jSOaE08
20 | 0.960644 | jSOaE04 | jSOa04 | jSOa02 | jSO
21 | 0.083268 | jSOaE06 | jSOa04 | jSOaE08 | jSOaE02
22 | 0.785874 | jSOaE08 | jSOa08 | jSOaE02 | jSOaE06
23 | 0.236608 | jSOaE02 | jSOaE04 | jSOeig | jSOa02
24 | 6.63 × 10^{-3} | jSOa02 | jSOa04 | jSOaE02 | jSOa08
25 | 0.137881 | jSOaE02 | jSOa02 | jSOaE08 | jSOaE04
26 | 0.024149 | jSOaE02 | jSOa02 | jSOa04 | jSOeig
27 | 2.23 × 10^{-3} | jSOaE02 | jSOa02 | jSOa04 | jSO
28 | ≈1 | all
29 | 0.47038 | jSOaE06 | jSOa08 | jSOa06 | jSOaE02
30 | 0.055988 | jSOaE02 | jSOa06 | jSOaE04 | jSOaE06
Table 8. Number of separated wins, second, third, and last positions from the Kruskal–Wallis tests, D = 100.
F | Sig. | 1st | 2nd | 3rd | Last
1 | 0.002948 | jSOa02 | jSOaE02 | jSOa04 | jSOeig
3 | 9.83 × 10^{-7} | jSOaE02 | jSOaE04 | jSOaE06 | jSO
4 | 0.759668 | jSO | jSOa06, jSOa08 | - | jSOaE02
5 | 0.259987 | jSOa08 | jSOaE06 | jSOeig | jSOa02
6 | 0.001901 | jSOa08 | jSOa02 | jSO | jSOaE06
7 | 0.375612 | jSOeig | jSOa04 | jSOaE06 | jSOaE04
8 | 0.67108 | jSO | jSOaE08 | jSOaE04 | jSOaE02
9 | 0.437274 | many | - | - | jSOaE08
10 | 0.532392 | jSOa04 | jSOaE08 | jSOaE02 | jSOaE06
11 | 0.170542 | jSOaE08 | jSO | jSOa04 | jSOa08
12 | 1.10 × 10^{-2} | jSOaE04 | jSOeig | jSOaE06 | jSOa06
13 | 0.697136 | jSOa04 | jSOa02 | jSOa06 | jSOeig
14 | 0.065702 | jSOaE04 | jSOaE06 | jSOaE02 | jSO
15 | 0.057776 | jSOaE02 | jSOaE04 | jSOaE08 | jSO
16 | 0.235117 | jSOaE04 | jSOeig | jSOaE06 | jSOa08
17 | 0.548003 | jSOa04 | jSOeig | jSOaE02 | jSOa08
18 | 3.27 × 10^{-2} | jSOaE04 | jSOaE06 | jSOaE02 | jSOa04
19 | 0.028153 | jSOaE02 | jSOaE04 | jSOa02, jSOa04 | jSOa08
20 | 0.555476 | jSOaE02 | jSOeig | jSOaE04 | jSOa02
21 | 0.136233 | jSOaE02 | jSO | jSOeig | jSOaE04
22 | 0.014564 | jSOa08 | jSO | jSOaE08 | jSOaE02
23 | 0.162436 | jSOa04 | jSOaE04 | jSOa08 | jSOaE08
24 | 3.69 × 10^{-3} | jSOaE02 | jSOa02 | jSOaE04 | jSOaE08
25 | 0.754937 | jSOa02 | jSOaE02 | jSOeig | jSOa04
26 | 0.329319 | jSOaE02 | jSOaE04 | jSOa02 | jSO
27 | 7.68 × 10^{-5} | jSOaE04 | jSOaE02 | jSOa02, jSOa04 | jSOeig
28 | 0.248385 | jSOaE04 | jSOa02 | jSOa08 | jSOaE02
29 | 0.242766 | jSOaE02 | jSOaE04 | jSOaE08 | jSOa04
30 | 0.228661 | jSOaE04 | jSOa02 | jSOaE06 | jSOaE08
Table 9. Number of first, second, third, and the last separate positions from the Kruskal–Wallis tests and D = 10/30/50/100/all. The most frequently winning algorithm for each dimension is highlighted.
Alg. | 1st | 2nd | 3rd | Last
jSOaE02 | 9/5/8/1/23 | 0/1/2/4/7 | 1/3/0/4/8 | 0/0/0/0/0
jSOaE04 | 0/1/2/8/11 | 1/4/6/1/12 | 3/1/2/1/7 | 0/0/0/0/0
jSOaE06 | 0/2/0/1/3 | 4/3/1/1/9 | 2/4/4/2/12 | 0/0/0/0/0
jSOa02 | 1/2/0/0/3 | 4/2/0/3/9 | 1/1/3/1/6 | 0/0/0/0/0
jSOa04 | 0/0/0/0/0 | 1/0/1/1/3 | 1/0/0/1/2 | 0/0/0/0/0
jSOaE08 | 0/0/0/0/0 | 0/0/0/0/0 | 1/1/1/1/4 | 0/0/0/0/0
jSOa06 | 0/0/0/0/0 | 0/0/0/0/0 | 0/0/0/0/0 | 0/2/0/0/2
jSOeig | 0/0/0/0/0 | 0/0/0/0/0 | 0/0/0/0/0 | 1/3/0/0/4
jSOa08 | 0/0/0/0/0 | 0/0/0/0/0 | 0/0/0/0/0 | 1/0/2/1/4
jSO | 0/0/0/0/0 | 0/0/0/0/0 | 0/0/0/0/0 | 8/5/8/9/30
Table 10. Estimated time-complexity of the compared algorithms (in seconds).
Algorithm | D = 10 | D = 30 | D = 50 | D = 100
T1 | 0.03 | 0.08 | 0.16 | 0.58
jSOa02 | 15.98 | 9.48 | 9.36 | 15.49
jSOa08 | 17.13 | 9.37 | 8.47 | 15.68
jSOa04 | 17.58 | 9.75 | 8.76 | 14.68
jSOa06 | 16.53 | 10.01 | 8.88 | 16.14
jSOaE08 | 17.85 | 10.14 | 9.22 | 14.50
jSO | 16.22 | 10.57 | 9.98 | 15.40
jSOaE06 | 18.61 | 11.44 | 9.12 | 13.56
jSOaE02 | 18.13 | 10.38 | 9.29 | 15.57
jSOeig | 18.80 | 11.49 | 9.26 | 15.01
jSOaE04 | 18.01 | 11.58 | 10.72 | 14.94
Table 11. Mean ranks from the Kruskal–Wallis tests for the compressive spring design problem for each algorithm and stage.
Alg. | mr1 | mr2 | mr3 | mr4 | mr5 | mr6 | mr7 | mr8 | mr9 | mr10
jSOaE02 | 161.5 | 134.4 | 105.8 | 96.4 | 72.4 | 68.3 | 66.4 | 74.5 | 81.5 | 97.3
jSOaE04 | 165.6 | 149 | 130.4 | 110.8 | 112.3 | 95.2 | 98.3 | 89.3 | 111.5 | 106.5
jSOaE08 | 175.5 | 155.2 | 192.1 | 160.9 | 180.2 | 166.1 | 176.2 | 150.3 | 132.1 | 118.4
jSOaE06 | 151.1 | 150.6 | 154.7 | 138.4 | 148.3 | 136.6 | 137.8 | 132.9 | 113.8 | 121.8
jSOeig | 86.4 | 129.4 | 176 | 174.7 | 171.6 | 174.9 | 179.9 | 185.6 | 162.9 | 139.3
jSOa04 | 168.9 | 159.8 | 138.9 | 136.3 | 127 | 132.7 | 134.9 | 146.1 | 159.7 | 166.1
jSOa08 | 192.7 | 180.8 | 175.9 | 211.9 | 192.4 | 203.3 | 193.9 | 181.2 | 186.8 | 181.5
jSOa02 | 150.1 | 159.2 | 118.9 | 120.1 | 115.7 | 133.4 | 138.7 | 152.4 | 163 | 182.3
jSO | 74.6 | 123.1 | 141.5 | 198.3 | 216.7 | 214.4 | 221 | 215.5 | 209.3 | 195.3
jSOa06 | 178.6 | 163.7 | 170.8 | 157.3 | 168.5 | 180.2 | 157.9 | 177.3 | 184.6 | 196.7
Sign. | *************************