Article

Introducing a Parallel Genetic Algorithm for Global Optimization Problems

by Vasileios Charilogis and Ioannis G. Tsoulos *
Department of Informatics and Telecommunications, University of Ioannina, 45110 Ioannina, Greece
* Author to whom correspondence should be addressed.
AppliedMath 2024, 4(2), 709-730; https://doi.org/10.3390/appliedmath4020038
Submission received: 11 May 2024 / Revised: 5 June 2024 / Accepted: 6 June 2024 / Published: 10 June 2024
(This article belongs to the Special Issue Optimization and Machine Learning)

Abstract

The topic of efficiently finding the global minimum of multidimensional functions is widely applicable to numerous problems in the modern world. Many algorithms have been proposed to address these problems, among which genetic algorithms and their variants are particularly notable. Their popularity is due to their exceptional performance in solving optimization problems and their adaptability to various types of problems. However, genetic algorithms require significant computational resources and time, prompting the need for parallel techniques. Moving in this research direction, a new global optimization method is presented here that exploits the use of parallel computing techniques in genetic algorithms. This innovative method employs autonomous parallel computing units that periodically share the optimal solutions they discover. Increasing the number of computational threads, coupled with solution exchange techniques, can significantly reduce the number of calls to the objective function, thus saving computational power. Also, a stopping rule is proposed that takes advantage of the parallel computational environment. The proposed method was tested on a broad array of benchmark functions from the relevant literature and compared with other global optimization techniques regarding its efficiency.

1. Introduction

Typically, the task of locating the global minimum [1] of a function $f: S \rightarrow \mathbb{R}$, $S \subset \mathbb{R}^n$, is defined as follows:
$$x^* = \arg\min_{x \in S} f(x) \qquad (1)$$
where the set $S$ is defined as
$$S = [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_n, b_n]$$
The values $a_i$ and $b_i$ are the left and right bounds, respectively, for the variable $x_i$. A systematic review of the optimization procedure can be found in the work of Fouskakis [2].
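For concreteness, the box-constrained problem of Equation (1) can be represented in code as follows. This is a minimal C++ sketch for exposition only; the struct name Problem and the helper samplePoint are illustrative and are not part of any cited implementation.

```cpp
// Minimal sketch (not from any cited implementation): a box-constrained
// objective f : S -> R with S = [a_1,b_1] x ... x [a_n,b_n].
#include <functional>
#include <random>
#include <vector>

struct Problem {
    std::vector<double> a, b;                              // left and right bounds per coordinate
    std::function<double(const std::vector<double>&)> f;   // objective function
};

// Draw a uniformly distributed random point inside the box S,
// as done when initializing a population of candidate solutions.
std::vector<double> samplePoint(const Problem &p, std::mt19937 &gen) {
    std::vector<double> x(p.a.size());
    for (std::size_t i = 0; i < x.size(); ++i) {
        std::uniform_real_distribution<double> dist(p.a[i], p.b[i]);
        x[i] = dist(gen);
    }
    return x;
}
```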
The previously defined problem has been tackled using a variety of methods, which have been successfully applied to a wide range of problems in various fields, such as medicine [3,4], chemistry [5,6], physics [7,8,9], economics [10,11], etc. Global optimization methods are divided into two main categories: deterministic and stochastic methods [12]. The first category includes interval methods [13,14], where the set $S$ is iteratively divided into subregions, and those that do not contain the global solution are discarded based on predefined criteria. Many related works have been published in the area of deterministic methods, including the work of Maranas and Floudas, who proposed a deterministic method for chemical problems [15], the TRUST method [16], the method suggested by Evtushenko and Posypkin [17], etc. In the second category, the search for the global minimum is based on randomness. Stochastic optimization methods are commonly used because they are easier to program and do not depend on any prior information about the objective problem. Some stochastic optimization methods that have been used by researchers include ant colony optimization [18,19], controlled random search [20,21,22], particle swarm optimization [23,24,25], simulated annealing [26,27,28], differential evolution [29,30], and genetic algorithms [31,32,33]. Finally, there is a plethora of research referring to metaheuristic algorithms [34,35,36], offering new perspectives and solutions to problems in various fields.
The current work proposes a series of modifications in order to effectively parallelize the widely adopted method of genetic algorithms for solving Equation (1). Genetic algorithms, initially proposed by John Holland, constitute a fundamental technique in the field of stochastic methods [37]. Inspired by biology, these algorithms simulate the principles of evolution, including genetic mutation, natural selection, and the exchange of genetic material [38]. The integration of genetic algorithms with machine learning has proven effective in addressing complex problems and validating models. This interaction is highlighted in applications such as the design and optimization of 5G networks, contributing to path loss estimation and improving performance in indoor environments [39]. It is also applied to optimizing the movement of digital robots [40] and conserving energy in industrial robots with two arms [41]. Additionally, genetic algorithms have been employed to find optimal operating conditions for motors [42], optimize the placement of electric vehicle charging stations [43], manage energy [44], and have applications in other fields such as medicine [45,46] and agriculture [47].
Although genetic algorithms have proven to be effective, the optimization process requires significant computational resources and time. This emphasizes the necessity of implementing parallel techniques, as the execution of algorithms is significantly accelerated by the combined use of multiple computational resources. Modern parallel programming techniques include the message-passing interface (MPI) [48] and the OpenMP library [49]. Parallel programming techniques have also been applied to global optimization in various cases, such as the combination of simulated annealing and parallel techniques [50], the use of parallel methods in particle swarm optimization [51], the incorporation of radial basis functions in parallel stochastic optimization [52], etc. One of the main advantages of genetic algorithms over other global optimization techniques is that they can be easily parallelized to exploit modern computing units as well as the previously mentioned parallel programming techniques.
In the relevant literature, two major categories of parallel genetic algorithms appear, namely, island genetic algorithms and cellular genetic algorithms [53]. The island model is a parallel genetic algorithm (PGA) that manages several subpopulations on separate islands and executes the genetic algorithm process on each island simultaneously for a different set of solutions. Island models have been utilized in various cases, such as molecular sequence alignment [54], the quadratic assignment problem [55], the placement of sensors/actuators in large structures [56], etc. Also, recently, Tsoulos et al. proposed an implementation of an island PGA [57]. In the cellular model of parallel genetic algorithms, solutions are organized into a grid. Genetic operators, such as crossover and mutation, are applied to neighboring regions within the grid. For each solution, an offspring is created that replaces it within its neighborhood. The model is flexible regarding the structure of the grid, neighborhood strategies, and settings. Implementations may involve multiple processors or graphics processing units, with information exchange possible through physical communication networks. The theory of parallel genetic algorithms has been thoroughly presented by a number of researchers in the literature [58,59]. Also, parallel genetic algorithms have been incorporated in combinatorial optimization [60].
The proposed method is based on the island technique and suggests a number of improvements to the general scheme of parallel Genetic Algorithms. Among these improvements are a series of techniques for propagating optimal solutions among islands that aim to speed up the convergence of the overall algorithm. In addition, the individual islands of the genetic algorithm periodically apply a local minimization technique with two goals: to discover the most accurate local minima of the objective function and to speed up the convergence of the overall algorithm without wasting computing power on previously discovered function values. Furthermore, an efficient termination rule based on asymptotic considerations, which was validated across a series of global optimization methods, is also incorporated into the current algorithm. The proposed method was applied to a series of problems appearing in the relevant literature. The experimental results indicate that the new method can effectively find the global minimum of the functions in a large percentage of cases, and the above modifications significantly accelerated the discovery of the global minimum as the number of individual islands in the genetic algorithm increased.
The remainder of the article follows this structure: In Section 2, the genetic algorithm is analyzed, and the parallelization, dissemination techniques (PT or migration methodologies), and termination criteria are discussed. Subsequently, in Section 3, the test functions used are presented in detail, along with the experimental results. Finally, in Section 4, some conclusions are outlined, and future explorations are formulated.

2. Method Description

This section begins with a detailed description of the base genetic algorithm and continues by providing the details of the suggested modifications.

2.1. The Genetic Algorithm

Genetic algorithms are inspired by natural selection and the process of evolution. In their basic form, they start with an initial population of chromosomes, representing possible solutions to a specific problem. Each chromosome is encoded as a vector of genes, and its length is equal to the dimension of the problem. The algorithm processes these solutions through iterative steps, replicating and evolving the population of solutions. In each generation, the selected solutions are crossed and mutated to improve their fit to the problem. As generations progress, the population converges toward solutions with improved fit to the problem. Important factors affecting genetic algorithm performance include population size, selection rate, crossover and mutation probabilities, and strategic replacement of solutions. The choice of these parameters affects the ability of the algorithm to explore the solution space and converge to the optimal result. Subsequently, the operation of the genetic algorithm is presented through the replication and advancement of solution populations, step by step [61,62]. The steps of a typical genetic algorithm are shown in Algorithm 1.

2.2. Parallelization of Genetic Algorithm and Propagation Techniques

In the parallel island model of Figure 1, an evolving population is divided into various "islands", each working concurrently to optimize a specific set of solutions. In this figure, each island implements a separate genetic algorithm as described in Section 2.1. The steps of the overall algorithm are also presented through a series of steps in Algorithm 2. In contrast to classical parallelization, which handles a central population, the island model features decentralized populations evolving independently. Each island exchanges information with others at specific points in evolution through migration, where solutions move from one island to another, influencing the overall convergence toward the optimal solution. Migration settings determine how often migrations occur and which solutions are selected for exchange. Each island can follow a similar search strategy, but for more variety or faster convergence, different approaches can be employed. Islands may have identical or diverse strategies, providing flexibility and efficiency in exploring the solution space. To implement this parallel model, each island is connected to a computational resource. For instance, as depicted in Figure 2, the execution of the parallel island model involves five islands, each managing a distinct set of solutions using five processor units (PUs). During the migration process, information related to solutions is exchanged among PUs. Figure 2 also depicts the four different techniques for spreading the chromosomes with the best functional values. In Figure 2a, we observe the migration of the best chromosomes from one island to another (randomly chosen). In Figure 2b, migration occurs from a randomly chosen island to all others. In Figure 2c, it occurs from all islands to a randomly chosen one, and finally, in Figure 2d, migration occurs from each island to all others.
Algorithm 1 The steps of the genetic algorithm.
  • Initialization step.
    (a)
    Set $N_c$ as the number of chromosomes.
    (b)
    Set $N_g$ as the maximum number of allowed generations.
    (c)
    Initialize randomly $N_c$ chromosomes in $S$. Each chromosome denotes a potential solution to the problem of Equation (1).
    (d)
    Set as $p_s$ the selection rate of the algorithm, with $p_s \le 1$.
    (e)
    Set as $p_m$ the mutation rate, with $p_m \le 1$.
    (f)
    Set k = 0 as the generation counter.
  • Fitness calculation step.
    (a)
    For every chromosome $g_i$, $i = 1, \dots, N_c$, calculate the fitness $f_i = f(g_i)$ of chromosome $g_i$.
  • Selection step. The chromosomes are sorted with respect to their fitness values. Denote as $N_b$ the integer part of $(1 - p_s) \times N_c$, i.e., the number of chromosomes with the lowest fitness values. These chromosomes will be copied to the next generation. The rest of the chromosomes will be replaced by offspring created in the crossover procedure. Each offspring is created from two chromosomes (parents) of the population through the tournament selection process. The procedure for tournament selection is as follows: a set of $N_t > 1$ randomly selected chromosomes is formed, and the individual with the lowest fitness value from this set is selected as a parent.
  • Crossover step. Two selected solutions (parents) are combined to create new solutions (offspring). During crossover, genes are exchanged between parents, introducing diversity. For each selected pair of parents $(z, w)$, two additional chromosomes, represented by $\tilde{z}$ and $\tilde{w}$, are generated through the following equations (a code sketch of this operator is given after the algorithm):
    $$\tilde{z}_i = a_i z_i + (1 - a_i) w_i, \qquad \tilde{w}_i = a_i w_i + (1 - a_i) z_i$$
    where $i = 1, \dots, n$. The values $a_i$ are uniformly distributed random numbers, with $a_i \in [-0.5, 1.5]$ [63].
  • Replacement step.
    (a)
    For $i = N_b + 1$ to $N_c$ do
    Replace $g_i$ using the next offspring created in the crossover procedure.
    (b)
    End For
  • Mutation step. Some genes in the offspring are randomly modified. This introduces more diversity into the population and helps identify new solutions.
    (a)
    For every chromosome $g_i$, $i = 1, \dots, N_c$:
    • For each element $j = 1, \dots, n$ of $g_i$, a uniformly distributed random number $r \in [0, 1]$ is drawn. The element is altered randomly if $r \le p_m$.
    (b)
    End For
  • Set $k = k + 1$. If the termination criterion defined in the work of Tsoulos [64], which is outlined in Section 2.3, is met, or $k > N_g$, then go to the local search step; otherwise, return to the fitness calculation step.
  • Local search step. To improve the success in finding better solutions, a process of local optimization search is implemented. In the present study, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) variant proposed by Powell [65] is employed as the local search procedure. This procedure is applied to the chromosome in the population with the lowest fitness value.
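For illustration, the crossover and mutation operators of Algorithm 1 could be coded as in the following C++ sketch. The range of the crossover weights follows the text ($a_i$ uniform in $[-0.5, 1.5]$); redrawing a mutated gene uniformly inside its bounds is one common choice and is an assumption here, since the text only states that the element is altered randomly.

```cpp
// Illustrative sketch of the crossover and mutation operators of Algorithm 1.
// Assumptions: a_i ~ U[-0.5, 1.5]; a mutated gene is redrawn uniformly in its bounds.
#include <random>
#include <utility>
#include <vector>

using Chromosome = std::vector<double>;
static std::mt19937 rng(12345);   // fixed seed, only for reproducibility of the sketch

// Crossover: two parents (z, w) produce two offspring, gene by gene.
std::pair<Chromosome, Chromosome> crossover(const Chromosome &z, const Chromosome &w) {
    std::uniform_real_distribution<double> alpha(-0.5, 1.5);
    Chromosome zt(z.size()), wt(w.size());
    for (std::size_t i = 0; i < z.size(); ++i) {
        double a = alpha(rng);
        zt[i] = a * z[i] + (1.0 - a) * w[i];
        wt[i] = a * w[i] + (1.0 - a) * z[i];
    }
    return {zt, wt};
}

// Mutation: with probability pm, gene i is redrawn uniformly in [a[i], b[i]].
void mutate(Chromosome &g, const std::vector<double> &a,
            const std::vector<double> &b, double pm) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (std::size_t i = 0; i < g.size(); ++i) {
        if (u(rng) <= pm) {
            std::uniform_real_distribution<double> range(a[i], b[i]);
            g[i] = range(rng);
        }
    }
}
```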
Algorithm 2 The overall algorithm.
  • Set $N_I$ as the total number of parallel processing units.
  • Set $N_R$ as the number of generations after which each processing unit will send its best chromosomes to the remaining processing units.
  • Set $N_P$ as the number of migrated chromosomes between the parallel processing units.
  • Set $P_T$ as the propagation technique.
  • Set  k = 0 as the generation number.
  • For $j = 1, \dots, N_I$, perform in parallel
    (a)
    Execute a generation of the GA algorithm described in Algorithm 1 on processing unit j.
    (b)
    If $k \bmod N_R = 0$, then
    • Obtain the best $N_P$ chromosomes from algorithm $j$.
    • Propagate these $N_P$ chromosomes to the rest of the processing units using a propagation scheme that will be subsequently described.
    (c)
    End If
  • End For
  • Update  k = k + 1
  • Check the proposed termination rule. If the termination rule is valid, then proceed to the termination step below; otherwise, return to the parallel execution step.
    (a)
    Terminate and report the best value from all processing units. Apply a local search procedure to this located value to enhance the located global minimum.
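The island loop of Algorithm 2 can be illustrated with the following self-contained OpenMP sketch. It is not the OPTIMUS implementation: the per-island "generation" is deliberately simplified to a toy improvement step on the sphere function, and only the parallel structure and the synchronous, NtoN-style migration every $N_R$ generations are meant to mirror the algorithm.

```cpp
// Self-contained OpenMP sketch of the island loop of Algorithm 2 (illustrative only).
// Each island evolves its own population; every NR generations the best chromosome
// of every island is broadcast to all islands and replaces worse solutions.
#include <omp.h>
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

using Chrom = std::vector<double>;

static double sphere(const Chrom &x) {            // toy objective function
    double s = 0.0;
    for (double v : x) s += v * v;
    return s;
}

static bool better(const Chrom &a, const Chrom &b) { return sphere(a) < sphere(b); }

int main() {
    const int NI = 4, NC = 20, NG = 100, NR = 10, dim = 5;
    std::vector<std::vector<Chrom>> pop(NI, std::vector<Chrom>(NC, Chrom(dim)));
    std::mt19937 gen(1);
    std::uniform_real_distribution<double> init(-10.0, 10.0);
    for (auto &island : pop)
        for (auto &c : island)
            for (double &v : c) v = init(gen);

    for (int k = 1; k <= NG; ++k) {
        #pragma omp parallel for                   // one island per processing unit
        for (int j = 0; j < NI; ++j) {
            std::mt19937 local(1000 * k + j);
            std::uniform_real_distribution<double> step(-0.5, 0.5);
            Chrom best = *std::min_element(pop[j].begin(), pop[j].end(), better);
            for (auto &c : pop[j]) {               // deliberately simplified "generation"
                Chrom trial = best;
                for (double &v : trial) v += step(local);
                if (sphere(trial) < sphere(c)) c = trial;
            }
        }
        if (k % NR == 0) {                         // synchronous migration step
            std::vector<Chrom> migrants;
            for (auto &island : pop)
                migrants.push_back(*std::min_element(island.begin(), island.end(), better));
            for (auto &island : pop)               // replace worst solutions with migrants
                for (const auto &m : migrants) {
                    auto worst = std::max_element(island.begin(), island.end(), better);
                    if (sphere(m) < sphere(*worst)) *worst = m;
                }
        }
        // the termination rule of Section 2.3 would be checked here
    }

    double best = 1e100;
    for (auto &island : pop)
        for (auto &c : island) best = std::min(best, sphere(c));
    std::printf("best value found: %g\n", best);
    return 0;
}
```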
The migration or propagation techniques, as described in this study, are performed periodically and synchronously, every $N_R$ generations, on each processing unit. Below are the migration techniques that could be defined:
  • 1to1: Optimal solutions migrate from a random island to another random one, replacing the worst solutions (see Figure 2a).
  • 1toN: Optimal solutions migrate from a random island to all others, replacing the worst solutions (see Figure 2b).
  • Nto1: All islands send their optimal solutions to a random island, replacing the worst solutions (see Figure 2c).
  • NtoN: All islands send their optimal solutions to all other islands, replacing the worst solutions (see Figure 2d).
If we assume that the migration method "1toN" is executed, then a randomly chosen island transfers chromosomes to all other islands, except for itself. However, the label "N" is kept instead of "N-1" because the chromosomes also remain on the island that sends them. The number of solutions participating in the migration and replacement process is fully customizable and will be discussed in the experiments below.
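The choice of sending and receiving islands under the four schemes can be sketched as follows. This is illustrative C++; the enum and function names are not from the actual implementation, and at least two islands are assumed.

```cpp
// Illustrative sketch: which (source, destination) island pairs take part in one
// migration round under each propagation technique. Assumes NI >= 2.
#include <random>
#include <utility>
#include <vector>

enum class Propagation { OneToOne, OneToN, NToOne, NToN };

std::vector<std::pair<int, int>> migrationPairs(Propagation pt, int NI, std::mt19937 &gen) {
    std::uniform_int_distribution<int> pick(0, NI - 1);
    std::vector<std::pair<int, int>> pairs;   // (source island, destination island)
    switch (pt) {
    case Propagation::OneToOne: {             // 1to1: random island -> another random island
        int src = pick(gen), dst;
        do { dst = pick(gen); } while (dst == src);
        pairs.push_back({src, dst});
        break;
    }
    case Propagation::OneToN: {               // 1toN: random island -> all other islands
        int src = pick(gen);
        for (int d = 0; d < NI; ++d)
            if (d != src) pairs.push_back({src, d});
        break;
    }
    case Propagation::NToOne: {               // Nto1: all islands -> one random island
        int dst = pick(gen);
        for (int s = 0; s < NI; ++s)
            if (s != dst) pairs.push_back({s, dst});
        break;
    }
    case Propagation::NToN:                   // NtoN: every island -> every other island
        for (int s = 0; s < NI; ++s)
            for (int d = 0; d < NI; ++d)
                if (s != d) pairs.push_back({s, d});
        break;
    }
    return pairs;
}
```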

2.3. Termination Rule

The termination criterion employed in this study was originally introduced in the research conducted by Tsoulos [64], and it is formulated as follows:
  • In each generation $k$, the chromosome $g^*$ with the best functional value $f(g^*)$ is retrieved from the population. If this value does not change for a number of generations, then the algorithm should probably terminate.
  • Consider $\sigma^{(k)}$ as the associated variance of the quantity $f(g^*)$ at generation $k$. The algorithm terminates when
    $$k \ge N_g \quad \text{or} \quad \sigma^{(k)} \le \frac{\sigma^{(k_{\mathrm{last}})}}{2}$$
    where $k_{\mathrm{last}}$ is the last generation in which a lower value of $f(g^*)$ was discovered.
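An illustrative implementation of this rule is sketched below, assuming that $\sigma^{(k)}$ is the variance of the sequence of best values recorded up to generation $k$. The guard on sigmaLast, which avoids terminating before any variation has been observed, is an assumption of this sketch and not part of the published rule.

```cpp
// Illustrative sketch of the stopping rule of Section 2.3.
#include <vector>

class StoppingRule {
    std::vector<double> bestHistory;   // f(g*) at every generation
    double sigmaLast = 0.0;            // sigma at the last generation that improved f(g*)
    double bestSoFar = 1e100;
public:
    // Call once per generation; returns true when the search should terminate.
    bool shouldStop(double bestValue, int k, int Ng) {
        bestHistory.push_back(bestValue);
        double mean = 0.0;
        for (double v : bestHistory) mean += v;
        mean /= bestHistory.size();
        double var = 0.0;
        for (double v : bestHistory) var += (v - mean) * (v - mean);
        var /= bestHistory.size();                 // sigma(k)
        if (bestValue < bestSoFar) {               // generation k becomes k_last
            bestSoFar = bestValue;
            sigmaLast = var;
        }
        return k >= Ng || (sigmaLast > 0.0 && var <= sigmaLast / 2.0);
    }
};
```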

3. Experiments

A series of benchmark functions from the relevant literature is introduced here, along with the conducted experiments and a discussion of the experimental results.

3.1. Test Functions

To assess the effectiveness of the proposed method in locating the overall minimum of functions, a set of well-known test functions cited in the relevant literature [66,67] was employed. The functions used here are as follows:
  • The Bent cigar function is defined as follows:
    $$f(x) = x_1^2 + 10^6 \sum_{i=2}^{n} x_i^2$$
    with the global minimum $f(x^*) = 0$. For the conducted experiments, the value $n = 10$ was used.
  • The Bf1 function (Bohachevsky 1) is defined as follows:
    $$f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos(3\pi x_1) - \frac{4}{10}\cos(4\pi x_2) + \frac{7}{10}$$
    with $x \in [-100, 100]^2$.
  • The Bf2 function (Bohachevsky 2) is defined as follows:
    $$f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos(3\pi x_1)\cos(4\pi x_2) + \frac{3}{10}$$
    with $x \in [-50, 50]^2$.
  • The Branin function is given by
    $$f(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos(x_1) + 10$$
    with $-5 \le x_1 \le 10$ and $0 \le x_2 \le 15$.
  • The CM function. The cosine mixture function is given by the following:
    $$f(x) = \sum_{i=1}^{n} x_i^2 - \frac{1}{10}\sum_{i=1}^{n} \cos(5\pi x_i)$$
    with $x \in [-1, 1]^n$. The value $n = 4$ was used in the conducted experiments.
  • Discus function. The function is defined as follows:
    $$f(x) = 10^6 x_1^2 + \sum_{i=2}^{n} x_i^2$$
    with global minimum $f(x^*) = 0$. For the conducted experiments, the value $n = 10$ was used.
  • The Easom function. The function is given by the following equation:
    $$f(x) = -\cos(x_1)\cos(x_2)\exp\left( -(x_2 - \pi)^2 - (x_1 - \pi)^2 \right)$$
    with $x \in [-100, 100]^2$.
  • The exponential function. The function is given by the following:
    $$f(x) = -\exp\left( -0.5 \sum_{i=1}^{n} x_i^2 \right), \quad -1 \le x_i \le 1$$
    The global minimum is situated at $x^* = (0, 0, \dots, 0)$, with a value of $-1$. In our experiments, we applied this function for $n = 4, 16, 64, 100$ and referred to the respective instances as EXP4, EXP16, EXP64, and EXP100.
  • Griewank2 function. The function is given by the following:
    $$f(x) = 1 + \frac{1}{200}\sum_{i=1}^{2} x_i^2 - \prod_{i=1}^{2} \frac{\cos(x_i)}{\sqrt{i}}, \quad x \in [-100, 100]^2$$
The global minimum is located at $x^* = (0, 0)$ with a value of 0.
  • Gkls function. $f(x) = \mathrm{Gkls}(x, n, w)$ is a function with $w$ local minima, described in [68], with $x \in [-1, 1]^n$, and $n$ a positive integer between 2 and 100. The value of the global minimum is $-1$, and in our experiments, we used $n = 2, 3$ and $w = 50, 100$.
  • Hansen function. $f(x) = \sum_{i=1}^{5} i\cos\left[ (i-1)x_1 + i \right] \sum_{j=1}^{5} j\cos\left[ (j+1)x_2 + j \right]$, $x \in [-10, 10]^2$. The global minimum of the function is $-176.541793$.
  • Hartman 3 function. The function is given by the following:
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij}\left(x_j - p_{ij}\right)^2 \right)$$
    with $x \in [0, 1]^3$ and
    $$a = \begin{pmatrix} 3 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix}, \quad p = \begin{pmatrix} 0.3689 & 0.117 & 0.2673 \\ 0.4699 & 0.4387 & 0.747 \\ 0.1091 & 0.8732 & 0.5547 \\ 0.03815 & 0.5743 & 0.8828 \end{pmatrix}$$
    The value of the global minimum is $-3.862782$.
  • Hartman 6 function.
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij}\left(x_j - p_{ij}\right)^2 \right)$$
    with $x \in [0, 1]^6$ and
    $$a = \begin{pmatrix} 10 & 3 & 17 & 3.5 & 1.7 & 8 \\ 0.05 & 10 & 17 & 0.1 & 8 & 14 \\ 3 & 3.5 & 1.7 & 10 & 17 & 8 \\ 17 & 8 & 0.05 & 10 & 0.1 & 14 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix},$$
    $$p = \begin{pmatrix} 0.1312 & 0.1696 & 0.5569 & 0.0124 & 0.8283 & 0.5886 \\ 0.2329 & 0.4135 & 0.8307 & 0.3736 & 0.1004 & 0.9991 \\ 0.2348 & 0.1451 & 0.3522 & 0.2883 & 0.3047 & 0.6650 \\ 0.4047 & 0.8828 & 0.8732 & 0.5743 & 0.1091 & 0.0381 \end{pmatrix}$$
    The value of the global minimum is $-3.322368$.
  • The high-conditioned elliptic function is defined as follows:
    $$f(x) = \sum_{i=1}^{n} \left(10^6\right)^{\frac{i-1}{n-1}} x_i^2$$
    Featuring a global minimum of $f(x^*) = 0$, the experiments were conducted using the value $n = 10$.
  • Potential function. As a test case, the molecular conformation corresponding to the global minimum of the energy of N atoms interacting via the Lennard–Jones potential [69] is utilized. The function to be minimized is defined as follows:
    $$V_{LJ}(r) = 4\epsilon\left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right]$$
    In the current experiments, two different cases were studied: N = 3 , 5 .
  • Rastrigin function. This function is given by the following:
    $$f(x) = x_1^2 + x_2^2 - \cos(18 x_1) - \cos(18 x_2), \quad x \in [-1, 1]^2$$
  • Shekel 7 function.
    $$f(x) = -\sum_{i=1}^{7} \frac{1}{(x - a_i)(x - a_i)^T + c_i}$$
    with $x \in [0, 10]^4$, $a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 3 & 5 & 3 \end{pmatrix}$, and $c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \end{pmatrix}$.
  • Shekel 5 function.
    $$f(x) = -\sum_{i=1}^{5} \frac{1}{(x - a_i)(x - a_i)^T + c_i}$$
    with $x \in [0, 10]^4$, $a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \end{pmatrix}$, and $c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \end{pmatrix}$.
  • Shekel 10 function.
    $$f(x) = -\sum_{i=1}^{10} \frac{1}{(x - a_i)(x - a_i)^T + c_i}$$
    with $x \in [0, 10]^4$, $a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 5 & 3 & 3 \\ 8 & 1 & 8 & 1 \\ 6 & 2 & 6 & 2 \\ 7 & 3.6 & 7 & 3.6 \end{pmatrix}$, and $c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \\ 0.7 \\ 0.5 \\ 0.6 \end{pmatrix}$.
  • Sinusoidal function. The function is given by the following:
    $$f(x) = -\left( 2.5 \prod_{i=1}^{n} \sin\left(x_i - z\right) + \prod_{i=1}^{n} \sin\left(5\left(x_i - z\right)\right) \right), \quad 0 \le x_i \le \pi$$
    The global minimum is situated at $x^* = (2.09435, 2.09435, \dots, 2.09435)$ with a value of $f(x^*) = -3.5$. In the performed experiments, we examined scenarios with $n = 4, 8$ and $z = \frac{\pi}{6}$. The parameter $z$ is employed to offset the position of the global minimum [70].
  • Test2N function. This function is given by the following equation:
    $$f(x) = \frac{1}{2}\sum_{i=1}^{n} \left( x_i^4 - 16 x_i^2 + 5 x_i \right), \quad x_i \in [-5, 5]$$
    The function has $2^n$ local minima in the specified range; in our experiments, we used $n = 4, 5, 6, 7, 8, 9$.
  • Test30N function. This function is given by the following:
    $$f(x) = \frac{1}{10}\left( \sin^2\left(3\pi x_1\right) + \sum_{i=2}^{n-1} \left(x_i - 1\right)^2 \left( 1 + \sin^2\left(3\pi x_{i+1}\right) \right) + \left(x_n - 1\right)^2 \left( 1 + \sin^2\left(2\pi x_n\right) \right) \right)$$
    with $x \in [-10, 10]^n$. This function has $30^n$ local minima in the specified range, and we used $n = 3, 4$ in the conducted experiments.
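As an example of how these benchmarks translate to code, the following C++ sketch implements two of them directly from the definitions above. It is an illustrative aid for readers who wish to reproduce the test set and is not taken from the OPTIMUS package.

```cpp
// Two of the benchmarks coded directly from the definitions above (illustrative sketch).
#include <cmath>
#include <cstdio>
#include <vector>

// Rastrigin variant used here: f(x) = x1^2 + x2^2 - cos(18 x1) - cos(18 x2), x in [-1,1]^2.
double rastrigin(const std::vector<double> &x) {
    return x[0] * x[0] + x[1] * x[1] - std::cos(18.0 * x[0]) - std::cos(18.0 * x[1]);
}

// Test2N: f(x) = 0.5 * sum(x_i^4 - 16 x_i^2 + 5 x_i), x_i in [-5, 5].
double test2n(const std::vector<double> &x) {
    double s = 0.0;
    for (double xi : x) s += xi * xi * xi * xi - 16.0 * xi * xi + 5.0 * xi;
    return 0.5 * s;
}

int main() {
    std::printf("Rastrigin(0,0) = %f\n", rastrigin({0.0, 0.0}));   // -2, the global minimum
    std::printf("Test2N(4 dims) = %f\n", test2n({-2.9035, -2.9035, -2.9035, -2.9035}));
    return 0;
}
```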

3.2. Experimental Results

To evaluate the performance of the parallel genetic algorithm, a series of experiments was carried out. These experiments varied the number of parallel computing units from 1 to 10. The parallelization was achieved using the freely available OpenMP library [49], and the method was implemented in ANSI C++ within the OPTIMUS optimization package, accessible at https://github.com/itsoulos/OPTIMUS (accessed on 7 June 2024). All experiments were conducted on a system equipped with an AMD Ryzen 5950X processor, 128 GB of RAM, and running the Debian Linux operating system. The experimental settings are shown in Table 1. To ensure the reliability and validity of the research, each experiment was repeated 30 times; the results are reported in Table 2, Table 3 and Table 4. In Table 2, the number of objective function invocations for each problem and its solving time for various combinations of processing units (PUs) and chromosomes are provided. In the columns listing objective function invocation values, values in parentheses represent the percentage of executions in which the overall optimum was successfully identified. The absence of this fraction indicates a 100% success rate, meaning that the global minimum was found in every run. Generally, across all problems, there is a decrease in the number of objective function invocations and execution time as the number of parallel computing units increases. The total number of chromosomes remains constant in each configuration, e.g., 1 PU × 500 chromosomes, 2 PUs × 250 chromosomes, etc. This is a positive result, indicating that parallelization improves the performance of the genetic algorithm. Figure 3 and Figure 4 are derived from Table 2. A statistical comparison of objective function invocations and solving times similarly shows performance improvements and computation time reductions as the number of computing units increases.
Specifically, in Figure 3, the objective function invocations are halved compared to the initial invocations with only two computational units. This reduction in invocations continues significantly as the number of computational units increases. In Figure 4, we observe similar behavior in the algorithm termination times. In this case, the times are significantly shorter in the parallel process with ten (10) computational units compared to a single computational unit. In the comparisons presented above, there is a reduction in the required computational power, as shown in Figure 3, along with a decrease in the time required to find solutions, as depicted in Figure 4. In Table 2, additional interesting details regarding objective function invocations and computational times are presented, such as minimum, maximum, mean, and standard deviations. In conclusion, as the workload is distributed among an increasing number of computational units, there is a performance improvement. This reinforces the overall methodology.
In Table 3, chromosome migration with the best functional values occurs in every generation, with a fixed number of $N_P = 10$ chromosomes participating in the propagation process. To enhance the implementation of propagation techniques, the local search rate (LSR) applied in Table 3 was increased from 0.1% (as used in Table 2) to 0.5%. However, the level of local optimization was carefully controlled because an excessive increase could lead to a higher number of calls to the objective function. Conversely, reducing the LSR might lead to a decrease in the success rate concerning the identification of optimal chromosomes. In the statistical representation of Figure 5, we observe the superiority of the '1toN' propagation, meaning the transfer of ten chromosomes from a random island to all others. The 'NtoN' propagation appears to be almost equally effective. As a general rule, if we rank the migration methods based on their performance, they are ordered as follows: '1toN' in Figure 2b, 'NtoN' in Figure 2d, '1to1' in Figure 2a, and 'Nto1' in Figure 2c. The first two strategies, where migration reaches all islands, demonstrate better performance compared to the other two, where migration only affects one island. The success of '1toN' in Figure 2b and 'NtoN' in Figure 2d, albeit with a slight difference between them, appears to be due to the migration of the best chromosomes to all islands. This leads to an improvement in the convergence of the algorithm toward better candidate solutions in a shorter time frame. The actual times are shown in Figure 6. During the conducted experiments, the '1toN' and 'NtoN' propagation techniques appear to perform better according to the experimental evidence. A common feature of these two techniques is that the optimal solutions are distributed to all computing units, thereby improving the performance of each individual unit and consequently enhancing the overall performance of the general algorithm.
To compare the proposed approach with other stochastic global optimization methods, including particle swarm optimization (PSO), improved PSO (IPSO) [71], differential evolution with random selection (DE), differential evolution with tournament selection (TDE) [72], the genetic algorithm (GA), and the parallel genetic algorithm (PGA), certain parameters remained constant. Also, the parallel implementation of the GAlib library [73] was used in the comparative experiments. The population size for all methods consists of 500 particles, agents, or chromosomes. In PGA, the population consists of 20 PUs × 25 chromosomes, while all other parameters remain the same as those described in Table 2. Any method employing an LSR maintains this parameter at the same value. The double-box termination rule is applied consistently across all methods.
The values resulting from experiments in Table 4 are depicted in Figure 7 and Figure 8. The box plots of Figure 7 reveal the superiority of PGA, as the number of objective function calls remains at approximately 10,000 across all problems. Conversely, IPSO, DE, and TDE (especially DE) show a low number of calls in some problems, while in others, they experience significant increases. Each method has a specific lower limit of calls during initialization and optimization, which varies from method to method. PGA consistently meets this threshold with very small deviations, as illustrated in the same figure. Figure 8 presents the total call values for each method. This work was also compared against the parallel version of GAlib, found in recent literature. Although GAlib achieves a similar success rate in discovering the global minimum of the benchmark functions, it requires significantly more function calls than the proposed method for the same setup parameters.
In Figure 9, it is observed that the collaboration of processing units significantly accelerates the process of finding minima. Additionally, a new experiment was conducted in which the number of chromosomes varied from 250 to 1000 and the number of processing units changed from 1 to 10. The total number of function calls for each case is shown graphically in Figure 10. The method maintains the same behavior for any number of chromosomes. This means that the number of required calls is significantly reduced by adding new parallel processing units. Of course, as expected, the total number of calls required increases as the number of available chromosomes increases.

4. Conclusions

According to the relevant literature, despite the high success rate that genetic algorithms exhibit in finding good functional values, they require significant computational power, leading to longer processing times. This manuscript introduces a parallel technique for global optimization, employing a genetic algorithm to solve the problem. Specifically, the initial population of chromosomes is divided into various subpopulations that run on different computational units. During the optimization process, the islands operate independently but periodically exchange chromosomes with good functional values. The number of chromosomes participating in migration is determined by the crossover and mutation rates. Additionally, periodic local optimization is performed on each computational unit, which should not require excessive computational power (function calls).
Experimental results revealed that even parallelization with just two computational units significantly reduces both the number of function calls and the processing time, and the method remains effective as more computational units are added. Furthermore, it was observed that the most effective information exchange technique was the so-called '1toN', in which a randomly selected subpopulation sends information to all other subpopulations. The 'NtoN' technique, where all subpopulations send information to all other subpopulations, performs almost equally well, with only a slight difference.
Similar dissemination techniques have been applied to other stochastic methods, such as the differential evolution (DE) method by Charilogis and Tsoulos [74] and the particle swarm optimization (PSO) method by Charilogis and Tsoulos [75]. In the case of differential evolution, the proposed dissemination technique is ‘1to1’ in Figure 2a and not ‘1toN’ in Figure 2b as suggested in this study. However, in the case of PSO and GA, the recommended dissemination technique is the same.
The parallelization of other genetic algorithm variants, or even of different stochastic techniques for global optimization, can be explored to further enhance the methodology. However, in such heterogeneous environments, more efficient termination criteria are required, or even their combined use may be necessary.

Author Contributions

I.G.T. conceptualized the idea and methodology, supervised the technical aspects related to the software, and contributed to manuscript preparation. V.C. conducted the experiments using various datasets, performed statistical analysis, and collaborated with all authors in manuscript preparation. All authors have reviewed and endorsed the conclusive version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

This research was financed by the European Union: Next Generation EU through the Program Greece 2.0 National Recovery and Resilience Plan, under the call RESEARCH—CREATE—INNOVATE, project name “iCREW: Intelligent small craft simulator for advanced crew training using Virtual Reality techniques” (project code: TAEDK-06195).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Törn, A.; Žilinskas, A. Global Optimization; Springer: Berlin/Heidelberg, Germany, 1989; Volume 350, pp. 1–255. [Google Scholar]
  2. Fouskakis, D.; Draper, D. Stochastic optimization: A review. Int. Stat. Rev. 2002, 70, 315–349. [Google Scholar] [CrossRef]
  3. Cherruault, Y. Global optimization in biology and medicine. Math. Comput. Model. 1994, 20, 119–132. [Google Scholar] [CrossRef]
  4. Lee, E.K. Large-Scale Optimization-Based Classification Models in Medicine and Biology. Ann. Biomed. Eng. 2007, 35, 1095–1109. [Google Scholar] [CrossRef]
  5. Liwo, A.; Lee, J.; Ripoll, D.R.; Pillardy, J.; Scheraga, H.A. Protein structure prediction by global optimization of a potential energy function. Biophysics 1999, 96, 5482–5485. [Google Scholar] [CrossRef] [PubMed]
  6. Shin, W.H.; Kim, J.K.; Kim, D.S.; Seok, C. GalaxyDock2: Protein-ligand docking using beta-complex and global optimization. J. Comput. Chem. 2013, 34, 2647–2656. [Google Scholar] [CrossRef] [PubMed]
  7. Duan, Q.; Sorooshian, S.; Gupta, V. Effective and efficient global optimization for conceptual rainfall-runoff models. Water Resour. Res. 1992, 28, 1015–1031. [Google Scholar] [CrossRef]
  8. Yang, L.; Robin, D.; Sannibale, F.; Steier, C.; Wan, W. Global optimization of an accelerator lattice using multiobjective genetic algorithms. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrom. Detect. Assoc. Equip. 2009, 609, 50–57. [Google Scholar] [CrossRef]
  9. Iuliano, E. Global optimization of benchmark aerodynamic cases using physics-based surrogate models. Aerosp. Sci. Technol. 2017, 67, 273–286. [Google Scholar] [CrossRef]
  10. Maranas, C.D.; Androulakis, I.P.; Floudas, C.A.; Berger, A.J.; Mulvey, J.M. Solving long-term financial planning problems via global optimization. J. Econ. Dyn. Control 1997, 21, 1405–1425. [Google Scholar] [CrossRef]
  11. Gaing, Z. Particle swarm optimization to solving the economic dispatch considering the generator constraints. IEEE Trans. Power Syst. 2003, 18, 1187–1195. [Google Scholar] [CrossRef]
  12. Liberti, L.; Kucherenko, S. Comparison of deterministic and stochastic approaches to global optimization. Int. Trans. Oper. Res. 2005, 12, 263–285. [Google Scholar] [CrossRef]
  13. Wolfe, M.A. Interval methods for global optimization. Appl. Math. Comput. 1996, 75, 179–206. [Google Scholar] [CrossRef]
  14. Csendes, T.; Ratz, D. Subdivision Direction Selection in Interval Methods for Global Optimization. SIAM J. Numer. Anal. 1997, 34, 922–938. [Google Scholar] [CrossRef]
  15. Maranas, C.D.; Floudas, C.A. A deterministic global optimization approach for molecular structure determination. J. Chem. Phys. 1994, 100, 1247. [Google Scholar] [CrossRef]
  16. Barhen, J.; Protopopescu, V.; Reister, D. TRUST: A Deterministic Algorithm for Global Optimization. Science 1997, 276, 1094–1097. [Google Scholar] [CrossRef]
  17. Evtushenko, Y.; Posypkin, M.A. Deterministic approach to global box-constrained optimization. Optim. Lett. 2013, 7, 819–829. [Google Scholar] [CrossRef]
  18. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  19. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173. [Google Scholar] [CrossRef]
  20. Price, W.L. Global optimization by controlled random search. J. Optim. Theory Appl. 1983, 40, 333–348. [Google Scholar] [CrossRef]
  21. Křivý, I.; Tvrdík, J. The controlled random search algorithm in optimizing regression models. Comput. Stat. Data Anal. 1995, 20, 229–234. [Google Scholar] [CrossRef]
  22. Ali, M.M.; Törn, A.; Viitanen, S. A Numerical Comparison of Some Modified Controlled Random Search Algorithms. J. Glob. Optim. 1997, 11, 377–385. [Google Scholar] [CrossRef]
  23. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  24. Trelea, I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett. 2003, 85, 317–325. [Google Scholar] [CrossRef]
  25. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization: An overview. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  26. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  27. Ingber, L. Very fast simulated re-annealing. Math. Comput. Model. 1989, 12, 967–973. [Google Scholar] [CrossRef]
  28. Eglese, R.W. Simulated annealing: A tool for operational research. Eur. J. Oper. Res. 1990, 46, 271–281. [Google Scholar] [CrossRef]
  29. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  30. Liu, J.; Lampinen, J. A Fuzzy Adaptive Differential Evolution Algorithm. Soft Comput. 2005, 9, 448–462. [Google Scholar] [CrossRef]
  31. Goldberg, D. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley Publishing Company: Reading, MA, USA, 1989. [Google Scholar]
  32. Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  33. Grady, S.A.; Hussaini, M.Y.; Abdullah, M.M. Placement of wind turbines using genetic algorithms. Renew. Energy 2005, 30, 259–270. [Google Scholar] [CrossRef]
  34. Lepagnot, I.B.J.; Siarry, P. A survey on optimization metaheuristics. Inf. Sci. 2013, 237, 82–117. [Google Scholar]
  35. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040. [Google Scholar] [CrossRef]
  36. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233. [Google Scholar] [CrossRef]
  37. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  38. Stender, J. Parallel Genetic Algorithms: Theory & Applications; IOS Press: Amsterdam, The Netherlands, 1993. [Google Scholar]
  39. Santana, Y.H.; Alonso, R.M.; Nieto, G.G.; Martens, L.; Joseph, W.; Plets, D. Indoor genetic algorithm-based 5G network planning using a machine learning model for path loss estimation. Appl. Sci. 2022, 12, 3923. [Google Scholar] [CrossRef]
  40. Liu, X.; Jiang, D.; Tao, B.; Jiang, G.; Sun, Y.; Kong, J.; Chen, B. Genetic algorithm-based trajectory optimization for digital twin robots. Front. Bioeng. Biotechnol. 2022, 9, 793782. [Google Scholar] [CrossRef] [PubMed]
  41. Nonoyama, K.; Liu, Z.; Fujiwara, T.; Alam, M.M.; Nishi, T. Energy-efficient robot configuration and motion planning using genetic algorithm and particle swarm optimization. Energies 2022, 15, 2074. [Google Scholar] [CrossRef]
  42. Liu, K.; Deng, B.; Shen, Q.; Yang, J.; Li, Y. Optimization based on genetic algorithms on energy conservation potential of a high speed SI engine fueled with butanol–Gasoline blends. Energy Rep. 2022, 8, 69–80. [Google Scholar] [CrossRef]
  43. Zhou, G.; Zhu, Z.; Luo, S. Location optimization of electric vehicle charging stations: Based on cost model and genetic algorithm. Energy 2022, 247, 123437. [Google Scholar] [CrossRef]
  44. Min, D.; Song, Z.; Chen, H.; Wang, T.; Zhang, T. Genetic algorithm optimized neural network based fuel cell hybrid electric vehicle energy management strategy under start-stop condition. Appl. Energy 2022, 306, 118036. [Google Scholar] [CrossRef]
  45. Doewes, R.I.; Nair, R.; Sharma, T. Diagnosis of COVID-19 through blood sample using ensemble genetic algorithms and machine learning classifier. World J. Eng. 2022, 19, 175–182. [Google Scholar] [CrossRef]
  46. Choudhury, S.; Rana, M.; Chakraborty, A.; Majumder, S.; Roy, S.; RoyChowdhury, A.; Datta, S. Design of patient specific basal dental implant using Finite Element method and Artificial Neural Network technique. J. Eng. Med. 2022, 236, 1375–1387. [Google Scholar] [CrossRef] [PubMed]
  47. Chen, Q.; Hu, X. Design of intelligent control system for agricultural greenhouses based on adaptive improved genetic algorithm for multi-energy supply system. Energy Rep. 2022, 8, 12126–12138. [Google Scholar] [CrossRef]
  48. Graham, R.L.; Woodall, T.S.; Squyres, J.M. Open MPI: A flexible high performance MPI. In Proceedings of the Parallel Processing and Applied Mathematics: 6th International Conference (PPAM 2005), Poznań, Poland, 11–14 September 2005; Revised Selected Papers 6. Springer: Berlin/Heidelberg, Germany, 2006; pp. 228–239. [Google Scholar]
  49. Ayguadé, E.; Copty, N.; Duran, A.; Hoeflinger, J.; Lin, Y.; Massaioli, F.; Zhang, G. The design of OpenMP tasks. IEEE Trans. Parallel Distrib. Syst. 2008, 20, 404–418. [Google Scholar] [CrossRef]
  50. Onbaşoğlu, E.; Özdamar, L. Parallel simulated annealing algorithms in global optimization. J. Glob. Optim. 2001, 19, 27–50. [Google Scholar] [CrossRef]
  51. Schutte, J.F.; Reinbolt, J.A.; Fregly, B.J.; Haftka, R.T.; George, A.D. Parallel global optimization with the particle swarm algorithm. Int. J. Numer. Methods Eng. 2004, 61, 2296–2315. [Google Scholar] [CrossRef] [PubMed]
  52. Regis, R.G.; Shoemaker, C.A. Parallel stochastic global optimization using radial basis functions. J. Comput. 2009, 21, 411–426. [Google Scholar] [CrossRef]
  53. Harada, T.; Alba, E. Parallel Genetic Algorithms: A Useful Survey. ACM Comput. Surv. 2020, 53, 86. [Google Scholar] [CrossRef]
  54. Anbarasu, L.A.; Narayanasamy, P.; Sundararajan, V. Multiple molecular sequence alignment by island parallel genetic algorithm. Curr. Sci. 2000, 78, 858–863. [Google Scholar]
  55. Tosun, U.; Dokeroglu, T.; Cosar, C. A robust island parallel genetic algorithm for the quadratic assignment problem. Int. Prod. Res. 2013, 51, 4117–4133. [Google Scholar] [CrossRef]
  56. Nandy, A.; Chakraborty, D.; Shah, M.S. Optimal sensors/actuators placement in smart structure using island model parallel genetic algorithm. Int. J. Comput. 2019, 16, 1840018. [Google Scholar] [CrossRef]
  57. Tsoulos, I.G.; Tzallas, A.; Tsalikakis, D. PDoublePop: An implementation of parallel genetic algorithm for function optimization. Comput. Phys. Commun. 2016, 209, 183–189. [Google Scholar] [CrossRef]
  58. Shonkwiler, R. Parallel genetic algorithms. In ICGA; Morgan Kaufmann Publishers Inc: San Francisco, CA, USA, 1993; pp. 199–205. [Google Scholar]
  59. Cantú-Paz, E. A survey of parallel genetic algorithms. Calc. Paralleles Reseaux Syst. Repartis 1998, 10, 141–171. [Google Scholar]
  60. Mühlenbein, H. Parallel genetic algorithms in combinatorial optimization. In Computer Science and Operations Research; Elsevier: Amsterdam, The Netherlands, 1992; pp. 441–453. [Google Scholar]
  61. Lawrence, D. Handbook of Genetic Algorithms; Thomson Publishing Group: London, UK, 1991. [Google Scholar]
  62. Yu, X.; Gen, M. Introduction to Evolutionary Algorithms; Springer: Berlin/Heidelberg, Germany, 2010; ISBN 978-1-84996-128-8. e-ISBN 978-1-84996-129-5. [Google Scholar] [CrossRef]
  63. Kaelo, P.; Ali, M.M. Integrated crossover rules in real coded genetic algorithms. Eur. J. Oper. Res. 2007, 176, 60–76. [Google Scholar] [CrossRef]
  64. Tsoulos, I.G. Modifications of real code genetic algorithm for global optimization. Appl. Math. Comput. 2008, 203, 598–607. [Google Scholar] [CrossRef]
  65. Powell, M.J.D. A Tolerant Algorithm for Linearly Constrained Optimization Calculations. Math. Program. 1989, 45, 547–566. [Google Scholar] [CrossRef]
  66. Floudas, C.A.; Pardalos, P.M.; Adjiman, C.; Esposoto, W.; Gümüs, Z.; Harding, S.; Klepeis, J.; Meyer, C.; Schweiger, C. Handbook of Test Problems in Local and Global Optimization; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999. [Google Scholar]
  67. Ali, M.M.; Khompatraporn, C.; Zabinsky, Z.B. A Numerical Evaluation of Several Stochastic Algorithms on Selected Continuous Global Optimization Test Problems. J. Glob. Optim. 2005, 31, 635–672. [Google Scholar] [CrossRef]
  68. Gaviano, M.; Ksasov, D.E.; Lera, D.; Sergeyev, Y.D. Software for generation of classes of test functions with known local and global minima for global optimization. ACM Trans. Math. Softw. 2003, 29, 469–480. [Google Scholar] [CrossRef]
  69. Lennard-Jones, J.E. On the Determination of Molecular Fields. Proc. R. Soc. Lond. A 1924, 106, 463–477. [Google Scholar]
  70. Zabinsky, Z.B.; Graesser, D.L.; Tuttle, M.E.; Kim, G.I. Global optimization of composite laminates using improving hit and run. In Recent Advances in Global Optimization; Princeton University Press: Princeton, NJ, USA, 1992; pp. 343–368. [Google Scholar]
  71. Charilogis, V.; Tsoulos, I. Toward an Ideal Particle Swarm Optimizer for Multidimensional Functions. Information 2022, 13, 217. [Google Scholar] [CrossRef]
  72. Charilogis, V.; Tsoulos, I.; Tzallas, A.; Karvounis, E. Modifications for the Differential Evolution Algorithm. Symmetry 2022, 14, 447. [Google Scholar] [CrossRef]
  73. Wall, M. GAlib: A C++ Library of Genetic Algorithm Components; Mechanical Engineering Department, Massachusetts Institute of Technology: Cambridge, MA, USA, 1996; p. 54. [Google Scholar]
  74. Charilogis, V.; Tsoulos, I.G. A Parallel Implementation of the Differential Evolution Method. Analytics 2023, 2, 17–30. [Google Scholar] [CrossRef]
  75. Charilogis, V.; Tsoulos, I.G.; Tzallas, A. An Improved Parallel Particle Swarm Optimization. Comput. Sci. 2023, 4, 766. [Google Scholar] [CrossRef]
Figure 1. Parallelization of GA.
Figure 2. Islands and propagation.
Figure 3. Statistical comparison of function calls with different numbers of processor units.
Figure 4. Statistical comparison of times with different numbers of processor units.
Figure 5. Statistical comparison of function calls with 5 PUs and different propagation techniques.
Figure 6. Comparison of times with 5 PUs and different propagation techniques.
Figure 7. Statistical comparison of function calls using different stochastic optimization methods.
Figure 8. Comparison of total function calls using different stochastic optimization methods.
Figure 9. Different variations of the ELP problem.
Figure 10. Comparison of function calls with different numbers of chromosomes.
Table 1. The following settings were initially used to conduct the experiments.

Parameter | Value | Explanation
$N_c$ | 500 × 1, 250 × 2, 100 × 5, 50 × 10 | Chromosomes
$N_g$ | 200 | Max generations
$N_I$ | 1, 2, 5, 10 | Processing units or islands
$N_R$ | No propagation in Table 2; 1 (every generation) in Table 3 | Rate of propagation
$N_P$ | 0 in Table 2; 10 in Table 3 | Chromosomes for migration
$P_T$ | None in Table 2; 1to1 (Figure 2a), 1toN (Figure 2b), Nto1 (Figure 2c), NtoN (Figure 2d) in Table 3 | Propagation technique
$p_s$ | 10% | Selection rate
$p_m$ | 5% | Mutation rate
LSR | 0.1% in Table 2, 0.5% in Table 3 | Local search rate
Table 2. Statistical analysis comparing execution times (seconds) and function calls across varying numbers of processor units.

Problems | Calls ($N_I$ = 1, $N_c$ = 500) | Time | Calls ($N_I$ = 2, $N_c$ = 250) | Time | Calls ($N_I$ = 5, $N_c$ = 100) | Time | Calls ($N_I$ = 10, $N_c$ = 50) | Time
BF1 | 10,578 | 0.557 | 10,555 | 0.193 | 10,533 | 0.126 | 10,511 | 0.121
BF2 | 10,568 | 0.554 | 10,545 | 0.192 | 10,523 | 0.127 | 10,533 | 0.119
BRANIN | 46,793 | 2.308 | 31,231 | 0.562 | 11,125 | 0.134 | 10,533 | 0.169
CAMEL | 26,537 | 1.338 | 15,875 | 0.29 | 15,833 | 0.188 | 10,861 | 0.123
CIGAR10 | 10,502 | 1.089 | 10,577 | 0.383 | 10,583 | 0.222 | 10,541 | 0.206
CM4 | 10,614 | 1.054 | 10,583 | 0.249 | 10,581 | 0.151 | 10,556 | 0.139
DISCUS10 | 10,548 | 1.09 | 10,532 | 0.382 | 10,500 | 0.222 | 10,502 | 0.205
EASOM | 100,762 | 4.504 | 100,610 | 1.66 | 94,541 | 1.089 | 22,845 | 0.248
ELP10 | 10,601 | 1.15 | 10,590 | 0.436 | 10,574 | 0.26 | 10,557 | 0.242
EXP4 | 16,621 | 1.092 | 10,587 | 0.249 | 10,560 | 0.15 | 10,544 | 0.143
EXP16 | 10,680 | 1.336 | 10,654 | 0.53 | 10,643 | 0.287 | 10,626 | 0.258
EXP64 | 10,857 | 2.333 | 10,829 | 1.235 | 10,814 | 0.825 | 10,830 | 0.728
EXP100 | 10,855 | 3.517 | 10,901 | 1.763 | 10,868 | 1.25 | 10,887 | 1.052
GKLS250 | 50,804 | 2.825 | 25,832 | 0.607 | 11,711 | 0.194 | 10,870 (93) | 0.198
GKLS350 | 40,707 | 2.327 | 23,720 | 0.522 | 17,646 | 0.26 | 14,130 | 0.202
GRIEWANK2 | 10,555 | 0.565 | 10,532 | 0.197 | 10,517 | 0.126 | 10,492 | 0.118
GRIEWANK10 | 10,679 | 1.079 | 10,629 | 0.407 | 10,613 | 0.239 | 10,609 | 0.22
POTENTIAL3 | 39,607 | 2.057 | 34,327 | 0.881 | 18,313 | 0.34 | 15,471 | 0.279
POTENTIAL5 | 33,542 | 1.653 | 33,737 | 1.074 | 12,040 | 0.34 | 11,082 | 0.291
POTENTIAL6 | 28,901 (3) | 1.56 | 26,419 (16) | 1.018 | 14,265 (3) | 0.478 | 11,109 (10) | 0.356
POTENTIAL10 | 42,644 (13) | 3.316 | 37,897 (23) | 2.538 | 14,080 (10) | 0.937 | 11,319 (6) | 0.66
HANSEN | 46,894 (90) | 2.494 | 28,191 (80) | 0.575 | 11,085 (56) | 0.153 | 11,065 | 0.158
HARTMAN3 | 22,235 | 1.525 | 19,030 | 0.379 | 16,463 | 0.212 | 12,048 | 0.146
HARTMAN6 | 18,352 | 1.505 | 15,902 | 0.429 | 16,726 | 0.279 | 12,243 | 0.196
RASTRIGIN | 16,567 | 0.855 | 10,543 | 0.193 | 10,521 | 0.125 | 10,506 | 0.116
ROSENBROCK8 | 10,863 | 0.916 | 10,700 | 0.333 | 10,698 | 0.199 | 10,772 | 0.196
ROSENBROCK16 | 10,918 | 1.371 | 10,946 | 0.516 | 10,867 | 0.304 | 10,886 | 0.271
SHEKEL5 | 32,319 (50) | 2.069 | 17,913 (50) | 0.412 | 11,185 (36) | 0.159 | 11,010 (40) | 0.15
SHEKEL7 | 51,183 (73) | 3.277 | 14,981 (53) | 0.342 | 11,457 (60) | 0.163 | 11,035 (50) | 0.154
SHEKEL10 | 47,337 (70) | 2.977 | 46,927 (76) | 1.113 | 16,310 (56) | 0.23 | 11,329 (70) | 0.152
SINU4 | 66,625 (83) | 4.344 | 31,511 (86) | 0.77 | 13,979 (73) | 0.211 | 11,004 (43) | 0.161
SINU8 | 29,705 | 2.57 | 27,613 | 0.987 | 24,592 | 0.549 | 11,422 | 0.236
TEST2N4 | 25,553 | 1.558 | 17,701 | 0.397 | 24,763 | 0.359 | 13,217 | 0.178
TEST2N5 | 20,297 | 1.327 | 18,440 | 0.457 | 16,759 | 0.265 | 11,483 | 0.168
TEST2N6 | 20,450 | 1.311 | 20,837 | 0.566 | 18,123 | 0.315 | 11,988 | 0.194
TEST2N7 | 26,113 | 1.924 | 23,940 | 0.723 | 20,825 | 0.384 | 11,339 | 0.196
TEST2N8 | 18,846 | 1.454 | 18,549 | 0.585 | 16,700 | 0.329 | 11,658 | 0.218
TEST2N9 | 18,154 | 1.582 | 18,803 | 0.649 | 17,100 | 0.368 | 13,299 | 0.262
TEST30N3 | 49,235 | 2.46 | 24,129 | 0.458 | 14,743 | 0.188 | 12,345 | 0.152
TEST30N4 | 29,667 | 1.553 | 17,501 | 0.358 | 13,367 | 0.186 | 11,778 | 0.151
SUM | 1,105,268 | 74.376 | 851,319 | 25.61 | 633,126 | 12.923 | 465,835 | 9.532
MINIMUM | 10,502 | 0.554 | 10,532 | 0.192 | 10,500 | 0.125 | 10,492 | 0.116
MAXIMUM | 100,762 | 4.504 | 100,610 | 2.538 | 94,541 | 1.25 | 22,845 | 1.052
AVERAGE | 27,631.7 | 1.859 | 21,282.975 | 0.640 | 15,828.15 | 0.323 | 11,645.875 | 0.238
STDEV | 19,305.784 | 0.972 | 15,829.020 | 0.482 | 13,335.509 | 0.260 | 2109.230 | 0.180
Table 3. Evaluating function calls and times (seconds) using various propagation techniques for comparison.

Problems | No Propagation Calls | No Propagation Time | 1to1 Calls | 1to1 Time | 1toN Calls | 1toN Time | Nto1 Calls | Nto1 Time | NtoN Calls | NtoN Time
BF1 | 10,809 | 0.123 | 10,741 | 0.127 | 10,770 | 0.126 | 10,746 | 0.127 | 10,808 | 0.136
BF2 | 10,725 | 0.124 | 10,773 | 0.126 | 10,764 | 0.13 | 10,783 | 0.126 | 10,731 | 0.136
BRANIN | 48,364 | 0.56 | 31,470 | 0.397 | 18,776 | 0.251 | 35,367 | 0.448 | 19,224 | 0.284
CAMEL | 29,087 | 0.337 | 18,597 | 0.23 | 14,429 | 0.185 | 24,977 | 0.313 | 19,341 | 0.286
CIGAR10 | 10,854 | 0.233 | 10,880 | 0.216 | 10,915 | 0.222 | 10,890 | 0.22 | 10,869 | 0.235
CM4 | 10,911 | 0.147 | 10,923 | 0.15 | 10,941 | 0.15 | 10,918 | 0.15 | 10,915 | 0.163
DISCUS10 | 10,651 | 0.222 | 10,632 | 0.213 | 10,651 | 0.217 | 10,641 | 0.22 | 10,606 | 0.231
EASOM | 99,569 | 1.094 | 100,163 | 1.106 | 100,160 | 1.121 | 100,155 | 1.139 | 98,336 | 1.156
ELP10 | 10,832 | 0.276 | 10,902 | 0.261 | 10,829 | 0.266 | 10,811 | 0.26 | 10,952 | 0.278
EXP4 | 10,803 | 0.151 | 12,037 | 0.167 | 12,695 | 0.183 | 11,416 | 0.164 | 10,819 | 0.158
EXP16 | 11,228 | 0.272 | 11,259 | 0.276 | 11,262 | 0.285 | 11,253 | 0.28 | 11,260 | 0.294
EXP64 | 12,127 | 0.837 | 12,204 | 0.848 | 12,184 | 0.85 | 12,151 | 0.849 | 12,199 | 0.877
EXP100 | 12,396 | 1.397 | 12,376 | 1.4 | 12,372 | 1.36 | 12,460 | 1.387 | 12,414 | 1.42
GKLS250 | 48,672 | 0.813 | 55,586 | 0.949 | 31,493 | 0.564 | 58,638 | 1.007 | 27,840 | 0.532
GKLS350 | 55,231 | 0.815 | 42,100 | 0.636 | 28,609 | 0.459 | 46,923 | 0.72 | 25,341 | 0.428
GRIEWANK2 | 10,682 | 0.127 | 10,670 | 0.125 | 10,697 | 0.126 | 10,683 | 0.127 | 10,684 | 0.134
GRIEWANK10 | 11,144 | 0.239 | 11,102 | 0.232 | 11,123 | 0.239 | 11,171 | 0.229 | 11,153 | 0.254
POTENTIAL3 | 45,748 | 0.832 | 33,598 | 0.643 | 17,276 | 0.347 | 32,603 | 0.631 | 16,870 | 0.358
POTENTIAL5 | 41,946 | 1.156 | 41,112 | 1.179 | 19,912 | 0.597 | 37,687 | 1.089 | 19,622 | 0.614
POTENTIAL6 | 46,507 | 1.639 | 40,518 | 1.449 | 21,941 | 0.817 | 36,138 | 1.315 | 21,528 | 0.844
POTENTIAL10 | 47,031 | 3.4 | 45,166 | 3.361 | 40,212 | 3.239 | 42,057 | 3.183 | 34,750 | 2.883
HANSEN | 63,130 | 0.85 | 65,414 | 0.918 | 39,649 | 0.595 | 67,369 | 0.947 | 31,149 | 0.507
HARTMAN3 | 19,170 | 0.248 | 20,339 | 0.274 | 16,280 | 0.226 | 20,001 | 0.265 | 14,587 | 0.219
HARTMAN6 | 23,725 | 0.423 | 16,856 | 0.285 | 14,141 | 0.233 | 16,955 | 0.288 | 13,964 | 0.239
RASTRIGIN | 11,264 | 0.147 | 11,256 | 0.132 | 10,652 | 0.126 | 10,668 | 0.128 | 11,290 | 0.145
ROSENBROCK8 | 11,727 | 0.204 | 11,892 | 0.2 | 11,681 | 0.203 | 11,708 | 0.199 | 11,882 | 0.217
ROSENBROCK16 | 12,372 | 0.42 | 12,187 | 0.304 | 12,394 | 0.313 | 12,438 | 0.324 | 12,455 | 0.324
SHEKEL5 | 44,893 | 0.645 | 54,184 | 0.751 | 34,937 | 0.491 | 53,277 | 0.755 | 40,859 | 0.621
SHEKEL7 | 45,722 | 0.638 | 55,109 | 0.778 | 33,440 | 0.472 | 49,029 | 0.702 | 46,066 | 0.696
SHEKEL10 | 58,361 | 0.854 | 49,400 | 0.721 | 32,691 | 0.471 | 52,798 | 0.783 | 38,305 | 0.608
SINU4 | 64,584 | 0.972 | 59,414 | 0.922 | 36,052 | 0.591 | 62,924 | 0.972 | 52,937 | 0.857
SINU8 | 32,572 | 0.793 | 25,552 | 0.63 | 19,461 | 0.462 | 28,744 | 0.716 | 18,173 | 0.445
TEST2N4 | 23,430 | 0.339 | 20,474 | 0.3 | 17,001 | 0.261 | 21,468 | 0.316 | 18,436 | 0.294
TEST2N5 | 22,662 | 0.358 | 20,614 | 0.33 | 16,171 | 0.262 | 19,697 | 0.316 | 16,421 | 0.282
TEST2N6 | 21,663 | 0.365 | 18,721 | 0.323 | 16,600 | 0.289 | 19,556 | 0.339 | 14,633 | 0.299
TEST2N7 | 24,401 | 0.456 | 18,990 | 0.354 | 15,792 | 0.3 | 20,967 | 0.405 | 13,995 | 0.28
TEST2N8 | 21,017 | 0.418 | 18,532 | 0.369 | 16,644 | 0.339 | 20,139 | 0.413 | 13,980 | 0.298
TEST2N9 | 22,684 | 0.488 | 18,538 | 0.407 | 16,302 | 0.353 | 18,929 | 0.421 | 14,620 | 0.344
TEST30N3 | 24,524 | 0.318 | 22,799 | 0.296 | 20,436 | 0.297 | 23,186 | 0.311 | 19,968 | 0.316
TEST30N4 | 21,090 | 0.28 | 25,160 | 0.358 | 21,216 | 0.319 | 19,444 | 0.276 | 16,711 | 0.267
Total | 1,164,308 | 24.01 | 1,088,240 | 22.74 | 829,551 | 18.33 | 1,097,765 | 22.86 | 836,693 | 18.95
Table 4. Comparison of function calls using different stochastic optimization methods.

Problems | PSO | IPSO | RDE | TDE | GA | GAlib | PGA
BF1 | 50,398 | 11,478 | 7943 (86) | 5535 | 10,578 | 11,641 | 10,501
BF2 | 50,397 | 11,292 | 8472 (76) | 5539 | 10,568 | 11,321 | 10,510
BRANIN | 44,800 | 10,849 | 5513 | 5514 | 46,793 | 34,487 | 10,838
CAMEL | 48,242 | 11,051 | 5555 | 5514 | 26,537 | 17,321 | 11,087
CIGAR10 | 50,581 | 12,331 | 5586 | 100,573 | 10,502 | 11,567 (50) | 10,566
CM4 | 48,559 | 11,767 | 5550 | 5538 | 10,614 | 11,118 (70) | 10,548
DISCUS10 | 50,523 | 14,328 | 18,187 | 100,518 | 10,548 | 10,988 | 10,503
EASOM | 21,786 | 10,938 | 29,256 | 24,691 | 100,762 | 79,689 | 10,797
ELP10 | 49,837 | 4323 | 11,933 | 100,584 | 10,601 | 11,673 | 10,559
EXP4 | 48,523 | 11,041 | 46,752 | 19,467 | 16,621 | 16,045 | 10,503
EXP16 | 50,518 | 10,973 | 5537 | 69,494 | 10,680 | 10,500 | 10,595
GKLS250 | 43,925 | 10,869 | 41,016 | 11,430 | 50,804 | 31,298 | 10,893 (76)
GKLS350 | 48,202 | 10,750 | 56,220 | 16,831 | 40,707 | 29,897 (96) | 11,555 (96)
GRIEWANK2 | 44,021 | 13,514 | 5538 | 5533 | 10,555 | 14,419 (67) | 10,498
GRIEWANK10 | 50,557 (3) | 12,258 (86) | 5612 (13) | 85,742 (3) | 10,679 | 10,800 | 10,576
POTENTIAL3 | 49,213 | 12,124 | 5530 | 5523 | 39,607 | 33,452 | 11,039
POTENTIAL5 | 50,548 | 16,027 | 5587 | 5569 | 33,542 | 31,285 | 11,134
POTENTIAL6 | 50,558 (3) | 24,414 (66) | 5607 (6) | 5588 (3) | 28,901 (3) | 28,444 (10) | 11,143 (10)
POTENTIAL10 | 50,641 (6) | 31,434 | 5670 (3) | 5661 (6) | 42,644 (13) | 38,883 (20) | 11,290 (20)
HANSEN | 47,296 | 13,131 | 5522 | 5521 | 46,894 (90) | 45,440 | 11,055
HARTMAN3 | 47,778 | 10,961 | 5525 | 5522 | 22,235 | 19,434 | 11,097
HARTMAN6 | 50,088 (33) | 11,085 (86) | 5536 (83) | 5536 | 18,352 | 18,444 (60) | 11,273
RASTRIGIN | 47,433 | 11,594 | 5542 | 5524 | 16,567 | 16,286 (96) | 10,506
ROSENBROCK8 | 50,549 | 13,487 | 72,088 | 100,503 | 10,863 | 11,419 | 10,645
ROSENBROCK16 | 50,584 | 12,659 | 21,517 | 10,645 | 10,918 | 11,681 | 10,957
SHEKEL5 | 49,944 (33) | 13,058 (93) | 5532 (86) | 5524 (93) | 32,319 (50) | 29,287 | 10,883 (43)
SHEKEL7 | 50,062 (53) | 12,134 (96) | 5533 (96) | 5523 | 51,183 (73) | 47,245 (77) | 10,926 (53)
SHEKEL10 | 50,124 (63) | 14,176 | 5535 (90) | 5523 | 47,337 (70) | 45,911 (77) | 11,207 (80)
SINU4 | 49,239 | 11,349 | 5527 | 5510 | 66,625 (83) | 66,383 | 11,063 (76)
SINU8 | 50,224 | 11,295 | 5537 (80) | 5520 | 29,705 | 29,234 | 11,378
TEST2N4 | 50,112 (93) | 13,173 | 5529 | 5519 | 25,553 | 19,913 | 11,049
TEST2N9 | 50,517 (13) | 17,510 (60) | 5546 (6) | 5535 (56) | 18,154 | 15,376 | 11,145
TEST30N3 | 44,301 | 19,638 | 5515 | 5511 | 49,235 | 49,234 | 11,051
TEST30N4 | 49,177 | 20,839 | 5514 | 5511 | 29,667 | 33,428 | 11,301
TOTAL | 1,639,257 | 457,850 | 446,562 | 767,771 | 997,850 | 903,547 | 370,671
