Article

A Novel Approach to Combinatorial Problems: Binary Growth Optimizer Algorithm

Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2241, Valparaíso 2362807, Chile
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(5), 283; https://doi.org/10.3390/biomimetics9050283
Submission received: 11 April 2024 / Revised: 30 April 2024 / Accepted: 2 May 2024 / Published: 9 May 2024
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2024)

Abstract

The set-covering problem aims to find the smallest possible set of subsets that cover all the elements of a larger set. The difficulty of solving the set-covering problem increases as the number of elements and sets grows, making it a complex problem for which traditional integer programming solutions may become inefficient in real-life instances. Given this complexity, various metaheuristics have been successfully applied to solve the set-covering problem and related issues. This study introduces, implements, and analyzes a novel metaheuristic inspired by the well-established Growth Optimizer algorithm. Drawing its insights from human behavioral patterns, that algorithm has shown promise in optimizing complex problems in continuous domains, where experimental results have demonstrated its effectiveness and competitiveness against other strategies. Here, the Growth Optimizer algorithm is modified and adapted to the realm of binary optimization for solving the set-covering problem, resulting in the Binary Growth Optimizer algorithm. The implementation and analysis of its outcomes show that it achieves competitive and efficient solutions in terms of resolution time and result quality.

1. Introduction

There are a series of real-world problems that exhibit high complexity [1,2,3,4,5]. These problems are characterized by multiple local optima and a global optimum; owing to the large number of variables involved, exactly solving the mathematical functions that model them would take an impractical amount of time with current computing capabilities. Therefore, techniques have been studied and developed to obtain near-optimal solutions efficiently, with metaheuristic algorithms being among the most popular. These algorithms stand out mainly because of their flexibility, their ease of implementation, and the fact that they require no gradient information when solving a problem.
The set-covering problem is an optimization challenge in the fields of computer theory and operations research. In this problem, the goal is to find the most efficient way to cover a set of elements using a smaller set of subsets, where each subset has an associated cost. The objective is to minimize the total cost by selecting an appropriate set of subsets so that all elements are covered at least once. The set-covering problem has applications in various fields, such as route planning, resource allocation, and general decision-making. Solving this problem involves striking a balance between the number of selected subsets and the total cost required to achieve complete coverage [6].
One of the most relevant characteristics that a metaheuristic should have is the ability to possess operators that allow both exploration and exploitation of the search space. Exploitation refers to the algorithm’s ability to perform a local search, while exploration refers to its ability to perform global searches, thus enabling the heuristic to find optima throughout the entire search space.
The main aims of our work are the following:
  • Implement a binary version for Growth Optimizer.
  • Solve the set-covering problem with our proposal.
  • Carry out a deep analysis of the behavior of our proposal. We will analyze the convergence, execution times, best results obtained, distribution of fitness in the independent runs executed, and balance of exploration and exploitation.
The present work is organized as follows. Section 2 defines the combinatorial optimization problems. Section 3 describes the concept of metaheuristics. Section 4 describes the theoretical background and the set-covering problem along with its mathematical model. Section 5 defines the Growth Optimizer algorithm and the mathematical model, and the used operators are presented. Section 6 presents our proposal: a Binary Growth Optimizer for solving the SCP. In Section 7, an analysis and a discussion of the obtained results are carried out. Finally, Section 8 provides the conclusions and future work.

2. Combinatorial Optimization Problems

Optimization problems are a type of problem in mathematics and computer science that involve finding the best solution from a set of possible solutions, according to certain criteria or constraints. In these problems, the objective is to maximize or minimize an objective function, which usually represents a measure of quality or efficiency [7].
One of the major challenges in combinatorial optimization is that current computing capacity often cannot deliver optimal solutions efficiently. Among the most famous problems is the Traveling Salesman Problem [5], in which the shortest route visiting n cities must be found in order to minimize travel costs. The number of possible solutions is equal to n factorial, meaning that it grows factorially with the number of cities. Using the most powerful computer available today to solve such a problem for 20 cities by exhaustive enumeration would require approximately 15,000 years to deliver the optimal solution. In 1954, techniques such as linear programming, heuristics, and branch and bound were first used to solve instances of the Traveling Salesman Problem, and these techniques have remained among the most successful methods for solving this type of problem to this day.
In order to classify combinatorial optimization problems by complexity, various research efforts have led to four basic categorizations: P, NP, NP-complete, and NP-hard. P problems are those that can be solved by deterministic algorithms in time polynomial in the problem size n. NP problems are those that can be solved in polynomial time by nondeterministic algorithms; equivalently, a candidate solution can be verified in polynomial time. NP-complete problems are the most challenging problems within NP: every problem in NP can be transformed into them through a polynomial-time reduction. Lastly, NP-hard problems are at least as difficult as every problem in NP, but no polynomial-time algorithm is known for them, and they need not belong to NP themselves. It is important to note that all NP-complete problems are within the NP-hard classification, but not all NP-hard problems fall under the NP-complete category [8].

2.1. Continuous Optimization Problems

Continuous optimization problems are those in which the goal is to minimize an objective function within a search space defined by continuous boundaries or constraints. In these problems, solutions are continuous values, and the objective function can be computationally expensive to evaluate. There is no requirement for discrete data structures, and the search space is continuous [9].

2.2. Discrete Optimization Problems

Discrete optimization problems are characterized by having solutions represented as discrete data structures rather than continuous values. These data structures can include ordinal, categorical, or binary variables, permutations, strings, trees, or other discrete representations. In discrete optimization problems, the continuous boundaries or constraints are often not necessary. These problems involve finding the best combination or configuration of elements within a discrete set of options [9].

3. Metaheuristics

Metaheuristics, also known as stochastic search algorithms, are characterized by an iterative search that uses stochastic procedures to generate the next iterations. Each iteration may produce a candidate solution that becomes the new best local optimum found so far.
These algorithms are considered robust and easy to implement because they do not rely on structural information from an objective function, such as gradient information or convexity. This feature has contributed to their popularity in the field of combinatorial optimization. However, many of these algorithms require specifying various configuration parameters, such as population sizes, variation operators, or distribution functions, making it necessary to fine-tune them to solve different problems.
The most basic metaheuristics are instance-based. These algorithms maintain a single solution or a population of candidate solutions. The construction of new candidate solutions depends explicitly on the solutions generated previously. Prominent examples of representatives in this category include simulated annealing [10], evolutionary algorithms (EAs) [11], and tabu search [12]. To delve deeper into metaheuristics, the book by El-Ghazali Talbi is recommended [13].

4. The Set-Covering Problem (SCP)

The SCP is a classical optimization problem defined by a binary matrix, denoted as $A$, with $m$ rows and $n$ columns, where each cell holds a binary value $a_{ij} \in \{0, 1\}$:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

Let $I = \{1, 2, \dots, m\}$ and $J = \{1, 2, \dots, n\}$ represent the sets of rows and columns, respectively. The primary objective of the problem is to minimize the cost associated with a subset $S \subseteq J$, subject to the constraint that every row $i \in I$ must be covered by at least one column $j \in J$. The decision variable $x_j$ takes the value 1 if column $j$ is included in the solution subset $S$, and 0 otherwise. The problem at hand can be formally defined as the set-covering problem (SCP), which seeks to

$$\text{Minimize } Z = \sum_{j \in J} c_j \cdot x_j \tag{1}$$

subject to Equations (2) and (3),

$$\sum_{j \in J} a_{ij} \cdot x_j \geq 1 \quad \forall i \in I \quad \text{(each row must be covered)} \tag{2}$$

$$x_j \in \{0, 1\} \quad \forall j \in J \quad \text{(binary variables)} \tag{3}$$
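To make the model concrete, the following Python sketch evaluates a candidate solution against this formulation. The penalty-based handling of uncovered rows is an illustrative assumption, not the infeasibility-handling scheme of any particular solver:

```python
import numpy as np

def scp_fitness(x, A, c, penalty=1e6):
    """Evaluate a candidate SCP solution.

    x: binary vector of length n (x[j] = 1 if column j is selected).
    A: m x n binary coverage matrix (A[i, j] = 1 if column j covers row i).
    c: cost vector of length n.
    A large penalty per uncovered row makes infeasible solutions
    dominated by feasible ones (an illustrative choice).
    """
    cost = float(c @ x)                 # objective: total cost of selected columns
    uncovered = int(np.sum(A @ x < 1))  # rows violating the coverage constraint
    return cost + penalty * uncovered

# usage: 3 rows, 4 columns
A = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]])
c = np.array([2, 3, 4, 1])
x = np.array([0, 0, 1, 1])              # columns 3 and 4 cover all rows
print(scp_fitness(x, A, c))             # 5.0 (feasible, so no penalty)
```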

4.1. Applications

The SCP has a variety of real-world applications, making it highly relevant in the optimization field. These real-world problems often pose a significant computational burden on teams, necessitating the development of techniques to obtain feasible solutions within a reasonable timeframe. Examples of these real-world applications include the following.

4.1.1. Organization of the Serbian Postal Network

Efficiently organizing a postal network that ensures global service coverage is a challenging task. It involves the strategic placement of physical stations or access points where local residents can send parcels or deposit items. This problem is subject to additional constraints related to population density, access point placement costs, and city size. The primary objective is to minimize the number of permanent postal units within the postal network. This optimization process reduces the operational costs for the postal operator and minimizes the total number of employees involved in the service [14].

4.1.2. Sibling Relationship Reconstruction

In the field of genetics, there is a challenge in modeling and reconstructing sibling relationships among individuals of a single generation when parental genetic information is unavailable. This problem holds significant importance, as knowledge of familial relationships is crucial in biological applications, such as studies of mating systems, the population management of endangered species, or the estimation of hereditary traits [15].

4.1.3. Periodic Vehicle Routing Problem

In this context, the problem involves determining the minimum-cost routes for each day within a given planning horizon. These routes come with constraints that require each customer to be visited a specified number of times (chosen from a set of valid day combinations) and ensure that the required quantity of products is delivered during each visit. Another critical constraint is that the number of daily routes (each respecting the vehicle’s capacity) should not exceed the total number of available vehicles [16].

4.2. Solving Set-Covering Problem Review

The set-covering problem seeks a subset of decision variables that satisfies all coverage constraints at minimum cost, and Crawford et al. proposed an improved binary monkey search algorithm (MSA) to handle the SCP [6]. The algorithm employs a novel climbing process to enhance exploration capability and a new cooperative evolution scheme to reduce the number of infeasible solutions. Jaszkiewicz compared the computational efficiency of three state-of-the-art multi-objective metaheuristic algorithms on the SCP [17], where the computational effort needed to reach the same solution quality was compared by calculating the average value of scalarizing functions over representative samples. Kılıç and Yüzgeç proposed an antlion optimization (ALO) algorithm for the quadratic assignment problem based on tournament selection [18]: in the random walking process of ALO, a tournament selection strategy is introduced to replace the roulette method, and several equations of ALO are modified.
The minimum labeling spanning tree (MLST) is an NP-hard problem commonly applied in communication networks and data compression. To address this problem, Lin et al. introduced a binary FA that repairs infeasible solutions and eliminates redundant labels [19], making the algorithm better suited to discrete optimization. Vehicular ad hoc networks (VANETs) require robust paths connecting all nodes to achieve reliable and efficient information transmission, but classic graph theory only yields a minimum spanning tree (MST). Zhang and Zhang proposed a binary-coded ABC algorithm to construct spanning trees and applied it to roadside-vehicle communication [20]. Da Silva et al. proposed an improved maximum vertex cover algorithm to meet the strict time complexity constraints of mixed-integer linear programs (MILPs), combining the proposed algorithm with multi-start local search [21]. More of the state of the art is shown in Table 1.

5. The Growth Optimizer Algorithm

5.1. Inspiration

Growth Optimization (GO) is a metaheuristic inspired by human behavior and how individuals develop in their surroundings [33]. In this metaheuristic, each potential solution is associated with an individual within a population. These individuals are ranked based on their Growth Resistance ($GR$), which corresponds to the value of the objective function when evaluating the solution. This ranking divides the population into three groups based on a parameter $P_1$: the upper level (positions 1 to $P_1$, with the first being the leader), the middle level (positions $P_1 + 1$ to $N - P_1$), and the lower level (positions $N - P_1 + 1$ to $N$).
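As a concrete illustration of this ranking, the following minimal Python sketch (the function name and index handling are illustrative assumptions, not taken from the reference implementation) splits a population ranked by Growth Resistance into the three levels:

```python
import numpy as np

def rank_levels(GR, P1=5):
    """Split a population, ranked by Growth Resistance (ascending, so the
    best individual comes first), into the three GO levels."""
    order = np.argsort(GR)             # indices sorted from best to worst
    upper = order[:P1]                 # positions 1..P1; upper[0] is the leader
    middle = order[P1:len(GR) - P1]    # positions P1+1 .. N-P1
    lower = order[len(GR) - P1:]       # the lowest-ranked individuals
    return upper, middle, lower

# usage with a population of N = 12 individuals (GR = objective values)
GR = np.array([7.1, 3.2, 9.8, 1.5, 6.6, 4.4, 8.0, 2.9, 5.3, 9.1, 0.9, 7.7])
upper, middle, lower = rank_levels(GR)
print(upper, middle, lower)            # the leader is index 10 (GR = 0.9)
```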

5.2. Mathematical Modeling

The GO algorithm consists of two main phases for generating solutions: the learning phase and the reflection phase, as shown in Algorithm 1.
Algorithm 1 The pseudocode of the GO algorithm.

Input: $N$ (population size), $D$ (population dimension), $ub$, $lb$, $P_1 = 5$, $P_2 = 0.001$, $P_3 = 0.3$, $FEs = 0$
Output: Global optimal solution $gbestx$
Initialize the population using $x = lb + (ub - lb) \cdot \mathrm{rand}(N, D)$ and evaluate $GR_i$ $(i = 1, \dots, N)$
while $FEs < MaxFEs$ do
    $[\sim, ind] = \mathrm{sort}(GR)$
    $xbest = x(ind(1), :)$
    % Learning phase:
    for $i = 1$ to $N$ do
        $xbetter = x(ind(\mathrm{randi}([2, 1 + P_1], 1)), :)$
        $xworse = x(ind(\mathrm{randi}([N - P_1, N], 1)), :)$
        Find two random individuals that are different from $x_i$: $x_{L1}$ and $x_{L2}$
        Compute $Gap_k$, $k = 1, 2, 3, 4$, by Equation (4)
        Compute $LF_k$, $k = 1, 2, 3, 4$, according to Equation (5)
        Compute $SF_i$ according to Equation (6)
        Compute $KA_k$, $k = 1, 2, 3, 4$, according to Equation (8)
        Complete the learning process for the $i$-th individual once according to Equation (7)
        Complete the update of the $i$-th individual according to Equation (9)
        Update $gbestx$ in real time
        $FEs = FEs + 1$
    end for
    % Reflection phase:
    for $i = 1$ to $N$ do
        Complete the reflection process for the $i$-th individual once according to Equations (10) and (11)
        Complete the update of the $i$-th individual according to Equation (9)
        Update $gbestx$ in real time
        $FEs = FEs + 1$
    end for
end while
Output: $gbestx$

5.2.1. Learning Phase

In this phase, the algorithm generates movements by using the differences between individuals to reflect how much an individual should learn based on their knowledge gap compared to others, as described in Equation (4):

$$Gap_1 = x_{best} - x_{better}, \qquad Gap_2 = x_{best} - x_{worse}, \qquad Gap_3 = x_{better} - x_{worse}, \qquad Gap_4 = x_{L1} - x_{L2} \tag{4}$$

where $x_{best}$ represents the best solution, $x_{better}$ represents one of the next $P_1 - 1$ best individuals, and $x_{worse}$ is one of the $P_1$ lowest-ranked individuals in the population. Both $x_{L1}$ and $x_{L2}$ are random individuals different from the $i$th individual.
Metrics such as the learning factor $LF_k$ in Equation (5) and $SF_i$ in Equation (6) are used to control how much an individual should learn based on the knowledge gap and their resistance to change. $LF_k$ measures the influence of $Gap_k$ on individual $i$, while $SF_i$ evaluates an individual's resistance to change compared to the rest of the population:

$$LF_k = \frac{\|Gap_k\|}{\sum_{k=1}^{4} \|Gap_k\|}, \quad (k = 1, 2, 3, 4) \tag{5}$$

$$SF_i = \frac{GR_i}{GR_{max}} \tag{6}$$
To represent the acquired knowledge and generate a new candidate solution in Equation (7), the knowledge acquisition ($KA$) formula is used, allowing each individual $i$ to absorb knowledge from the various gaps using Equation (8):

$$x_i^{It+1} = x_i^{It} + KA_1 + KA_2 + KA_3 + KA_4 \tag{7}$$

$$KA_k = SF_i \cdot LF_k \cdot Gap_k, \quad (k = 1, 2, 3, 4) \tag{8}$$

where $It$ is the current iteration number and $x_i$ is the $i$th individual who absorbs the acquired knowledge.
Subsequently, an adjustment step is carried out, in which it is evaluated whether the solutions of the next iteration are better than the previous ones (the problem is modeled here as a minimization problem). If not, these solutions can still be retained with a small retention probability, controlled by the parameter $P_2$ and $r_1$, a uniformly distributed random number in the range $[0, 1]$, preventing the loss of an individual's effort, as modeled in Equation (9):

$$x_i^{It+1} = \begin{cases} x_i^{It+1} & \text{if } f(x_i^{It+1}) < f(x_i^{It}) \\ x_i^{It+1} & \text{if } r_1 < P_2 \\ x_i^{It} & \text{otherwise} \end{cases} \tag{9}$$
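A minimal Python sketch of one learning-phase pass, following Equations (4)-(9), is given below. The random-index handling, the guards against division by zero, and the in-place update of GR are illustrative assumptions rather than the reference implementation:

```python
import numpy as np

def learning_phase(X, GR, f, P1=5, P2=0.001, rng=np.random.default_rng()):
    """One learning-phase pass over population X (N x D) with fitness
    vector GR, for a minimization objective f."""
    N, _ = X.shape
    order = np.argsort(GR)                              # best (lowest GR) first
    x_best = X[order[0]].copy()
    for i in range(N):
        x_better = X[order[rng.integers(1, P1 + 1)]]    # among the next-best group
        x_worse = X[order[rng.integers(N - P1, N)]]     # among the P1 worst
        L1, L2 = rng.choice([k for k in range(N) if k != i], size=2, replace=False)
        gaps = [x_best - x_better, x_best - x_worse,
                x_better - x_worse, X[L1] - X[L2]]      # Equation (4)
        norms = np.array([np.linalg.norm(g) for g in gaps])
        LF = norms / (norms.sum() + 1e-12)              # Equation (5)
        SF = GR[i] / (GR.max() + 1e-12)                 # Equation (6)
        KA = [SF * LF[k] * gaps[k] for k in range(4)]   # Equation (8)
        trial = X[i] + KA[0] + KA[1] + KA[2] + KA[3]    # Equation (7)
        # Equation (9): accept if better, or retain with small probability P2
        if f(trial) < f(X[i]) or rng.random() < P2:
            X[i], GR[i] = trial, f(trial)
    return X, GR
```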

5.2.2. Reflection Phase

In this phase, individuals seek to compensate for or overcome their deficiencies. Here, $ub$ and $lb$ are the upper and lower bounds of the search domain, $P_3$ is the parameter that controls the probability of reflection, and $r_2$, $r_3$, $r_4$, and $r_5$ are uniformly distributed random numbers in the range $[0, 1]$ used to determine how individuals adjust their solutions, as modeled in Equations (10) and (11):

$$x_{i,j}^{It+1} = \begin{cases} \begin{cases} lb + r_4 \cdot (ub - lb) & \text{if } r_3 < AF \\ x_{i,j}^{It} + r_5 \cdot (R_j - x_{i,j}^{It}) & \text{otherwise} \end{cases} & \text{if } r_2 < P_3 \\ x_{i,j}^{It} & \text{otherwise} \end{cases} \tag{10}$$

$$AF = 0.01 + 0.99 \times \left(1 - \frac{FEs}{MaxFEs}\right) \tag{11}$$
The algorithm also incorporates an Attenuation Factor ($AF$) that depends on the current number of evaluations $FEs$ and the maximum number of evaluations $MaxFEs$. As the algorithm progresses, the $AF$ value decreases towards 0.01, indicating that individuals increasingly avoid reinitialization and make the most of their accumulated progress. $R$ denotes an individual at the high level, serving as a reflective learning guide for the current individual $i$; $R_j$ is the knowledge of the $j$th aspect of $R$.
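The reflection phase can be sketched in Python as follows; drawing the guide R from the upper level and the per-variable looping are illustrative assumptions consistent with the description above:

```python
import numpy as np

def reflection_phase(X, GR, lb, ub, FEs, MaxFEs, P1=5, P3=0.3,
                     rng=np.random.default_rng()):
    """One reflection-phase pass, following Equations (10) and (11)."""
    N, D = X.shape
    order = np.argsort(GR)                    # best first
    AF = 0.01 + 0.99 * (1 - FEs / MaxFEs)     # Equation (11)
    for i in range(N):
        R = X[order[rng.integers(0, P1)]]     # a high-level individual as guide
        for j in range(D):
            if rng.random() < P3:             # r2: reflect on variable j
                if rng.random() < AF:         # r3: re-initialize the component
                    X[i, j] = lb + rng.random() * (ub - lb)      # r4
                else:                         # r5: learn the j-th aspect of R
                    X[i, j] += rng.random() * (R[j] - X[i, j])
    return X
```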

6. A New Binary Growth Optimizer

The binarization techniques used in continuous MHs involve transferring continuous domain values to binary domains, with the aim of maintaining the quality of moves and generating high-quality binary solutions. While some MHs operate on binary domains without a binary scheme, studies have demonstrated that continuous MHs supported by a binary scheme perform exceptionally well on multiple NP-hard combinatorial problems [34]. Examples of such MHs include the binary bat algorithm [35,36], binary particle swarm optimization [37], the binary sine–cosine algorithm [38,39], the binary salp swarm algorithm [40], the binary grey wolf optimizer [39,41], the binary dragonfly algorithm [42,43], the binary whale optimization algorithm [39], and the binary magnetic optimization algorithm [44]. In the scientific literature, two main groups of binary schemes used to solve combinatorial problems can be identified [45]. The first group comprises operators that do not alter the operations of the different elements of the MH. Within this group, two-step techniques stand out as the most widely used in recent years, as they are considered the most efficient in terms of convergence and their ability to find optimal solutions. These techniques modify the solution in a first step and discretize it into a 0 or a 1 in a second step [34]. The angle modulation technique also belongs to this group, as it has been shown to be effective in solving combinatorial problems [46]. The second group of binary schemes includes methods that alter the normal operation of an MH. One example is the quantum binary approach, which is based on applying quantum mechanics to solve combinatorial problems [47]. This group also includes set-based approaches, which focus on the selection of solution sets to improve the efficiency of the MH, and clustering-based techniques, such as the k-means approach [48,49], which modify the normal operation of the MH to improve its ability to find optimal solutions.
Unlike the original approach of GO, where candidate solutions lie in the continuous domain, the proposed Binary Growth Optimizer (BGO) uses binary strings to represent solutions, where each decision variable corresponds to either 0 or 1, as shown in Figure 1.
This new proposal, detailed in Algorithm 2, features the same moves as GO, with the addition of three key steps for obtaining binary solutions: the initial population is generated directly in binary form, and the two steps of binarization (transfer function and binarization rule) are carried out after the learning and reflection phases. These moves involve new parameters and functions in the algorithm that need to be adjusted as well.
Algorithm 2 The pseudocode of the BGO algorithm.

Input: $N$ (population size), $D$ (population dimension), $ub$, $lb$, $P_1 = 5$, $P_2 = 0.001$, $P_3 = 0.3$, $FEs = 0$
Output: Global optimal solution $gbestx$
Initialize the population using $x = \mathrm{randint}(N, D)$ and evaluate $GR_i$ $(i = 1, \dots, N)$
while $FEs < MaxFEs$ do
    $[\sim, ind] = \mathrm{sort}(GR)$
    $xbest = x(ind(1), :)$
    % Learning phase:
    for $i = 1$ to $N$ do
        $xbetter = x(ind(\mathrm{randi}([2, 1 + P_1], 1)), :)$
        $xworse = x(ind(\mathrm{randi}([N - P_1, N], 1)), :)$
        Find two random individuals that are different from $x_i$: $x_{L1}$ and $x_{L2}$
        Compute $Gap_k$, $k = 1, 2, 3, 4$, by Equation (4)
        Compute $LF_k$, $k = 1, 2, 3, 4$, according to Equation (5)
        Compute $SF_i$ according to Equation (6)
        Compute $KA_k$, $k = 1, 2, 3, 4$, according to Equation (8)
        Complete the learning process for the $i$-th individual once according to Equation (7)
        Complete the update of the $i$-th individual according to Equation (9)
        Update $gbestx$ in real time
        $FEs = FEs + 1$
    end for
    % Reflection phase:
    for $i = 1$ to $N$ do
        Complete the reflection process for the $i$-th individual once according to Equations (10) and (11)
        Complete the update of the $i$-th individual according to Equation (9)
        Update $gbestx$ in real time
        $FEs = FEs + 1$
    end for
    % Two-step binarization:
    for $i = 1$ to $N$ do
        Compute the transfer function of the $i$-th individual
        Complete the binarization of the $i$-th individual based on the selected binarization rule
    end for
end while
Output: $gbestx$

6.1. Two-Step Binarization

In the scientific community, two-step binarization schemes are very relevant [50]. They have been widely used to solve a variety of combinatorial problems [51]. As the name suggests, this binarization scheme consists of two stages. The first stage involves the application of a transfer function [52], which transfers the values generated by the continuous MH to a continuous interval between 0 and 1. The second stage consists of the application of a binarization rule, which discretizes the numbers within that interval into binary values. This technique has been shown to be effective in solving combinatorial problems, since it allows the quality moves of the continuous MH to be preserved while generating high-quality binary solutions.

6.1.1. Transfer Function

In 1997, Kennedy and his team [53] introduced the concept of transfer functions in the field of optimization. The significant advantage of these functions lies in their ability to provide probability values in the range 0 to 1 at a low computational cost. There are several types of transfer functions, with the “S” and “V” shapes being among the most popular [52,54]. Their utility derives from their capacity to transform the values generated by continuous metaheuristics into the continuous interval from 0 to 1.
It is important to note that there is no one-size-fits-all transfer function. This is due to the well-known “no free lunch” theorem [55], which states that there is no universal optimization algorithm that excels in all situations. Consequently, this theorem allows for experimentation and the development of new optimization algorithms.

6.1.2. Binarization Rules

In the binarization stage, continuous values are converted into binary values, specifically 0 or 1. Binarization rules are applied to the probabilities obtained through the transfer function to yield binary values. The choice of binarization rule is crucial, as it directly influences the effectiveness of the solutions in the binary metaheuristic context.
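As an illustration, the following Python sketch pairs one commonly used V-shaped transfer function, T(x) = |tanh(x)|, with the standard (probabilistic) binarization rule; this particular pairing is an illustrative choice, as the text above does not fix a specific variant:

```python
import numpy as np

def v_shaped_transfer(x):
    """A common V-shaped transfer function: maps any real value into [0, 1]."""
    return np.abs(np.tanh(x))

def standard_binarization(prob, rng=np.random.default_rng()):
    """Standard rule: set each bit to 1 with the transferred probability."""
    return (rng.random(prob.shape) < prob).astype(int)

# usage: binarize a continuous candidate produced by the learning/reflection moves
x_continuous = np.array([-1.8, 0.05, 2.4, -0.3])
probs = v_shaped_transfer(x_continuous)     # step 1: transfer
x_binary = standard_binarization(probs)     # step 2: binarization rule
print(probs.round(3), x_binary)
```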

7. Results and Discussion

7.1. Experimental Setup

For the experiments, the BGO was compared with three widely known metaheuristics: the grey wolf optimizer (GWO) [56], the Pendulum Search Algorithm (PSA) [57], and the sine–cosine algorithm (SCA) [58]. A selection of 49 different SCP instances was made, and each instance underwent a total of 31 experiments using a V-type transfer function. The main features of the instances are shown in Table 2, where the density corresponds to the percentage of non-zero entries in the matrix.
Regarding the configuration of the metaheuristics, we used the following strategies:
  • Solution initialization: As a solution initialization strategy, we used the generation of random solutions. Since we were solving a binary problem, each decision variable was randomly assigned a value of 1 or 0.
  • Termination conditions: In the literature, we found different termination criteria, such as the number of calls to the objective function [59] or the total number of iterations [60,61,62] that the metaheuristic will perform. We considered the total number of iterations as the termination criterion in our work. After preliminary experiments, it was determined to use 30 as the total number of iterations.
  • Population size: As we worked with population-based metaheuristics, defining the number of individuals to use in the experimentation was key. After preliminary experiments, it was determined that 40 individuals would be used as the population size.
The hardware used in the experiments included an Intel Core i5-9400F processor operating at 2.90 GHz, 16.00 GB of RAM, and a 64-bit operating system with an x64 processor. Since we were experimenting with stochastic algorithms, we ran each experiment 31 times independently. All the code used for our experimentation can be found in [63].
These experiments were subjected to an analysis that delivered comparative convergence graphs, boxplots, time graphs, and exploration vs. exploitation charts. Additionally, a statistical analysis was conducted to assess the behavior of the Binary Growth Optimizer and enable a comparison with other implemented metaheuristics, thereby establishing a common framework for comparison with typical techniques.

7.2. Experimental Results

Table 3 shows the results obtained in the experimentation and has the following format: the first column refers to the solved SCP instance; the second column refers to the known optimum of the instance (if the instance has no known optimum, it is marked with “-”). The remaining columns are repeated in groups of three, one group per metaheuristic: the first column of each group reports the best result obtained over the 31 independent runs, the second reports the average fitness over those runs, and the third reports the Relative Percentage Distance (RPD), calculated according to Equation (12),
$$\text{RPD} = \frac{100 \cdot (Best - Opt)}{Opt} \tag{12}$$
where $Opt$ corresponds to the optimum of the instance and $Best$ corresponds to the best value obtained in the experiment.
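For example, if the known optimum of an instance is $Opt = 512$ and the best value found is $Best = 520$, then $\text{RPD} = 100 \cdot (520 - 512)/512 \approx 1.56\%$; an RPD of 0 indicates that the optimum was reached.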
By reviewing Table 3, we can see that the BGO obtains competitive results compared to the other algorithms. We can even highlight that the BGO reaches the optimum in six instances.

7.3. Convergence vs. Iterations

One crucial aspect when evaluating the performance of metaheuristics is the speed at which they converge towards an optimal or near-optimal solution. In Figure 2, a consistent trend is observed: the Binary Growth Optimizer demonstrates a remarkable ability to converge more rapidly than the other three metaheuristics. The following can be highlighted for each of them:
  • Binary Growth Optimizer: In all cases, the Binary Growth Optimizer exhibited faster convergence in fewer iterations. This suggests that this metaheuristic can find high-quality solutions in fewer iterations, which could be beneficial in practical applications where efficiency is crucial.
  • Grey wolf optimizer: While the grey wolf optimizer did not reach the convergence speed of the Binary Growth Optimizer, it still stood out as the second-best option in terms of convergence speed. This indicates its ability to find reasonable solutions within a reasonable time frame.
  • Sine–cosine algorithm: This algorithm showed a moderate convergence speed compared to the two aforementioned metaheuristics. Although it may not be the fastest choice, it remains an effective metaheuristic for addressing SCP instances.
  • Pendulum Search Algorithm: In the graphs, the PSA exhibited slower convergence compared to the other three metaheuristics. This suggests that it may require more iterations to reach comparable solutions.

7.4. Fitness Distribution

For the instance scp43 in Figure 3c, significant differences in the results are observed among the various metaheuristics. The BGO displays a boxplot that is close to a line, indicating rapid convergence towards low fitness values. On the other hand, the other metaheuristics maintain more uniform boxplots, suggesting more consistent performance in this particular instance.
For the instance scp52 in Figure 3l, all the metaheuristics exhibit fairly similar performance, with minimal differences in the maximum and minimum values obtained. Particularly, the PSA presents higher maximum values compared to the others, while the GWO shows the lowest minimum values. This suggests that all the metaheuristics converge within a range of similar solutions in this instance, although the PSA tends to explore solutions with a higher maximum fitness value, indicating a broader search for high-quality solutions.
For the instance scpb4 in Figure 3ah, the results are mostly similar among the different metaheuristics, although it is observed that the PSA displays a slightly higher boxplot compared to the others. This suggests that in this particular instance, the PSA may have a tendency to explore solutions with a moderately higher level of fitness compared to the other metaheuristics.
For the instance scpb5 in Figure 3ai, an interesting pattern is observed in the results of the metaheuristics. All the metaheuristics show a distribution of fitness values that remains within a relatively narrow range. However, it is important to note that the PSA and GWO exhibit higher maximum fitness values compared to the other metaheuristics in this particular instance. This suggests that both the PSA and GWO have the capability to explore solutions with a higher maximum fitness value in this specific problem configuration.
This variation in the results emphasizes the importance of considering the characteristics and conditions of each individual instance when selecting the most suitable metaheuristic to address optimization problems.
For the instance scpnrg5 in Figure 3au, it is observed that all the metaheuristics maintain boxplots with high performance values. However, it is noteworthy that the BGO displays the lowest minimum values in this instance.
For the instance scpnrh2 in Figure 3av, it is observed that the SCA and BGO maintain similar boxplots, while the GWO shows a tendency to converge towards lower-fitness solutions while retaining the capability to reach higher values.
For the instance scpnrh4 in Figure 3aw, all the metaheuristics exhibit fairly similar performance, with minimal differences in the maximum and minimum values obtained.
For the instance scpnrh5 in Figure 3ax, interesting patterns are observed in the results of the different metaheuristics. In particular, the following are true:
  • The SCA exhibits a complete range of maximum and minimum fitness values, suggesting a broad exploration of solutions.
  • The PSA stands out for having higher maximum values than the other metaheuristics, although it also presents exceptionally low whiskers. This indicates its ability to reach high-quality solutions but also a tendency towards less optimal solutions.
  • The GWO closely follows the SCA, covering a similar fitness range but with more solutions at the lower end, indicating effective exploration of the search space.
  • The BGO has values slightly below the average and shows whiskers at the upper end, suggesting a tendency to converge towards better fitness values in this specific instance.
These results highlight how each metaheuristic has its own way of approaching the problem based on its specific characteristics and parameters, which can lead to varied results in different problem configurations.

7.5. Exploration vs. Exploitation

Next, Figure 4 presents the exploration and exploitation of the BGO algorithm when solving the scp43 instance. This allows us to analyze the movements made by the metaheuristic and how it finds new solutions.
In the graph, quick convergence to 93 percent for exploitation and 7 percent for exploration is shown. This early convergence indicates that the metaheuristic is capable of finding a neighborhood of solutions close to the optimum in a few iterations.
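Such percentages are typically derived from population diversity. The following Python sketch assumes a commonly used dimension-wise diversity measure from the literature, which may differ from the exact measure used in this work; div_max is the largest diversity value observed over the whole run:

```python
import numpy as np

def xpl_xpt(X, div_max):
    """Exploration (XPL%) and exploitation (XPT%) percentages from the
    dimension-wise diversity of population X (N individuals x D variables)."""
    med = np.median(X, axis=0)           # per-dimension median individual
    div = np.mean(np.abs(med - X))       # average distance to the median
    xpl = 100.0 * div / div_max          # high diversity -> exploration
    xpt = 100.0 * abs(div - div_max) / div_max
    return xpl, xpt
```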

7.6. Time vs. Iterations

Figure 5 presents graphs that display time as a function of iterations for the different metaheuristics during problem resolution. One notable finding is that the BGO requires more time in the initial iterations compared to the other metaheuristics.
This suggests that the BGO makes more time-consuming initial moves. However, this initial time investment has a significant benefit because the BGO tends to converge more rapidly towards high-quality solutions compared to the other metaheuristics. In other words, the BGO achieves early convergence towards an optimum in fewer iterations compared to the other MH.

7.7. Statistical Tests

Before conducting the statistical test, it was essential to determine the appropriate type of test to be employed, distinguishing between a parametric and a non-parametric test.
A non-parametric statistical test was deemed suitable for our analysis as the data originated from machines rather than exhibiting a normal distribution inherent in natural occurrences. Given the independence of our sample sets, the Wilcoxon–Mann–Whitney test [64,65] was selected for the statistical analysis. This test allows for the identification of algorithms that are significantly better than others. It offers detailed information about pairs of algorithms with significant differences in terms of performance.
Utilizing the scipy Python library, the chosen statistical test was applied using the function scipy.stats.mannwhitneyu, with the “alternative” parameter specified as “less”. The evaluation involved contrasting two distinct metaheuristics, MhA and MhB; a resulting p-value below 0.05 indicates that sample MhA is statistically smaller than sample MhB. Consequently, the following hypotheses were formulated in Equations (13) and (14):
$$H_0: \text{MhA} \geq \text{MhB} \tag{13}$$

$$H_1: \text{MhA} < \text{MhB} \tag{14}$$
If the p-value obtained from the statistical test was less than 0.05, $H_0$ was rejected, and MhA could be asserted to perform statistically better than MhB. This interpretation follows from the problem being a minimization problem, where smaller fitness values are better. To verify the findings presented, a Wilcoxon–Mann–Whitney test was conducted for each instance.
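A minimal sketch of this per-instance comparison is shown below; the fitness samples are illustrative placeholders, not values from Table 4:

```python
from scipy import stats

# fitness values from independent runs of two metaheuristics (placeholder data)
mha = [512, 514, 513, 515, 512, 516, 513]   # e.g., BGO
mhb = [518, 517, 520, 519, 521, 518, 522]   # e.g., a competitor

# H0: MhA >= MhB vs. H1: MhA < MhB (minimization: smaller fitness is better)
stat, p_value = stats.mannwhitneyu(mha, mhb, alternative="less")
if p_value < 0.05:
    print(f"p = {p_value:.4f}: MhA is statistically better (reject H0)")
else:
    print(f"p = {p_value:.4f}: no significant difference can be claimed")
```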
In Table 4, we can see the statistical test results when comparing the BGO against the other three metaheuristics in each solved instance. When the BGO is statistically better than another metaheuristic, the obtained p-value is shown in green and bold. When the BGO is statistically worse than another metaheuristic, the p-value is shown in red and bold. Otherwise, there is no statistical difference between the BGO and the other metaheuristic.
Based on the findings presented in Table 4, the summary in Table 5 illustrates the total occurrences where the BGO exhibits statistically lesser results (win) and statistically greater results (loss), and where there is no significant difference compared to the other metaheuristics.
Based on Table 5, the instances where the null hypothesis of the statistical test (which states that there is no significant difference between the compared results) can be rejected are more clearly observable. It can be noted that in 61 instances, the BGO was statistically superior to its competitors, and in 85 instances, there was no statistically significant difference. Only in four instances did the obtained results favor the compared metaheuristic. These findings underscore the robust performance of the BGO, achieving strongly competitive and even superior outcomes.
The analysis of results across the twelve instances of the SCP reveals some interesting trends.
  • Rapid Convergence of the Binary Growth Optimizer (BGO): The BGO demonstrated swift convergence towards solutions with low fitness compared to the other metaheuristics. This suggests that the BGO can find high-quality solutions in fewer iterations.
  • High Competitiveness of BGO: Comparing the results obtained from Table 5, there is statistical evidence of the BGO’s performance compared to well-recognized metaheuristics, achieving very good and often superior results.
  • Behaviors across Different Instances: As expected, similar to its competitors, the BGO showcases varying behaviors across different studied instances, sometimes displaying highly precise fitness and sometimes not. It is crucial to consider this factor when implementing the BGO for problem-solving in real-world scenarios beyond controlled laboratory settings.
  • Success of Implementing the Growth Optimizer as the Binary Growth Optimizer: The Growth Optimizer is a high-performing metaheuristic in continuous optimization spaces, where various parameter configurations were tested to find the best-performing one. In this work, the Binary Growth Optimizer was employed using the recommended parameters. However, altering these parameters might produce different, potentially superior outcomes.
  • Binarization Strategies: The use of binarization strategies is crucial for solving SCPs; utilizing the V-type transfer function yielded excellent results. Other transfer function types remain to be explored to observe the BGO’s behavior comprehensively.
In summary, each metaheuristic displayed its unique characteristics and strategies based on the specific instances of the SCP. The BGO stood out significantly for its rapid convergence in few iterations, high competitiveness, and excellent results.

7.8. Underlying Mechanisms

Next, we describe the distinctive features of the BGO that facilitate this performance and set it apart from other metaheuristic methods.
  • Integration of Learning and Reflection: The BGO distinguishes itself by its dual approach that combines two fundamental phases: the learning phase and the reflection phase. In the learning phase, the algorithm explores the search space by selecting “better” and “worse” solutions for each individual, fostering diversity and global exploration. On the other hand, in the reflection phase, the BGO employs reflection on decision variables to generate new random positions, promoting local exploitation and convergence towards optimal solutions.
  • Adaptive Dynamics and Escape from Local Minima: The BGO introduces adaptive dynamics that dynamically adjust the probability of accepting worse solutions during the search. This feature allows the algorithm to adapt to the changing nature of the search landscape, increasing exploration in early stages and focusing on exploitation in later stages, thus enhancing the algorithm’s ability to find high-quality solutions in different types of optimization problems. To address the challenge of local minima, the BGO incorporates diversification mechanisms that enable the occasional acceptance of worse solutions with a small probability. This strategy helps the algorithm avoid getting stuck in suboptimal local minima and explore new regions of the search space, increasing the chances of finding globally optimal solutions.
  • Well-defined Exploration and Exploitation Stages: The BGO is characterized by well-defined exploration and exploitation stages, allowing it to conduct extensive sampling of the search space in the early iterations and then focus on promising regions to refine solutions. This alternation between exploration and exploitation contributes to an efficient search and rapid convergence towards optimal solutions.
  • Initialization with Longer Time and Rapid Convergence Compared to other metaheuristics: The BGO may require more time in initialization due to its dual approach and adaptive dynamics. However, once the search is underway, the BGO demonstrates faster convergence towards optimal solutions thanks to its ability to effectively explore and exploit the search space.

8. Conclusions

This analysis of twelve instances of the set-covering problem (SCP) revealed distinct trends among various metaheuristics. The Binary Growth Optimizer (BGO) emerged as a standout performer, showcasing rapid convergence towards solutions with low fitness in comparison to its counterparts. This rapid convergence suggests the BGO’s ability to attain high-quality solutions within a reduced number of iterations, signifying its efficiency in problem-solving.
The BGO’s success can be attributed to its unique mechanisms, including the integration of learning and reflection, adaptive dynamics, and well-defined exploration and exploitation stages. These mechanisms enable the BGO to adapt dynamically to changing landscapes, effectively explore diverse regions of the search space, and converge towards high-quality solutions in a reduced number of iterations. The BGO consistently achieved excellent and often superior results compared to established metaheuristics, showcasing its high competitiveness.
Moreover, statistical comparisons against well-established metaheuristics demonstrated the BGO’s high competitiveness, consistently achieving excellent and often superior results. However, it is essential to note that the BGO displayed diverse behaviors across different instances, implying variability in its performance. Understanding these variances is crucial when implementing the BGO for real-world problem-solving outside controlled environments.
The successful adaptation from the Growth Optimizer to the Binary Growth Optimizer highlights the potential for parameter optimization, indicating that fine-tuning parameters could potentially enhance the BGO’s performance further.
Additionally, the importance of binarization strategies in solving SCPs was underscored, particularly the success observed with the V-type transfer function. However, further exploration of alternative transfer function types remains an area for potential investigation to comprehensively understand the BGO’s behavior.
In essence, the BGO’s distinguishing traits of rapid convergence, competitiveness, and consistently excellent outcomes position it as a promising metaheuristic for solving SCPs while acknowledging the need for deeper exploration and parameter optimization for its maximal utilization. The unique approach of the BGO makes it suitable for a wide range of multidimensional optimization problems across various domains, including engineering, logistics, planning, and bioinformatics, given its ability to find high-quality solutions in complex search spaces. This versatility renders it a valuable tool for researchers and professionals facing optimization challenges in their respective fields.

Author Contributions

Conceptualization, B.C. and F.C.-C.; methodology, B.C., F.C.-C. and R.S.; software, F.C.-C., D.L. and B.R.-T.; validation, D.L. and B.R.-T.; formal analysis, B.C., F.C.-C. and R.S.; investigation, D.L., B.R.-T., B.C., F.C.-C. and R.S.; resources, D.L., B.R.-T. and F.C.-C.; writing—original draft D.L., B.R.-T., B.C., F.C.-C. and R.S.; writing—review and editing, D.L., B.R.-T. and B.C.; supervision, B.C., F.C.-C. and R.S.; funding acquisition, B.C. and R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All the results of this work are available at the GitHub repository (https://github.com/BenjaminAleRamosT/BGO/tree/main (accessed on 30 April 2024)).

Acknowledgments

The work of Broderick Crawford Labrin was supported by the Spanish Ministry of Science and Innovation Project PID2019-109891RB-I00, under the European Regional Development Fund (FEDER). Felipe Cisternas-Caneo was supported by the National Agency for Research and Development (ANID)/Scholarship Program/DOCTORADO NACIONAL/2023-21230203.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MH      Metaheuristic
SCP     Set-covering problem
P       Polynomial time
NP      Nondeterministic polynomial time
EAs     Evolutionary algorithms
MSA     Binary monkey search algorithm
ALO     Antlion optimization
CSA     Crow search algorithm
GA      Genetic Algorithm
MLST    Minimum labeling spanning tree
VANETs  Vehicular ad hoc networks
MST     Minimum spanning tree
MILP    Mixed-integer linear programs
ESN     Echo state network
ELM     Extreme Learning Machines
MRE     Magnetorheological elastomer
CNN     Convolutional neural network
NN      Neural network
DE      Differential evolution
GO      Growth Optimizer
BGO     Binary Growth Optimizer
GR      Growth Resistance
It      Current iteration
LF      Learning factor
KA      Knowledge acquisition
AF      Attenuation factor
FEs     Current number of evaluations
MaxFEs  Maximum number of evaluations
ub      Domain upper bound
lb      Domain lower bound
D       Population dimension
GWO     Grey wolf optimizer
PSA     Pendulum Search Algorithm
SCA     Sine–cosine algorithm

References

  1. Karp, R.M. On the Computational Complexity of Combinatorial Problems. Networks 1975, 5, 45–68. [Google Scholar] [CrossRef]
  2. Karp, R.M. Reducibility Among Combinatorial Problems. In 50 Years of Integer Programming; Springer: Berlin/Heidelberg, Germany, 2009; pp. 219–241. [Google Scholar]
  3. Cook, S.A. The Complexity of Theorem Proving Procedures. In Proceedings of the 3rd Annual ACM Symposium on Theory of Computing, Shaker Heights, OH, USA, 3–5 May 1971; pp. 151–158. [Google Scholar]
  4. Cook, S.A. Characterizations of Pushdown Machines in Terms of Time-Bounded Computers. J. ACM 1971, 18, 4–18. [Google Scholar] [CrossRef]
  5. Jünger, M.; Reinelt, G.; Rinaldi, G. Chapter 4: The Traveling Salesman Problem. In Handbooks in Operations Research and Management Science; Elsevier: Amsterdam, The Netherlands, 1995; pp. 225–330. [Google Scholar]
  6. Crawford, B.; Soto, R.; Olivares, R.; Embry, G.; Flores, D.; Palma, W.; Castro, C.; Paredes, F.; Rubio, J.M. A Binary Monkey Search Algorithm Variation for Solving the Set Covering Problem. Nat. Comput. 2019, 19, 825–841. [Google Scholar] [CrossRef]
  7. Xia, T.; Zhang, M.; Wang, S. Dynamic System Stability Modeling Approach with Sparrow-Inspired Meta-Heuristic Optimization Algorithm. Biomimetics 2023, 8, 424. [Google Scholar] [CrossRef] [PubMed]
  8. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; W. H. Freeman and Company: New York, NY, USA, 1979; pp. 13–45+190–194. [Google Scholar]
  9. Bartz-Beielstein, T.; Zaefferer, M. Model-based Methods for Continuous and Discrete Global Optimization. Appl. Soft Comput. 2017, 55, 154–167. [Google Scholar] [CrossRef]
  10. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  11. Sloss, A.N.; Gustafson, S. 2019 Evolutionary Algorithms Review. arXiv 2019, arXiv:1906.08870. [Google Scholar]
  12. Glover, F.; Laguna, M.; Marti, R. Tabu Search; Springer: New York, NY, USA, 1997; Volume 16. [Google Scholar] [CrossRef]
  13. Talbi, E.G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  14. Šarac, D.; Kopić, M.; Mostarac, K.; Kujačić, M.; Jovanović, B. Application of Set Covering Location Problem for Organizing the Public Postal Network. Promet-Traffic Transp. 2016, 28, 403–413. [Google Scholar] [CrossRef]
  15. Chaovalitwongse, W.; Berger-Wolf, T.; Dasgupta, B.; Ashley, M. Set covering approach for reconstruction of sibling relationships. Optim. Methods Softw. 2007, 22, 11–24. [Google Scholar] [CrossRef]
  16. Cacchiani, V.; Hemmelmayr, V.; Tricoire, F. A set-covering based heuristic algorithm for the periodic vehicle routing problem. Discret. Appl. Math. 2014, 163, 53–64. [Google Scholar] [CrossRef] [PubMed]
  17. Jaszkiewicz, A. Do multiple-objective metaheuristics deliver on their promises? A computational experiment on the set-covering problem. IEEE Trans. Evol. Comput. 2003, 7, 133–143. [Google Scholar] [CrossRef]
  18. Kılıç, H.; Yüzgeç, U. Tournament selection based antlion optimization algorithm for solving quadratic assignment problem. Eng. Sci. Technol. 2019, 22, 673–691. [Google Scholar] [CrossRef]
  19. Lin, M.; Liu, F.; Zhao, H.; Chen, J. A novel binary firefly algorithm for the minimum labeling spanning tree problem. Comput. Model. Eng. Sci. 2020, 125, 197–214. [Google Scholar]
  20. Zhang, X.; Zhang, X. A binary artificial bee colony algorithm for constructing spanning trees in vehicular ad hoc networks. Ad Hoc Netw. 2017, 58, 198–204. [Google Scholar] [CrossRef]
  21. Da Silva, T.; Queiroga, E.; Ochi, L.; Cabral, L.A.F.; Gueye, S.; Michelon, P. A hybrid metaheuristic for the minimum labeling spanning tree problem. Eur. J. Oper. Res. 2019, 274, 22–34. [Google Scholar] [CrossRef]
  22. Jimenez, R.; Jurado-Pina, R. A simple genetic algorithm for calibration of stochastic rock discontinuity networks. Rock Mech. Rock Eng. 2012, 45, 461–473. [Google Scholar] [CrossRef]
  23. Taormina, R.; Chau, K. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and extreme learning machines. J. Hydrol. 2015, 529, 1617–1632. [Google Scholar] [CrossRef]
  24. Taormina, R.; Chau, K.; Sivakumar, B. Neural network river forecasting through baseflow separation and binary-coded swarm optimization. J. Hydrol. 2015, 529, 1788–1797. [Google Scholar] [CrossRef]
  25. Vuolio, T.; Visuri, V.; Sorsa, A.; Paananen, T.; Fabritius, T. Genetic algorithm-based variable selection in prediction of hot metal desulfurization kinetics. Steel Res. Int. 2019, 90, 1900090. [Google Scholar] [CrossRef]
26. Liu, J.; Sun, T.; Luo, Y.; Yang, S.; Cao, Y.; Zhai, J. Echo state network optimization using binary grey wolf algorithm. Neurocomputing 2020, 385, 310–318.
27. Yu, Y.; Li, Y.; Li, J.; Gu, X.; Royel, S. Nonlinear characterization of the MRE isolator using binary-coded discrete CSO and ELM. Int. J. Struct. Stab. Dyn. 2018, 18, 1840007.
28. Joshi, S.; Kumar, R.; Dwivedi, A. Hybrid DSSCS and convolutional neural network for peripheral blood cell recognition system. IET Image Process. 2020, 14, 4450–4460.
29. Ahmad, M.; Abdullah, M.; Moon, H.; Yoo, S.J.; Han, D. Image classification based on automatic neural architecture search using binary crow search algorithm. IEEE Access 2020, 8, 189891–189912.
30. Zhang, L.; Li, H.; Kong, X. Evolving feedforward artificial neural networks using a two-stage approach. Neurocomputing 2019, 360, 25–36.
31. Sadiq, A.; Tahir, M.; Ahmed, A.; Alghushami, A. Normal parameter reduction algorithm in soft set based on hybrid binary particle swarm and biogeography optimizer. Neural Comput. Appl. 2020, 32, 12221–12239.
32. Zhang, M.; Yang, F.; Zhang, D.; Tang, P. Research on charging and discharging control strategy for electric vehicles as distributed energy storage devices. IOP Conf. Ser. Earth Environ. Sci. 2018, 121, 042019.
33. Zhang, Q.; Gao, H.; Zhan, Z.H.; Li, J.; Zhang, H. Growth Optimizer: A powerful metaheuristic algorithm for solving continuous and discrete global optimization problems. Knowl.-Based Syst. 2023, 261, 110206.
34. Crawford, B.; Soto, R.; Astorga, G.; García, J.; Castro, C.; Paredes, F. Putting Continuous Metaheuristics to Work in Binary Search Spaces. Complexity 2017, 2017, 8404231.
35. Mirjalili, S.; Mirjalili, S.; Yang, X. Binary Bat Algorithm. Neural Comput. Appl. 2014, 25, 663–681.
36. Rodrigues, D.; Pereira, L.; Nakamura, R.; Costa, K.; Yang, X.; Souza, A.; Papa, J. A Wrapper Approach for Feature Selection Based on Bat Algorithm and Optimum-Path Forest. Expert Syst. Appl. 2014, 41, 2250–2258.
37. Khanesar, M.A.; Teshnehlab, M.; Shoorehdeli, M.A. A novel binary particle swarm optimization. In Proceedings of the 2007 Mediterranean Conference on Control & Automation, Athens, Greece, 27–29 June 2007; pp. 1–6.
38. Cisternas-Caneo, F.; Crawford, B.; Soto, R.; de la Fuente-Mella, H.; Tapia, D.; Lemus-Romani, J.; Castillo, M.; Becerra-Rozas, M.; Paredes, F.; Misra, S. A Data-Driven Dynamic Discretization Framework to Solve Combinatorial Problems Using Continuous Metaheuristics. In Innovations in Bio-Inspired Computing and Applications; Abraham, A., Sasaki, H., Rios, R., Gandhi, N., Singh, U., Ma, K., Eds.; Springer: Cham, Switzerland, 2021; pp. 76–85.
39. Lemus-Romani, J.; Becerra-Rozas, M.; Crawford, B.; Soto, R.; Cisternas-Caneo, F.; Vega, E.; Castillo, M.; Tapia, D.; Astorga, G.; Palma, W.; et al. A Novel Learning-Based Binarization Scheme Selector for Swarm Algorithms Solving Combinatorial Problems. Mathematics 2021, 9, 2887.
40. Faris, H.; Mafarja, M.; Heidari, A.; Aljarah, I.; Ala’M, A.; Mirjalili, S.; Fujita, H. An Efficient Binary Salp Swarm Algorithm with Crossover Scheme for Feature Selection Problems. Knowl.-Based Syst. 2018, 154, 43–67.
41. Sharma, P.; Sundaram, S.; Sharma, M.; Sharma, A.; Gupta, D. Diagnosis of Parkinson’s Disease Using Modified Grey Wolf Optimization. Cogn. Syst. Res. 2019, 54, 100–115.
42. Mafarja, M.; Aljarah, I.; Heidari, A.; Faris, H.; Fournier-Viger, P.; Li, X.; Mirjalili, S. Binary Dragonfly Optimization for Feature Selection Using Time-Varying Transfer Functions. Knowl.-Based Syst. 2018, 161, 185–204.
43. Eluri, R.; Devarakonda, N. Binary Golden Eagle Optimizer with Time-Varying Flight Length for Feature Selection. Knowl.-Based Syst. 2022, 247, 108771.
44. Mirjalili, S.; Hashim, S. BMOA: Binary Magnetic Optimization Algorithm. Int. J. Mach. Learn. Comput. 2012, 2, 204.
45. Lemus-Romani, J.; Crawford, B.; Cisternas-Caneo, F.; Soto, R.; Becerra-Rozas, M. Binarization of Metaheuristics: Is the Transfer Function Really Important? Biomimetics 2023, 8, 400.
46. Leonard, B.; Engelbrecht, A.; Cleghorn, C. Critical Considerations on Angle Modulated Particle Swarm Optimisers. Swarm Intell. 2015, 9, 291–314.
47. Zhang, G. Quantum-Inspired Evolutionary Algorithms: A Survey and Empirical Study. J. Heuristics 2011, 17, 303–351.
48. García, J.; Crawford, B.; Soto, R.; Castro, C.; Paredes, F. A K-Means Binarization Framework Applied to Multidimensional Knapsack Problem. Appl. Intell. 2018, 48, 357–380.
49. García, J.; Moraga, P.; Valenzuela, M.; Crawford, B.; Soto, R.; Pinto, H.; Peña, A.; Altimiras, F.; Astorga, G. A Db-Scan Binarization Algorithm Applied to Matrix Covering Problems. Comput. Intell. Neurosci. 2019, 2019, 3238574.
50. Becerra-Rozas, M.; Lemus-Romani, J.; Cisternas-Caneo, F.; Crawford, B.; Soto, R.; Astorga, G.; Castro, C.; García, J. Continuous Metaheuristics for Binary Optimization Problems: An Updated Systematic Literature Review. Mathematics 2022, 11, 129.
51. Saremi, S.; Mirjalili, S.; Lewis, A. How Important Is a Transfer Function in Discrete Heuristic Algorithms. Neural Comput. Appl. 2015, 26, 625–640.
52. Mirjalili, S.; Lewis, A. S-shaped Versus V-shaped Transfer Functions for Binary Particle Swarm Optimization. Swarm Evol. Comput. 2013, 9, 58–68.
53. Kennedy, J.; Eberhart, R. A Discrete Binary Version of the Particle Swarm Algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; Volume 5, pp. 4104–4108.
54. Rajalakshmi, N.; Padma Subramanian, D.; Thamizhavel, K. Performance Enhancement of Radial Distributed System with Distributed Generators by Reconfiguration Using Binary Firefly Algorithm. J. Inst. Eng. Ser. B 2015, 96, 91–99.
55. Wolpert, D.; Macready, W. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
56. Mirjalili, S.; Mirjalili, S.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
57. Ab. Aziz, N.A.; Ab. Aziz, K. Pendulum search algorithm: An optimization algorithm based on simple harmonic motion and its application for a vaccine distribution problem. Algorithms 2022, 15, 214.
58. Taghian, S.; Nadimi-Shahraki, M. Binary Sine Cosine Algorithms for Feature Selection from Medical Data. arXiv 2019, arXiv:1911.07805.
59. Lanza-Gutierrez, J.M.; Crawford, B.; Soto, R.; Berrios, N.; Gomez-Pulido, J.A.; Paredes, F. Analyzing the effects of binarization techniques when solving the set covering problem through swarm optimization. Expert Syst. Appl. 2017, 70, 67–82.
60. Cisternas-Caneo, F.; Crawford, B.; Soto, R.; Giachetti, G.; Paz, Á.; Peña Fritz, A. Chaotic Binarization Schemes for Solving Combinatorial Optimization Problems Using Continuous Metaheuristics. Mathematics 2024, 12, 262.
61. Al-Tashi, Q.; Abdul Kadir, S.J.; Rais, H.M.; Mirjalili, S.; Alhussian, H. Binary Optimization Using Hybrid Grey Wolf Optimization for Feature Selection. IEEE Access 2019, 7, 39496–39508.
62. Mafarja, M.M.; Mirjalili, S. Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312.
63. Python Code Experimentation. Available online: https://github.com/BenjaminAleRamosT/BGO/tree/main (accessed on 29 April 2024).
64. Mann, H.; Whitney, D. On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Stat. 1947, 18, 50–60.
65. García, S.; Molina, D.; Lozano, M.; Herrera, F. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC’2005 special session on real parameter optimization. J. Heuristics 2009, 15, 617–644.
Figure 1. Solution representation.
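For readers who prefer code, the sketch below illustrates the binary encoding that Figure 1 depicts: a vector x of length n, where x_j = 1 means subset (column) j is selected, a solution is feasible when every element (row) is covered at least once, and the fitness is the total cost of the selected subsets. This is a minimal sketch of that standard encoding; the coverage matrix, costs, and names are illustrative, not taken from the paper.

```python
import numpy as np

# A[i, j] = 1 if subset (column) j covers element (row) i; values are illustrative.
A = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1]])
costs = np.array([3, 2, 4, 1])

def is_feasible(x, A):
    """Every element must be covered by at least one selected subset."""
    return bool(np.all(A @ x >= 1))

def fitness(x, costs):
    """Total cost of the selected subsets (the objective to minimize)."""
    return int(costs @ x)

x = np.array([1, 0, 1, 0])  # binary solution: subsets 0 and 2 selected
print(is_feasible(x, A), fitness(x, costs))  # True 7
```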
Figure 2. Fitness convergence curves.
Figure 3. Boxplots.
Figure 4. “Exploration vs. Exploitation” graph of the BGO metaheuristic when solving the scp43 instance.
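As context for curves like those in Figure 4, exploration and exploitation percentages are commonly derived from a per-iteration population-diversity measure. The sketch below implements one widely used diversity-based definition; whether it matches the exact measure behind Figure 4 is an assumption, and all names and the synthetic populations are illustrative.

```python
import numpy as np

def xpl_xpt(history):
    """history: list of population matrices (pop_size x dim), one per iteration.
    Returns per-iteration exploration% and exploitation% from population diversity."""
    divs = np.array([np.mean(np.abs(X - np.median(X, axis=0))) for X in history])
    dmax = divs.max()  # assumes a nonzero maximum diversity over the run
    xpl = 100.0 * divs / dmax                 # high diversity -> exploration
    xpt = 100.0 * np.abs(divs - dmax) / dmax  # low diversity  -> exploitation
    return xpl, xpt

# Synthetic "populations" whose spread shrinks over iterations, as convergence would:
rng = np.random.default_rng(42)
history = [rng.random((30, 1000)) * (0.9 ** t) for t in range(100)]
xpl, xpt = xpl_xpt(history)
print(xpl[0], xpt[0], xpl[-1], xpt[-1])  # exploration dominates early, exploitation late
```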
Figure 5. “Time vs. Iterations” graphs for the metaheuristics.
Table 1. Examples of metaheuristics used for set-covering problem resolution and their applications.

| Ref. | Optimization Algorithm | Application | Convergence | Complexity |
|------|------------------------|-------------|-------------|------------|
| [22] | GA | Parameter calibration | Low | Low |
| [23] | Binary PSO | Input variable selection in ELM | Medium | High |
| [24] | Binary PSO | Parameter optimization in ELM | Medium | High |
| [25] | GA | Variable selection in hot metal desulfurization kinetics | Low | Low |
| [26] | Binary GWO | ESN | Low | High |
| [27] | Binary CSO | Parameter optimization in MRE isolator | Low | High |
| [28] | CSO and salp swarm algorithm | CNN | Medium | High |
| [29] | Binary CSA | CNN | Low | High |
| [30] | DE and binary DE | NN | Medium | High |
| [31] | Binary PSO and BBO | Soft set parameter reduction | Fast | High |
| [32] | Binary PSO | Parameter optimization in electric vehicles | Medium | Low |
Table 2. Description of instances.

| Instance Set | Number of Instances | m | n | Cost Range | Density (%) | Optimal Solution |
|--------------|---------------------|---|---|------------|-------------|------------------|
| 4 | 10 | 200 | 1000 | [1, 100] | 2 | Known |
| 5 | 10 | 200 | 2000 | [1, 100] | 2 | Known |
| 6 | 5 | 200 | 1000 | [1, 100] | 5 | Known |
| A | 5 | 300 | 3000 | [1, 100] | 2 | Known |
| B | 5 | 300 | 3000 | [1, 100] | 5 | Known |
| C | 5 | 400 | 4000 | [1, 100] | 2 | Known |
| D | 5 | 400 | 4000 | [1, 100] | 5 | Known |
| NRE | 5 | 500 | 5000 | [1, 100] | 10 | Unknown |
| NRF | 5 | 500 | 5000 | [1, 100] | 20 | Unknown |
| NRG | 5 | 1000 | 10,000 | [1, 100] | 2 | Unknown |
| NRH | 5 | 1000 | 10,000 | [1, 100] | 5 | Unknown |
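These instance sets follow the usual OR-Library convention for the set-covering problem, where m is the number of rows (elements to cover) and n the number of columns (candidate subsets). As a hedged sketch, a parser for the commonly assumed file layout ("m n", then the n column costs, then, per row, the count of covering columns followed by their 1-based indices) might look like the following; the layout is an assumption about these benchmark files, and the function name is illustrative.

```python
def read_scp(path):
    """Parse a set-covering instance in the layout assumed above."""
    with open(path) as f:
        tokens = iter(f.read().split())
    m, n = int(next(tokens)), int(next(tokens))
    costs = [int(next(tokens)) for _ in range(n)]
    cover = []  # cover[i] = 0-based indices of the columns covering row i
    for _ in range(m):
        k = int(next(tokens))
        cover.append([int(next(tokens)) - 1 for _ in range(k)])
    return m, n, costs, cover
```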
Table 3. Results obtained.

| Inst. | Opt. | SCA Min | SCA Avg | SCA RPD | PSA Min | PSA Avg | PSA RPD | GWO Min | GWO Avg | GWO RPD | BGO Min | BGO Avg | BGO RPD |
|-------|------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| 41 | 429 | 431.0 | 433.75 | 0.4662 | 431.0 | 433.7812 | 0.4662 | 433.0 | 434.0 | 0.9324 | 433.0 | 433.03125 | 0.9324 |
| 42 | 512 | 523 | 527 | 2.1484 | 517 | 528.2903 | 0.9766 | 525 | 526.2258 | 2.5391 | 518 | 525.7097 | 1.1719 |
| 43 | 516 | 520 | 521.0625 | 0.7752 | 520 | 521.4688 | 0.7752 | 520 | 520.8750 | 0.7752 | 520 | 520.4063 | 0.7752 |
| 44 | 494 | 496 | 504.4516 | 0.4049 | 500 | 506.5806 | 1.2146 | 499 | 505.4194 | 1.0121 | 499 | 504.4839 | 1.0121 |
| 45 | 512 | 514 | 518.2903 | 0.3906 | 518 | 519.6774 | 1.1719 | 518 | 518.1290 | 1.1719 | 518 | 518 | 1.1719 |
| 46 | 560 | 564 | 567.8065 | 0.7143 | 565 | 569 | 0.8929 | 564 | 567.7742 | 0.7143 | 567 | 567.8710 | 1.2500 |
| 47 | 430 | 432 | 434.2903 | 0.4651 | 433 | 434.2581 | 0.6977 | 432 | 434 | 0.4651 | 433 | 433.9677 | 0.6977 |
| 48 | 492 | 493 | 494.0645 | 0.2033 | 493 | 494.3871 | 0.2033 | 493 | 493.8387 | 0.2033 | 493 | 493.2903 | 0.2033 |
| 49 | 641 | 655 | 663.7742 | 2.1841 | 656 | 667.5161 | 2.3401 | 654 | 662.7742 | 2.0281 | 653 | 662.0968 | 1.8721 |
| 410 | 514 | 517 | 522.6774 | 0.5837 | 518 | 523.6129 | 0.7782 | 519 | 523.4194 | 0.9728 | 517 | 524.0645 | 0.5837 |
| 51 | 253 | 267 | 267.8387 | 5.5336 | 267 | 267.7742 | 5.5336 | 257 | 267.0323 | 1.5810 | 267 | 267.4839 | 5.5336 |
| 52 | 302 | 315 | 318.6563 | 4.3046 | 315 | 319.5 | 4.3046 | 313 | 319.125 | 3.6424 | 315 | 319.0938 | 4.3046 |
| 53 | 226 | 229 | 231.9032 | 1.3274 | 230 | 232.0323 | 1.7699 | 229 | 231.8387 | 1.3274 | 232 | 232 | 2.6549 |
| 54 | 242 | 244 | 247.8387 | 0.8264 | 244 | 248.3226 | 0.8264 | 244 | 247.9032 | 0.8264 | 244 | 248.0968 | 0.8264 |
| 55 | 211 | 212 | 213.5161 | 0.4739 | 212 | 214.4516 | 0.4739 | 212 | 213.4194 | 0.4739 | 212 | 213.0645 | 0.4739 |
| 56 | 213 | 216 | 223.1613 | 1.4085 | 216 | 223.3548 | 1.4085 | 216 | 223.3548 | 1.4085 | 216 | 221.9032 | 1.4085 |
| 57 | 293 | 299 | 301.6774 | 2.0478 | 296 | 302.1935 | 1.0239 | 297 | 301.8065 | 1.3652 | 299 | 301.2903 | 2.0478 |
| 58 | 288 | 290 | 295.7097 | 0.6944 | 290 | 297.5161 | 0.6944 | 290 | 297.6129 | 0.6944 | 290 | 297.3871 | 0.6944 |
| 59 | 279 | 284 | 287.5161 | 1.7921 | 284 | 288.1290 | 1.7921 | 284 | 288.2581 | 1.7921 | 284 | 286.2258 | 1.7921 |
| 510 | 265 | 272 | 274.1935 | 2.6415 | 272 | 274.4194 | 2.6415 | 272 | 273.8710 | 2.6415 | 273 | 274.0323 | 3.0189 |
| 61 | 138 | 141 | 143.3871 | 2.1739 | 141 | 144.0968 | 2.1739 | 141 | 143.0323 | 2.1739 | 141 | 142.2258 | 2.1739 |
| 62 | 146 | 148 | 150.4839 | 1.3699 | 148 | 151.0645 | 1.3699 | 148 | 150.0968 | 1.3699 | 148 | 150.4194 | 1.3699 |
| 63 | 145 | 148 | 149.2258 | 2.0690 | 148 | 150.0323 | 2.0690 | 147 | 149.2258 | 1.3793 | 148 | 148.6129 | 2.0690 |
| 64 | 131 | 134 | 135.3871 | 2.2901 | 135 | 135.3871 | 3.0534 | 135 | 135.1290 | 3.0534 | 134 | 135.2258 | 2.2901 |
| 65 | 161 | 172 | 174.8710 | 6.8323 | 165 | 174.8065 | 2.4845 | 172 | 174.5161 | 6.8323 | 171 | 174.4839 | 6.2112 |
| a1 | 253 | 257 | 257.5484 | 1.5810 | 257 | 257.6774 | 1.5810 | 257 | 257 | 1.5810 | 257 | 257.0645 | 1.5810 |
| a2 | 252 | 258 | 262.2581 | 2.3810 | 258 | 263.1290 | 2.3810 | 258 | 261.3871 | 2.3810 | 258 | 261.1290 | 2.3810 |
| a3 | 232 | 235 | 241 | 1.2931 | 236 | 241.7742 | 1.7241 | 237 | 240.8387 | 2.1552 | 235 | 240.3226 | 1.2931 |
| a4 | 234 | 236 | 237.4839 | 0.8547 | 236 | 237.0645 | 0.8547 | 236 | 236.7419 | 0.8547 | 236 | 236.6129 | 0.8547 |
| a5 | 236 | 237 | 239.3226 | 0.4237 | 237 | 238.6774 | 0.4237 | 237 | 238.7742 | 0.4237 | 237 | 238.4516 | 0.4237 |
| b1 | 69 | 69 | 70.4839 | 0 | 69 | 70.6452 | 0 | 69 | 70.2581 | 0 | 69 | 70.2258 | 0 |
| b2 | 76 | 76 | 76.5806 | 0 | 76 | 77.1613 | 0 | 76 | 76.3548 | 0 | 76 | 76.1935 | 0 |
| b3 | 80 | 80 | 81.2258 | 0 | 80 | 81.2581 | 0 | 80 | 81.1290 | 0 | 81 | 81.1613 | 1.2500 |
| b4 | 79 | 79 | 81.0968 | 0 | 79 | 81.8065 | 0 | 79 | 80.5806 | 0 | 79 | 80.5161 | 0 |
| b5 | 72 | 72 | 72.3871 | 0 | 72 | 72.5484 | 0 | 72 | 72.2903 | 0 | 72 | 72.5484 | 0 |
| c1 | 227 | 232 | 234.0968 | 2.2026 | 231 | 234.3548 | 1.7621 | 232 | 233.5161 | 2.2026 | 232 | 233.4194 | 2.2026 |
| c2 | 219 | 221 | 224.5161 | 0.9132 | 221 | 225 | 0.9132 | 221 | 224.1613 | 0.9132 | 221 | 223.7419 | 0.9132 |
| c3 | 243 | 245 | 249.7742 | 0.8230 | 247 | 252.2581 | 1.6461 | 245 | 248.0323 | 0.8230 | 245 | 247.7742 | 0.8230 |
| c4 | 219 | 224 | 226.9677 | 2.2831 | 224 | 228.8387 | 2.2831 | 221 | 226.5806 | 0.9132 | 222 | 225.5161 | 1.3699 |
| c5 | 215 | 217 | 219.6129 | 0.9302 | 216 | 219.5161 | 0.4651 | 216 | 218.8387 | 0.4651 | 216 | 219.0645 | 0.4651 |
| d1 | 60 | 60 | 61.9355 | 0 | 60 | 61.8065 | 0 | 60 | 62.1290 | 0 | 61 | 62.1613 | 1.6667 |
| d2 | 66 | 67 | 68.2258 | 1.5152 | 67 | 68.0968 | 1.5152 | 67 | 68.1290 | 1.5152 | 67 | 67.7742 | 1.5152 |
| d3 | 72 | 73 | 75.8065 | 1.3889 | 73 | 76.2903 | 1.3889 | 74 | 75.6774 | 2.7778 | 74 | 75.8387 | 2.7778 |
| d4 | 62 | 62 | 63.0968 | 0 | 62 | 63.6774 | 0 | 62 | 63.2581 | 0 | 62 | 62.8387 | 0 |
| d5 | 61 | 63 | 63.1613 | 3.2787 | 62 | 63.2903 | 1.6393 | 63 | 63.1613 | 3.2787 | 63 | 63.0323 | 3.2787 |
| nre5 | - | 28 | 28.2813 | - | 28 | 28.5313 | - | 28 | 28.3125 | - | 28 | 28.1563 | - |
| nrg5 | - | 171 | 175.3125 | - | 168 | 176.3125 | - | 170 | 175.0313 | - | 172 | 175.0625 | - |
| nrh2 | - | 64 | 65.7419 | - | 64 | 66.0323 | - | 64 | 65.3226 | - | 64 | 65.2581 | - |
| nrh4 | - | 59 | 60.9355 | - | 60 | 61.1935 | - | 59 | 60.4516 | - | 59 | 60.3548 | - |
| nrh5 | - | 55 | 57.1613 | - | 55 | 57.5161 | - | 55 | 57 | - | 55 | 56.5484 | - |
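For clarity, the RPD columns in Table 3 are consistent with the usual relative percentage deviation, RPD = 100 × (Min − Opt) / Opt, which can be verified directly against the reported minima; the instances with unknown optima (NRE–NRH) have no RPD. A one-line check, assuming this standard definition:

```python
def rpd(best, opt):
    """Relative percentage deviation of a result from the known optimum."""
    return 100.0 * (best - opt) / opt

print(round(rpd(433, 429), 4))  # BGO on instance 41 -> 0.9324, as in Table 3
```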
Table 4. Matrix of p-values from the Wilcoxon–Mann–Whitney test comparing the BGO against the SCA, PSA, and GWO on each instance. In green, p-values lower than 0.05, where the BGO’s results are statistically smaller (better); in red, p-values greater than 0.95, where the BGO’s results are statistically greater (worse) than those of the other metaheuristics.

| Instance | SCA | PSA | GWO |
|----------|-----|-----|-----|
| 41 | 0.032526 | 0.017652 | 0.00505 |
| 42 | 0.083416 | 0.008453 | 0.178185 |
| 43 | 0.142326 | 0.017925 | 0.040147 |
| 44 | 0.673895 | 0.358608 | 0.17129 |
| 45 | 0.000777 | 0.000075 | 0.020949 |
| 46 | 0.777121 | 0.101089 | 0.947336 |
| 47 | 0.026426 | 0.027054 | 0.221067 |
| 48 | 0.049813 | 0.015255 | 0.092276 |
| 49 | 0.231335 | 0.000373 | 0.457734 |
| 410 | 0.971242 | 0.800105 | 0.837222 |
| 51 | 0.08422 | 0.204932 | 0.769852 |
| 52 | 0.656474 | 0.235486 | 0.368845 |
| 53 | 0.849024 | 0.509173 | 0.926297 |
| 54 | 0.378881 | 0.037574 | 0.588779 |
| 55 | 0.020766 | 0.000228 | 0.174886 |
| 56 | 0.194771 | 0.069321 | 0.212655 |
| 57 | 0.008287 | 0.006762 | 0.049149 |
| 58 | 0.990298 | 0.424729 | 0.538867 |
| 59 | 0.025665 | 0.003579 | 0.027499 |
| 510 | 0.379496 | 0.110218 | 0.883339 |
| 61 | 0.013689 | 0.00103 | 0.052988 |
| 62 | 0.396878 | 0.125416 | 0.666065 |
| 63 | 0.115385 | 0.09021 | 0.154446 |
| 64 | 0.297105 | 0.034655 | 0.552952 |
| 65 | 0.185046 | 0.147401 | 0.482672 |
| a1 | 0.000253 | 0.008388 | 0.926324 |
| a2 | 0.124053 | 0.002265 | 0.431456 |
| a3 | 0.016308 | 0.000256 | 0.100149 |
| a4 | 0.003371 | 0.012922 | 0.550884 |
| a5 | 0.000524 | 0.082315 | 0.023963 |
| b1 | 0.213058 | 0.151325 | 0.43297 |
| b2 | 0.146666 | 0.038143 | 0.375167 |
| b3 | 0.345372 | 0.245959 | 0.612847 |
| b4 | 0.090897 | 0.002352 | 0.33707 |
| b5 | 0.887149 | 0.548425 | 0.975947 |
| c1 | 0.002907 | 0.001879 | 0.385972 |
| c2 | 0.114905 | 0.028943 | 0.175642 |
| c3 | 0.000398 | 0.0 | 0.539888 |
| c4 | 0.002055 | 0.000006 | 0.039399 |
| c5 | 0.026575 | 0.320787 | 0.991447 |
| d1 | 0.892595 | 0.950455 | 0.506028 |
| d2 | 0.010529 | 0.06074 | 0.033358 |
| d3 | 0.31389 | 0.004401 | 0.887789 |
| d4 | 0.039419 | 0.000385 | 0.025442 |
| d5 | 0.504675 | 0.097037 | 0.250647 |
| nre5 | 0.1169 | 0.000888 | 0.072876 |
| nrg5 | 0.202966 | 0.009097 | 0.41873 |
| nrh2 | 0.013674 | 0.003689 | 0.440986 |
| nrh4 | 0.004821 | 0.000504 | 0.335568 |
| nrh5 | 0.011639 | 0.000306 | 0.052988 |
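The p-values in Table 4 come from one-sided Wilcoxon–Mann–Whitney tests [64] over the fitness samples of repeated runs per instance. A minimal sketch of one such comparison with SciPy is shown below; the sample data and run count are placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
bgo = rng.normal(433.0, 1.0, 31)  # placeholder fitness samples, not the paper's data
gwo = rng.normal(434.0, 1.0, 31)

# One-sided test: are BGO's fitness values stochastically smaller (better)?
stat, p = mannwhitneyu(bgo, gwo, alternative="less")
print(p)  # p < 0.05 -> BGO statistically better; p > 0.95 -> statistically worse
```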
Table 5. Matrix of occurrences from the Wilcoxon–Mann–Whitney tests.

| Instance | Win | No Significant Difference | Loss |
|----------|-----|---------------------------|------|
| 41 | 3 | 0 | 0 |
| 42 | 1 | 2 | 0 |
| 43 | 2 | 1 | 0 |
| 44 | 0 | 3 | 0 |
| 45 | 3 | 0 | 0 |
| 46 | 0 | 3 | 0 |
| 47 | 2 | 1 | 0 |
| 48 | 2 | 1 | 0 |
| 49 | 1 | 2 | 0 |
| 410 | 0 | 2 | 1 |
| 51 | 0 | 3 | 0 |
| 52 | 0 | 3 | 0 |
| 53 | 0 | 3 | 0 |
| 54 | 1 | 2 | 0 |
| 55 | 2 | 1 | 0 |
| 56 | 0 | 3 | 0 |
| 57 | 3 | 0 | 0 |
| 58 | 0 | 2 | 1 |
| 59 | 3 | 0 | 0 |
| 510 | 0 | 3 | 0 |
| 61 | 2 | 1 | 0 |
| 62 | 0 | 3 | 0 |
| 63 | 0 | 3 | 0 |
| 64 | 1 | 2 | 0 |
| 65 | 0 | 3 | 0 |
| a1 | 2 | 1 | 0 |
| a2 | 1 | 2 | 0 |
| a3 | 2 | 1 | 0 |
| a4 | 2 | 1 | 0 |
| a5 | 2 | 1 | 0 |
| b1 | 0 | 3 | 0 |
| b2 | 1 | 2 | 0 |
| b3 | 0 | 3 | 0 |
| b4 | 1 | 2 | 0 |
| b5 | 0 | 2 | 1 |
| c1 | 2 | 1 | 0 |
| c2 | 1 | 2 | 0 |
| c3 | 2 | 1 | 0 |
| c4 | 3 | 0 | 0 |
| c5 | 1 | 1 | 1 |
| d1 | 0 | 3 | 0 |
| d2 | 2 | 1 | 0 |
| d3 | 1 | 2 | 0 |
| d4 | 3 | 0 | 0 |
| d5 | 1 | 2 | 0 |
| nre5 | 1 | 2 | 0 |
| nrg5 | 1 | 2 | 0 |
| nrh2 | 2 | 1 | 0 |
| nrh4 | 2 | 1 | 0 |
| nrh5 | 2 | 1 | 0 |
| Total | 61 | 85 | 4 |
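Table 5 tallies, per instance, how the decision rule stated in Table 4’s caption plays out across the three competitors (win: p < 0.05; loss: p > 0.95; otherwise no significant difference). A small sketch of that tally, using the instance-41 row of Table 4 as a worked check:

```python
def tally(pvalues, alpha=0.05):
    """Wins / no-significant-difference / losses for one instance's p-values."""
    win = sum(p < alpha for p in pvalues)
    loss = sum(p > 1 - alpha for p in pvalues)
    return win, len(pvalues) - win - loss, loss

print(tally([0.032526, 0.017652, 0.00505]))  # instance 41 -> (3, 0, 0)
```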