Article

Cat and Mouse Based Optimizer: A New Nature-Inspired Optimization Algorithm

by Mohammad Dehghani 1, Štěpán Hubálovský 2 and Pavel Trojovský 1,*

1 Department of Mathematics, Faculty of Science, University of Hradec Králové, 500 03 Hradec Králové, Czech Republic
2 Department of Applied Cybernetics, Faculty of Science, University of Hradec Králové, 500 03 Hradec Králové, Czech Republic
* Author to whom correspondence should be addressed.
Sensors 2021, 21(15), 5214; https://doi.org/10.3390/s21155214
Submission received: 4 July 2021 / Revised: 22 July 2021 / Accepted: 29 July 2021 / Published: 31 July 2021
(This article belongs to the Collection Artificial Intelligence in Sensors Technology)

Abstract

Numerous optimization problems arising in different branches of science and in the real world must be solved using appropriate techniques. Population-based optimization algorithms are among the most important and practical of these techniques. In this paper, a new optimization algorithm called the Cat and Mouse-Based Optimizer (CMBO) is presented that mimics the natural behavior between cats and mice. In the proposed CMBO, the movement of cats towards mice as well as the escape of mice towards havens is simulated. Mathematical modeling and formulation of the proposed CMBO for implementation on optimization problems are presented. The performance of the CMBO is evaluated on a standard set of objective functions of three different types: unimodal, high-dimensional multimodal, and fixed-dimensional multimodal. The results show that the proposed CMBO has a good ability to solve various optimization problems. Moreover, the optimization results obtained from the CMBO are compared with the performance of nine other well-known algorithms: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Teaching-Learning-Based Optimization (TLBO), Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Marine Predators Algorithm (MPA), Tunicate Swarm Algorithm (TSA), and Teamwork Optimization Algorithm (TOA). This analysis shows that the CMBO is much more competitive than the compared algorithms, providing quasi-optimal solutions that are closer to the global optimum.

1. Introduction

1.1. Motivation

Optimization is the adjustment and modification of the inputs and properties of a device, a mathematical process, or an experiment in order to obtain the best output or result. Each optimization problem has three main parts: decision variables, constraints, and objective functions [1]. Decision variables should be adjusted and quantified in such a way that the objective function of the problem is optimized subject to the constraints. In fact, an optimization problem has many candidate solutions, and finding the best one is the main challenge in optimizing the objective function [2].

1.2. Literature Review

From a general point of view, methods for solving optimization problems are grouped into two categories: (i) deterministic methods and (ii) stochastic methods [3].
Deterministic methods are further grouped into two categories: (i) gradient-based and (ii) non-gradient-based methods. Gradient-based methods are valid and easy to use for simple cost functions, and many complex problems can, with a little modification, be transformed into functions that these methods can solve. However, as the dimension of the problem increases, and in nonlinear search spaces, these methods easily become stuck in local optima and are unable to provide a suitable solution. Non-gradient-based methods rely on evaluations of the constraints and the objective function to converge to a solution. Their main disadvantage is that they depend strongly on the initial conditions, and their implementation requires considerable experience and mathematical knowledge [4].

Population-Based Optimization Algorithms

Population-based optimization algorithms, which belong to the group of stochastic methods, are among the most widely used approaches for solving optimization problems [5]. Without needing derivative or gradient information, and relying instead on search operators and collective intelligence, they are able to provide appropriate solutions to optimization problems by randomly scanning the search space [6]. Optimization algorithms have been developed based on the ideation of various natural phenomena, the natural behaviors of animals and living organisms, the laws of physics, the genetic sciences, the rules of games, and other processes that have the potential to evolve.
The Genetic Algorithm (GA) is one of the oldest and most widely used optimization methods; it is developed based on Darwin's theory of evolution and the simulation of the reproduction process. In GA, the three operators of selection, crossover, and mutation are applied to model reproduction according to the law of survival of the fittest and the evolution of offspring [7]. The advantage of GA is that its concepts are simple and understandable, but its control parameters must be carefully tuned and its implementation is time-consuming, which are its most important disadvantages.
Particle Swarm Optimization (PSO) is another widely used algorithm, based on imitating the swarm motion of birds and fish. In PSO, the strategy for moving particles and updating search agents relies on the best personal experience of each particle and the global experience of the entire population [8]. The simplicity of its mathematical equations and their easy implementation are the main advantages of PSO. Its main disadvantages are falling into local optima, reduced population diversity, and low convergence speed.
The Gravitational Search Algorithm (GSA) is a physics-based algorithm inspired by the gravitational force and Newton's laws of motion. In GSA, the gravitational force is modeled between objects, which are in fact the members of the algorithm population located at different distances from each other. The acceleration, velocity, and displacement of the objects are then updated according to Newton's laws of motion [9]. Fast convergence on simple problems, easy implementation, and low computational cost are the main advantages of GSA. Among its disadvantages are slow convergence on harder problems, long run times, and the tendency to become trapped in local optima.
Teaching-Learning-Based Optimization (TLBO) is a population-based technique developed by modeling the behaviors and interactions between students and a teacher in the classroom. TLBO updates the algorithm population in two phases: teacher and learner. In the teacher phase, the educational influence of the teacher, who is the best member of the population, on the students is modeled. In the learner phase, students share their knowledge and information with each other [10]. Good global search, simplicity, and the absence of control parameters are the main advantages of TLBO. Its disadvantages are that it consumes a lot of memory and requires many iterations, making it a time-consuming method.
The Grey Wolf Optimizer (GWO) is inspired by the social life and hunting strategy of grey wolves in nature. In GWO, the hierarchical leadership behavior of grey wolves is modeled using four types of wolves: alpha, beta, delta, and omega. The hunting strategy is simulated in three stages: searching for prey, encircling prey, and attacking prey [11]. Easy implementation and low storage and computational requirements are the main advantages of GWO. Slow convergence, low solving precision, the presence of control parameters, and poor local search ability are its main disadvantages.
The Whale Optimization Algorithm (WOA) is a nature-inspired algorithm developed based on the social behavior of humpback whales and their bubble-net hunting strategy. WOA has three operators that simulate the search for prey, encircling prey, and the bubble-net foraging behavior of humpback whales [12]. An appropriate balance between exploration and exploitation is the main advantage of WOA. Its main disadvantages are slow convergence speed, weak exploration of the search space, and a tendency to fall into local optima.
The Marine Predators Algorithm (MPA) is based on the movement strategies that marine predators use when trapping their prey in the oceans. MPA simulates the search and pursuit behavior of marine predators in three phases, according to the relative speeds of predator and prey. Phase (i): the prey moves faster than the predator; Phase (ii): the prey and the predator move at almost the same speed; Phase (iii): the predator moves faster than the prey [13]. Good global search and fast convergence are the main advantages of MPA. Its main disadvantages are difficulty escaping from local optima, the inability to produce a diverse initial population efficiently, and insufficiently broad exploration of the search space.
The Tunicate Swarm Algorithm (TSA) is a bio-inspired method based on simulating the jet propulsion and swarm behaviors of tunicates during navigation and foraging. In TSA, the jet propulsion behavior is simulated under three conditions: moving towards the position of the best search agent, avoiding conflicts between search agents, and remaining close to the best search agent [14]. Good global search and an appropriate balance between exploration and exploitation are the main advantages of TSA. A low convergence rate and weakness in local search are its main disadvantages.
The Teamwork Optimization Algorithm (TOA) is a population-based approach developed by mathematically modeling the relationships and interactions between team members working towards a common goal. In TOA, team members are updated in each iteration in three phases: supervisor guidance, information sharing, and individual activity [15]. Although TOA has advantages such as requiring no control parameters, good global search, an appropriate balance between exploration and exploitation, and fast convergence, its most important drawback is falling into local optima when solving high-dimensional multimodal problems.
In addition, several other well-known optimization algorithms from the recent literature are presented in Table 1.

1.3. Research Gap and Question

Every optimization problem has a basic solution called the global optimal solution. The important point about optimization algorithms is that there is no guarantee that the solutions they obtain are necessarily the global optimal solution. For this reason, the solutions obtained by optimization algorithms for optimization problems are called quasi-optimal solutions [51].
At best, a quasi-optimal solution is equal to the global optimal solution; otherwise, it should be close to it. Therefore, when analyzing the performance of several optimization algorithms on an optimization problem, the algorithm that provides a quasi-optimal solution closer to the global optimal solution is the superior algorithm for that problem. Another point is that an optimization algorithm may work very well on one optimization problem but be unable to solve another. That is why researchers have developed many optimization algorithms, seeking quasi-optimal solutions that are more appropriate and closer to the global optimum.
To evaluate the performance of optimization algorithms in achieving quasi-optimal solutions, they are implemented on standard optimization problems, namely benchmark functions whose optimal solutions are already known. The criterion of superiority of one optimization algorithm over another is providing a solution closer to the global optimum. Therefore, it is always possible to design a new optimization algorithm that performs better than existing algorithms. In this regard, the main research question of this paper is whether it is possible to design a new optimization algorithm that provides quasi-optimal solutions closer to the global optimum.

1.4. Contribution and Applications

In this paper, a new stochastic method called the Cat and Mouse-Based Optimizer (CMBO) is introduced to solve various optimization problems and provide suitable quasi-optimal solutions. The contributions of this paper are as follows:
(i)
CMBO is designed based on the simulation of the natural interactions between cats and mice.
(ii)
The various steps and theory of the proposed CMBO are described and its mathematical model is presented to use in optimizing objective functions.
(iii)
The capability of the CMBO in solving optimization problems has been tested on twenty-three standard objective functions.
(iv)
The results obtained from the CMBO are also compared with the performance of nine well-known optimization algorithms.
Optimization algorithms are used in all disciplines and real-world problems where an optimization process or problem is designed and defined. The proposed CMBO can be used to minimize or maximize various objective functions. It can be applied in the engineering sciences and in optimal design, where decision variables must be well chosen to optimize device performance, as well as in medical science, data mining, clustering, and, in general, in any application that involves optimization.

1.5. Paper Organization

The rest of this paper is organized in such a way that the proposed CMBO is introduced in Section 2. Simulation studies and evaluation of the CMBO are presented in Section 3. The discussion and analysis of the results is presented in Section 4. Finally, in Section 5, conclusions as well as several suggestions for future studies are provided.

2. Cat and Mouse-Based Optimizer (CMBO)

In this section, the theory of the Cat and Mouse-Based Optimizer (CMBO) is stated, and its mathematical model is then presented for use in optimizing various problems.
The CMBO is a population-based algorithm inspired by the natural behavior of cats attacking mice and of mice escaping to havens. The search agents of the proposed algorithm are divided into two groups, cats and mice, which scan the problem search space with random movements. The proposed algorithm updates the population members in two phases. In the first phase, the movement of cats towards mice is modeled, and in the second phase, the escape of mice to havens to save their lives is modeled.
From a mathematical point of view, each member of the population is a proposed solution to the problem. In fact, a member of the population specifies values for the problem variables according to its position in the search space. Thus, each member of the population is a vector whose values determine the variables of the problem. The population of the algorithm is determined using a matrix called the population matrix in Equation (1).
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times m} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m} \end{bmatrix}_{N \times m}, \quad (1)$$
where $X$ is the population matrix of CMBO, $X_i$ is the $i$th search agent, $x_{i,d}$ is the value of the $d$th problem variable obtained by the $i$th search agent, $N$ is the number of population members, and $m$ is the number of problem variables.
As mentioned, each member of the population determines the proposed values for the problem variables. Therefore, for each member of the population, a value is specified for the objective function. The values obtained for the objective function are denoted using a vector in Equation (2).
$$F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix}_{N \times 1}, \quad (2)$$
where $F$ is the vector of objective function values and $F_i$ is the objective function value for the $i$th search agent.
Based on the values obtained for the objective functions, the members of the population are ranked from the best member with the lowest value of the objective function to the worst member of the population with the highest value of the objective function. The sorted population matrix as well as the sorted objective function are determined using Equations (3) and (4).
$$X^S = \begin{bmatrix} X_1^S \\ \vdots \\ X_i^S \\ \vdots \\ X_N^S \end{bmatrix}_{N \times m} = \begin{bmatrix} x_{1,1}^s & \cdots & x_{1,d}^s & \cdots & x_{1,m}^s \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1}^s & \cdots & x_{i,d}^s & \cdots & x_{i,m}^s \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N,1}^s & \cdots & x_{N,d}^s & \cdots & x_{N,m}^s \end{bmatrix}_{N \times m}, \quad (3)$$
$$F^S = \begin{bmatrix} F_1^S = \min(F) \\ \vdots \\ F_N^S = \max(F) \end{bmatrix}_{N \times 1}, \quad (4)$$
where $X^S$ is the population matrix sorted by objective function value, $X_i^S$ is the $i$th member of the sorted population matrix, $x_{i,d}^s$ is the value of the $d$th problem variable obtained by the $i$th member of the sorted population matrix, and $F^S$ is the sorted vector of objective function values.
The population matrix in the proposed CMBO consists of two groups: cats and mice. In the CMBO, it is assumed that the half of the population members who provide better (lower) values of the objective function constitute the population of mice, and the other half, who provide worse (higher) values of the objective function, constitute the population of cats. Based on this concept, the populations of mice and cats are determined in Equations (5) and (6), respectively.
$$M = \begin{bmatrix} M_1 = X_1^S \\ \vdots \\ M_i = X_i^S \\ \vdots \\ M_{N_m} = X_{N_m}^S \end{bmatrix}_{N_m \times m} = \begin{bmatrix} x_{1,1}^s & \cdots & x_{1,d}^s & \cdots & x_{1,m}^s \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1}^s & \cdots & x_{i,d}^s & \cdots & x_{i,m}^s \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N_m,1}^s & \cdots & x_{N_m,d}^s & \cdots & x_{N_m,m}^s \end{bmatrix}_{N_m \times m}, \quad (5)$$
$$C = \begin{bmatrix} C_1 = X_{N_m+1}^S \\ \vdots \\ C_j = X_{N_m+j}^S \\ \vdots \\ C_{N_c} = X_{N_m+N_c}^S \end{bmatrix}_{N_c \times m} = \begin{bmatrix} x_{N_m+1,1}^s & \cdots & x_{N_m+1,d}^s & \cdots & x_{N_m+1,m}^s \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N_m+j,1}^s & \cdots & x_{N_m+j,d}^s & \cdots & x_{N_m+j,m}^s \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N_m+N_c,1}^s & \cdots & x_{N_m+N_c,d}^s & \cdots & x_{N_m+N_c,m}^s \end{bmatrix}_{N_c \times m}, \quad (6)$$
where $M$ is the population matrix of mice, $N_m$ is the number of mice, $M_i$ is the $i$th mouse, $C$ is the population matrix of cats, $N_c$ is the number of cats, and $C_j$ is the $j$th cat.
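As a minimal illustration of the sort-and-split step in Equations (3)–(6), the following numpy sketch ranks a toy population on the sphere objective and assigns the better half to the mice matrix $M$ and the worse half to the cats matrix $C$ (the toy values are illustrative, not taken from the paper):

```python
import numpy as np

# Toy population of N = 6 members in m = 2 dimensions
X = np.array([[3., 1.], [0., 2.], [5., 5.], [1., 0.], [4., 2.], [2., 2.]])
F = (X ** 2).sum(axis=1)        # objective values (sphere), Eq. (2)

order = np.argsort(F)           # best (lowest) first, Eqs. (3) and (4)
XS, FS = X[order], F[order]

N_m = len(X) // 2               # better half become mice, the rest cats
M, C = XS[:N_m], XS[N_m:]       # Eqs. (5) and (6)
print(M)                        # the three members with the lowest F
```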
In order to update the search agents, in the first phase, the change in the position of the cats is modeled based on their natural behavior of moving towards mice. This phase of the update in the proposed CMBO is mathematically modeled using Equations (7)–(9).
$$C_j^{new}:\ c_{j,d}^{new} = c_{j,d} + r \times \left(m_{k,d} - I \times c_{j,d}\right), \quad j = 1{:}N_c, \ d = 1{:}m, \ k \in \{1, \ldots, N_m\}, \quad (7)$$
$$I = \operatorname{round}(1 + \operatorname{rand}), \quad (8)$$
$$C_j = \begin{cases} C_j^{new}, & F_j^{c,new} < F_j^{c}, \\ C_j, & \text{else}, \end{cases} \quad (9)$$
Here, $C_j^{new}$ is the new status of the $j$th cat, $c_{j,d}^{new}$ is the new value of the $d$th problem variable obtained by the $j$th cat, $r$ is a random number in the interval $[0, 1]$, $m_{k,d}$ is the $d$th dimension of the $k$th mouse, and $F_j^{c,new}$ is the objective function value for the new status of the $j$th cat.
In the second phase of the proposed CMBO, the escape of mice to havens is modeled. In CMBO, it is assumed that there is a random haven for each mouse, and each mouse takes refuge in its haven. The positions of the havens in the search space are created at random from the positions of the various members of the population. This phase of updating the positions of the mice is mathematically modeled using Equations (10)–(12).
$$H_i:\ h_{i,d} = x_{l,d}, \quad i = 1{:}N_m, \ d = 1{:}m, \ l \in \{1, \ldots, N\}, \quad (10)$$
$$M_i^{new}:\ m_{i,d}^{new} = m_{i,d} + r \times \left(h_{i,d} - I \times m_{i,d}\right) \times \operatorname{sign}\left(F_i^{m} - F_i^{H}\right), \quad i = 1{:}N_m, \ d = 1{:}m, \quad (11)$$
$$M_i = \begin{cases} M_i^{new}, & F_i^{m,new} < F_i^{m}, \\ M_i, & \text{else}, \end{cases} \quad (12)$$
Here, $H_i$ is the haven of the $i$th mouse and $F_i^{H}$ is its objective function value. $M_i^{new}$ is the new status of the $i$th mouse and $F_i^{m,new}$ is its objective function value.
After all members of the algorithm population have been updated, the algorithm enters the next iteration and, based on Equations (5)–(12), the iterations of the algorithm continue until the stop condition is reached. The condition for stopping optimization algorithms can be a certain number of iterations, or by defining an acceptable error between obtained solutions in consecutive iterations. Moreover, the condition for stopping the algorithm may be a certain period of time. Upon completion of the iterations and full implementation of the algorithm on the optimization problem, the CMBO provides the best obtained quasi-optimal solution. Flowcharts of different stages of the proposed CMBO are specified in Figure 1 and its pseudocode is also presented in Algorithm 1.
Algorithm 1. Pseudocode of CMBO
Start CMBO.
Input problem information: variables, objective function, and constraints.
Set number of search agents (N) and iterations (T).
Generate an initial population matrix at random.
Evaluate the objective function.
For t = 1:T
Sort population matrix based on objective function value using Equations (3) and (4).
Select population of mice M using Equation (5).
Select population of cats C using Equation (6).
Phase 1: update status of cats.
For j = 1:Nc
Update status of the jth cat using Equations (7)–(9).
end
Phase 2: update status of mice.
For i = 1:Nm
Create haven for the ith mouse using Equation (10).
Update status of the ith mouse using Equations (11) and (12).
end
End
Output best quasi-optimal solution obtained with the CMBO.
End CMBO
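To make the update rules concrete, the following Python sketch implements the pseudocode of Algorithm 1 for minimization. It follows Equations (1)–(12) directly; drawing $r$ per dimension and clipping candidates to the search bounds are our assumptions, since the paper does not specify these details, and this is not the authors' original MATLAB code:

```python
import numpy as np

def cmbo(f, lo, hi, m, N=30, T=1000, seed=None):
    """Minimize f over [lo, hi]^m with the CMBO scheme of Algorithm 1."""
    rng = np.random.default_rng(seed)
    X = lo + rng.random((N, m)) * (hi - lo)   # random initial population, Eq. (1)
    F = np.array([f(x) for x in X])           # objective values, Eq. (2)
    N_m = N // 2                              # first half mice, second half cats

    for _ in range(T):
        order = np.argsort(F)                 # sort by objective, Eqs. (3)-(4)
        X, F = X[order], F[order]             # rows 0..N_m-1 are mice, Eqs. (5)-(6)

        # Phase 1: cats move towards a randomly chosen mouse, Eqs. (7)-(9)
        for j in range(N_m, N):
            k = rng.integers(N_m)             # random mouse index
            I = rng.integers(1, 3)            # I = round(1 + rand) in {1, 2}, Eq. (8)
            r = rng.random(m)                 # per-dimension r (assumption)
            cand = np.clip(X[j] + r * (X[k] - I * X[j]), lo, hi)
            fc = f(cand)
            if fc < F[j]:                     # greedy acceptance, Eq. (9)
                X[j], F[j] = cand, fc

        # Phase 2: mice escape towards randomly chosen havens, Eqs. (10)-(12)
        for i in range(N_m):
            l = rng.integers(N)               # haven copied from a random member, Eq. (10)
            I = rng.integers(1, 3)
            r = rng.random(m)
            step = r * (X[l] - I * X[i]) * np.sign(F[i] - F[l])   # Eq. (11)
            cand = np.clip(X[i] + step, lo, hi)
            fc = f(cand)
            if fc < F[i]:                     # greedy acceptance, Eq. (12)
                X[i], F[i] = cand, fc

    best = int(np.argmin(F))
    return X[best], F[best]
```

Note that both phases accept a candidate only if it improves the objective value, so the best member of the population never deteriorates across iterations.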

Step-by-Step Example

In this subsection, a step-by-step example of the implementation of the proposed CMBO is provided to explain it in more detail. CMBO is applied to optimize the sphere function, where it is assumed that the number of problem variables is 2, the number of population members is 10, and the stopping condition is 50 iterations. The mathematical model and information of the sphere function are as follows:
Sphere function:
$$F(X) = \sum_{d=1}^{m} x_d^2 = F(x_1, x_2) = x_1^2 + x_2^2, \quad \text{subject to: } -100 \le x_1, x_2 \le 100.$$
Step 1:
In this step, the initial population of feasible solutions is created randomly. The following general formula is used to define the initial random population:
$$X_i:\ x_d = x^{lo} + \operatorname{rand} \times \left(x^{hi} - x^{lo}\right), \quad i = 1{:}N, \ d = 1{:}m.$$
For example:
$$X_1:\ x_1 = -100 + \operatorname{rand} \times (100 - (-100)):\ x_1 = 69.00641; \quad x_2 = -100 + \operatorname{rand} \times (100 - (-100)):\ x_2 = -74.5553.$$
Step 2:
In this step, each member of the population is evaluated in the objective function of the problem. In fact, each member of the population proposes values for the problem variables based on which the objective function can be evaluated.
For example:
$$F_1:\ F(X_1) = F(69.00641, -74.5553) = 10320.38.$$
Step 3:
In this step, based on comparing the values obtained for the objective function, the population members are sorted from the best solution (minimum value of the objective function) to the worst solution (maximum value of the objective function). Thus, the sort criterion is the value of the objective function.
Step 4:
In this step, the population of mice (first half of the population with better objective function values) and the population of cats (second half of the population with worse objective function values) are determined according to Equations (5) and (6).
Step 5:
In this step, the position of the cats is updated based on Equations (7)–(9).
Step 6:
In this step, the position of the mice is updated based on Equations (10)–(12).
Step 7:
The third to sixth steps of the algorithm are repeated until the stop condition is met. Finally, after the full implementation of the proposed algorithm on the objective function, the best proposed solution using CMBO is presented for the problem.
The calculations of the different steps of CMBO for the first iteration are presented in Table 2. The final solution for the intended problem after full implementation is also specified in this table.
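Assuming the `cmbo` sketch above (and its numpy import), this step-by-step example can be reproduced in a few lines; the random draws, and therefore the intermediate numbers, will of course differ from those in Table 2:

```python
sphere = lambda x: float(np.sum(x ** 2))   # the sphere function of this example

x_best, f_best = cmbo(sphere, lo=-100.0, hi=100.0, m=2, N=10, T=50, seed=1)
print(x_best, f_best)                      # converges towards (0, 0) with F near 0
```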

3. Simulation Study and Results

In this section, the efficiency and ability of the proposed CMBO in solving various optimization problems and providing quasi-optimal solutions are evaluated. For this purpose, a standard set consisting of twenty-three objective functions of different types in three groups of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal is applied. Complete information on these functions is provided in Appendix A and Table A1, Table A2 and Table A3.
In order to analyze the quality of the proposed algorithm, the results obtained from the CMBO are compared with nine other optimization algorithms: (i) popular and widely used algorithms: Genetic Algorithm (GA) and Particle Swarm Optimization (PSO); (ii) highly cited algorithms: Gravitational Search Algorithm (GSA), Teaching-Learning-Based Optimization (TLBO), Grey Wolf Optimizer (GWO), and Whale Optimization Algorithm (WOA); and (iii) recently published algorithms: Tunicate Swarm Algorithm (TSA), Marine Predators Algorithm (MPA), and Teamwork Optimization Algorithm (TOA). The performance of the optimization algorithms is reported using two indicators: the average of the best quasi-optimal solutions (ave) and the standard deviation of the best quasi-optimal solutions (std). The values used for the parameters of the compared algorithms are specified in Table 3.
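As a sketch of how the two indicators can be computed, the harness below reuses the `cmbo` and `sphere` definitions from Section 2 and collects the best objective value of each of 20 independent runs of 1000 iterations, matching the protocol described later in the execution time analysis; the population size of 30 is our assumption, not a setting reported in the paper:

```python
import numpy as np

# 20 independent runs of 1000 iterations each (N = 30 is an assumption)
finals = [cmbo(sphere, lo=-100.0, hi=100.0, m=30, N=30, T=1000, seed=s)[1]
          for s in range(20)]
ave, std = np.mean(finals), np.std(finals)   # the "ave" and "std" indicators
print(ave, std)
```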

3.1. Evaluation of Unimodal Objective Functions

Objective functions F1 to F7 are considered to analyze and evaluate the ability of optimization algorithms to solve and optimize unimodal optimization problems. The results of the implementation of the proposed CMBO as well as nine compared optimization algorithms are presented in Table 4. The proposed algorithm provides the global optimal solution for F6. In addition, CMBO performs very well in the F1, F2, F3, F4, F5, and F7 functions and provides quasi-optimal solutions that are close to the global optimal. Analysis and comparison of the results obtained from the proposed algorithm against the other nine optimization algorithms shows that the CMBO has a higher ability to solve unimodal optimization problems.

3.2. Evaluation of High-Dimensional Objective Functions

The six high-dimensional multimodal objective functions F8 to F13 are selected to evaluate the ability of the optimization algorithms to provide suitable quasi-optimal solutions. The results of optimizing these objective functions using the proposed CMBO and the nine compared algorithms are presented in Table 5. CMBO provides the global optimal solution for the objective functions F9 and F11. For the F12 and F13 functions, CMBO delivers the best performance and provides suitable quasi-optimal solutions. The optimization results show that CMBO obtains very competitive results on the majority of these objective functions compared with the other algorithms.

3.3. Evaluation of Fixed-Dimensional Objective Functions

The objective functions F14 to F23 are selected to evaluate the ability of the optimization algorithms to provide suitable solutions for fixed-dimensional multimodal optimization problems. The results of implementing the optimization algorithms on this type of objective function are presented in Table 6. CMBO performs well on all of F14 to F23 and provides appropriate quasi-optimal solutions for these objective functions. In addition, comparison and analysis of the results show that the proposed algorithm provides more appropriate solutions in most cases. Moreover, on the functions where CMBO has an "ave" index similar to some algorithms, it solves these optimization problems more effectively by achieving a more appropriate "std" index.
In order to further analyze and visually compare the performance of the optimization algorithms, the boxplot of results for each algorithm and objective function is shown in Figure 2. In Table 4, Table 5 and Table 6, the bold results indicate an algorithm that has performed better in optimizing the specified function.

3.4. Statistical Analysis

Presenting and analyzing the optimization results using the two indicators, the average of the best results and the standard deviation of the best results, provides valuable and useful information about the performance of optimization algorithms. However, even with a very low probability, the superiority of one algorithm over several others may be coincidental. In this regard, this subsection presents a statistical analysis, the Wilcoxon rank-sum test, in order to further evaluate and analyze the performance of the compared optimization algorithms as well as the proposed CMBO. The Wilcoxon rank-sum test is a nonparametric test used in statistical analysis.
In the Wilcoxon test, a p-value determines whether the superiority of one optimization algorithm over another is statistically significant. If the p-value is less than 0.05, the superiority is considered statistically significant. Table 7 presents the results of the statistical analysis using the Wilcoxon rank-sum test. What can be concluded from the values presented in this table is that the proposed CMBO has a significant superiority over a compared algorithm in the cases where the p-value is less than 0.05. Based on the simulation results, the proposed CMBO has a significant superiority over MPA, WOA, GSA, PSO, and GA in optimizing the unimodal function group F1 to F7. In the second group of objective functions, F8 to F13, CMBO has a significant superiority over TSA, MPA, WOA, GWO, and GSA. In optimizing the objective functions of the third group, F14 to F23, the proposed CMBO has a significant superiority over all of TSA, MPA, WOA, GWO, TLBO, GSA, PSO, and GA.
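For illustration, a p-value of this kind can be obtained with `scipy.stats.ranksums`; the samples below are synthetic stand-ins for two algorithms' best results over 20 runs, not the paper's data:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
cmbo_runs = rng.normal(loc=1e-3, scale=1e-4, size=20)    # synthetic sample
other_runs = rng.normal(loc=5e-3, scale=1e-3, size=20)   # synthetic sample

stat, p = ranksums(cmbo_runs, other_runs)   # Wilcoxon rank-sum test
print(p, p < 0.05)                          # p < 0.05 -> significant difference
```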

3.5. Sensitivity Analysis

In this subsection, a sensitivity analysis of the proposed CMBO with respect to two parameters, the number of population members and the maximum number of iterations, is presented.
To analyze the sensitivity of the CMBO to the number of population members, the algorithm is implemented on all twenty-three objective functions for populations of 20, 30, 50, and 80 members. The results of this analysis are presented in Table 8, and the behavior of the convergence curves under changes in the number of population members is shown in Figure 3. The conclusion from this sensitivity analysis is that as the number of population members increases, the proposed CMBO converges to more suitable quasi-optimal solutions and the values of the objective function decrease.
To analyze the sensitivity of the CMBO to the maximum number of iterations, the proposed algorithm is run independently on all twenty-three objective functions for maximum iteration counts of 100, 500, 800, and 1000. Table 9 presents the results of this analysis, and the behavior of the convergence curves under changes in the maximum number of iterations is shown in Figure 4. The results indicate that increasing the maximum number of iterations leads the CMBO to converge to solutions closer to the global optimum.
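A population-size sweep of this kind can be sketched with the `cmbo` function and `sphere` objective defined above; the specific settings here are our own, not the paper's:

```python
# Sensitivity to the number of population members, in the spirit of Table 8
for N in (20, 30, 50, 80):
    _, f_best = cmbo(sphere, lo=-100.0, hi=100.0, m=30, N=N, T=1000, seed=0)
    print(N, f_best)   # larger populations tend to reach lower objective values
```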

4. Discussion

Exploitation and exploration are two important criteria that play a valuable role in evaluating and determining the quality of optimization algorithms. Optimization algorithms must have a favorable situation in these two criteria in order to be able to have acceptable performance in solving optimization problems.
The concept of exploitation refers to the ability of an optimization algorithm to achieve a suitable quasi-optimal solution close to the global optimum. In fact, an optimization algorithm must provide a suitable quasi-optimal solution to an optimization problem after being fully executed. Therefore, when analyzing the effectiveness of several optimization algorithms on an optimization problem, the algorithm that suggests a better quasi-optimal solution for that problem has higher quality in the exploitation criterion. This criterion is especially important for optimization problems that have only one main solution. The objective functions F1 to F7, which are selected as unimodal functions, have only one main optimal solution and no local optima, and are therefore suitable for evaluating the exploitation criterion. The results of optimizing these objective functions using the proposed CMBO and the nine compared algorithms are presented in Table 4. The analysis of these results indicates that the CMBO, with its high exploitation capability, provides suitable quasi-optimal solutions for the F1 to F7 functions of much higher quality than similar algorithms. Therefore, the CMBO is in a much better position than the nine compared algorithms with respect to the exploitation criterion.
The concept of exploration refers to the ability of an optimization algorithm to accurately and thoroughly scan the search space of an optimization problem. In fact, optimization algorithms must be able to search different areas of the search space in order to achieve solutions closer to the global optimum. Therefore, when analyzing the performance of several optimization algorithms, the algorithm that provides a more suitable quasi-optimal solution by accurately scanning the search space has higher quality in the exploration index. This indicator is especially important for optimization problems that have local optima in addition to the main optimal solution. The high-dimensional multimodal functions F8 to F13 and the fixed-dimensional multimodal functions F14 to F23 have local optima in addition to the basic optimal solution; these functions are therefore suitable for evaluating the exploration power of optimization algorithms. The results of optimizing the F8 to F13 and F14 to F23 objective functions using the proposed CMBO and the nine compared algorithms are presented in Table 5 and Table 6, respectively. Based on the simulation results, the CMBO, with its high ability to scan the search space, converges to quasi-optimal solutions without getting stuck at local optima. Therefore, the proposed CMBO has a high capability in the exploration index and is much more competitive than the compared algorithms.

Execution Time Analysis

In this subsection, studies of the execution time of the optimization algorithms in solving the objective functions are presented. The experiments and algorithms are implemented in Matlab version R2014a (8.3.0.532) and run under 64-bit Microsoft Windows 10 on a Core i7 processor at 2.40 GHz with 6 GB of memory. The average execution time in seconds (ave_time) and the standard deviation of the execution time (std_time) are computed as the performance metrics. To generate and report the results, each optimization algorithm performs 20 independent runs per objective function, where each run employs 1000 iterations.
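A timing harness in the spirit of this protocol could look as follows; this is a sketch reusing the `cmbo` and `sphere` definitions above, whereas the paper's own measurements were made in MATLAB:

```python
import time
import numpy as np

times = []
for s in range(20):                 # 20 independent runs of 1000 iterations
    t0 = time.perf_counter()
    cmbo(sphere, lo=-100.0, hi=100.0, m=30, N=30, T=1000, seed=s)
    times.append(time.perf_counter() - t0)

print(np.mean(times), np.std(times))   # ave_time and std_time in seconds
```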
The results of the execution time analysis for all twenty-three objective functions are presented in Table 10. What can be deduced from this analysis is that the proposed CMBO solves the optimization problems in less time while still providing suitable quasi-optimal solutions. A comparative review of the CMBO and the compared algorithms is presented in Table 11.

5. Conclusions and Future Works

Optimization problems arising in different sciences should be solved using appropriate methods. Optimization algorithms are among the most widely used and effective methods for providing appropriate solutions to optimization problems. In this paper, a new optimizer called the Cat and Mouse-Based Optimizer (CMBO) has been presented that mimics the natural behavior between cats and mice. The mathematical model of the proposed CMBO has been presented based on simulating the attack of cats on mice and the escape of mice to havens. The performance of the CMBO was tested on a standard set of twenty-three objective functions, and the results were compared with the performance of nine algorithms: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Teaching-Learning-Based Optimization (TLBO), Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Marine Predators Algorithm (MPA), Tunicate Swarm Algorithm (TSA), and Teamwork Optimization Algorithm (TOA). The results on the unimodal objective functions showed that the proposed CMBO has a high capability for solving this type of optimization problem and very good exploitation power. The results on the high-dimensional and fixed-dimensional multimodal objective functions showed the high exploration power of the proposed CMBO in accurately scanning the search space of optimization problems. Moreover, comparing the performance of the mentioned algorithms with that of the CMBO showed the superiority and greater competitiveness of the proposed algorithm.
The conclusions presented in this section about the performance and ability of the proposed CMBO were based on the optimization of twenty-three standard objective functions. From a general point of view, it cannot be claimed that a particular optimization algorithm is the best optimizer for all optimization problems. In fact, an algorithm should be applied to a set of problems, and based on the results it should be stated whether it is generally better than existing methods or better only for a set of problems that needs to be identified. The important point about all optimization algorithms is that it is always possible to develop new optimization algorithms that provide more desirable quasi-optimal solutions, closer to the global optimum.
The authors present several ideas for future studies, including the design of a multi-objective version as well as a binary version of the CMBO. In addition, the application of the proposed CMBO to real-life problems and other optimization problems in various sciences is suggested for further research.

Author Contributions

Conceptualization, M.D. and P.T.; methodology, M.D.; software, Š.H.; validation, M.D., Š.H. and P.T.; formal analysis, M.D.; investigation, P.T.; resources, Š.H.; data curation, Š.H.; writing—original draft preparation, P.T.; writing—review and editing, Š.H.; visualization, M.D.; supervision, P.T.; project administration, M.D.; funding acquisition, P.T. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the Excellence Project PřF UHK No. 2208/2021–2022.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the author M.D.

Acknowledgments

The authors would like to thank anonymous referees for their careful corrections and their comments that helped to improve the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Complete information and details on the standard objective functions of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal are provided in Table A1, Table A2 and Table A3, respectively.
Table A1. Unimodal test functions.

| Objective Function | Range | Dimensions | $F_{min}$ |
|---|---|---|---|
| $F_1(x) = \sum_{i=1}^{m} x_i^2$ | $[-100, 100]$ | 30 | 0 |
| $F_2(x) = \sum_{i=1}^{m} \lvert x_i \rvert + \prod_{i=1}^{m} \lvert x_i \rvert$ | $[-10, 10]$ | 30 | 0 |
| $F_3(x) = \sum_{i=1}^{m} \left(\sum_{j=1}^{i} x_j\right)^2$ | $[-100, 100]$ | 30 | 0 |
| $F_4(x) = \max\{\lvert x_i \rvert, \ 1 \le i \le m\}$ | $[-100, 100]$ | 30 | 0 |
| $F_5(x) = \sum_{i=1}^{m-1} \left[100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right]$ | $[-30, 30]$ | 30 | 0 |
| $F_6(x) = \sum_{i=1}^{m} \left(\left[x_i + 0.5\right]\right)^2$ | $[-100, 100]$ | 30 | 0 |
| $F_7(x) = \sum_{i=1}^{m} i x_i^4 + \mathrm{random}(0, 1)$ | $[-1.28, 1.28]$ | 30 | 0 |
Table A2. High-dimensional multimodal test functions.

| Objective Function | Range | Dimensions | $F_{min}$ |
|---|---|---|---|
| $F_8(x) = \sum_{i=1}^{m} -x_i \sin\left(\sqrt{\lvert x_i \rvert}\right)$ | $[-500, 500]$ | 30 | −12,569 |
| $F_9(x) = \sum_{i=1}^{m} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | $[-5.12, 5.12]$ | 30 | 0 |
| $F_{10}(x) = -20\exp\left(-0.2\sqrt{\frac{1}{m}\sum_{i=1}^{m} x_i^2}\right) - \exp\left(\frac{1}{m}\sum_{i=1}^{m} \cos(2\pi x_i)\right) + 20 + e$ | $[-32, 32]$ | 30 | 0 |
| $F_{11}(x) = \frac{1}{4000}\sum_{i=1}^{m} x_i^2 - \prod_{i=1}^{m} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | $[-600, 600]$ | 30 | 0 |
| $F_{12}(x) = \frac{\pi}{m}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{m-1} (y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_m - 1)^2\right\} + \sum_{i=1}^{m} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, n) = \begin{cases} k(x_i - a)^n, & x_i > a; \\ 0, & -a \le x_i \le a; \\ k(-x_i - a)^n, & x_i < -a \end{cases}$ | $[-50, 50]$ | 30 | 0 |
| $F_{13}(x) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{m-1} (x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_m - 1)^2\left[1 + \sin^2(2\pi x_m)\right]\right\} + \sum_{i=1}^{m} u(x_i, 5, 100, 4)$ | $[-50, 50]$ | 30 | 0 |
Table A3. Fixed-dimensional multimodal test functions.

| Objective Function | Range | Dimensions | $F_{min}$ |
|---|---|---|---|
| $F_{14}(x) = \left(\frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6}\right)^{-1}$ | $[-65.53, 65.53]$ | 2 | 0.998 |
| $F_{15}(x) = \sum_{i=1}^{11} \left[a_i - \frac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}\right]^2$ | $[-5, 5]$ | 4 | 0.00030 |
| $F_{16}(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | $[-5, 5]$ | 2 | −1.0316 |
| $F_{17}(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10$ | $[-5, 10] \times [0, 15]$ | 2 | 0.398 |
| $F_{18}(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$ | $[-5, 5]$ | 2 | 3 |
| $F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}(x_j - P_{ij})^2\right)$ | $[0, 1]$ | 3 | −3.86 |
| $F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}(x_j - P_{ij})^2\right)$ | $[0, 1]$ | 6 | −3.22 |
| $F_{21}(x) = -\sum_{i=1}^{5} \left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ | $[0, 10]$ | 4 | −10.1532 |
| $F_{22}(x) = -\sum_{i=1}^{7} \left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ | $[0, 10]$ | 4 | −10.4029 |
| $F_{23}(x) = -\sum_{i=1}^{10} \left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ | $[0, 10]$ | 4 | −10.5364 |

References

  1. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Ramirez-Mendoza, R.A.; Samet, H.; Guerrero, J.M.; Dhiman, G. MLO: Multi leader optimizer. Int. J. Intell. Eng. Syst. 2020, 13, 364–373. [Google Scholar] [CrossRef]
  2. Dhiman, G. SSC: A hybrid nature-inspired meta-heuristic optimization algorithm for engineering applications. Knowl. Based Syst. 2021, 222, 106926. [Google Scholar] [CrossRef]
  3. Sadeghi, A.; Doumari, S.A.; Dehghani, M.; Montazeri, Z.; Trojovský, P.; Ashtiani, H.J. A New “Good and Bad Groups-Based Optimizer” for Solving Various Optimization Problems. Appl. Sci. 2021, 11, 4382. [Google Scholar] [CrossRef]
  4. Cavazzuti, M. Deterministic Optimization. In Optimization Methods: From Theory to Design Scientific and Technological Aspects in Mechanics; Springer: Berlin/Heidelberg, Germany, 2013; pp. 77–102. [Google Scholar]
  5. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
  6. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef] [Green Version]
  7. Goldberg, D.E.; Holland, J.H. Genetic Algorithms and Machine Learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef]
  8. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  9. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  10. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  12. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  13. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  14. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90. [Google Scholar] [CrossRef]
  15. Dehghani, M.; Trojovský, P. Teamwork Optimization Algorithm: A New Optimization Approach for Function Minimization/Maximization. Sensors 2021, 21, 4567. [Google Scholar] [CrossRef]
  16. Yang, X.; Suash, D. Cuckoo Search via Lévy flights. In Proceedings of the 2009 World Congress on Nature and Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  17. Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  18. Yazdani, M.; Jolai, F. Lion Optimization Algorithm (LOA): A nature-inspired metaheuristic algorithm. J. Comput. Des. Eng. 2016, 3, 24–36. [Google Scholar] [CrossRef] [Green Version]
  19. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  20. Dhiman, G.; Kumar, V. Emperor penguin optimizer: A bio-inspired algorithm for engineering problems. Knowl. Based Syst. 2018, 159, 20–50. [Google Scholar] [CrossRef]
  21. Chu, S.-C.; Tsai, P.-W.; Pan, J.-S. Cat swarm optimization. In Proceedings of the 9th Pacific Rim International Conference on Artificial Intelligence, Guilin, China, 7–11 August 2006; pp. 854–858. [Google Scholar]
  22. Kallioras, N.A.; Lagaros, N.D.; Avtzis, D.N. Pity beetle algorithm—A new metaheuristic inspired by the behavior of bark beetles. Adv. Eng. Softw. 2018, 121, 147–166. [Google Scholar] [CrossRef]
  23. Jahani, E.; Chizari, M. Tackling global optimization problems with a novel algorithm—Mouth Brooding Fish algorithm. Appl. Soft Comput. 2018, 62, 987–1002. [Google Scholar] [CrossRef]
  24. Shadravan, S.; Naji, H.R.; Bardsiri, V.K. The Sailfish Optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Eng. Appl. Artif. Intell. 2019, 80, 20–34. [Google Scholar] [CrossRef]
  25. Dehghani, M.; Mardaneh, M.; Malik, O.P. FOA: ‘Following’ Optimization Algorithm for solving Power engineering optimization problems. J. Oper. Autom. Power Eng. 2020, 8, 57–64. [Google Scholar]
  26. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Samet, H.; Sotelo, C.; Sotelo, D.; Ehsanifar, A.; Malik, O.P.; Guerrero, J.M.; Dhiman, G.; et al. DM: Dehghani Method for Modifying Optimization Algorithms. Appl. Sci. 2020, 10, 7683. [Google Scholar] [CrossRef]
  27. Storn, R.; Price, K.V. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  28. Beyer, H.-G.; Schwefel, H.-P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  29. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef] [Green Version]
  30. Huang, G. Artificial infectious disease optimization: A SEIQR epidemic dynamic model-based function optimization algorithm. Swarm Evol. Comput. 2016, 27, 31–67. [Google Scholar] [CrossRef] [PubMed]
  31. Labbi, Y.; Ben Attous, D.; Gabbar, H.A.; Mahdad, B.; Zidan, A. A new rooted tree optimization algorithm for economic dispatch with valve-point effect. Int. J. Electr. Power Energy Syst. 2016, 79, 298–311. [Google Scholar] [CrossRef]
  32. Baykasoğlu, A.; Akpinar, Ş. Weighted Superposition Attraction (WSA): A swarm intelligence algorithm for optimization problems—Part 1: Unconstrained optimization. Appl. Soft Comput. 2017, 56, 520–540. [Google Scholar] [CrossRef]
  33. Akyol, S.; Alatas, B. Plant intelligence based metaheuristic optimization algorithms. Artif. Intell. Rev. 2016, 47, 417–462. [Google Scholar] [CrossRef]
  34. Salmani, M.H.; Eshghi, K. A Metaheuristic Algorithm Based on Chemotherapy Science: CSA. J. Optim. 2017, 2017. [Google Scholar] [CrossRef]
  35. Cheraghalipour, A.; Hajiaghaei-Keshteli, M.; Paydar, M.M. Tree Growth Algorithm (TGA): A novel approach for solving optimization problems. Eng. Appl. Artif. Intell. 2018, 72, 393–414. [Google Scholar] [CrossRef]
  36. van Laarhoven, P.J.M.; Aarts, E.H.L. (Eds.) Simulated annealing. In Simulated Annealing: Theory and Applications; Springer: Dordrecht, The Netherlands, 1987; pp. 7–15. [Google Scholar]
  37. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
  38. Kaveh, A.; Bakhshpoori, T. Water Evaporation Optimization: A novel physically inspired optimization algorithm. Comput. Struct. 2016, 167, 69–85. [Google Scholar] [CrossRef]
  39. Muthiah-Nakarajan, V.; Noel, M.M. Galactic Swarm Optimization: A new global optimization metaheuristic inspired by galactic motion. Appl. Soft Comput. 2016, 38, 771–787. [Google Scholar] [CrossRef]
  40. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Seifi, A. Spring search algorithm: A new meta-heuristic optimization algorithm inspired by Hooke’s law. In Proceedings of the 2017 IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI), Tehran, Iran, 22 December 2017; pp. 210–214. [Google Scholar]
  41. Zhang, Q.; Wang, R.; Yang, J.; Ding, K.; Li, Y.; Hu, J. Collective decision optimization algorithm: A new heuristic optimization method. Neurocomputing 2017, 221, 123–137. [Google Scholar] [CrossRef]
  42. Vommi, V.B.; Vemula, R. A very optimistic method of minimization (VOMMI) for unconstrained problems. Inf. Sci. 2018, 454–455, 255–274. [Google Scholar] [CrossRef]
  43. Dehghani, M.; Samet, H. Momentum search algorithm: A new meta-heuristic optimization algorithm inspired by momentum conservation law. SN Appl. Sci. 2020, 2. [Google Scholar] [CrossRef]
  44. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  45. Dehghani, M.; Montazeri, Z.; Malik, O.P. DGO: Dice game optimizer. GAZI Univ. J. Sci. 2019, 32, 871–882. [Google Scholar] [CrossRef] [Green Version]
  46. Dehghani, M.; Montazeri, Z.; Malik, O.P.; Dhiman, G.; Kumar, V. BOSA: Binary orientation search algorithm. Int. J. Innov. Technol. Explor. Eng. 2019, 9, 5306–5310. [Google Scholar]
  47. Dehghani, M.; Montazeri, Z.; Saremi, S.; Dehghani, A.; Malik, O.P.; Al-Haddad, K.; Guerrero, J. HOGO: Hide objects game optimization. Int. J. Intell. Eng. Syst. 2020, 13, 216–225. [Google Scholar] [CrossRef]
  48. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.; Kumar, V. Football game based optimization: An application to solve energy commitment problem. Int. J. Intell. Eng. Syst. 2020, 13, 514–523. [Google Scholar] [CrossRef]
  49. Dehghani, M.; Montazeri, Z.; Givi, H.; Guerrero, J.M.; Dhiman, G. Darts game optimizer: A new optimization technique based on darts game. Int. J. Intell. Eng. Syst. 2020, 13, 286–294. [Google Scholar] [CrossRef]
  50. Dehghani, M.; Montazeri, Z.; Malik, O.; Givi, H.; Guerrero, J. Shell Game Optimization: A Novel Game-Based Algorithm. Int. J. Intell. Eng. Syst. 2020, 13, 246–255. [Google Scholar] [CrossRef]
  51. Dehghani, M.; Montazeri, Z.; Hubálovský, Š. GMBO: Group Mean-Based Optimizer for Solving Various Optimization Problems. Mathematics 2021, 9, 1190. [Google Scholar] [CrossRef]
Figure 1. Flowchart of CMBO.
Figure 2. Boxplot of the objective function results for the different optimization algorithms.
Figure 3. Sensitivity analysis of CMBO for the number of population members.
Figure 4. Sensitivity analysis of CMBO for the maximum number of iterations.
Table 1. Well-known optimization algorithms proposed in the recent literature.

| Ref. | Algorithm | Main Idea (Inspiration Source) |
|---|---|---|
| [16] | Cuckoo Search | Behavior of the cuckoo |
| [17] | Aquila Optimizer | Behavior of the Aquila in nature during the process of catching prey |
| [18] | Lion Optimization Algorithm | Behavior of the lion |
| [19] | Grasshopper Optimization Algorithm | Grasshopper behavior |
| [20] | Emperor Penguin Optimizer | Behavior of the emperor penguin |
| [21] | Cat Swarm Optimization Algorithm | Behaviors of cats |
| [22] | Pity Beetle Algorithm | Aggregation behavior, searching for nest and food |
| [23] | Mouth Brooding Fish | Behavior of mouthbrooding fish |
| [24] | Sailfish Optimizer | Group hunting of sailfish |
| [25] | Following Optimization Algorithm | Relationships between members and the leader of a community |
| [26] | Multi-Leader Optimizer | The presence of several simultaneous leaders for the population members |
| [27] | Differential Evolution | The natural phenomenon of evolution |
| [28] | Evolution Strategy | Darwinian evolution theory |
| [29] | Biogeography-Based Optimizer | Biogeographic concepts |
| [30] | Artificial Infectious Disease | SEIQR epidemic model |
| [31] | Rooted Tree Optimization | Movement of plant roots looking for water |
| [32] | Weighted Superposition Attraction | Weighted superposition of active fields |
| [33] | Plant Intelligence | Plant nervous system |
| [34] | Chemotherapy Science | Chemotherapy method |
| [35] | Tree Growth Algorithm | Competition of trees for acquiring light and food |
| [36] | Simulated Annealing | Metal annealing process |
| [37] | Water Cycle Algorithm | Water cycle process and how rivers and streams flow to the sea in the real world |
| [38] | Water Evaporation Optimization | Evaporation of water molecules |
| [39] | Galactic Swarm Optimization | The motion of stars and galaxies |
| [40] | Spring Search Algorithm | Hooke's law |
| [41] | Collective Decision Optimization | The social behavior of human beings |
| [42] | Very Optimistic Method | Real-life practices of successful persons |
| [43] | Momentum Search Algorithm | Momentum law and Newton's laws of motion |
| [44] | Archimedes Optimization Algorithm | Archimedes' principle, which imitates the buoyant force exerted upward on an object |
| [45] | Dice Game Optimizer | Rules governing the game of dice and the impact of players on each other |
| [46] | Orientation Search Algorithm | Game of orientation, in which players move in the direction of a referee |
| [47] | Hide Objects Game Optimization | Behavior and movements of players to find a hidden object |
| [48] | Football Game Based Optimization | Simulation of the behavior of clubs in a football league |
| [49] | Darts Game Optimizer | Rules of the darts game |
| [50] | Shell Game Optimization | Rules of the shell game |
Table 2. The various steps of the proposed CMBO in the first iteration of solving the sphere function. Columns 2 to 4 show Step 1 (initial population X and its fitness F(X)), columns 5 to 7 show Step 2 (sorted population XS and FS(XS)), column 8 shows Step 3 (division into mice M and cats C), and columns 9 to 11 show Steps 4 to 6 (updated positions and fitness).

X | x1 | x2 | F(X) | x1S | x2S | FS(XS) | Cat/Mouse | x1 (updated) | x2 (updated) | F (updated)
X1 | 69.00641 | −74.5553 | 10,320.38 | −36.7889 | 19.51363 | 1734.208 | M1 | −36.7889 | 19.51363 | 1734.208
X2 | 18.96709 | −45.9773 | 2473.659 | 18.96709 | −45.9773 | 2473.659 | M2 | 18.96709 | −45.9773 | 2473.659
X3 | 99.35621 | −34.7797 | 11,081.28 | 46.22603 | 18.41816 | 2476.074 | M3 | −12.7845 | 3.444065 | 175.3053
X4 | −36.7889 | 19.51363 | 1734.208 | 41.24409 | −45.8968 | 3807.588 | M4 | 32.03547 | −45.8968 | 3132.784
X5 | 46.22603 | 18.41816 | 2476.074 | 68.51469 | −19.8287 | 5087.439 | M5 | −3.47199 | −20.8954 | 448.6563
X6 | −91.4795 | 76.26784 | 14,185.29 | 57.48351 | 58.62186 | 6740.877 | C1 | 52.12189 | −42.9835 | 4564.272
X7 | 68.51469 | −19.8287 | 5087.439 | 69.00641 | −74.5553 | 10,320.38 | C2 | −24.7895 | −40.9822 | 2294.059
X8 | −64.1203 | −80.2158 | 10,545.99 | −64.1203 | −80.2158 | 10,545.99 | C3 | −51.4096 | −50.6034 | 5203.653
X9 | 41.24409 | −45.8968 | 3807.588 | 99.35621 | −34.7797 | 11,081.28 | C4 | 87.51208 | −30.541 | 8591.116
X10 | 57.48351 | 58.62186 | 6740.877 | −91.4795 | 76.26784 | 14,185.29 | C5 | −16.7554 | 52.75574 | 3063.913

Full implementation: best solution x1 = 3.51 × 10^−12, x2 = 6.73 × 10^−12, and F(X) = 5.7626 × 10^−23.
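To make the worked example in Table 2 concrete, the following Python sketch reproduces one CMBO-style iteration on the two-dimensional sphere function. It is illustrative only: the random seed, the bounds, and the exact form of the update rules (cats moving towards a randomly selected mouse, mice escaping towards a haven taken from a random population member, with I = round(1 + rand)) are assumptions based on the scheme summarized above, not a verbatim transcription of the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Sphere objective: F(x) = sum(x_i^2); global optimum 0 at the origin.
    return np.sum(x ** 2, axis=-1)

N, dim, lo, hi = 10, 2, -100.0, 100.0
X = rng.uniform(lo, hi, (N, dim))               # Step 1: random initial population
F = sphere(X)

order = np.argsort(F)                           # Step 2: sort by fitness (ascending)
XS, FS = X[order], F[order]

M, C = XS[:N // 2].copy(), XS[N // 2:].copy()   # Step 3: better half = mice, worse half = cats

# Step 4: each cat moves towards a randomly chosen mouse (assumed update rule).
for j in range(C.shape[0]):
    m = M[rng.integers(M.shape[0])]
    I = rng.integers(1, 3)                      # I = round(1 + rand), i.e. 1 or 2
    C[j] = C[j] + rng.random(dim) * (m - I * C[j])

# Step 5: each mouse escapes towards a haven, i.e. the position of a random member.
for i in range(M.shape[0]):
    H = XS[rng.integers(N)]
    I = rng.integers(1, 3)
    step = rng.random(dim) * (H - I * M[i])
    M[i] = M[i] + step * np.sign(FS[i] - sphere(H))   # only move if the haven is better

# Step 6: greedy selection, keep a new position only if it improves the fitness.
X_new = np.vstack([M, C])                       # same ordering as XS (mice first)
F_new = sphere(X_new)
keep = F_new < FS
XS[keep], FS[keep] = X_new[keep], F_new[keep]
print("best fitness after one iteration:", FS.min())
```

Running the sketch with a different seed naturally yields different numbers than Table 2; only the structure of the six steps is reproduced.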
Table 3. Parameter values for the compared algorithms.

Algorithm | Parameter | Value
GA | Type | Real coded
GA | Selection | Roulette wheel (proportionate)
GA | Crossover | Whole arithmetic (probability = 0.8, α ∈ [−0.5, 1.5])
GA | Mutation | Gaussian (probability = 0.05)
PSO | Topology | Fully connected
PSO | Cognitive and social constants | (C1, C2) = (2, 2)
PSO | Inertia weight | Linear reduction from 0.9 to 0.1
PSO | Velocity limit | 10% of dimension range
GSA | Alpha, G0, Rnorm, Rpower | 20, 100, 2, 1
TLBO | Teaching factor | TF = round(1 + rand), where rand is a random number in the range [0, 1]
GWO | Convergence parameter a | Linear reduction from 2 to 0
WOA | Convergence parameter a | Linear reduction from 2 to 0
WOA | Random vector r | Random vector in [0, 1]
WOA | Random number l | Random number in [−1, 1]
TSA | Pmin, Pmax | 1, 4
TSA | C1, C2, C3 | Random numbers in the range [0, 1]
MPA | Constant number | P = 0.5
MPA | Random vector R | Vector of uniform random numbers in the range [0, 1]
MPA | Fish Aggregating Devices | FADs = 0.2
MPA | Binary vector | U = 0 or 1
TOA | Update index | I = round(1 + r), where r is a uniform random number in the range [0, 1]
Table 4. Optimization results of CMBO and the other algorithms on the unimodal functions (F1–F7).

Function | Metric | CMBO | TOA | MPA | TSA | WOA | GWO | TLBO | GSA | PSO | GA
F1 | ave | 2.69 × 10^−236 | 0 | 3.2715 × 10^−21 | 7.71 × 10^−38 | 2.1741 × 10^−9 | 1.09 × 10^−58 | 8.3373 × 10^−60 | 2.0255 × 10^−17 | 1.7740 × 10^−5 | 13.2405
F1 | std | 0 | 0 | 4.6153 × 10^−21 | 7.00 × 10^−21 | 7.3985 × 10^−25 | 5.1413 × 10^−74 | 4.9436 × 10^−76 | 1.1369 × 10^−32 | 6.4396 × 10^−21 | 4.7664 × 10^−15
F2 | ave | 6.88 × 10^−121 | 0 | 1.57 × 10^−12 | 8.48 × 10^−39 | 0.5462 | 1.2952 × 10^−34 | 7.1704 × 10^−35 | 2.3702 × 10^−8 | 0.3411 | 2.4794
F2 | std | 2.46 × 10^−135 | 0 | 1.42 × 10^−12 | 5.92 × 10^−41 | 1.7377 × 10^−16 | 1.9127 × 10^−50 | 6.6936 × 10^−50 | 5.1789 × 10^−24 | 7.4476 × 10^−17 | 2.2342 × 10^−15
F3 | ave | 2.44 × 10^−60 | 0 | 0.0864 | 1.15 × 10^−21 | 1.7634 × 10^−8 | 7.4091 × 10^−15 | 2.7531 × 10^−15 | 279.3439 | 589.4921 | 536.8963
F3 | std | 1.82 × 10^−67 | 0 | 0.1444 | 6.70 × 10^−21 | 1.0357 × 10^−23 | 5.6446 × 10^−30 | 2.6459 × 10^−31 | 1.2075 × 10^−13 | 7.1179 × 10^−13 | 6.6095 × 10^−13
F4 | ave | 1.04 × 10^−93 | 0 | 2.6 × 10^−8 | 1.33 × 10^−23 | 2.9009 × 10^−5 | 1.2599 × 10^−14 | 9.4199 × 10^−15 | 3.2547 × 10^−9 | 3.9634 | 2.0942
F4 | std | 2.09 × 10^−108 | 0 | 9.25 × 10^−9 | 1.15 × 10^−22 | 1.2121 × 10^−20 | 1.0583 × 10^−29 | 2.1167 × 10^−30 | 2.0346 × 10^−24 | 1.9860 × 10^−16 | 2.2342 × 10^−15
F5 | ave | 24.87011 | 26.2476 | 46.049 | 28.8615 | 41.7767 | 26.8607 | 146.4564 | 36.10695 | 50.26245 | 310.4273
F5 | std | 1.91 × 10^−14 | 3.26 × 10^−14 | 0.4219 | 4.76 × 10^−3 | 2.5421 × 10^−14 | 0 | 1.9065 × 10^−14 | 3.0982 × 10^−14 | 1.5888 × 10^−14 | 2.0972 × 10^−13
F6 | ave | 0 | 0 | 0.398 | 7.10 × 10^−21 | 1.6085 × 10^−9 | 0.6423 | 0.4435 | 0 | 20.25 | 14.55
F6 | std | 0 | 0 | 0.1914 | 1.12 × 10^−25 | 4.6240 × 10^−25 | 6.2063 × 10^−17 | 4.2203 × 10^−16 | 0 | 1.2564 | 3.1776 × 10^−15
F7 | ave | 0.002709 | 9.92 × 10^−6 | 0.0018 | 3.72 × 10^−4 | 0.0205 | 0.0008 | 0.0017 | 0.0206 | 0.1134 | 5.6799 × 10^−3
F7 | std | 1.94 × 10^−19 | 1.74 × 10^−20 | 0.001 | 5.09 × 10^−5 | 1.5515 × 10^−18 | 7.2730 × 10^−20 | 3.87896 × 10^−19 | 2.7152 × 10^−18 | 4.3444 × 10^−17 | 7.7579 × 10^−19
Table 5. Optimization results of CMBO and the other algorithms on the high-dimensional multimodal functions (F8–F13).

Function | Metric | CMBO | TOA | MPA | TSA | WOA | GWO | TLBO | GSA | PSO | GA
F8 | ave | −6561.15 | −9631.41 | −3594.1632 | −5740.3388 | −1663.9782 | −5885.1172 | −7408.6107 | −2849.0724 | −6908.6558 | −8184.4142
F8 | std | 1.83 × 10^−12 | 3.86 × 10^−12 | 811.3265 | 141.57 | 16.3492 | 467.5138 | 513.5784 | 264.3516 | 625.6248 | 833.2165
F9 | ave | 0 | 0 | 140.1238 | 5.70 × 10^−3 | 4.2011 | 8.5265 × 10^−15 | 10.2485 | 16.2675 | 57.0613 | 62.4114
F9 | std | 0 | 0 | 26.3124 | 1.46 × 10^−3 | 4.3692 × 10^−15 | 5.6446 × 10^−30 | 5.5608 × 10^−15 | 3.1776 × 10^−15 | 6.3552 × 10^−15 | 2.5421 × 10^−14
F10 | ave | 4.44 × 10^−15 | 8.88 × 10^−16 | 9.6987 × 10^−12 | 9.80 × 10^−14 | 0.3293 | 1.7053 × 10^−14 | 0.2757 | 3.5673 × 10^−9 | 2.1546 | 3.2218
F10 | std | 0 | 0 | 6.1325 × 10^−12 | 4.51 × 10^−12 | 1.9860 × 10^−16 | 2.7517 × 10^−29 | 2.5641 × 10^−15 | 3.6992 × 10^−25 | 7.9441 × 10^−16 | 5.1636 × 10^−15
F11 | ave | 0 | 0 | 0 | 1.00 × 10^−7 | 0.1189 | 0.0037 | 0.6082 | 3.7375 | 0.0462 | 1.2302
F11 | std | 0 | 0 | 0 | 7.46 × 10^−7 | 8.9991 × 10^−17 | 1.2606 × 10^−18 | 1.9860 × 10^−16 | 2.7804 × 10^−15 | 3.1031 × 10^−18 | 8.4406 × 10^−16
F12 | ave | 1.10 × 10^−8 | 0.2463 | 0.0851 | 0.0368 | 1.7414 | 0.0372 | 0.0203 | 0.0362 | 0.4806 | 0.047
F12 | std | 1.66 × 10^−22 | 7.45 × 10^−17 | 0.0052 | 1.5461 × 10^−2 | 8.1347 × 10^−12 | 4.3444 × 10^−17 | 7.7579 × 10^−19 | 6.2063 × 10^−18 | 1.8619 × 10^−16 | 4.6547 × 10^−18
F13 | ave | 1.78 × 10^−7 | 1.25 | 0.4901 | 2.9575 | 0.3456 | 0.5763 | 0.3293 | 0.002 | 0.5084 | 1.2085
F13 | std | 3.10 × 10^−18 | 4.47 × 10^−16 | 0.1932 | 1.5682 × 10^−12 | 3.25391 × 10^−12 | 2.4825 × 10^−15 | 2.1101 × 10^−14 | 4.2617 × 10^−14 | 4.9650 × 10^−17 | 3.2272 × 10^−16
Table 6. Optimization results of CMBO and the other algorithms on the fixed-dimensional multimodal functions (F14–F23).

Function | Metric | CMBO | TOA | MPA | TSA | WOA | GWO | TLBO | GSA | PSO | GA
F14 | ave | 0.998 | 0.998 | 0.998 | 1.9923 | 0.998 | 3.7408 | 2.2721 | 3.5913 | 2.1735 | 0.9986
F14 | std | 0 | 4.72 × 10^−16 | 4.2735 × 10^−16 | 2.6548 × 10^−7 | 9.4336 × 10^−16 | 6.4545 × 10^−15 | 1.9860 × 10^−16 | 7.9441 × 10^−16 | 7.9441 × 10^−16 | 1.5640 × 10^−15
F15 | ave | 0.000307 | 0.000307 | 0.003 | 0.0004 | 0.0049 | 0.0063 | 0.0033 | 0.0024 | 0.0535 | 5.3952 × 10^−2
F15 | std | 1.21 × 10^−20 | 1.16 × 10^−18 | 4.0951 × 10^−15 | 9.0125 × 10^−4 | 3.4910 × 10^−18 | 1.1636 × 10^−18 | 1.2218 × 10^−17 | 2.9092 × 10^−19 | 3.8789 × 10^−19 | 7.0791 × 10^−18
F16 | ave | −1.03163 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316
F16 | std | 1.47 × 10^−16 | 1.99 × 10^−16 | 4.4652 × 10^−16 | 2.6514 × 10^−16 | 9.9301 × 10^−16 | 3.9720 × 10^−16 | 1.4398 × 10^−15 | 5.9580 × 10^−16 | 3.4755 × 10^−16 | 7.9441 × 10^−16
F17 | ave | 0.3978 | 0.3978 | 0.3979 | 0.3991 | 0.4047 | 0.3978 | 0.3978 | 0.3978 | 0.7854 | 0.4369
F17 | std | 0 | 9.93 × 10^−17 | 9.1235 × 10^−15 | 2.1596 × 10^−16 | 2.4825 × 10^−17 | 8.6888 × 10^−17 | 7.4476 × 10^−17 | 9.9301 × 10^−17 | 4.9650 × 10^−17 | 4.9650 × 10^−17
F18 | ave | 3 | 3 | 3 | 3 | 3 | 3 | 3.0009 | 3 | 3 | 4.3592
F18 | std | 0 | 0 | 1.9584 × 10^−15 | 2.6528 × 10^−15 | 5.6984 × 10^−15 | 2.0853 × 10^−15 | 1.5888 × 10^−15 | 6.9511 × 10^−16 | 3.6741 × 10^−15 | 5.9580 × 10^−16
F19 | ave | −3.86278 | −3.86278 | −3.8627 | −3.8066 | −3.8627 | −3.8621 | −3.8609 | −3.8627 | −3.8627 | −3.85434
F19 | std | 1.83 × 10^−16 | 2.68 × 10^−16 | 4.2428 × 10^−15 | 2.6357 × 10^−15 | 3.1916 × 10^−15 | 2.4825 × 10^−15 | 7.3483 × 10^−15 | 8.3413 × 10^−15 | 8.9371 × 10^−15 | 9.9301 × 10^−17
F20 | ave | −3.322 | −3.322 | −3.3211 | −3.3206 | −3.2424 | −3.2523 | −3.2014 | −3.0396 | −3.2619 | −2.8239
F20 | std | 1.59 × 10^−16 | 1.69 × 10^−15 | 1.1421 × 10^−11 | 5.6918 × 10^−15 | 7.9441 × 10^−16 | 2.1846 × 10^−15 | 1.7874 × 10^−15 | 2.1846 × 10^−14 | 2.9790 × 10^−16 | 3.97205 × 10^−16
F21 | ave | −10.1532 | −10.1532 | −10.1532 | −5.5021 | −7.4016 | −9.6452 | −9.1746 | −5.1486 | −5.3891 | −4.3040
F21 | std | 1.15 × 10^−16 | 1.39 × 10^−15 | 2.5361 × 10^−11 | 5.4615 × 10^−13 | 2.3819 × 10^−11 | 6.5538 × 10^−15 | 8.5399 × 10^−15 | 2.9790 × 10^−16 | 1.4895 × 10^−15 | 1.5888 × 10^−15
F22 | ave | −10.4029 | −10.4029 | −10.4029 | −5.0625 | −8.8165 | −10.4025 | −10.0389 | −9.0239 | −7.6323 | −5.1174
F22 | std | 1.39 × 10^−16 | 3.18 × 10^−15 | 2.8154 × 10^−11 | 8.4637 × 10^−14 | 6.7524 × 10^−15 | 1.9860 × 10^−15 | 1.5292 × 10^−14 | 1.6484 × 10^−12 | 1.5888 × 10^−15 | 1.2909 × 10^−15
F23 | ave | −10.5364 | −10.5364 | −10.5364 | −10.3613 | −10.0003 | −10.1302 | −9.2905 | −8.9045 | −6.1648 | −6.5621
F23 | std | 1.35 × 10^−16 | 7.94 × 10^−16 | 3.9861 × 10^−11 | 7.6492 × 10^−12 | 9.1357 × 10^−15 | 4.5678 × 10^−15 | 1.1916 × 10^−15 | 7.1497 × 10^−14 | 2.7804 × 10^−15 | 3.8727 × 10^−15
Table 7. Statistical analysis results from the Wilcoxon test (p ≥ 0.05).

Compared Algorithms | Unimodal | High-Dimensional Multimodal | Fixed-Dimensional Multimodal
CMBO vs. TOA | 0.4375 | 0.4375 | 0.625
CMBO vs. TSA | 0.109375 | 0.0625 | 0.0625
CMBO vs. MPA | 0.015625 | 0.03125 | 0.003906
CMBO vs. WOA | 0.015625 | 0.03125 | 0.007813
CMBO vs. GWO | 0.15625 | 0.03125 | 0.011719
CMBO vs. TLBO | 0.15625 | 0.4375 | 0.005859
CMBO vs. GSA | 0.03125 | 0.03125 | 0.019531
CMBO vs. PSO | 0.015625 | 0.4375 | 0.003906
CMBO vs. GA | 0.015625 | 0.4375 | 0.001953
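As an illustration of how entries such as those in Table 7 can be produced, the sketch below applies the paired Wilcoxon signed-rank test from SciPy to the mean results of CMBO and GSA on the unimodal functions of Table 4. Treating the seven function means as paired samples is the assumed setup; the F6 pair (0, 0) has zero difference and is discarded by SciPy's default zero_method.

```python
# Sketch: Wilcoxon signed-rank test on paired benchmark results
# (data copied from Table 4, CMBO vs. GSA on the unimodal functions F1-F7).
from scipy.stats import wilcoxon

cmbo = [2.69e-236, 6.88e-121, 2.44e-60, 1.04e-93, 24.87011, 0.0, 0.002709]
gsa  = [2.0255e-17, 2.3702e-8, 279.3439, 3.2547e-9, 36.10695, 0.0, 0.0206]

stat, p = wilcoxon(cmbo, gsa)          # zero-difference pairs are dropped
print(f"statistic = {stat}, p = {p}")  # small p indicates a significant gap
```

With six remaining pairs and CMBO better on all of them, the two-sided p-value works out to 0.03125, which matches the CMBO vs. GSA unimodal entry in Table 7.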
Table 8. Results of the algorithm's sensitivity analysis with respect to the number of population members.

Objective Function | N = 20 | N = 30 | N = 50 | N = 80
F1 | 3.7 × 10^−201 | 1.4 × 10^−214 | 2.7 × 10^−236 | 1.1 × 10^−243
F2 | 5.4 × 10^−137 | 1.4 × 10^−126 | 6.9 × 10^−121 | 4.3 × 10^−119
F3 | 8.69 × 10^−74 | 8.34 × 10^−60 | 2.44 × 10^−60 | 2.42 × 10^−57
F4 | 5.5 × 10^−108 | 1.23 × 10^−98 | 1.04 × 10^−93 | 4.96 × 10^−92
F5 | 26.86162 | 25.87908 | 24.87011 | 24.58636
F6 | 0 | 0 | 0 | 0
F7 | 0.008517 | 0.006639 | 0.002709 | 0.001691
F8 | −4696 | −7900.45 | −6561.15 | −7142.03
F9 | 0 | 0 | 0 | 0
F10 | 4.44 × 10^−15 | 4.44 × 10^−15 | 4.44 × 10^−15 | 4.44 × 10^−15
F11 | 0 | 0 | 0 | 0
F12 | 0.038614 | 0.003171 | 1.1 × 10^−8 | 1.36 × 10^−9
F13 | 1.281798 | 0.305144 | 1.78 × 10^−7 | 5.22 × 10^−9
F14 | 1.593234 | 1.196414 | 0.998 | 0.998004
F15 | 0.000418 | 0.000311 | 0.000307 | 0.000307
F16 | −1.03163 | −1.03163 | −1.03163 | −1.03163
F17 | 0.397887 | 0.397887 | 0.3978 | 0.397887
F18 | 9.75 | 3 | 3 | 3
F19 | −3.82413 | −3.86278 | −3.86278 | −3.86278
F20 | −3.3005 | −3.30416 | −3.322 | −3.322
F21 | −8.71417 | −8.61749 | −10.1532 | −10.1532
F22 | −7.84302 | −8.35605 | −10.4029 | −10.4029
F23 | −8.57191 | −9.61404 | −10.5364 | −10.5364
Table 9. Results of the algorithm's sensitivity analysis with respect to the maximum number of iterations.

Objective Function | T = 100 | T = 500 | T = 800 | T = 1000
F1 | 9.54 × 10^−20 | 6.7 × 10^−115 | 9.3 × 10^−187 | 2.7 × 10^−236
F2 | 9.48 × 10^−11 | 3.22 × 10^−59 | 1.47 × 10^−95 | 6.9 × 10^−121
F3 | 0.047734 | 7.69 × 10^−24 | 7.75 × 10^−41 | 2.44 × 10^−60
F4 | 5.41 × 10^−8 | 1.27 × 10^−45 | 8.96 × 10^−74 | 1.04 × 10^−93
F5 | 27.78238 | 26.04638 | 25.62131 | 24.87011
F6 | 0 | 0 | 0 | 0
F7 | 0.012014 | 0.005967 | 0.005116 | 0.002709
F8 | −3642.94 | −4496.48 | −5014.61 | −6561.15
F9 | 0 | 0 | 0 | 0
F10 | 7.19 × 10^−11 | 4.44 × 10^−15 | 4.44 × 10^−15 | 4.44 × 10^−15
F11 | 0 | 0 | 0 | 0
F12 | 0.023937 | 0.000129 | 1.23 × 10^−5 | 1.1 × 10^−8
F13 | 0.324195 | 0.030145 | 0.016247 | 1.78 × 10^−7
F14 | 1.096872 | 0.998004 | 0.998004 | 0.998
F15 | 0.000529 | 0.000341 | 0.000308 | 0.000307
F16 | −1.03163 | −1.03163 | −1.03163 | −1.03163
F17 | 0.397887 | 0.397887 | 0.397887 | 0.3978
F18 | 3 | 3 | 3 | 3
F19 | −3.86278 | −3.86278 | −3.86278 | −3.86278
F20 | −3.31584 | −3.32199 | −3.322 | −3.322
F21 | −8.93555 | −9.64077 | −10.1532 | −10.1532
F22 | −9.13251 | −10.4027 | −10.4029 | −10.4029
F23 | −10.4926 | −10.5364 | −10.5362 | −10.5364
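The protocol behind Tables 8 and 9 is simple: fix everything else, vary one budget parameter (population size or iteration count), and record the best objective value found. A minimal Python harness for this protocol is sketched below; the optimize routine is only a placeholder random-search stand-in for the real CMBO implementation, and the dimension and bounds are assumptions.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def optimize(objective, dim, pop_size, max_iter, lo=-100.0, hi=100.0, seed=0):
    # Placeholder population-based optimizer (random search over a shrinking
    # neighbourhood); it stands in for the real CMBO routine in this sketch.
    rng = np.random.default_rng(seed)
    best_x = rng.uniform(lo, hi, dim)
    best_f = objective(best_x)
    for t in range(max_iter):
        scale = (hi - lo) * (1.0 - t / max_iter)          # shrink the search radius
        cand = best_x + rng.normal(0.0, scale, (pop_size, dim))
        f = np.array([objective(c) for c in cand])
        if f.min() < best_f:
            best_f, best_x = f.min(), cand[f.argmin()].copy()
    return best_f

for pop_size in (20, 30, 50, 80):            # Table 8: vary the population size
    print("N =", pop_size, "->", optimize(sphere, 30, pop_size, 1000))

for max_iter in (100, 500, 800, 1000):       # Table 9: vary the iteration budget
    print("T =", max_iter, "->", optimize(sphere, 30, 50, max_iter))
```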
Table 10. Comparison of the average execution time (ave_time) and the standard deviation of the execution time (std_time), both in seconds.

Function | Metric | CMBO | TOA | MPA | TSA | WOA | GWO | TLBO | GSA | PSO | GA
F1 | ave_time | 2.06327 | 2.288047 | 2.828934 | 2.27456 | 2.856446 | 3.142379 | 3.662429 | 9.024897 | 3.685416 | 3.876233
F1 | std_time | 0.008802 | 0.103601 | 0.021766 | 0.00882 | 0.039718 | 0.01458 | 0.008365 | 0.110843 | 0.01622 | 0.044052
F2 | ave_time | 2.149418 | 2.33419 | 2.222183 | 2.496359 | 3.143495 | 3.000322 | 3.747463 | 9.602311 | 3.425936 | 3.34707
F2 | std_time | 0.009976 | 0.05675 | 0.00619 | 0.004159 | 0.012551 | 0.001226 | 0.00205 | 0.029098 | 0.001869 | 0.001112
F3 | ave_time | 3.759693 | 6.014573 | 6.474818 | 5.805883 | 10.96679 | 6.132736 | 13.31042 | 11.65223 | 14.68167 | 12.16981
F3 | std_time | 0.049343 | 0.184878 | 0.056553 | 0.003913 | 0.020291 | 0.002126 | 0.005486 | 0.05522 | 0.023614 | 0.063015
F4 | ave_time | 2.095707 | 2.303496 | 2.830211 | 2.429061 | 2.82791 | 2.884421 | 3.475663 | 9.399065 | 3.298336 | 3.218456
F4 | std_time | 0.000425 | 0.058168 | 0.006382 | 0.001313 | 0.002522 | 0.000774 | 0.000146 | 0.095684 | 0.002026 | 0.001128
F5 | ave_time | 2.319993 | 2.668194 | 3.271111 | 2.837473 | 4.076267 | 3.362668 | 4.743216 | 9.85618 | 4.510391 | 4.74454
F5 | std_time | 0.001194 | 0.098008 | 0.020855 | 0.001889 | 0.004884 | 0.0012 | 0.013286 | 0.000635 | 0.004473 | 0.015477
F6 | ave_time | 2.051842 | 2.142897 | 2.778443 | 2.388851 | 2.636699 | 2.884786 | 3.543828 | 9.739783 | 2.826627 | 3.964025
F6 | std_time | 0.000734 | 0.053895 | 0.002564 | 0.000531 | 0.000103 | 0.000545 | 0.003082 | 0.081756 | 0.00079 | 0.002177
F7 | ave_time | 2.799308 | 3.426696 | 5.361778 | 4.208485 | 7.73167 | 4.624996 | 9.50111 | 11.0389 | 10.25112 | 8.948355
F7 | std_time | 0.000298 | 0.138022 | 0.010684 | 0.001381 | 0.013762 | 0.000748 | 0.01131 | 0.002795 | 0.062275 | 0.003807
F8 | ave_time | 2.368102 | 2.787053 | 3.54212 | 3.038116 | 4.470513 | 3.45166 | 5.934958 | 10.15033 | 6.189501 | 5.35854
F8 | std_time | 0.001774 | 0.146471 | 0.028588 | 0.001872 | 0.008998 | 0.000947 | 0.024404 | 0.002614 | 0.027143 | 0.006958
F9 | ave_time | 2.105224 | 2.141418 | 3.207098 | 2.70311 | 2.998242 | 2.962939 | 5.075711 | 10.07901 | 4.697519 | 4.632448
F9 | std_time | 0.000457 | 0.022495 | 0.00947 | 0.000574 | 0.001451 | 0.001075 | 0.007871 | 0.034299 | 0.006383 | 0.001877
F10 | ave_time | 2.119633 | 2.213827 | 3.199752 | 2.685157 | 3.250468 | 2.951537 | 4.663569 | 9.862325 | 4.465213 | 4.851987
F10 | std_time | 0.000675 | 0.106657 | 0.008234 | 0.001502 | 0.002534 | 0.000494 | 0.002845 | 0.030374 | 0.003311 | 0.000453
F11 | ave_time | 2.382237 | 2.702994 | 3.627042 | 3.029926 | 4.341168 | 3.544234 | 5.783952 | 10.09065 | 5.962496 | 5.935665
F11 | std_time | 0.000521 | 0.090901 | 0.002333 | 0.006447 | 0.004761 | 0.011232 | 0.004814 | 0.001281 | 0.004878 | 0.010718
F12 | ave_time | 4.689757 | 6.401408 | 9.786492 | 7.746796 | 16.08114 | 8.081277 | 22.27949 | 13.02264 | 23.24494 | 18.18917
F12 | std_time | 0.000582 | 0.136896 | 0.121341 | 0.003636 | 0.010787 | 0.005813 | 0.145335 | 0.001683 | 0.143479 | 0.016131
F13 | ave_time | 4.683313 | 6.317049 | 9.715068 | 7.882488 | 16.03426 | 8.216492 | 21.93277 | 12.87604 | 22.93827 | 17.26237
F13 | std_time | 0.00164 | 0.142056 | 0.059173 | 0.015565 | 0.009175 | 0.040121 | 0.238577 | 0.080775 | 0.040952 | 0.026452
F14 | ave_time | 6.77659 | 11.05703 | 10.17388 | 11.25044 | 28.35404 | 10.67249 | 38.58243 | 10.43203 | 42.03153 | 31.15071
F14 | std_time | 0.024394 | 0.495939 | 0.132818 | 0.007729 | 0.043197 | 0.004033 | 0.463309 | 0.00956 | 0.793664 | 0.187898
F15 | ave_time | 1.162517 | 2.036434 | 1.674953 | 1.143977 | 2.66252 | 1.168071 | 4.228257 | 4.381512 | 2.407291 | 3.348114
F15 | std_time | 0.0006 | 0.076443 | 0.004268 | 0.001227 | 0.003729 | 0.000474 | 0.004635 | 0.000891 | 0.003895 | 0.000252
F16 | ave_time | 0.981154 | 1.965747 | 1.553408 | 1.014295 | 2.554895 | 0.98797 | 4.155264 | 3.895143 | 2.183457 | 3.277226
F16 | std_time | 0.000487 | 0.097936 | 0.013704 | 0.001489 | 0.002058 | 0.000326 | 0.002632 | 0.006833 | 0.000941 | 0.000117
F17 | ave_time | 0.976854 | 1.818405 | 1.235874 | 0.86938 | 2.127932 | 0.865444 | 3.596898 | 3.391429 | 1.754779 | 2.926444
F17 | std_time | 0.025755 | 0.076691 | 0.005655 | 0.000666 | 0.001913 | 0.000584 | 0.005077 | 0.002162 | 0.001703 | 0.000751
F18 | ave_time | 0.895712 | 1.828383 | 1.161598 | 0.778106 | 1.943841 | 0.827821 | 3.474254 | 3.32534 | 1.553584 | 3.013364
F18 | std_time | 0.000171 | 0.115063 | 0.000996 | 5.56 × 10^−5 | 0.000193 | 0.000381 | 0.004198 | 0.007157 | 0.002831 | 0.002876
F19 | ave_time | 1.15748 | 2.291343 | 1.500319 | 1.165046 | 2.852 | 1.19711 | 4.232431 | 3.811885 | 2.8576 | 3.788615
F19 | std_time | 0.002134 | 0.265954 | 0.00141 | 0.000364 | 0.00364 | 0.000711 | 0.007588 | 0.009134 | 0.009284 | 0.000403
F20 | ave_time | 1.282607 | 2.124999 | 1.57968 | 1.384633 | 3.174224 | 1.359368 | 4.217968 | 4.155625 | 3.802493 | 4.105984
F20 | std_time | 0.000401 | 0.068302 | 0.001507 | 0.001233 | 0.004248 | 0.000464 | 0.003929 | 0.00207 | 0.082165 | 0.003145
F21 | ave_time | 1.325595 | 2.387586 | 1.958585 | 1.625469 | 3.975843 | 1.81357 | 5.617517 | 3.935374 | 3.834105 | 7.570488
F21 | std_time | 0.001363 | 0.086822 | 0.021254 | 0.002127 | 0.072322 | 0.009181 | 0.026092 | 0.004605 | 0.006494 | 8.978715
F22 | ave_time | 1.460515 | 2.645545 | 2.197999 | 1.762546 | 4.159347 | 1.714076 | 6.661208 | 4.227989 | 4.859716 | 5.190246
F22 | std_time | 0.004856 | 0.142053 | 0.018205 | 0.00058 | 0.002778 | 0.000859 | 0.005957 | 0.03825 | 0.028151 | 0.001234
F23 | ave_time | 1.558004 | 2.857437 | 2.69353 | 2.168531 | 5.108969 | 2.032143 | 7.707782 | 5.007117 | 6.206333 | 6.102901
F23 | std_time | 0.001463 | 0.159845 | 0.019916 | 0.002441 | 0.007591 | 0.003044 | 0.00225 | 0.005821 | 0.15752 | 0.001592
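The timing figures in Table 10 are straightforward to reproduce in principle: run each algorithm-function pair repeatedly and report the mean and standard deviation of the wall-clock time. The sketch below shows one way to do this in Python; the 20-repetition count and the dummy workload being timed are illustrative assumptions, not the paper's actual setup.

```python
import time
import numpy as np

def timed(fn, runs=20):
    # Measure wall-clock time over several runs and summarize it.
    durations = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        durations.append(time.perf_counter() - t0)
    return np.mean(durations), np.std(durations)

# Dummy workload standing in for one full optimizer run on one benchmark.
ave_time, std_time = timed(lambda: np.linalg.eigvals(np.random.rand(300, 300)))
print(f"ave_time = {ave_time:.6f} s, std_time = {std_time:.6f} s")
```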
Table 11. Comparative review of CMBO and the compared algorithms.

Algorithm | Disadvantages | Advantages
GA | High memory consumption, control parameters, and poor local search. | Good global search, simplicity, and comprehensibility.
PSO | Control parameters, poor convergence, and entrapment in local optima. | Simplicity of its relations and of its implementation.
GSA | Heavy computation, time consuming, several control parameters, and poor convergence on complex objective functions. | Easy implementation, fast convergence on simple problems, and low computational cost.
TLBO | Poor convergence rate. | Good global search, simplicity, and no control parameters.
GWO | Low convergence speed, poor local search, and low accuracy in solving complex problems. | Fast convergence due to continuous reduction of the search space, low storage and computational requirements, and easy implementation thanks to its simple structure.
WOA | Low accuracy, slow convergence, and a tendency to fall into local optima. | Simple structure, few required operators, and an appropriate balance between exploration and exploitation.
MPA | Heavy computation, time consuming, and control parameters. | Good global search and fast convergence.
TSA | Poor convergence, control parameters, and falling into local optima when solving high-dimensional multimodal problems. | Fast convergence, good global search, and an appropriate balance between exploration and exploitation.
TOA | Falling into local optima when solving high-dimensional multimodal problems. | No control parameters, good global search, an appropriate balance between exploration and exploitation, and fast convergence.
CMBO | As with all optimization algorithms, it cannot be claimed that any one algorithm is the best optimizer for every optimization problem; it also always remains possible to develop new algorithms that provide quasi-optimal solutions closer to the global optimum. | Easy implementation, simple equations, no control parameters, proper exploitation, proper exploration, high convergence power, and avoidance of local optima.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
