Article

An Improved Controlled Random Search Method

by Vasileios Charilogis 1, Ioannis Tsoulos 1,*, Alexandros Tzallas 1 and Nikolaos Anastasopoulos 2

1 Department of Informatics and Telecommunications, University of Ioannina, 471 00 Arta, Greece
2 Computer Engineering and Information Department, University of Patras, 265 04 Rio Patras, Greece
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(11), 1981; https://doi.org/10.3390/sym13111981
Submission received: 15 September 2021 / Revised: 15 October 2021 / Accepted: 17 October 2021 / Published: 20 October 2021

Abstract: A modified version of a common global optimization method named controlled random search is presented here. This method is designed to estimate the global minimum of multidimensional symmetric and asymmetric functional problems. The new method modifies the original algorithm by incorporating a new sampling method, a new termination rule and the periodical application of a local search optimization algorithm to the points sampled. The new version is compared against the original using some benchmark functions from the relevant literature.

1. Introduction

Global optimization [1] is considered a problem of high complexity with many applications. The problem is defined as the location of the global minimum of a multidimensional function $f(x)$:

$$x^* = \arg\min_{x \in S} f(x)$$

where $S \subset \mathbb{R}^n$ is formulated as:

$$S = [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_n, b_n]$$
In global optimization, many of the functional problems to be solved can have symmetric solutions for the minimum, although this is not the rule. The location of the global optimum finds application in many areas, such as physics [2,3], chemistry [4,5], medicine [6,7], economics [8], etc. In modern theory there are two different categories of global optimization methods: stochastic methods and deterministic methods. The first category contains the vast majority of methods, such as simulated annealing [9,10,11], genetic algorithms [12,13,14], tabu search [15], particle swarm optimization [16,17,18], etc. A common method that also belongs to the stochastic category is controlled random search (CRS) [19], a procedure that operates on a population of trial solutions. The method initially creates a set of randomly selected points and repeatedly replaces the worst point in that set with a randomly generated one. This process continues until some termination criterion is satisfied. The CRS method has been used intensively in many problems, such as geophysics problems [20,21], optimal shape design problems [22], the animal diet problem [23], the heat transfer problem [24], etc.
The CRS method has been thoroughly analyzed by many researchers in the field. Ali and Storey proposed two new variants of the CRS method [25], introducing alternative techniques for the selection of the initial sample set and for the usage of local search methods. Additionally, Di Pillo et al. [26] suggested a hybrid CRS method in which the base algorithm is combined with a Newton-type unconstrained minimization algorithm [27] to enhance its efficiency on various test problems. Kaelo and Ali suggested [28] some modifications to the method, especially in the new point generation step. Additionally, Manzanares-Filho and Albuquerque suggested [29] the usage of a distribution strategy to accelerate the controlled random search method. Tsoulos and Lagaris [30] suggested a new line search method based on genetic algorithms to improve the original CRS method. The current work proposes three major modifications to the CRS method: a new point replacement strategy, a stochastic termination rule and the periodical application of a local search method. The first modification better explores the domain range of the function. The second achieves a timely termination of the method without wasting valuable computational time. The third speeds up the method by applying a small number of steps of a local search method. The new method introduces a technique to create trial points that was not present in the previous work [30] and also replaces the expensive line search call with a few calls to a local search optimization method.
The rest of this article is organized as follows: in Section 2, the major steps of the CRS method as well as the proposed modifications are presented; in Section 3, the results from the application of the proposed method on a series of benchmark functions are listed; and finally, in Section 4, some conclusions and guidelines for future research are presented.

2. Method Description

The controlled random search method proceeds in a series of steps, described in Algorithm 1. The changes proposed by the new method focus on three points:
  • The creation of a trial point (New_Point step) is performed using the new procedure described in Section 2.1.
  • In the Min_Max step, the stochastic termination rule described in Section 2.2 is used. The aim of this rule is to terminate the method when, with some certainty, no lower minima are to be found.
  • A few steps of a local search procedure are applied to the point $\tilde{z}$ after the New_Point step. This brings the trial points closer to the corresponding minima and speeds up the search for new minima, although it obviously increases the number of function calls.

2.1. A New Method for Trial Points

The proposed technique to compute the trial point $\tilde{z}$ is shown in Algorithm 2. In this calculation, the trial point $\tilde{z}$ no longer involves a product with large values as in the basic algorithm, so the trial point does not land far from the centroid. This avoids large jumps away from the centroid, which carries the most weight in the calculation of the starting point for the local optimization. The method also incorporates the current minimum point into the calculation, rather than only a random point as in the original technique. With this modification, knowledge already acquired during the search is used to create the new point, so that it lies close to the region of attraction of a local minimum.

2.2. A New Stopping Rule

It is quite common in optimization techniques to use a predefined number of maximum iterations as the stopping rule. Even though this termination rule is easy to implement, it can require an excessive number of function calls before termination; therefore, a more sophisticated termination rule is needed. The termination rule proposed here is inspired by [31]. At every iteration k, the variance $\sigma^{(k)}$ of the quantity $f_{\min}$ is calculated. If the optimization technique has not managed to find a new estimate of the global minimum for some iterations, then the global minimum has probably been discovered and the algorithm should terminate. The termination rule is defined as follows; terminate when:

$$\sigma^{(k)} \leq \frac{\sigma^{(k_{\mathrm{last}})}}{2}$$

The term $k_{\mathrm{last}}$ represents the last iteration in which a new global minimum was located.
Algorithm 1: The basic steps of the original controlled random search method.
Initialization Step:
  • Set the value for the parameter N. Typically, this value could be set to $N = 25n$.
  • Set $\epsilon$ as a small positive value, used in comparisons.
  • Create randomly the set $T = \{z_1, z_2, \ldots, z_N\}$ from S.
Min_Max Step:
  • Calculate the points $z_{\min} = \arg\min_{z \in T} f(z)$ and $z_{\max} = \arg\max_{z \in T} f(z)$ and their function values $f_{\max} = \max_{z \in T} f(z)$ and $f_{\min} = \min_{z \in T} f(z)$.
  • If $f_{\max} - f_{\min} < \epsilon$, then goto Local_Search Step.
New_Point Step:
  • Select randomly the reduced set $\tilde{T} = \{z_{T_1}, z_{T_2}, \ldots, z_{T_{n+1}}\}$ from T.
  • Compute the centroid G:
    $$G = \frac{1}{n} \sum_{i=1}^{n} z_{T_i}$$
  • Compute a trial point $\tilde{z} = 2G - z_{T_{n+1}}$.
  • If $\tilde{z} \notin S$ or $f(\tilde{z}) \geq f_{\max}$, then goto New_Point Step.
Update Step:
  • Set $T = \left(T \cup \{\tilde{z}\}\right) \setminus \{z_{\max}\}$.
  • Goto Min_Max Step.
Local_Search Step:
  • $z^* = \mathrm{localSearch}(z_{\min})$.
  • The final outcome of the algorithm is the discovered global minimum $z^*$.
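For concreteness, the following is a minimal C++ sketch of the loop of Algorithm 1, assuming the objective f and the box bounds a, b are supplied by the caller; the population layout, the fixed seed and all identifiers are illustrative choices of this sketch rather than the authors' implementation, and the final local search is left as a comment.

```cpp
// Minimal sketch of the original CRS loop (Algorithm 1); f, a, b and the
// final local search are assumed to be supplied by the caller.
#include <algorithm>
#include <functional>
#include <numeric>
#include <random>
#include <vector>

using Point = std::vector<double>;

Point crs(const std::function<double(const Point&)>& f,
          const Point& a, const Point& b, double eps = 1e-6)
{
    const size_t n = a.size(), N = 25 * n;          // typical choice N = 25n
    std::mt19937 rng(12345);                        // illustrative fixed seed
    std::uniform_real_distribution<double> u(0.0, 1.0);

    // Initialization Step: N uniform samples from S, with cached f-values.
    std::vector<Point> T(N, Point(n));
    std::vector<double> fT(N);
    for (size_t k = 0; k < N; ++k) {
        for (size_t i = 0; i < n; ++i)
            T[k][i] = a[i] + u(rng) * (b[i] - a[i]);
        fT[k] = f(T[k]);
    }
    auto inS = [&](const Point& z) {
        for (size_t i = 0; i < n; ++i)
            if (z[i] < a[i] || z[i] > b[i]) return false;
        return true;
    };

    for (;;) {
        // Min_Max Step: best and worst members of T.
        const size_t imin = std::min_element(fT.begin(), fT.end()) - fT.begin();
        const size_t imax = std::max_element(fT.begin(), fT.end()) - fT.begin();
        if (fT[imax] - fT[imin] < eps)
            return T[imin];      // the paper then applies localSearch(z_min)

        // New_Point Step: reflect z_{T_{n+1}} through the centroid of n
        // randomly selected members; retry while the trial is rejected.
        Point trial(n);
        double ftrial = 0.0;
        do {
            std::vector<size_t> idx(N);
            std::iota(idx.begin(), idx.end(), 0);
            std::shuffle(idx.begin(), idx.end(), rng); // idx[0..n]: reduced set
            for (size_t d = 0; d < n; ++d) {
                double G = 0.0;
                for (size_t i = 0; i < n; ++i) G += T[idx[i]][d];
                G /= double(n);
                trial[d] = 2.0 * G - T[idx[n]][d];     // trial = 2G - z_{T_{n+1}}
            }
            ftrial = inS(trial) ? f(trial) : 0.0;      // value unused if rejected
        } while (!inS(trial) || ftrial >= fT[imax]);

        // Update Step: the trial point replaces the worst member.
        T[imax] = trial;
        fT[imax] = ftrial;
    }
}
```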
The quantity $\sigma^{(k)}$ decreases continuously over time, as either the method will find a lower estimate for the global minimum or the global minimum will already have been found. In addition, this quantity is by construction always nonnegative and is therefore a good candidate for use in a termination criterion. If the global minimum has already been found, or the method is no longer able to find a new estimate for it, then this quantity will tend to zero, and we can therefore interrupt the execution of the algorithm when it falls below some value. This value may be a fraction of the value of $\sigma^{(k)}$ at the last time a new estimate for the global minimum was found. If we want to allow the algorithm to continue for several generations, this fraction can be small, e.g., 0.25. If we want it to stop sooner, a good estimate for the fraction is 0.75. A good compromise between these values is 0.5, which is the value chosen here.
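One possible online implementation of this rule is sketched below, under the reading that $\sigma^{(k)}$ is the running variance of the $f_{\min}$ values recorded at each iteration; the Welford bookkeeping, the warm-up guard kmin and all names are assumptions of this sketch, not the authors' code.

```cpp
// Sketch of the stochastic stopping rule: track the running variance of
// f_min and stop once it falls to half its value at the last improvement.
struct StopRule {
    long   k = 0;                  // iteration counter
    long   kmin = 20;              // warm-up guard (our addition)
    double mean = 0.0, m2 = 0.0;   // Welford accumulators over f_min
    double best = 1e300;           // best f_min seen so far
    double sigmaLast = 0.0;        // sigma at the last improvement, sigma(k_last)

    // Feed the current f_min; returns true when the rule says terminate.
    bool terminate(double fmin) {
        ++k;
        const double d = fmin - mean;
        mean += d / double(k);
        m2 += d * (fmin - mean);
        const double sigma = (k > 1) ? m2 / double(k - 1) : 0.0;  // sigma(k)
        if (fmin < best) {         // new estimate of the global minimum
            best = fmin;
            sigmaLast = sigma;     // remember sigma(k_last)
            return false;
        }
        return k > kmin && sigma <= sigmaLast / 2.0;  // sigma(k) <= sigma(k_last)/2
    }
};
```

In the loop sketched earlier, terminate(fT[imin]) would be consulted in the Min_Max step in place of, or alongside, the $f_{\max} - f_{\min} < \epsilon$ test.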
Algorithm 2: The steps of the new proposed method to create more efficient trial points for the controlled random search method.
  • Calculate the centroid G:
    $$G = \frac{1}{n} \sum_{i=1}^{n} z_{T_i}$$
  • Set $G = G + \frac{1}{n} z_{\min}$.
  • Compute the trial point $\tilde{z} = G - \frac{1}{n} z_{T_{n+1}}$.
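Transcribed directly into C++, the three steps of Algorithm 2 might look as follows; the function name and calling convention are ours.

```cpp
#include <vector>

using Point = std::vector<double>;

// reduced[0..n-1] form the centroid, reduced[n] is z_{T_{n+1}};
// zmin is the current best point of the set T.
Point newTrialPoint(const std::vector<Point>& reduced, const Point& zmin)
{
    const size_t n = zmin.size();
    Point trial(n);
    for (size_t d = 0; d < n; ++d) {
        double G = 0.0;
        for (size_t i = 0; i < n; ++i) G += reduced[i][d];
        G /= double(n);                           // G = (1/n) * sum z_{T_i}
        G += zmin[d] / double(n);                 // G = G + (1/n) * z_min
        trial[d] = G - reduced[n][d] / double(n); // trial = G - (1/n) * z_{T_{n+1}}
    }
    return trial;
}
```

Compared with the full reflection $\tilde{z} = 2G - z_{T_{n+1}}$ of the original scheme, the $\frac{1}{n}$ scaling and the pull toward $z_{\min}$ keep the trial point close to the centroid, which is consistent with the sharply reduced rejection rates reported in Section 3.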

3. Experiments

3.1. Test Functions

The modified version of the CRS was tested against the traditional CRS on a series of benchmark functions from the relevant literature [32,33]. The following functions were used:
  • Bf1 function, defined as:
    $$f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos(3\pi x_1) - \frac{4}{10}\cos(4\pi x_2) + \frac{7}{10}$$
    with $x \in [-100, 100]^2$;
  • Bf2 function:
    $$f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos(3\pi x_1)\cos(4\pi x_2) + \frac{3}{10}$$
    where $x \in [-50, 50]^2$;
  • Branin function:
    $$f(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos(x_1) + 10$$
    with $-5 \leq x_1 \leq 10$, $0 \leq x_2 \leq 15$;
  • CM (Cosine Mixture) function:
    $$f(x) = \sum_{i=1}^{n} x_i^2 - \frac{1}{10}\sum_{i=1}^{n}\cos(5\pi x_i)$$
    with $x \in [-1, 1]^n$. In our experiments we used $n = 4$;
  • Camel function:
    $$f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4, \quad x \in [-5, 5]^2;$$
  • Easom function:
    $$f(x) = -\cos(x_1)\cos(x_2)\exp\left(-\left(x_2 - \pi\right)^2 - \left(x_1 - \pi\right)^2\right)$$
    with $x \in [-100, 100]^2$;
  • Exponential function:
    $$f(x) = -\exp\left(-0.5\sum_{i=1}^{n} x_i^2\right), \quad -1 \leq x_i \leq 1$$
    In the conducted experiments, the values $n = 2, 4, 8, 16, 32, 64, 100$ were used, and the corresponding functions are denoted EXP2, EXP4, EXP8, EXP16, EXP32, EXP64, EXP100;
  • Goldstein & Price function:
    $$\begin{aligned} f(x) = {} & \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2\right)\right] \\ & \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2\right)\right]; \end{aligned}$$
  • Griewank2 function:
    $$f(x) = 1 + \frac{1}{200}\sum_{i=1}^{2} x_i^2 - \prod_{i=1}^{2}\frac{\cos(x_i)}{\sqrt{i}}, \quad x \in [-100, 100]^2;$$
  • Gkls function: $f(x) = \mathrm{Gkls}(x, n, w)$ is a function with w local minima, described in [34], with $x \in [-1, 1]^n$. In the conducted experiments, we used $n = 2, 3$ and $w = 50$, and the functions are denoted by the labels GKLS250 and GKLS350;
  • Guilin Hills function:
    $$f(x) = 3 + \sum_{i=1}^{n} c_i \frac{x_i + 9}{x_i + 10}\sin\left(\frac{\pi}{1 - x_i + \frac{1}{2k_i}}\right)$$
    with $x \in [0, 1]^n$, $c_i > 0$ and $k_i$ positive integers. In our experiments, we used $n = 5, 10$ with 50 local minima in each function. The produced functions are entitled GUILIN550 and GUILIN1050;
  • Hansen function:
    $$f(x) = \sum_{i=1}^{5} i\cos\left((i-1)x_1 + i\right)\sum_{j=1}^{5} j\cos\left((j+1)x_2 + j\right), \quad x \in [-10, 10]^2;$$
  • Hartman 3 function:
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}\left(x_j - p_{ij}\right)^2\right)$$
    with $x \in [0, 1]^3$ and
    $$a = \begin{pmatrix} 3 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix}, \quad p = \begin{pmatrix} 0.3689 & 0.117 & 0.2673 \\ 0.4699 & 0.4387 & 0.747 \\ 0.1091 & 0.8732 & 0.5547 \\ 0.03815 & 0.5743 & 0.8828 \end{pmatrix};$$
  • Hartman 6 function:
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}\left(x_j - p_{ij}\right)^2\right)$$
    with $x \in [0, 1]^6$ and
    $$a = \begin{pmatrix} 10 & 3 & 17 & 3.5 & 1.7 & 8 \\ 0.05 & 10 & 17 & 0.1 & 8 & 14 \\ 3 & 3.5 & 1.7 & 10 & 17 & 8 \\ 17 & 8 & 0.05 & 10 & 0.1 & 14 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix},$$
    $$p = \begin{pmatrix} 0.1312 & 0.1696 & 0.5569 & 0.0124 & 0.8283 & 0.5886 \\ 0.2329 & 0.4135 & 0.8307 & 0.3736 & 0.1004 & 0.9991 \\ 0.2348 & 0.1451 & 0.3522 & 0.2883 & 0.3047 & 0.6650 \\ 0.4047 & 0.8828 & 0.8732 & 0.5743 & 0.1091 & 0.0381 \end{pmatrix};$$
  • Rastrigin function:
    $$f(x) = x_1^2 + x_2^2 - \cos(18x_1) - \cos(18x_2), \quad x \in [-1, 1]^2;$$
  • Rosenbrock function:
    $$f(x) = \sum_{i=1}^{n-1}\left(100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right), \quad -30 \leq x_i \leq 30$$
    In our experiments we used this function with $n = 20$;
  • Shekel 7 function:
    $$f(x) = -\sum_{i=1}^{7}\frac{1}{(x - a_i)(x - a_i)^T + c_i}$$
    with $x \in [0, 10]^4$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 3 & 5 & 3 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \end{pmatrix};$$
  • Shekel 5 function:
    $$f(x) = -\sum_{i=1}^{5}\frac{1}{(x - a_i)(x - a_i)^T + c_i}$$
    with $x \in [0, 10]^4$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \end{pmatrix};$$
  • Shekel 10 function:
    $$f(x) = -\sum_{i=1}^{10}\frac{1}{(x - a_i)(x - a_i)^T + c_i}$$
    with $x \in [0, 10]^4$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 5 & 3 & 3 \\ 8 & 1 & 8 & 1 \\ 6 & 2 & 6 & 2 \\ 7 & 3.6 & 7 & 3.6 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \\ 0.7 \\ 0.5 \\ 0.6 \end{pmatrix};$$
  • Sinusoidal function:
    $$f(x) = -\left(2.5\prod_{i=1}^{n}\sin\left(x_i - z\right) + \prod_{i=1}^{n}\sin\left(5\left(x_i - z\right)\right)\right), \quad 0 \leq x_i \leq \pi$$
    In our experiments, we used $n = 4, 8, 16, 32$ and $z = \frac{\pi}{6}$, and the corresponding functions are denoted by the labels SINU4, SINU8, SINU16, SINU32;
  • Test2N function, given by the equation
    $$f(x) = \frac{1}{2}\sum_{i=1}^{n}\left(x_i^4 - 16x_i^2 + 5x_i\right), \quad x_i \in [-5, 5]$$
    In the conducted experiments, n takes the values 4, 5, 6, 7;
  • Test30N function, given by
    $$f(x) = \frac{1}{10}\sin^2\left(3\pi x_1\right)\sum_{i=2}^{n-1}\left(\left(x_i - 1\right)^2\left(1 + \sin^2\left(3\pi x_{i+1}\right)\right)\right) + \left(x_n - 1\right)^2\left(1 + \sin^2\left(2\pi x_n\right)\right)$$
    with $x \in [-10, 10]^n$. The function has $30^n$ local minima in the specified range; we used $n = 3, 4$ in our experiments.
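To make the test interface concrete, two of the benchmarks above are written out in C++ below; the transcription and the function names are ours.

```cpp
#include <cmath>
#include <vector>

using Point = std::vector<double>;

// Rastrigin, x in [-1,1]^2
double rastrigin(const Point& x) {
    return x[0] * x[0] + x[1] * x[1]
         - std::cos(18.0 * x[0]) - std::cos(18.0 * x[1]);
}

// Exponential, x in [-1,1]^n (EXP2 ... EXP100 differ only in n)
double exponential(const Point& x) {
    double s = 0.0;
    for (double xi : x) s += xi * xi;       // sum of squares
    return -std::exp(-0.5 * s);
}
```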

3.2. Results

In the experiments, two different quantities were measured: the rejection rate in the New_Point step and the average number of function calls required. In the first case we measured the percentage of points rejected during the New_Point step, i.e., created points that lie outside the domain range of the function. All the experiments were conducted 30 times, with a different seed for the random number generator used each time. The local search method used in the experiments, denoted localSearch(x), was a BFGS variant due to Powell [35]. The experiments were conducted on an i7-10700T CPU (Intel, Mountain View, CA, USA) at 2.00 GHz equipped with 16 GB of RAM. The operating system was Debian Linux and all the code was compiled using an ANSI C++ compiler.
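A sketch of this measurement protocol is shown below; runCRS, RunStats and the seed scheme are placeholders of this sketch, with a stub standing in for the actual solver.

```cpp
#include <cstdio>

struct RunStats { long calls; long trials; long rejected; };

// Stub standing in for the real solver so the harness is self-contained;
// it would normally run one CRS minimization with the given seed.
RunStats runCRS(unsigned seed) { (void)seed; return {2500, 3000, 40}; }

int main() {
    const int runs = 30;                     // 30 independent runs per problem
    double meanCalls = 0.0, meanReject = 0.0;
    for (int r = 0; r < runs; ++r) {
        const RunStats s = runCRS(1000u + (unsigned)r);  // distinct seed per run
        meanCalls  += double(s.calls) / runs;
        meanReject += 100.0 * double(s.rejected) / double(s.trials) / runs;
    }
    std::printf("calls: %.0f  rejection: %.2f%%\n", meanCalls, meanReject);
    return 0;
}
```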
The experimental results are listed in Table 1. The column FUNCTION stands for the name of the objective function. The column CRS-R stands for the rejection rate for the CRS method, while the column NEWCRS-R displays the same measure for the current method. Similarly, the column CRS-C represents the average function calls for the CRS method and the column NEWCRS-C stands for the average function calls of the proposed method. Additionally, a statistical comparison between the CRS and the proposed method is shown in Figure 1.
The proposed method almost eliminates the rejection rate on every test function. This is evidence that the new point creation mechanism proposed here is more accurate than the traditional one. Additionally, the proposed method requires a lower number of function calls than the CRS method, as one can deduce from the relevant columns and the statistical comparison. The same information is presented graphically in Figure 2, which outlines the percentage comparison of execution times across the test problems. Moreover, on the most difficult problems, the advantage of the proposed method in function calls is even larger, as the combination of the termination rule with the improved point generation technique terminates the method much faster and more reliably than the original one.
Additionally, the execution time for every test function was measured; this information is outlined in Table 2. The column CRS-TIME stands for the average execution time of the original CRS method, the column NEWCRS-TIME represents the average execution time of the proposed method and the column DIFF is the percentage difference between the two. It is evident that the proposed method requires shorter execution times than the original one and that the difference between the two methods is more pronounced on large problems. This is also reflected in Figure 3, which plots the average execution times of the two methods on the EXP problem for a varying number of dimensions.
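For reference, the DIFF column is consistent with the usual relative difference between the two timing columns, as a check against the BF1 row of Table 2 confirms:

$$\mathrm{DIFF} = 100\% \times \frac{t_{\mathrm{CRS}} - t_{\mathrm{NEWCRS}}}{t_{\mathrm{CRS}}}, \qquad \text{e.g., } 100\% \times \frac{0.168 - 0.154}{0.168} \approx 8.33\%$$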

4. Conclusions

Three important modifications to the CRS method were proposed in the current work. The first concerns the new trial point generation process, which appears to be more accurate than the original one: the new method creates points that lie within the domain range of the function almost every time. The second adds a new termination rule based on stochastic observations. The third applies a few steps of a local search procedure to every trial point created by the algorithm. Judging by the results, the proposed changes have two important effects. The first is that the success of the algorithm in creating valid trial points is significantly improved. The second is the large reduction in the number of function calls required to locate the global minimum.
Future research may include the exploration of the usage of additional stopping rules and the parallelization of different aspects of the method in order to speed up the optimization procedure as well as to take advantage of multicore programming environments.

Author Contributions

V.C., I.T., A.T. and N.A. conceived of the idea and methodology and supervised the technical part regarding the software for the estimation of the global minimum of multidimensional symmetric and asymmetric functional problems. V.C. and I.T. conducted the experiments, employing several different functions, and provided the comparative experiments. A.T. performed the statistical analysis. V.C. and all other authors prepared the manuscript. V.C., N.A. and I.T. organized the research team and A.T. supervised the project. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We acknowledge support of this work from the project “Immersive Virtual, Augmented and Mixed Reality Center of Epirus” (MIS 5047221) which is implemented under the Action “Reinforcement of the Research and Innovation Infrastructure”, funded by the Operational Programme “Competitiveness, Entrepreneurship and Innovation” (NSRF 2014-2020) and co-financed by Greece and the European Union (European Regional Development Fund).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Törn, A.; Žilinskas, A. Global Optimization; Lecture Notes in Computer Science, Volume 350; Springer: Heidelberg, Germany, 1987.
  2. Yapo, P.O.; Gupta, H.V.; Sorooshian, S. Multi-objective global optimization for hydrologic models. J. Hydrol. 1998, 204, 83–97.
  3. Duan, Q.; Sorooshian, S.; Gupta, V. Effective and efficient global optimization for conceptual rainfall-runoff models. Water Resour. Res. 1992, 28, 1015–1031.
  4. Wales, D.J.; Scheraga, H.A. Global optimization of clusters, crystals, and biomolecules. Science 1999, 285, 1368–1372.
  5. Pardalos, P.M.; Shalloway, D.; Xue, G. Optimization methods for computing global minima of nonconvex potential energy functions. J. Glob. Optim. 1994, 4, 117–133.
  6. Balsa-Canto, E.; Banga, J.R.; Egea, J.A.; Fernandez-Villaverde, A.; de Hijas-Liste, G.M. Global optimization in systems biology: Stochastic methods and their applications. In Advances in Systems Biology; Advances in Experimental Medicine and Biology; Goryanin, I., Goryachev, A., Eds.; Springer: New York, NY, USA, 2012; Volume 736.
  7. Boutros, P.C.; Ewing, A.D.; Ellrott, K.; Norman, T.C.; Dang, K.K.; Hu, Y.; Kellen, M.R.; Suver, C.; Bare, J.C.; Stein, L.D.; et al. Global optimization of somatic variant identification in cancer genomes with a global community challenge. Nat. Genet. 2014, 46, 318–319.
  8. Gaing, Z.-L. Particle swarm optimization to solving the economic dispatch considering the generator constraints. IEEE Trans. Power Syst. 2003, 18, 1187–1195.
  9. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
  10. Ingber, L. Very fast simulated re-annealing. Math. Comput. Model. 1989, 12, 967–973.
  11. Eglese, R.W. Simulated annealing: A tool for operational research. Eur. J. Oper. Res. 1990, 46, 271–281.
  12. Goldberg, D. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley: Reading, MA, USA, 1989.
  13. Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs; Springer: Berlin, Germany, 1996.
  14. Grady, S.A.; Hussaini, M.Y.; Abdullah, M.M. Placement of wind turbines using genetic algorithms. Renew. Energy 2005, 30, 259–270.
  15. Duarte, A.; Martí, R.; Glover, F.; Gortazar, F. Hybrid scatter tabu search for unconstrained global optimization. Ann. Oper. Res. 2011, 183, 95–123.
  16. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95 International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  17. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization: An overview. Swarm Intell. 2007, 1, 33–57.
  18. Trelea, I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett. 2003, 85, 317–325.
  19. Price, W.L. Global optimization by controlled random search. Comput. J. 1977, 20, 367–370.
  20. Smith, D.N.; Ferguson, J.F. Constrained inversion of seismic refraction data using the controlled random search. Geophysics 2000, 65, 1622–1630.
  21. Bortolozo, C.A.; Porsani, J.L.; dos Santos, F.A.M.; Almeida, E.R. VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm. J. Appl. Geophys. 2015, 112, 157–174.
  22. Haslinger, J.; Jedelský, D.; Kozubek, T.; Tvrdík, J. Genetic and random search methods in optimal shape design problems. J. Glob. Optim. 2000, 16, 109–131.
  23. Gupta, R.; Chandan, M. Use of "Controlled Random Search Technique for Global Optimization" in animal diet problem. Int. J. Emerg. Technol. Adv. Eng. 2013, 3, 284–287.
  24. Mehta, R.C.; Tiwari, S.B. Controlled random search technique for estimation of convective heat transfer coefficient. Heat Mass Transf. 2007, 43, 1171–1177.
  25. Ali, M.M.; Storey, C. Modified controlled random search algorithms. Int. J. Comput. Math. 1994, 53, 229–235.
  26. Di Pillo, G.; Lucidi, S.; Palagi, L.; Roma, M. A controlled random search algorithm with local Newton-type search for global optimization. In High Performance Algorithms and Software in Nonlinear Optimization; Applied Optimization; De Leone, R., Murli, A., Pardalos, P.M., Toraldo, G., Eds.; Springer: Boston, MA, USA, 1998; Volume 24.
  27. Lucidi, S.; Rochetich, F.; Roma, M. Curvilinear stabilization techniques for truncated Newton methods in large scale unconstrained optimization. SIAM J. Optim. 1998, 8, 916–939.
  28. Kaelo, P.; Ali, M.M. Some variants of the controlled random search algorithm for global optimization. J. Optim. Theory Appl. 2006, 130, 253–264.
  29. Manzanares-Filho, N.; Albuquerque, R.B.F. Accelerating controlled random search algorithms using a distribution strategy. In Proceedings of the EngOpt 2008 International Conference on Engineering Optimization, Rio de Janeiro, Brazil, 1–5 June 2008.
  30. Tsoulos, I.G.; Lagaris, I.E. Genetically controlled random search: A global optimization method for continuous multidimensional functions. Comput. Phys. Commun. 2006, 174, 152–159.
  31. Tsoulos, I.G. Modifications of real code genetic algorithm for global optimization. Appl. Math. Comput. 2008, 203, 598–607.
  32. Ali, M.M.; Khompatraporn, C.; Zabinsky, Z.B. A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. J. Glob. Optim. 2005, 31, 635–672.
  33. Floudas, C.A.; Pardalos, P.M.; Adjiman, C.; Esposito, W.; Gümüş, Z.; Harding, S.; Klepeis, J.; Meyer, C.; Schweiger, C. Handbook of Test Problems in Local and Global Optimization; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999.
  34. Gaviano, M.; Kvasov, D.E.; Lera, D.; Sergeyev, Y.D. Software for generation of classes of test functions with known local and global minima for global optimization. ACM Trans. Math. Softw. 2003, 29, 469–480.
  35. Powell, M.J.D. A tolerant algorithm for linearly constrained optimization calculations. Math. Program. 1989, 45, 547–566.
Figure 1. Statistical comparison for the function calls using box plots.
Figure 2. Percentage comparison for time execution between the two methods.
Figure 3. Time comparison between the two methods for the EXP function for a variety of problem dimensions.
Table 1. Experimenting with rejection rates. (CRS-R/NEWCRS-R: rejection rate; CRS-C/NEWCRS-C: average function calls.)

FUNCTION   | CRS-R  | CRS-C     | NEWCRS-R | NEWCRS-C
-----------|--------|-----------|----------|---------
BF1        | 1.37%  | 2523      | 0.00%    | 1689
BF2        | 1.33%  | 2506      | 0.17%    | 1569
BRANIN     | 16.00% | 2014      | 9.13%    | 851
CAMEL      | 1.67%  | 2235      | 0.20%    | 1487
EASOM      | 51.03% | 5911      | 1.43%    | 635
EXP2       | 3.03%  | 1290      | 0.70%    | 644
EXP4       | 2.67%  | 4688      | 0.00%    | 1302
EXP8       | 2.77%  | 16,453    | 0.00%    | 2601
EXP16      | 4.00%  | 47,400    | 0.00%    | 5207
EXP32      | 7.70%  | 93,520    | 0.00%    | 10,414
EXP64      | 18.80% | 135,638   | 0.00%    | 13,602
EXP100     | 38.53% | 129,327   | 0.00%    | 14,506
GKLS250    | 3.87%  | 1784      | 0.27%    | 1684
GKLS350    | 6.43%  | 3881      | 0.03%    | 2088
GOLDSTEIN  | 3.60%  | 2154      | 0.70%    | 1829
GRIEWANK2  | 1.20%  | 2503      | 0.03%    | 2742
GUILIN550  | 8.33%  | 9129      | 0.00%    | 25,333
GUILIN1050 | 9.63%  | 30,806    | 0.00%    | 10,561
HANSEN     | 47.60% | 2643      | 4.03%    | 1736
HARTMAN3   | 9.97%  | 3009      | 6.13%    | 1331
HARTMAN6   | 13.37% | 13,615    | 0.00%    | 6091
RASTRIGIN  | 9.17%  | 2130      | 1.33%    | 2986
ROSENBROCK | 0.00%  | 59,024    | 0.00%    | 15,719
SHEKEL5    | 4.73%  | 8974      | 0.00%    | 2967
SHEKEL7    | 3.70%  | 8606      | 0.00%    | 3236
SHEKEL10   | 2.73%  | 9264      | 0.00%    | 3479
SINU4      | 3.90%  | 6525      | 0.00%    | 2889
SINU8      | 5.10%  | 21,561    | 0.00%    | 4946
SINU16     | 8.43%  | 62,194    | 0.00%    | 9539
SINU32     | 14.40% | 135,986   | 0.00%    | 18,456
TEST2N4    | 24.57% | 10,198    | 0.00%    | 3756
TEST2N5    | 34.17% | 20,850    | 0.00%    | 4806
TEST2N6    | 42.50% | 43,290    | 0.00%    | 6075
TEST2N7    | 50.37% | 92,658    | 0.00%    | 7005
TEST30N3   | 24.10% | 4011      | 0.00%    | 5691
TEST30N4   | 27.30% | 7432      | 0.00%    | 8579
TOTAL      | 13.67% | 1,000,412 | 0.86%    | 208,031
Table 2. Time comparisons. (CRS-TIME/NEWCRS-TIME: average execution time; DIFF: percentage difference between the two.)

FUNCTION   | CRS-TIME | NEWCRS-TIME | DIFF
-----------|----------|-------------|--------
BF1        | 0.168    | 0.154       | 8.33%
BF2        | 0.180    | 0.154       | 14.44%
BRANIN     | 0.209    | 0.138       | 33.97%
CAMEL      | 0.165    | 0.141       | 14.55%
EASOM      | 0.165    | 0.151       | 8.48%
EXP2       | 0.165    | 0.143       | 13.33%
EXP4       | 0.228    | 0.152       | 33.33%
EXP8       | 0.629    | 0.187       | 70.27%
EXP16      | 3.142    | 0.299       | 90.48%
EXP32      | 14.364   | 1.082       | 92.47%
EXP64      | 60.861   | 3.932       | 93.54%
EXP100     | 144.794  | 9.386       | 93.52%
GKLS250    | 0.592    | 0.593       | -0.17%
GKLS350    | 0.658    | 0.599       | 8.97%
GOLDSTEIN  | 0.191    | 0.163       | 14.66%
GRIEWANK2  | 0.174    | 0.166       | 4.60%
GUILIN550  | 0.475    | 0.529       | -11.37%
GUILIN1050 | 1.524    | 0.453       | 70.28%
HANSEN     | 0.217    | 0.292       | -34.56%
HARTMAN3   | 0.210    | 0.163       | 22.38%
HARTMAN6   | 0.514    | 0.262       | 49.03%
RASTRIGIN  | 0.168    | 0.160       | 4.76%
ROSENBROCK | 5.310    | 0.584       | 89.00%
SHEKEL5    | 0.321    | 0.203       | 36.76%
SHEKEL7    | 0.302    | 0.218       | 27.81%
SHEKEL10   | 0.325    | 0.271       | 16.62%
SINU4      | 0.283    | 0.206       | 27.21%
SINU8      | 0.897    | 0.369       | 58.86%
SINU16     | 4.775    | 1.448       | 69.68%
SINU32     | 24.413   | 8.999       | 63.14%
TEST2N4    | 0.389    | 0.190       | 51.16%
TEST2N5    | 0.733    | 0.209       | 71.49%
TEST2N6    | 1.714    | 0.256       | 85.06%
TEST2N7    | 4.326    | 0.264       | 93.90%
TEST30N3   | 0.222    | 0.203       | 8.56%
TEST30N4   | 0.324    | 0.239       | 26.23%
TOTAL      | 274.127  | 32.958      | 87.98%