Article

Enhancing Mean-Variance Mapping Optimization Using Opposite Gradient Method and Interior Point Method for Real Parameter Optimization Problems

by Thirachit Saenphon, Suphakant Phimoltares * and Chidchanok Lursinsap
Advanced Virtual and Intelligent Computing Research Center (AVIC), Department of Mathematics and Computer Science, Faculty of Science, Chulalongkorn University, Bangkok 10330, Thailand
* Author to whom correspondence should be addressed.
Processes 2023, 11(2), 465; https://doi.org/10.3390/pr11020465
Submission received: 14 December 2022 / Revised: 12 January 2023 / Accepted: 15 January 2023 / Published: 3 February 2023

Abstract

The aim of optimization methods is to identify the best results in the search area. In this research, we focused on a combination of the interior point method, the opposite gradient method, and mean-variance mapping optimization, named IPOG-MVMO, where solutions are obtained from the gradient field of the cost function on the constraint manifold. The process was divided into three main phases. In the first phase, the interior point method was applied for local searching. Secondly, the opposite gradient method was used to generate a population of candidate solutions. The last phase involved updating the population according to the mean and variance of the solutions. In experiments on real parameter optimization problems, three types of functions, namely unimodal, multimodal, and continuous composition functions, were considered and used to compare our proposed method with other meta-heuristic techniques. The results showed that our proposed algorithm outperformed the other algorithms in terms of finding the optimal solution.

1. Introduction

Optimization methods are employed to solve real parameter optimization problems in order to obtain a vector x = (x_1, ..., x_D) that yields the optimal value of a function F(x), where D is the number of dimensions of the vector. In such problems, a global solution is sought without prior knowledge of the physical structure of the cost function. Real parameter optimization problems arise in engineering and scientific applications, such as energy saving for tissue paper mills through energy-efficiency scheduling [1], the optimal scheduling of vehicle-to-grid energy and ancillary services [2], query optimization mechanisms in cloud computing [3], energy management [4], and analytics tools in mechanical engineering [5].
A wide variety of evolutionary algorithms have previously been proposed to solve real-world continuous optimization problems. For instance, the covariance matrix adaptation evolution strategy (CMA-ES) [6] is based on correlated mutations. A covariance matrix C is adapted to shape the distribution of mutations, increasing the likelihood of successful search steps. However, several black-box properties can lead to premature convergence of CMA-ES, with many uncontrollable variations and uncertainties. Therefore, to increase the possibility of finding the global optimum using a restart strategy, several techniques based on CMA-ES have been proposed, such as the restart CMA evolution strategy with increasing population sizes (IPOP-CMA-ES) [7], the bi-population CMA-ES strategy (BIPOP-CMA-ES) [8], and the new bi-population CMA-ES strategy (NBIPOPaCMA-ES) [9]. Usually, a new generation is randomly sampled from the current covariance matrix in the desired search space, so the starting point is still randomized. Consequently, the time taken to find a solution increases with the amount of data. Moreover, the size of the search area and the surface of the cost function are not considered.
Another technique for solving optimization problems is mean-variance mapping optimization (MVMO). MVMO [10] is an evolutionary algorithm with two interesting features. Firstly, the MVMO process restricts the search range of each variable to the interval [0, 1]; new values computed for the population are always generated within this range before the fitness value is calculated. Secondly, the statistical features used in MVMO are useful for changing the search direction: the mean and variance of the archived solutions, together with a mapping function for the mutation operation, guide the search toward the optimal fitness value. Additionally, each time the algorithm produces better fitness values, the solutions with the n-best fitness values are updated and stored in the archive for use in finding the best solution, until the specified number of iterations has been reached. MVMO has been applied in chemical process applications; for example, an adaptive PID controller based on MVMO has been proposed to enhance the performance of a chemical process with a variable time delay [11]. Swarm-based mean-variance mapping optimization (MVMO-S) is an extension of MVMO combined with swarm intelligence, in which the search procedure starts from a set of particles, each using the same archiving and mapping functions as the original MVMO [12]. Another extension is the swarm variant of hybrid mean-variance mapping optimization (MVMO-SH) [13,14,15], an evolutionary algorithm that exploits the statistical features (mean and variance) of the dynamic search function, using mapping functions for mutation and modification according to the mean and variance of the n-best solutions recorded in the solution archive. However, the generation of new offspring in MVMO-based techniques still relies on randomness, regardless of the number of individuals on the actual search surface of the cost function.
In addition to the algorithms mentioned above, there are still many algorithms that use concepts from natural behavior to solve optimization problems, such as the seagull and gray wolf optimization algorithms. The seagull optimization algorithm (SOA) is a bioinspired optimization algorithm based on the behavior of seagulls; it has been combined with other algorithms to solve energy problems, such as short-term wind speed forecasting [16]. The gray wolf optimization algorithm (GWO) [17] relies on the leadership hierarchy and hunting mechanism of a gray wolf pack for its search process. Algorithms based on different animal behavior patterns are also limited in their effectiveness, which cannot be significantly improved when modified. Valenta and Langer proposed 2D P colonies to model the gray wolf optimization algorithm [18] and compared the performance with that of the original GWO algorithm; in computer simulations, the 2D P colonies performed well on optimization problems.
Seagull optimization, gray wolf optimization, and other evolutionary algorithms have the characteristics of self-management, adaptation, and self-learning. However, their population-generation processes require random initialization, resulting in relatively low computing efficiency. Considering the advantages and disadvantages of these algorithms when facing the same problem, efficiency can be improved by modifying the original algorithms with other initialization concepts that avoid the randomization process.
Furthermore, a philosophy-inspired algorithm, namely yin–yang pair optimization (YYPO) [19], uses the concept of balancing between two opposite entities for exploration and exploitation. YYPO is a low-complexity method that maintains good performance.
An important step in an optimization algorithm is the population generation step. Random initialization is used to generate new candidate solutions and eliminate those that yield low scores, so that the cost value can be either maximized or minimized. However, the geometric structure of the cost function is not assessed at any step when generating offspring or during the search process. We proposed applying the opposite gradient search [20] technique to search for the solution on the surface of the cost function. This technique, namely the fast opposite gradient search (FOGS) method, is a surface search algorithm based on the opposite gradient method (OGM) [21]. FOGS differs from other search methods in that it is not based mainly on meta-heuristics; instead, it searches the manifold for locations where the cost function has zero gradient and minimum values.
This article introduces a method that increases problem-solving efficiency by producing a better initial population for real parameter optimization problems. The proposed method combines the interior point method, the opposite gradient method for generating new offspring, and the mean-variance mapping optimization algorithm (IPOG-MVMO). The mapping optimization algorithm is applied in the mutation operation to generate a modification depending on the mean and variance of the n-best solutions, to obtain a candidate solution on a complex surface of the function. The contribution of this research therefore lies in generating new offspring and obtaining the solution from the high-dimensional space of the cost function.
The manuscript is structured as follows. Section 2 describes the concepts of the proposed technique. In Section 3, we present an experimental simulation and analyze the results. Section 4 presents the discussion. Finally, Section 5 provides the conclusions and scope for future research.

2. Proposed Concept

For the proposed method, there were three main phases. Firstly, IPM was applied for local searching. The results from the first phase were introduced as points used for the second phase to create a new population, which depended on the manifold of the cost function using OGM. Finally, the last phase aimed to obtain the best solution using MVMO.
Karmarkar [22] presented the interior point method (IPM) to solve linear programming problems in polynomial time. The number of iterations taken by IPM is typically not greater than 100, regardless of the size of the problem; therefore, IPM is suitable for local searching on large problems. The iterative search path of IPM always stays within the domain of feasible solutions. However, each IPM iteration is computationally demanding, especially when the coefficient matrix is dense, so IPM spends more time on each iteration.
One important concept that has been used to find the best solution or enhance the IPM is the opposite gradient method (OGM). The OGM is used to create a new set of points from the points resulting from the IPM. MVMO then mutates the new points from the OGM phase to achieve a better solution, instead of randomly generating points as in the original MVMO. This proposed technique, based on a combination of IPM, OGM, and MVMO, is called IPOG-MVMO and yields a set of solutions throughout the surface of the cost function. The process of IPOG-MVMO is presented in Algorithm 1.
Algorithm 1 Proposed IPOG-MVMO algorithm
      Initialize parameters.
      Generate an initial population using IPM from randomized points.
      Improve the population using OGM.
      Update the archive using MVMO.
      Return the best point in the archive.
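As a concrete illustration of Algorithm 1, the following Python sketch wires the three phases together; the functions ipm_local_search, ogm_refine, and mvmo_update are hypothetical placeholders standing in for Algorithms 2, 4, and 6, and the trivial stand-ins in the demo exist only so the skeleton runs.

```python
import numpy as np

def ipog_mvmo(cost, dim, n_points, ipm_local_search, ogm_refine, mvmo_update,
              lower=-100.0, upper=100.0, iterations=1000):
    """Skeleton of Algorithm 1: IPM local search -> OGM refinement -> MVMO update."""
    # Initialize parameters / random starting points inside the feasible box.
    start = np.random.uniform(lower, upper, size=(n_points, dim))
    # Phase 1: local search around each starting point (Algorithm 2).
    archive = np.array([ipm_local_search(cost, x) for x in start])
    # Phase 2: improve the population with the opposite gradient method (Algorithm 4).
    archive = ogm_refine(cost, archive)
    # Phase 3: update the archive with MVMO mutations (Algorithm 6).
    archive = mvmo_update(cost, archive, iterations)
    # Return the archived point with the lowest cost.
    return min(archive, key=cost)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    # Trivial stand-ins so the skeleton runs; the real phases are Algorithms 2, 4, and 6.
    pass_through = lambda cost, pop, *extra: pop
    local_stub = lambda cost, x: x
    print(ipog_mvmo(sphere, dim=5, n_points=10,
                    ipm_local_search=local_stub,
                    ogm_refine=pass_through,
                    mvmo_update=pass_through))
```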

2.1. Generating Initial Population Using Local Minima Obtained from the IPM

The proposed algorithm adopts the interior point method known as the barrier method [23,24], because of its proof of convergence and its complexity. The IPM was used to find the local minima located near an initial population. A function F(x) was defined and assumed to be differentiable in the neighborhood of a point x. The process involved formulating a rule to test a differentiable function for local minima satisfying the constraints. The procedure is described in Algorithm 2, where the output of this algorithm is assigned as the initial population for the next phase.
Algorithm 2 Finding the local minima using IPM
Q: an archive storing a population
D: number of dimensions in the search space
Randomize N points in the feasible domain to generate an initial population.
Set Q as an empty set.
Set parameters t = t_0 > 0, μ > 1, ε > 0.
FOR {1 ≤ j ≤ N}
      Set a random point x_j as a starting point.
      WHILE (D/t > ε) do
    Compute x_j* by minimizing t F(x) + φ(x), subject to A x_j = b.
    Set x_j = x_j*.
    Update new t as μt.
      ENDWHILE
      Insert x_j into Q.
ENDFOR
RETURN Q as an archive storing the initial population for the next phase.
According to Algorithm 2, three constants, t_0 > 0, μ > 1, and ε > 0, must be defined to initiate the algorithm. In each iteration, x_j* is computed by minimizing t F(x) + φ(x) subject to the constraint A x_j = b, where φ(x) = −Σ_{i=1}^{M} log(b_i − a_i x) is the logarithmic barrier function. The solution is updated until t ≥ D/ε. All local minima in the set Q are used as the initial population for the next step.
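A minimal sketch of the barrier iteration in Algorithm 2 is given below, assuming inequality constraints of the form a_i·x ≤ b_i and delegating the inner minimization to scipy.optimize.minimize; the paper does not specify the inner solver, and the equality constraint A x = b is omitted here for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def ipm_local_search(cost, x0, A_ineq, b_ineq, t0=1.0, mu=10.0, eps=1e-6):
    """Log-barrier local search (sketch of the inner loop of Algorithm 2).

    Minimizes t*F(x) + phi(x) for increasing t, where
    phi(x) = -sum_i log(b_i - a_i . x) is the logarithmic barrier.
    """
    D = len(x0)
    t = t0
    x = np.asarray(x0, dtype=float)

    def barrier(z):
        slack = b_ineq - A_ineq @ z
        if np.any(slack <= 0):          # outside the feasible region
            return np.inf
        return -np.sum(np.log(slack))

    while D / t > eps:                  # stopping rule D/t > eps from Algorithm 2
        res = minimize(lambda z: t * cost(z) + barrier(z), x, method="Nelder-Mead")
        x = res.x
        t *= mu                         # update new t as mu * t
    return x

if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z ** 2))
    # Hypothetical box constraints -100 <= x_i <= 100 written as A x <= b.
    D = 3
    A = np.vstack([np.eye(D), -np.eye(D)])
    b = np.full(2 * D, 100.0)
    print(ipm_local_search(sphere, np.random.uniform(-50, 50, D), A, b))
```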

2.2. Opposite Gradient Method (OGM)

This section briefly presents the concept used for generating new offspring. For a vector in the D-dimensional space of the cost function, new offspring are generated at positions on the manifold where the first derivative is zero; one of these locations is expected to be the best solution of the cost function. The proposed technique proceeds along the same line with the following observation. The first derivative of F(x) is denoted by ∇F(x), and the gradient vector field is represented as the matrix of first derivatives. Let x^σ and x^γ be two vectors on the manifold of the cost function. A location with a zero first derivative on dimension i lies in the range between x^σ and x^γ whenever ∇F_i(x^σ) ∇F_i(x^γ) < 0. If ∇F_i(x^σ) > ∇F_i(x^γ), the location with zero gradient is closer to x^σ than to x^γ; on the other hand, if ∇F_i(x^σ) < ∇F_i(x^γ), the location with zero gradient is closer to x^γ than to x^σ. This concept can also be applied to any other dimension.
In particular, for each pair of vectors obtained from the set Q, the output of IPM in Algorithm 2, new next-generation vectors are generated from a varying jumping distance δ related to two vectors that have gradients of different signs on each dimension, as shown in Equation (1):

δ = [∇F_i(x^γ) / (∇F_i(x^σ) + ∇F_i(x^γ))] (x_i^γ − x_i^σ) w      (1)

Suppose that x^σ and x^γ are selected from the output of IPM under the assumption that ∇F_i(x^σ) ∇F_i(x^γ) < 0. Then, two new vectors, x_new^σ and x_new^γ, are obtained, whose coefficients on dimension i are calculated from this jumping distance, as shown in Equations (2) and (3), while the coefficients on the other dimensions remain unchanged:

x_new,i^σ = x_i^σ + δ      (2)

x_new,i^γ = x_i^γ − δ      (3)

In Equation (1), a weight value, w, is calculated at each iteration t according to Algorithm 3 to guarantee that the new vectors are generated within the pre-specified range of the search area. In essence, the weight value lies in the interval between 0 and 1 and scales the jumping distance, δ, such that new vectors discovered outside the search space are discarded immediately.
Algorithm 3 Generating candidate vectors x_new^σ, x_new^γ within the pre-specified search space
w: weight value
NP: number of iterations
Q: an archive storing the current population
Set X_cand = ∅.
w = 0.05 w (NP/t) /* Calculate an updated weight value */
Compute a jumping distance δ using Equation (1).
Create two new candidate vectors x_new^σ, x_new^γ using Equations (2) and (3).
IF {x_new^σ is in the range}
    Insert x_new^σ into X_cand.
ELSE
    Discard x_new^σ.
ENDIF
IF {x_new^γ is in the range}
    Insert x_new^γ into X_cand.
ELSE
    Discard x_new^γ.
ENDIF
RETURN X_cand
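The following sketch shows one plausible reading of Algorithm 3 with the reconstructed Equations (1)–(3); the gradient is estimated numerically because the paper does not state how ∇F is obtained, and the cost function and bounds in the demo are illustrative only.

```python
import numpy as np

def grad_i(cost, x, i, h=1e-6):
    """Central-difference estimate of the i-th partial derivative (illustrative)."""
    e = np.zeros_like(x)
    e[i] = h
    return (cost(x + e) - cost(x - e)) / (2 * h)

def generate_candidates(cost, x_sigma, x_gamma, i, w, NP, t, lower=-100.0, upper=100.0):
    """Sketch of Algorithm 3: update w, compute delta (Eq. (1)), build the pair of
    candidates (Eqs. (2)-(3)) and keep only those inside the search range."""
    w = 0.05 * w * (NP / t)                       # updated weight value
    g_sigma = grad_i(cost, x_sigma, i)
    g_gamma = grad_i(cost, x_gamma, i)
    delta = g_gamma / (g_sigma + g_gamma) * (x_gamma[i] - x_sigma[i]) * w   # Equation (1)
    candidates = []
    for parent, sign in ((x_sigma, +1.0), (x_gamma, -1.0)):
        child = parent.copy()
        child[i] += sign * delta                  # Equations (2) and (3)
        if np.all((child >= lower) & (child <= upper)):
            candidates.append(child)              # inside the range: keep
    return candidates                             # out-of-range vectors are discarded

if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z ** 2))
    x_s = np.array([-3.0, 1.0])                   # negative gradient on dimension 0
    x_g = np.array([4.0, -2.0])                   # positive gradient on dimension 0
    print(generate_candidates(sphere, x_s, x_g, i=0, w=1.0, NP=100, t=50))
```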
Two new vectors can thus be generated, located approximately between the parent vectors; accordingly, the next generation of vectors progressively narrows the search area. The cost function values of these new vectors must also be within an acceptable range. If new vectors providing better cost values are attained, they replace vectors of the previous generation stored in the archive. Subsequently, for each dimension i, all the vectors in the archive are sorted in ascending order of their cost values and divided into two gradient groups: G_i^+, the group of vectors with a positive gradient value along dimension i, and G_i^−, the group of vectors with a negative gradient value along dimension i. From each group, the vector with the smallest gradient is retrieved, and these two vectors are used as the parent pair to generate the candidate vectors of the next generation. These steps are described in Algorithm 4.
Algorithm 4 Generating a population using the opposite gradient method (OGM)
Q: an archive storing the initial population obtained from Algorithm 2
NP: number of iterations
D: number of dimensions
Set count = 1
WHILE {count ≤ NP}
      FOR {1 ≤ i ≤ D}
    Set w = 1
    Partition the population in Q into G_i^+ and G_i^−.
    Select x^σ, the vector with the smallest gradient from G_i^+.
    Select x^γ, the vector with the smallest gradient from G_i^−.
    Compute vectors x_new^σ, x_new^γ from x^σ and x^γ using Algorithm 3.
    IF x_new^σ ∈ X_cand and F(x_new^σ) is less than the highest cost value among the vectors in Q
      Discard the vector with the highest cost value.
      IF ∇F_i(x_new^σ) > 0
       Insert x_new^σ into G_i^+.
      ELSE
       Insert x_new^σ into G_i^−.
      ENDIF
    ENDIF
    IF x_new^γ ∈ X_cand and F(x_new^γ) is less than the highest cost value among the vectors in Q
      Discard the vector with the highest cost value.
      IF ∇F_i(x_new^γ) < 0
       Insert x_new^γ into G_i^−.
      ELSE
       Insert x_new^γ into G_i^+.
      ENDIF
    ENDIF
      ENDFOR
       Q = (⋃_{i=1}^{D} G_i^+) ∪ (⋃_{i=1}^{D} G_i^−)
      Select the n best points with respect to the cost values to store in Q.
      count++
ENDWHILE
RETURN Q as an archive storing the initial population for the next phase
The offspring from Algorithm 4 fill the archive Q, which stores the n-best offspring. The offspring in the archive can be arranged according to their fitness and used as a guide for conducting the search. The size of the archive Q does not change throughout the process. Note that the archive is updated only when a new vector provides a solution better than an existing solution in the archive.
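A rough Python reading of the archive update in Algorithm 4 is shown below; the smallest-magnitude gradient is used when selecting from G_i^+ and G_i^−, which is one interpretation of the "smallest gradient", and the numerical gradient and replacement rule are illustrative rather than the authors' implementation.

```python
import numpy as np

def num_grad(cost, x, h=1e-6):
    """Central-difference gradient estimate (stand-in for an analytic gradient)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (cost(x + e) - cost(x - e)) / (2 * h)
    return g

def ogm_refine(cost, Q, n_iter=20, lower=-100.0, upper=100.0):
    """Sketch of Algorithm 4: per-dimension opposite-gradient refinement of the archive Q."""
    Q = [np.asarray(q, dtype=float) for q in Q]
    D = len(Q[0])
    for t in range(1, n_iter + 1):
        for i in range(D):
            w = 0.05 * (n_iter / t)                        # weight update used in Algorithm 3
            grads = [num_grad(cost, q)[i] for q in Q]
            plus = [(q, g) for q, g in zip(Q, grads) if g > 0]    # G_i^+
            minus = [(q, g) for q, g in zip(Q, grads) if g < 0]   # G_i^-
            if not plus or not minus:
                continue                                   # no opposite-gradient pair on this dimension
            x_sig, g_s = min(plus, key=lambda p: abs(p[1]))
            x_gam, g_g = min(minus, key=lambda p: abs(p[1]))
            if g_s + g_g == 0.0:
                continue
            delta = g_g / (g_s + g_g) * (x_gam[i] - x_sig[i]) * w   # Equation (1), reconstructed
            for parent, sign in ((x_sig, 1.0), (x_gam, -1.0)):
                child = parent.copy()
                child[i] += sign * delta                   # Equations (2) and (3)
                if np.all((child >= lower) & (child <= upper)):
                    worst = max(range(len(Q)), key=lambda k: cost(Q[k]))
                    if cost(child) < cost(Q[worst]):
                        Q[worst] = child                   # replace the worst archive member
    return Q

if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z ** 2))
    population = [np.random.uniform(-100, 100, 4) for _ in range(10)]
    refined = ogm_refine(sphere, population)
    print(min(sphere(q) for q in refined))
```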

2.3. Combining IPM, OGM, and MVMO

The combination of IPM and OGM with MVMO is presented in this section. MVMO carries out global searching and focuses on the best solution. MVMO continuously updates an archive using compact memory with a fixed storage space. The n-best offspring are stored in the archive and serve as a guide for the search direction. A solution stored in the archive is replaced by new offspring with a lower cost function value. Once the proposed combination has been conducted, the vector with the lowest cost function value represents the optimal solution of the given function.
The search procedure applied to MVMO [25] is initiated with a particular set of points with D dimensions, obtained from Algorithm 4 in this study. A key feature of MVMO is that a mapping function is used to modify the offspring depending on the mean and variance of the solutions collected in the archive. The mean x̄_i and variance v_i of dimension i are recalculated whenever the archive is updated, using Equations (4) and (5), where N is the population size stored in the archive and x_i^j is the i-th coordinate of the j-th solution in the archive:

x̄_i = (1/N) Σ_{j=1}^{N} x_i^j      (4)

v_i = (1/N) Σ_{j=1}^{N} (x_i^j − x̄_i)²      (5)
The size of the search space depends on the mapping function. Moreover, the shape of the function with respect to x̄_i can be controlled by two shape variables, s_{i,1} and s_{i,2}, as follows:

h(x̄_i, s_{i,1}, s_{i,2}, x) = x̄_i (1 − e^{−x s_{i,1}}) + (1 − x̄_i) e^{−(1 − x) s_{i,2}}      (6)

The new coefficient of each selected dimension x_i of x is calculated from the values h_x, h_1, and h_0 as follows:

x_i = h_x + (1 − h_1 + h_0) x_i^r − h_0      (7)

where x_i^r is a random number drawn from a uniform distribution in the range [0, 1]. It is guaranteed that the output of Equation (7) lies within the range [−100, 100]. According to Equation (6), h_x, h_1, and h_0 are the values of h evaluated at x = x_i^r, x = 1, and x = 0, respectively.
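The mapping step of Equations (4)–(7) can be illustrated as follows; h_map implements the reconstructed form of Equation (6), the archive column and shape values in the demo are arbitrary, and the variables are assumed to be normalized to [0, 1] as in the original MVMO.

```python
import numpy as np

def h_map(x_bar, s1, s2, x):
    """Mapping function of Equation (6) (reconstructed MVMO form)."""
    return x_bar * (1.0 - np.exp(-x * s1)) + (1.0 - x_bar) * np.exp(-(1.0 - x) * s2)

def mutate_dimension(archive_column, s1, s2, rng=np.random):
    """New coefficient for one dimension via Equations (4) and (7)."""
    x_bar = np.mean(archive_column)              # Equation (4): mean of the archived values
    x_r = rng.uniform(0.0, 1.0)                  # uniform random sample in [0, 1]
    h_x = h_map(x_bar, s1, s2, x_r)              # h evaluated at x = x_r
    h_1 = h_map(x_bar, s1, s2, 1.0)              # h evaluated at x = 1
    h_0 = h_map(x_bar, s1, s2, 0.0)              # h evaluated at x = 0
    return h_x + (1.0 - h_1 + h_0) * x_r - h_0   # Equation (7)

if __name__ == "__main__":
    column = np.array([0.20, 0.35, 0.30, 0.25])  # archived (normalized) values of one dimension
    print(mutate_dimension(column, s1=10.0, s2=10.0))
```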
MVMO is capable of searching for the global optimum using the best mean values of the solutions. The two shape variables s_{i,1} and s_{i,2} originate from a variable s_i, calculated from a scaling factor f_s and the variance v_i, which changes the shape of the function as follows:

s_i = −f_s ln(v_i)      (8)

where v_i is initially set to 1 and then recalculated, using Equation (5), whenever the archive is updated for each dimension. The scaling factor f_s can be used to improve the accuracy when its value is greater than 1; on the other hand, the search is coarsely conducted when the value is less than 1. The factor is initiated with a small value and subsequently increased at every iteration to increase the efficiency, as denoted in Equation (9):

f_s = f_s* (1 + rand)      (9)

where rand is randomized within [0, 1], and f_s* is calculated as in Equation (10):

f_s* = f_si* + (j / j_f)² (f_sf* − f_si*)      (10)

where j and j_f are the current iteration number and the last iteration number, respectively. In this study, the values of f_si* and f_sf* were set to 1 and 25, respectively.
As aforementioned, the shape of the mapping function can be determined depending upon the shape variables s_{i,1} and s_{i,2} of x_i. To improve the efficiency of the search process, the proper shape variables can be calculated using Algorithm 5.
Algorithm 5 Calculating the shape variables s_{i,1} and s_{i,2} for dimension i at iteration j
Calculate the variance v_i using Equation (5).
Calculate the scaling factor f_s using Equations (9) and (10).
Calculate the variable s_i using Equation (8).
Set s_{i,1} = s_i and s_{i,2} = s_i.
      IF s_i > 0 then
    IF s_i > d_i
       Update d_i by multiplying it by Δd.
    ELSE
       Update d_i by dividing it by Δd.
    ENDIF
    IF d_i > s_i
       Set α = d_i and β = s_i.
    ELSE
       Set α = s_i and β = d_i.
    ENDIF
    IF rand < 0.5
       Set s_{i,1} = α and s_{i,2} = β.
    ELSE
       Set s_{i,1} = β and s_{i,2} = α.
    ENDIF
      ENDIF
RETURN s_{i,1}, s_{i,2}, d_i
The parameter d_i is initially set to 1 and then continuously updated at every iteration using the increment factor Δd calculated from Equation (11):

Δd = (1 + Δd_0) + 2 Δd_0 (rand − 0.5)      (11)

This factor leads to the expansion or shrinkage of the value of the parameter d_i, resulting in oscillation around the current s_i. The optimal interval of Δd_0 is within [0.01, 0.4].
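Algorithm 5, together with Equations (8)–(11), can be sketched as follows; the minus sign in Equation (8) and the update order follow the reconstruction above, and the parameter values in the demo are illustrative.

```python
import math
import random

def scaling_factor(j, j_f, fs_ini=1.0, fs_fin=25.0, rng=random):
    """Equations (9) and (10): iteration-dependent scaling factor f_s."""
    fs_star = fs_ini + (j / j_f) ** 2 * (fs_fin - fs_ini)     # Equation (10)
    return fs_star * (1.0 + rng.random())                      # Equation (9)

def shape_variables(v_i, d_i, j, j_f, delta_d0=0.05, rng=random):
    """Sketch of Algorithm 5: shape variables s_i1, s_i2 and the updated d_i."""
    f_s = scaling_factor(j, j_f, rng=rng)
    s_i = -f_s * math.log(v_i) if v_i > 0.0 else 0.0           # Equation (8), reconstructed sign
    s_i1 = s_i2 = s_i
    if s_i > 0.0:
        delta_d = (1.0 + delta_d0) + 2.0 * delta_d0 * (rng.random() - 0.5)   # Equation (11)
        d_i = d_i * delta_d if s_i > d_i else d_i / delta_d    # expand or shrink d_i
        alpha, beta = (d_i, s_i) if d_i > s_i else (s_i, d_i)
        s_i1, s_i2 = (alpha, beta) if rng.random() < 0.5 else (beta, alpha)
    return s_i1, s_i2, d_i

if __name__ == "__main__":
    print(shape_variables(v_i=0.04, d_i=1.0, j=10, j_f=100))
```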
In essence, the original MVMO encounters two problems, causing a long search process and difficulty in finding the best solution [10,26]. The first problem is zero variance, which occurs when the solutions stored in the archive are located at the same position. The second problem is that the value of the variance sometimes falls outside the specified range. This research attempted to overcome these limitations of MVMO and improve the flexibility of global searching over the classical MVMO by generating the population with the opposite gradient method before proceeding to MVMO. Algorithm 6 presents the determination of the n-best solutions using MVMO within fewer generations when using the population obtained from OGM.
Algorithm 6 Generating n-best offspring using MVMO
Q: an archive storing the initial population obtained from Algorithm 4
NP: number of iterations
D: number of dimensions
Set the parameters d_i, Δd_0, f_si*, f_sf*.
Set count = 1
WHILE {count ≤ NP}
    FOR {1 ≤ i ≤ D}
       Calculate x̄_i using Equation (4).
       Apply Algorithm 5 to obtain the two shape variables s_{i,1} and s_{i,2}.
       Calculate h_x, h_1, h_0 using Equation (6).
       Generate new x_i using Equation (7).
    ENDFOR
    Calculate the cost value of x_new.
    IF F(x_new) is less than the highest cost value among the vectors in Q
       Discard the vector with the highest cost value.
       Insert x_new into Q.
    ENDIF
    count++
ENDWHILE
RETURN Q as an archive storing the n-best solutions
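The outer loop of Algorithm 6 reduces to the archive-replacement rule below once the per-dimension mutation (Equations (4)–(7) and Algorithm 5) is abstracted into a single hypothetical callable; the mutation used in the demo is a deliberately simplified stand-in.

```python
import numpy as np

def mvmo_archive_update(cost, Q, mutate, n_iter):
    """Sketch of the outer loop of Algorithm 6: build x_new one dimension at a time,
    then replace the worst archive member whenever x_new has a lower cost."""
    Q = [np.asarray(q, dtype=float) for q in Q]
    for _ in range(n_iter):
        x_new = np.array([mutate(i, Q) for i in range(len(Q[0]))])  # Eqs. (4)-(7), Algorithm 5
        worst = max(range(len(Q)), key=lambda k: cost(Q[k]))
        if cost(x_new) < cost(Q[worst]):
            Q[worst] = x_new                                        # update the archive
    return Q

if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z ** 2))
    # Hypothetical mutation: sample around the per-dimension archive mean.
    mutate = lambda i, Q: float(np.mean([q[i] for q in Q]) + np.random.normal(0.0, 0.1))
    population = [np.random.uniform(-1.0, 1.0, 3) for _ in range(5)]
    print(min(sphere(q) for q in mvmo_archive_update(sphere, population, mutate, 200)))
```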

3. Experiments

The IPOG-MVMO algorithm and the other algorithms, namely NBIPOPaCMA [9], PVADE [27], and MVMO, were applied to 28 benchmark functions for real-parameter optimization [28], using a computer with a Core i7 2.10 GHz CPU and 6 GB RAM, for comparative purposes. All 28 benchmark functions have been described by Liang et al. [29,30]. The functions comprise three types: unimodal (F1–F5), multimodal (F6–F20), and composition functions (F21–F28). All algorithms were run from the same set of initial points. The global optimal point of each test function is at the origin of the vector space. The error value ε was calculated using Equation (12):

ε = |f_j(0) − f_j(x*)|      (12)

where f_j(0) is the cost value at the exact optimal point of the j-th benchmark function and f_j(x*) is the cost value of the optimal point obtained by a selected algorithm. The error value is rounded to zero if it is less than 0.00000001, or 1 × 10^−8.
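The error of Equation (12), including the rounding rule, can be computed with a small helper such as the following; the absolute value reflects the reconstruction of the equation, and the example numbers are arbitrary.

```python
def error_value(f_exact, f_found, tol=1e-8):
    """Equation (12): error between the exact optimum and the obtained optimum,
    rounded to zero when it falls below 1e-8."""
    eps = abs(f_exact - f_found)
    return 0.0 if eps < tol else eps

print(error_value(0.0, 4.3e-9))   # rounds to 0.0
print(error_value(0.0, 2.5e-3))   # reported as 0.0025
```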

3.1. IPOG-MVMO Parameter Set-Up

Using IPOG-MVMO, all functions were tested with N = 15D points, where D is the number of dimensions. The parameters selected for IPOG-MVMO are presented in Table 1. The number of dimensions (D) in each problem and the search range were defined according to the evaluation criteria of the CEC2013 benchmark problems. The number of points (N) varied with the number of dimensions to improve the flexibility of the algorithm in dealing with more difficult problems. The number of iterations (NP) and the number of experimental trials were defined to achieve reliable results. The remaining parameters were values successfully tested in the original MVMO process.

3.2. Experimental Results

The best value, worst value, mean, and standard deviation of the error between the cost value of the solution found by an algorithm and the exact cost value were recorded over 50 trials. Of these four metrics, the best value was used for the comparison and was the focus of this study. The results for 10, 30, and 50 dimensions are summarized in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. To clearly illustrate the comparison, only the mean of the error was selected and plotted, as shown in Figure 1, Figure 2 and Figure 3.
For the unimodal functions F1–F5, the proposed algorithm was able to find the solution without error in every case except F3 (D = 50), where NBIPOPaCMA yielded a better solution, as shown in Table 2. Moreover, when considering the mean of the error, IPOG-MVMO yielded a zero mean in 10 out of 15 cases; in other words, in these 10 cases, IPOG-MVMO reached the best solution in every trial. It appears that IPOG-MVMO is a suitable technique for solving unimodal problems.
The multimodal functions F6–F20 were analyzed in three groups based on their complexity. To solve multimodal functions in the D = 10 group with the lowest complexity, IPOG-MVMO attained results with no error, except for F14, F15, and F20. Nevertheless, these results were still satisfactory when compared with other algorithms. For the D = 30 group, IPOG-MVMO performed worse than the other algorithms only in two cases, which were F8 and F16. Moreover, IPOG-MVMO also attained error-free results in F6 and F10–F13. For the D = 50 group with the highest complexity, IPOG-MVMO yielded lower performance than the other algorithms in only five cases, which were F8, F9, F16, F17, and F20.
To solve the composition functions F21–F28, the proposed algorithm also outperformed the other algorithms in all cases of D = 10. The algorithm was able to attain better results than the other algorithms in F22 and F24 for D = 30, and in F23, F24, and F26 for D = 50, respectively. Note that this type of function could not provide zero errors in all cases due to its complexity.
The superior results for each problem are highlighted in bold where the best value of IPOG-MVMO was obtained and was better than the other methods; although sometimes, more than one method provided the best value. Clearly, among the 28 × 3 cases, there were 68 cases in which IPOG-MVMO yielded the best result. Likewise, when considering the mean value, IPOG-MVMO yielded the best result in 59 out of the 28 × 3 cases. The results indicated that applying IPM and OGM to MVMO provided a solution that was closer to the optimal solution.
Some examples of the performance comparison were selected, as presented in Figure 4, which shows the average error with respect to the number of iterations (NP), computed over 50 trials. The results show that applying IPM and OGM to MVMO can accelerate the search process, achieving a solution closer to the optimal solution within a smaller number of iterations.

4. Discussion

4.1. Optimal Solutions

Twenty-eight different functions of three types were taken to verify our proposed method. The experimental results in terms of error are shown in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. When considering the best solution out of all the methods, it was found that IPOG-MVMO achieved the best result in 68 cases out of 84 cases, or 80.95%. Next, three types of functions were further analyzed separately. For the unimodal functions F1–F5, the proposed method offered the best result in 14 out of 15 cases, or 93.33%. This is quite acceptable when applying our method to this type of function. In the case of the multimodal functions F6–F20, there were 45 cases in total. Among these cases, IPOG-MVMO yielded the best result in 37 cases, or 82.22%. This was slightly lower than for the unimodal function due to the complexity of the problem. The remaining functions were the composition functions, F21–F28. The proposed method yielded the best result in 17 out of 24 cases, or 70.83%, which was lower than for unimodal and multimodal functions. Subsequently, the results can be discussed from a different perspective, where we mainly focused on the number of dimensions. There were 28 cases for each dimension and our method yielded the best result in 27, 22, and 19 cases, or 96.43%, 78.57%, and 67.86%, respectively.
In essence, zero error indicates that the algorithm reached the exact solution within the pre-defined number of iterations. IPOG-MVMO provided zero error in 33 out of 84 cases. Table 9 presents the likelihood that IPOG-MVMO and four comparative methods could reach the exact solution, broken down by function type and number of dimensions. YYPO, a method inspired by the philosophy of creating a balance between two opposite concepts, was also evaluated and is included in this table. For the unimodal functions, there was only one case, at D = 50, in which the error was non-zero, whereas the percentage dropped as the number of dimensions grew for the multimodal functions. Moreover, NBIPOPaCMA presented a higher likelihood of obtaining zero error at D = 50; however, IPOG-MVMO still provided the best results at D = 10 and D = 30. For the composition functions, the best solution over 50 trials did not reach the exact solution for any algorithm. Since the function type and the number of dimensions correspond to the problem difficulty and complexity, it can be inferred that although our IPOG-MVMO method outperformed the other comparative methods, its performance gradually decreased as the problem difficulty and complexity rose. However, different parameter settings might result in greater performance when solving black-box problems.
When considering the mean value of error in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, it was found that IPOG-MVMO yielded the best result in 59 out of 84 cases, or 70.24%. Typically, the zero mean rarely occurs because the method must reach the exact solution in every trial. Our proposed method achieved zero mean in 13 cases, although most were unimodal functions. According to the analysis of the function type, IPOG-MVMO yielded the best result in 12 cases, or 80%, for the unimodal functions, 34 cases, or 75.56%, for the multimodal functions, and 13 cases, or 54.17%, for the composition functions. It can be seen that the type of function also affected the performance in terms of the mean value. In the case of the number of dimensions, IPOG-MVMO yielded the best result in 25 cases, or 89.29%, 18 cases, or 64.29%, and 16 cases, or 57.14%, in 10, 30, and 50 dimensions, respectively. The number of dimensions was another factor that impacted our algorithm.
Finally, the worst value of the error indicates the solution quality to be expected over a given number of trials. IPOG-MVMO achieved the best result in 48 cases, or 57.14%, which was higher than the other methods. Again, the method worked well in 11 cases, or 73.33%, of the unimodal functions, while it yielded the best result in 29 cases, or 64.44%, and eight cases, or 33.33%, of the multimodal and composition functions, respectively. In terms of the number of dimensions, our method achieved the best result in 20 out of 28 cases in 10 dimensions. For the cases with 30 dimensions, our method yielded the best result in 12 cases, or 42.86%, and the percentage became higher when the number of dimensions was 50, with a 57.14% rate of obtaining the best result. More parameter settings and experiments are required to identify the relationship between the number of dimensions and the optimal solutions in the worst-case scenario.

4.2. Stability

The standard deviation of the error can be used to measure the stability of the proposed method compared with the other methods. Zero standard deviation means that the method provides the same solution over a limited number of trials. In the experiments, IPOG-MVMO obtained the best result in 46 out of 84 cases, or 54.76%, and there were 12 cases of zero standard deviation, 10 of which involved unimodal functions. Based on the function type, the unimodal functions showed the highest stability, with the best result obtained in 11 out of 15 cases, whereas the multimodal functions yielded the best result in 26 out of 45 cases. For the composition functions, the method yielded the best result in only nine out of 24 cases, which was still greater than the other methods, but most of these cases were in 10 dimensions. In terms of the number of dimensions, the method yielded the best result in 20 cases (71.43%), 11 cases (39.29%), and 15 cases (53.57%) in 10, 30, and 50 dimensions, respectively. Without taking the composition functions into consideration, it seems that the number of dimensions had a minor impact on the stability of the method because, as previously mentioned, the method worked well for composition functions only in 10 dimensions.

5. Conclusions

In this study, IPOG-MVMO, a combination of the interior point method (IPM), the opposite gradient method (OGM), and mean-variance mapping optimization (MVMO), was proposed to identify solutions for continuous real-world optimization problems. IPM and OGM were combined with the original MVMO by first performing an IPM local search and then creating a new population using the opposite gradient concept. This procedure ensured that the newly created population was located close to the minimum value, so that a better solution could be obtained using MVMO. Creating a new population close to the minimum, by exploiting the local search strength of IPM and the solution convergence of OGM, to obtain an accurate and fast solution was the key step in this study. When these two approaches were combined with traditional MVMO, the best cost achieved was better than that obtained using the single technique, i.e., MVMO. Generating a new population near the minimum is important in optimization problems and can be applied in many areas, such as energy saving for tissue paper mills through energy-efficiency scheduling and the optimal scheduling of energy and ancillary services. Future research will focus on hybridizing IPOG-MVMO with other techniques and improving the algorithm to ensure compatibility with combinatorial real-world problems.

Author Contributions

Conceptualization, T.S., S.P. and C.L.; methodology, T.S., S.P. and C.L.; software, T.S.; validation, T.S. and S.P.; formal analysis, T.S. and S.P.; investigation, S.P.; resources, T.S. and C.L.; data curation, T.S.; writing—original draft preparation, T.S.; writing—review and editing, S.P.; visualization, T.S.; supervision, S.P. and C.L.; project administration, T.S. and S.P.; funding acquisition, T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Office of the Higher Education Commission, Ministry of Education, Thailand, through Silpakorn University under Grant Number 5/2555.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in [31].

Acknowledgments

The authors would like to gratefully thank the Faculty of Information and Communication Technology, Silpakorn University, and the Office of the Higher Education Commission, Ministry of Education, Thailand, for supporting this work through a grant funded under the Higher Education Research Promotion and National Research University Project of Thailand.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; the collection, analyses, or interpretation of data; the writing of the manuscript; or the decision to publish the results.

References

  1. Zeng, Z.; Chen, X.; Wang, K. Energy saving for tissue paper mills by energy-efficiency scheduling under time-of-use electricity tariffs. Processes 2021, 9, 274. [Google Scholar] [CrossRef]
  2. Sortomme, E.; El-Sharkawi, M.A. Optimal scheduling of vehicle-to-grid energy and ancillary services. IEEE Trans. Smart Grid 2012, 3, 351–359. [Google Scholar] [CrossRef]
  3. Azhir, E.; Jafari Navimipour, N.; Hosseinzadeh, M.; Sharifi, A.; Darwesh, A. Deterministic and non-deterministic query optimization techniques in the cloud computing. Concurr. Comput. Pract. Exp. 2019, 31, e5240. [Google Scholar] [CrossRef]
  4. Foroozandeh, Z.; Ramos, S.; Soares, J.; Vale, Z. Energy management in smart building by a multi-objective optimization model and pascoletti-serafini scalarization approach. Processes 2021, 9, 257. [Google Scholar] [CrossRef]
  5. Belfiore, N.P.; Rudas, I.J. Applications of computational intelligence to mechanical engineering. In Proceedings of the IEEE 15th International Symposium on Computational Intelligence and Informatics (CINTI), Budapest, Hungary, 19–21 November 2014. [Google Scholar]
  6. Hansen, N.; Sibylle, D.M.; Koumoutsakos, P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 2003, 11, 1–18. [Google Scholar] [CrossRef] [PubMed]
  7. Auger, A.; Hansen, N. A restart CMA evolution strategy with increasing population size. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Edinburgh, Scotland, UK, 2–5 September 2005. [Google Scholar]
  8. Hansen, N. Benchmarking a BI-population CMA-ES on the BBOB-2009 function tested workshop. In Proceedings of the GECCO Genetic and Evolutionary Computation Conference, Montreal, QC, Canada, 8–12 July 2009. [Google Scholar]
  9. Loshchilov, I. CMA-ES with restarts for solving CEC 2013 benchmark problems. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013. [Google Scholar]
  10. Erlich, I.; Venayagamoorthy, G.K.; Nakawiro, W. A mean-variance optimization algorithm. In Proceedings of the 2010 IEEE World Congress on Computational Intelligence, Barcelona, Spain, 18–23 July 2010. [Google Scholar]
  11. Salazar, E.; Herrera, M.; Camacho, O. An Application of MVMO Based Adaptive PID Controller for Process with Variable Delay. In Systems and Information Sciences; Botto-Tobar, M., Zamora, W., Larrea Plúa, J., Bazurto Roldan, J., Santamaría Philco, A., Eds.; Springer: Cham, Switzerland, 2021; Volume 1273, pp. 1–6. [Google Scholar] [CrossRef]
  12. Rueda, J.L.; Erlich, I. Short-term transmission expansion planning by using swarm mean-variance mapping optimization. In Proceedings of the 17th International Conference on Intelligent System Applications to Power Systems, Tokyo, Japan, 1–4 July 2013. [Google Scholar]
  13. Rueda, J.L.; Erlich, I. Hybrid mean-variance mapping optimization for solving the IEEE-CEC 2013 competition problems. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013. [Google Scholar]
  14. Khoa, T.H.; Vasant, P.M.; Singh, M.S.B.; Dieu, V.N. Swarm based mean-variance mapping optimization for solving economic dispatch with cubic fuel cost function. Lect. Notes Comput. Sci 2015, 9012, 3–12. [Google Scholar] [CrossRef]
  15. Mori, H.; Ikegami, H. An efficient MVMO-SH method for optimal capacitor allocation in electric power distribution systems. In Advances in Swarm Intelligence; Tan, Y., Takagi, H., Shi, Y., Niu, B., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10386, pp. 1–10. [Google Scholar] [CrossRef]
  16. Chen, X.; Li, Y.; Zhang, Y.; Ye, X.; Xiong, X.; Zhang, F. A novel hybrid model based on an improved seagull optimization algorithm for short-term wind speed forecasting. Processes 2021, 9, 387. [Google Scholar] [CrossRef]
  17. Sun, X.; Fan, Z.; Ji, Y.; Wang, S.; Yan, S.; Wu, S.; Fu, Q.; Ghazali, K.H. Robust multi-user detection based on hybrid gray wolf optimization. Concurr. Comput. Pract. Exp. 2021, 33, e5273. [Google Scholar] [CrossRef]
  18. Valenta, D.; Langer, M. On numerical 2D P colonies modelling the gray wolf optimization algorithm. Processes 2021, 9, 330. [Google Scholar] [CrossRef]
  19. Punnathanam, V.; Kotecha, P. Yin-yang-pair optimization: A novel lightweight optimization algorithm. Eng. Appl. Artif. Intell. 2016, 54, 62–79. [Google Scholar] [CrossRef]
  20. Saenphon, T.; Lursinsap, C. Fast evolutionary solution finding for optimization using opposite gradient movement. In Proceedings of the International Conference on Natural Computation (ICNC), Shanghai, China, 26–28 July 2011. [Google Scholar]
  21. Saenphon, T.; Phimoltares, S.; Lursinsap, C. Combining new fast opposite gradient search with ant colony optimization for solving travelling salesman problem. Eng. Appl. Artif. Intell 2014, 35, 324–334. [Google Scholar] [CrossRef]
  22. Karmarkar, N. A new polynomial-time algorithm for linear programming. Combinatorica 1984, 4, 373–395. [Google Scholar] [CrossRef]
  23. Ohmori, S.; Yoshimoto, K. A Primal-Dual Interior-Point Method for Facility Layout Problem with Relative-Positioning Constraints. Algorithms 2021, 14, 60. [Google Scholar] [CrossRef]
  24. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004; pp. 561–620. [Google Scholar] [CrossRef]
  25. Rueda, J.L.; Erlich, I. Evaluation of the mean-variance mapping optimization for solving multimodal problems. In Proceedings of the IEEE Symposium Series on Computational Intelligence, Singapore, 16–19 April 2013. [Google Scholar]
  26. Cepeda, J.C.; Rueda, J.L.; Erlich, I.; Korai, A.W.; Gonzalez-Longatt, F.M. Mean-variance mapping optimization algorithm for power system applications in DIgSILENT power factory. In PowerFactory Applications for Power System Analysis; Springer: Cham, Switzerland, 2014; pp. 267–296. [Google Scholar] [CrossRef]
  27. Coelho, L.D.; Ayala, H.V.H.; Freire, R.Z. Population’s variance-based adaptive differential evolution for real parameter optimization. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013. [Google Scholar]
  28. Liao, T.; Stützle, T. Benchmark results for a simple hybrid algorithm on the CEC 2013 benchmark set for real-parameter optimization. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013. [Google Scholar]
  29. Liang, J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  30. Liang, J.; Qu, B.Y.; Suganthan, P.N.; Hernández-Díaz, A.G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session and Competition on Real-Parameter Optimization; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013. [Google Scholar]
  31. Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Technical Report; Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013. [Google Scholar]
Figure 1. Comparison of the best cost value for F1–F10 at (a) 10 dimensions, (b) 30 dimensions, and (c) 50 dimensions.
Figure 2. Comparison of the best cost value for F11–F20 at (a) 10 dimensions, (b) 30 dimensions, and (c) 50 dimensions.
Figure 3. Comparison of the best cost value for F21–F28 at (a) 10 dimensions, (b) 30 dimensions, and (c) 50 dimensions.
Figure 4. Examples of the performance comparison among IPOG-MVMO, NBIPOPaCMA, PVADE, and MVMO. F9, F14, F15, and F24 at 10, 30, and 50 dimensions were selected. The vertical axis represents the averaged error, while the horizontal axis represents the number of iterations. All were averaged over 50 trials. (ad) F9, F14, F15, and F24 with D = 10; (eh) F9, F14, F15, and F24 with D = 30; and (il) F9, F14, F15, and F24 with D = 50, respectively.
Table 1. The parameters of the proposed algorithm experiment.
The number of dimensions (D) in each problem: 10, 30, 50
Search range: [−100, 100]^D
f_si*: 1
f_sf*: 25
d_i: 1
Δd_0: 0.05
N: 15D
Size of the archive: 15
NP: 100,000
Number of experimental trials: 50
Table 2. The comparative results of the errors obtained from unimodal functions F1–F5.
F1
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA000001.91 × 10−804.30 × 10−80000
PVADE000000000000
MVMO000000000000
IPOG-MVMO000000000000
F2
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA000003.45 × 10−81.61 × 10−83.78 × 10−90000
PVADE06.57 × 1021.39 × 1011.12 × 1021.35 × 1034.23 × 1049.03 × 1037.40 × 1033.55 × 1044.15 × 1052.07 × 1059.44 × 104
MVMO00005.59 × 10−49.21 × 10−47.65 × 10−49.49 × 10−55.59 × 10−49.21 × 10−47.65 × 10−49.49 × 10−5
IPOG-MVMO000005.27 × 10−7000000
F3
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA000007.42 × 10−82.21 × 10−81.68 × 10−91.17 × 10−55.01 × 1001.41 × 1002.19 × 100
PVADE04.85 × 10−39.7 × 10−57.15 × 10−401.55 × 1034.01 × 1012.31 × 1036.14 × 1052.63 × 1071.89 × 1071.16 × 107
MVMO00001.00 × 1031.11 × 1077.95 × 1051.81 × 1062.88 × 10−61.29 × 1066.08 × 1042.72 × 105
IPOG-MVMO000001.17 × 10−63.65 × 10−74.62 × 10−71.75 × 10−56.24 × 10−42.11 × 10−42.23 × 10−4
F4
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA00007.75 × 10−81.71 × 10−81.39 × 10−84.81 × 10−87.75 × 10−81.71 × 10−81.39 × 10−84.81 × 10−8
PVADE02.12 × 10−14.21 × 10−33.13 × 10−25.53 × 10−71.33 × 10−31.93 × 10−43.31 × 10−45.53 × 10−71.33 × 10−31.93 × 10−43.31 × 10−4
MVMO01.18 × 10−8005.37 × 10−63.91 × 10−34.62 × 10−47.56 × 10−45.37 × 10−63.91 × 10−34.62 × 10−47.56 × 10−4
IPOG-MVMO000001.42 ×10−63.65 ×10−75.64 ×10−701.42 ×10−63.65 ×1075.64 ×107
F5
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA01.15 × 10−80003.34 × 10−81.39 × 10−86.18 × 10−80000
PVADE08.13 × 10−51.26 × 10−51.11 × 10−403.39 x10−81.02 × 10−81.14 × 10−81.45 × 10−41.6 × 10−41.02 × 10−31.11 × 10−3
MVMO04.19 × 10−8001.22 × 10−83.01 × 10−82.34 × 10−82.12 × 10−80000
IPOG-MVMO000002.79 × 10−81.02 ×1081.46 ×1080000
Table 3. The comparative results of the errors obtained from multimodal functions F6–F9.
F6
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA000002.13 × 10−803.96 × 10−90000
PVADE07.35 × 1007.81 × 1003.89 × 10002.67 × 1014.96 × 10−13.73 × 1002.57 × 1011.48 × 1028.52 × 1013.15 × 101
MVMO000003.91 × 1011.04 × 1019.71 × 10004.34 × 1013.05 × 1012.01 × 101
IPOG-MVMO000002.81 × 10−808.11 × 10−905.76 × 1001.21 × 1002.26 × 100
F7
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA01.62 × 1011.40 × 1024.80 × 1025.67 × 10−35.69 × 1017.01 × 1001.71 × 1013.05 × 10−31.83 × 1013.94 × 1008.05 × 100
PVADE3.11 × 10−51.10 × 1013.02 × 10−11.53 × 1004.21 × 10−12.64 × 1014.60 × 1005.75 × 1003.92 × 1044.41 × 1011.89 × 1019.26 × 100
MVMO6.33 × 10−36.34 × 10−36.13 × 10−302.48 × 1001.92 × 1018.17 × 1003.62 × 1001.52 × 1014.91 × 1013.04 × 1017.87 × 100
IPOG-MVMO00004.22 × 10−54.62 × 10−42.11 × 10−42.22 × 10−41.32 × 10−54.81 × 10−42.21 × 10−42.15 × 10−4
F8
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA3.49 × 10−81.68 × 10−81.04 × 10−88.57 × 10−92.08 × 1012.19 × 1012.13 × 1012.85 × 1017.77 × 10−12.11 × 1015.87 × 1009.20 × 100
PVADE1.92 × 1012.01 × 1012.01 × 1016.11 × 10−22.04 × 1012.14 × 1012.61 × 1014.08 × 1012.10 × 1012.12 × 1012.11 × 1013.49 × 10-2
MVMO1.94 × 1012.11 × 1011.98 × 1016.54 × 10−22.08 × 1012.10 × 1012.19 × 1015.41 × 1022.10 x1012.12 × 1012.11 × 1013.94 × 10-2
IPOG-MVMO06.12 × 10−81.54 × 10−71.02 × 10−72.08 × 1012.10 × 1012.09 × 1012.37 × 1028.35 × 1022.16 × 1011.96 × 1014.34 × 100
F9
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA1.92 × 10−82.47 × 1013.82 × 1009.56 × 10−92.02 x1001.78 × 1014.30 × 1004.81 × 10007.41 × 1011.50 × 1013.30 × 100
PVADE8.11 × 10−33.36 × 1001.34 × 1009.67 × 10−12.27 × 1013.09 × 1012.74 × 1011.77 × 1001.99 × 1013.34 × 1012.60 × 1013.05 × 100
MVMO4.92 × 10−12.45 × 1008.26 × 10−16.66 × 10−17.11 × 1001.92 × 1011.32 × 1012.65 × 1002.48 × 1014.17 × 1013.33 × 1014.39 × 100
IPOG-MVMO09.45 × 10−86.95 × 10−84.45 × 10−87.54 × 10−71.19 × 10−69.57 × 10−72.19 × 10−74.82 × 10−26.73 × 1001.64 × 1002.57 × 1000
Table 4. The comparative results of the errors obtained from multimodal functions F10–F13.
F10
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA1.02 × 10−86.71 × 10−83.24 × 10−81.84 × 10−91.01 × 10−83.55 × 10−81.94 × 10−88.32 × 10−804.69 × 10−29.37 × 10−32.09 × 10−2
PVADE01.79 × 10−15.01 × 10−23.79 × 10−21.73 × 10−21.89 × 10−17.64 × 10−23.53 × 10−24.77 × 10−21.15 × 1005.99 × 10−13.42 × 10−1
MVMO9.95 × 10−33.58 × 10−21.58 × 10−22.11 × 10−27.40 × 10−37.35 × 10−22.78 × 10−21.64 × 10−20000
IPOG-MVMO03.87 × 10−82.89 × 10−87.34 × 10−904.01 × 10−81.42 × 10−81.78 × 10−806.66 × 1012.78 × 10−33.91 × 10−2
F11
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA06.72 × 10−83.27 × 10−82.11 × 10−99.95 × 10−13.98 × 1002.70 x1001.25 × 1007.37 × 10−21.89 × 1013.94 × 1008.36 × 100
PVADE01.21 × 1014.04 × 1002.33 × 1001.03 × 1007.34 × 1003.13 × 1001.49 × 1006.76 × 1012.34 × 1021.68 × 1024.08 × 101
MVMO06.02 × 1002.32 × 1001.29 × 10−21.01 × 1008.05 × 1004.34 × 1001.90 × 1002.59 × 1017.57 × 1014.55 × 1011.22 × 101
IPOG-MVMO01.13 × 10−84.87 × 10−83.13 × 10−901.03 × 10−72.13 × 10−84.03 × 10−81.10 × 10−34.90 × 10−21.21 × 10−21.81 × 10−2
F12
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA01.03 × 1003.43 × 10−11.89 × 10−96.70 × 10−83.98 × 1002.27 × 1001.70 × 1007.37 × 10−21.02 × 1003.30 × 10−13.96 × 10−1
PVADE9.87 × 10−11.59 × 1015.86 × 1003.68 × 1001.59 × 1003.24 × 1012.13 × 1013.79 × 1002.20 × 1023.12 × 1022.56 × 1022.01 × 101
MVMO1.89 × 1001.55 × 1015.89 × 1001.32 × 1001.59 × 1015.97 × 1013.36 × 1011.14 × 1013.98 × 1011.34 × 1027.77 × 1012.33 × 101
IPOG-MVMO03.49 × 10−54.78 × 10−87.56 × 10−902.72 × 10−88.87 × 10−81.21 × 10−87.74 × 10−52.45 × 10−39.99 × 1041.05 × 10−3
F13
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA04.54 × 1007.57 × 10−11.45 × 10003.07 × 1002.77 × 1001.36 × 1006.46 × 10−31.48 × 1013.10 × 1006.54 × 100
PVADE02.08 × 1018.35 × 1004.34 × 1007.60 × 1011.56 × 1021.21 × 1021.86 × 1018.10 × 1011.62 × 1021.18 × 1021.69 × 101
MVMO1.87 × 1001.82 × 1018.84 × 1009.01 × 1003.06 x1019.95 × 1015.90 × 1011.67 × 1017.20 × 1012.05 × 1021.33 × 1023.43 × 101
IPOG-MVMO03.58 × 10−51.08 × 10−51.01 × 10−501.79 × 10−63.23 × 10−76.07 × 10−76.43 × 10−46.30 × 1001.54 × 1002.59 × 100
Table 5. The comparative results of the errors obtained from multimodal functions F14–F17.
F14
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA2.13 × 1017.68 × 1023.44 × 1022.11 × 1027.12 × 1021.35 × 1038.70 × 1023.40 × 1023.16 × 1014.59 × 1013.96 × 1015.94 × 100
PVADE4.01 × 1015.13 × 1021.67 × 1021.07 × 1022.04 × 1034.01 × 1033.11 × 1034.70 × 1022.14 × 1033.90 × 1033.07 × 1033.80 × 102
MVMO3.21 × 1002.08 × 1018.01 × 1007.59 × 1009.40 × 1011.66 × 1038.56 × 1024.12 × 1022.19 × 1034.84 × 1033.89 × 1036.02 × 102
IPOG-MVMO2.03 × 10−52.67 × 10−52.39 × 10−54.78 × 10−62.37 × 1016.35 × 1023.09 × 1023.11 × 1029.23 × 10−56.10 × 1015.05 × 1011.70 × 100
F15
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA6.87 × 1002.38 × 1021.17 × 1028.89 × 1015.53 × 1021.67 × 1037.65 × 1022.85 × 1021.27 × 10−19.68 × 1016.51 × 1013.94 × 101
PVADE3.71 × 1021.02 × 1037.84 × 1021.68 × 1022.13 × 1034.94 × 1033.32 × 1034.08 × 1024.54 × 1036.17 × 1035.36 × 1033.54 × 102
MVMO1.89 × 1026.57 × 1025.58 × 1029.01 × 1011.70 × 1034.45 × 1033.03 × 1035.41 × 1025.75 x1038.65 × 1036.64 × 1035.92 × 102
IPOG-MVMO2.58 × 10−57.28 × 10−23.21 × 10−23.13 × 10−22.19 × 1011.22 × 1032.28 × 1022.37 × 1023.69 × 10−36.67 × 1012.03 × 1012.76 × 101
F16
Algorithms D = 10 D = 30 D = 50
BestWorstMeanStd.BestWorstMeanStd.BestWorstMeanStd.
NBIPOPaCMA01.97 × 1015.68 × 1007.23 × 1005.68 x10−21.23 × 1009.14 × 10−11.86 × 10−11.18 x10−33.51 × 1001.20 × 1001.65 × 100
PVADE4.75 × 10−11.26 × 1008.93 × 10−11.88 × 10−11.42 × 1003.13 × 1002.32 × 1003.01 × 10−12.67 × 1003.84 × 1003.38 × 1002.86 × 10−1
MVMO3.52 × 10−16.49 × 10−15.39 × 10−11.62 × 10−15.37 × 10−11.62 × 1001.08 × 1002.88 × 10−16.60 × 10−11.70 × 1001.14 × 1002.37 × 10−1
IPOG-MVMO07.64 × 10−11.63 × 10−13.18 × 10−18.08 × 10−21.34 × 1005.40 × 1001.09 × 1013.12 × 10−13.25 × 1002.11 × 1002.24 × 10−1
F17
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 0, 9.71 × 10^1, 1.89 × 10^1, 2.64 × 10^1 | 3.41 × 10^1, 4.26 × 10^1, 3.53 × 10^1, 1.87 × 10^0 | 2.10 × 10^0, 5.55 × 10^1, 1.30 × 10^1, 2.37 × 10^1
PVADE | 1.15 × 10^1, 1.75 × 10^1, 1.39 × 10^1, 1.39 × 10^1 | 7.44 × 10^1, 1.17 × 10^2, 9.52 × 10^1, 1.10 × 10^1 | 1.44 × 10^2, 3.17 × 10^2, 2.37 × 10^2, 1.10 × 10^1
MVMO | 1.03 × 10^1, 1.18 × 10^1, 1.11 × 10^1, 7.65 × 10^−1 | 4.38 × 10^1, 8.14 × 10^1, 6.72 × 10^1, 8.50 × 10^0 | 8.89 × 10^1, 1.43 × 10^2, 1.11 × 10^2, 1.34 × 10^1
IPOG-MVMO | 0, 3.07 × 10^1, 9.85 × 10^0, 8.85 × 10^0 | 3.25 × 10^1, 6.60 × 10^1, 4.31 × 10^1, 4.89 × 10^0 | 4.50 × 10^0, 6.66 × 10^1, 3.48 × 10^1, 3.30 × 10^1
Table 6. The comparative results of the errors obtained from composition functions F18–F20.
F18
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 0, 5.74 × 10^1, 1.58 × 10^1, 1.51 × 10^1 | 3.86 × 10^1, 1.72 × 10^2, 6.38 × 10^1, 4.45 × 10^1 | 8.60 × 10^−2, 9.86 × 10^1, 2.54 × 10^1, 4.13 × 10^1
PVADE | 1.13 × 10^1, 3.26 × 10^1, 2.40 × 10^1, 3.90 × 10^0 | 1.40 × 10^2, 1.89 × 10^2, 1.66 × 10^2, 1.12 × 10^1 | 3.43 × 10^2, 4.22 × 10^2, 3.65 × 10^2, 1.12 × 10^1
MVMO | 1.51 × 10^1, 2.01 × 10^1, 1.72 × 10^1, 2.58 × 10^0 | 4.58 × 10^1, 7.88 × 10^1, 5.94 × 10^1, 7.59 × 10^0 | 8.06 × 10^1, 1.63 × 10^2, 1.07 × 10^2, 1.73 × 10^1
IPOG-MVMO | 0, 1.01 × 10^−1, 3.27 × 10^−2, 3.71 × 10^−2 | 1.88 × 10^1, 3.38 × 10^1, 2.94 × 10^1, 9.87 × 10^0 | 1.20 × 10^−2, 4.18 × 10^0, 1.34 × 10^0, 1.92 × 10^0
F19
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 0, 7.38 × 10^−1, 4.34 × 10^−1, 2.01 × 10^−1 | 1.14 × 10^0, 2.54 × 10^0, 2.40 × 10^0, 4.41 × 10^−1 | 1.60 × 10^−1, 2.85 × 10^0, 8.65 × 10^−1, 1.16 × 10^0
PVADE | 3.27 × 10^−1, 9.69 × 10^−1, 5.98 × 10^−1, 1.25 × 10^−1 | 3.10 × 10^0, 1.06 × 10^1, 6.49 × 10^0, 1.74 × 10^0 | 1.06 × 10^1, 3.10 × 10^1, 2.12 × 10^1, 4.74 × 10^0
MVMO | 3.52 × 10^−1, 6.07 × 10^−1, 5.19 × 10^−1, 1.44 × 10^−1 | 1.10 × 10^0, 3.13 × 10^0, 1.93 × 10^0, 4.09 × 10^−1 | 2.79 × 10^0, 7.55 × 10^0, 4.82 × 10^0, 1.24 × 10^0
IPOG-MVMO | 0, 1.90 × 10^−1, 2.59 × 10^−2, 6.02 × 10^−2 | 3.90 × 10^−1, 2.88 × 10^0, 1.06 × 10^0, 4.73 × 10^−1 | 1.52 × 10^−1, 3.28 × 10^0, 1.53 × 10^0, 1.52 × 10^0
F20
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 2.11 × 10^0, 4.85 × 10^0, 3.26 × 10^0, 9.87 × 10^−1 | 1.01 × 10^1, 1.56 × 10^1, 1.29 × 10^1, 5.98 × 10^−1 | 7.85 × 10^−1, 2.53 × 10^1, 7.01 × 10^0, 1.05 × 10^0
PVADE | 9.96 × 10^−1, 3.52 × 10^0, 2.12 × 10^0, 5.61 × 10^−1 | 1.04 × 10^1, 1.50 × 10^1, 1.34 × 10^1, 1.89 × 10^−1 | 1.95 × 10^1, 2.55 × 10^1, 2.36 × 10^1, 1.90 × 10^0
MVMO | 1.83 × 10^0, 2.78 × 10^0, 2.34 × 10^0, 4.76 × 10^−1 | 8.81 × 10^0, 1.19 × 10^1, 1.04 × 10^1, 5.85 × 10^−1 | 1.87 × 10^1, 2.23 × 10^1, 2.01 × 10^1, 6.84 × 10^−1
IPOG-MVMO | 1.12 × 10^0, 2.74 × 10^0, 1.95 × 10^0, 4.01 × 10^−1 | 6.84 × 10^0, 1.28 × 10^1, 1.01 × 10^1, 6.23 × 10^−1 | 2.10 × 10^0, 1.78 × 10^1, 2.30 × 10^0, 4.57 × 10^−1
Table 7. The comparative results of the errors obtained from composition functions F21–F24.
F21
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 3.64 × 10^0, 1.42 × 10^1, 1.05 × 10^1, 4.47 × 10^0 | 2.00 × 10^2, 2.00 × 10^2, 2.00 × 10^2, 0 | 1.00 × 10^2, 2.00 × 10^2, 1.91 × 10^2, 1.43 × 10^1
PVADE | 2.53 × 10^2, 3.81 × 10^2, 2.90 × 10^2, 4.06 × 10^1 | 2.00 × 10^2, 4.43 × 10^2, 3.17 × 10^2, 6.22 × 10^1 | 8.36 × 10^2, 1.22 × 10^3, 9.56 × 10^2, 1.44 × 10^2
MVMO | 3.35 × 10^−5, 3.90 × 10^2, 2.26 × 10^2, 9.73 × 10^1 | 2.00 × 10^2, 4.43 × 10^2, 2.95 × 10^2, 1.07 × 10^2 | 1.00 × 10^2, 1.13 × 10^3, 2.45 × 10^2, 1.91 × 10^2
IPOG-MVMO | 2.68 × 10^0, 1.50 × 10^1, 1.22 × 10^1, 4.11 × 10^0 | 2.00 × 10^2, 4.45 × 10^2, 2.57 × 10^2, 1.15 × 10^1 | 1.00 × 10^2, 5.24 × 10^2, 2.29 × 10^2, 3.48 × 10^1
F22
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 3.31 × 10^0, 1.00 × 10^1, 8.81 × 10^0, 3.05 × 10^0 | 3.54 × 10^2, 1.08 × 10^3, 8.03 × 10^2, 3.12 × 10^2 | 1.89 × 10^2, 3.86 × 10^3, 1.45 × 10^3, 6.01 × 10^2
PVADE | 2.34 × 10^1, 5.07 × 10^2, 2.57 × 10^2, 1.11 × 10^2 | 1.72 × 10^3, 3.38 × 10^3, 2.49 × 10^3, 3.86 × 10^2 | 6.11 × 10^3, 1.01 × 10^4, 7.72 × 10^3, 8.44 × 10^2
MVMO | 5.02 × 10^0, 1.00 × 10^2, 3.27 × 10^1, 1.91 × 10^1 | 2.35 × 10^2, 2.00 × 10^3, 8.19 × 10^2, 4.28 × 10^2 | 1.17 × 10^3, 4.94 × 10^3, 2.76 × 10^3, 8.38 × 10^2
IPOG-MVMO | 3.00 × 10^0, 1.76 × 10^1, 6.20 × 10^0, 2.48 × 10^2 | 2.00 × 10^2, 2.83 × 10^3, 6.26 × 10^2, 2.55 × 10^2 | 3.92 × 10^2, 4.68 × 10^3, 2.35 × 10^3, 6.94 × 10^2
F23
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 3.59 × 10^0, 4.14 × 10^0, 3.96 × 10^0, 2.01 × 10^1 | 3.52 × 10^2, 3.59 × 10^3, 1.29 × 10^3, 1.35 × 10^3 | 1.37 × 10^3, 1.31 × 10^4, 4.29 × 10^3, 5.00 × 10^3
PVADE | 1.18 × 10^1, 1.12 × 10^2, 9.14 × 10^1, 1.78 × 10^1 | 4.74 × 10^3, 7.25 × 10^3, 5.81 × 10^3, 5.04 × 10^2 | 7.91 × 10^3, 1.54 × 10^4, 1.17 × 10^4, 1.48 × 10^3
MVMO | 5.63 × 10^1, 8.71 × 10^2, 4.95 × 10^2, 2.03 × 10^2 | 1.69 × 10^3, 4.45 × 10^3, 3.09 × 10^3, 5.28 × 10^2 | 6.17 × 10^3, 1.12 × 10^4, 8.64 × 10^3, 1.32 × 10^3
IPOG-MVMO | 3.54 × 10^0, 3.14 × 10^0, 3.89 × 10^0, 3.01 × 10^−1 | 4.89 × 10^2, 4.28 × 10^3, 2.79 × 10^3, 9.41 × 10^2 | 4.97 × 10^2, 7.29 × 10^3, 5.79 × 10^3, 9.41 × 10^2
F24
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 3.59 × 10^0, 3.70 × 10^0, 3.74 × 10^0, 5.01 × 10^−2 | 1.39 × 10^2, 3.01 × 10^2, 2.42 × 10^2, 8.00 × 10^1 | 2.35 × 10^2, 3.86 × 10^2, 3.27 × 10^2, 7.60 × 10^1
PVADE | 1.01 × 10^2, 2.11 × 10^2, 1.94 × 10^2, 1.30 × 10^1 | 2.01 × 10^2, 2.61 × 10^2, 2.03 × 10^2, 1.39 × 10^0 | 2.40 × 10^2, 3.28 × 10^2, 2.78 × 10^2, 1.83 × 10^1
MVMO | 1.01 × 10^2, 2.07 × 10^2, 1.83 × 10^2, 3.74 × 10^1 | 2.04 × 10^2, 2.20 × 10^2, 2.11 × 10^2, 3.77 × 10^0 | 2.28 × 10^2, 2.61 × 10^2, 2.45 × 10^2, 8.05 × 10^0
IPOG-MVMO | 9.85 × 10^−1, 4.42 × 10^0, 2.07 × 10^0, 1.44 × 10^−2 | 1.05 × 10^2, 4.62 × 10^2, 2.17 × 10^2, 1.46 × 10^0 | 1.05 × 10^2, 4.62 × 10^2, 2.17 × 10^2, 1.46 × 10^1
Table 8. The comparative results of the errors obtained from composition functions F25–F28.
F25
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 7.31 × 10^0, 7.49 × 10^0, 7.44 × 10^0, 5.01 × 10^−2 | 2.05 × 10^2, 3.01 × 10^2, 2.66 × 10^2, 4.26 × 10^1 | 2.34 × 10^2, 3.86 × 10^2, 3.23 × 10^2, 7.86 × 10^1
PVADE | 1.90 × 10^2, 2.13 × 10^2, 2.04 × 10^2, 3.77 × 10^0 | 2.00 × 10^2, 2.56 × 10^2, 2.30 × 10^2, 2.04 × 10^1 | 3.17 × 10^2, 3.92 × 10^2, 3.54 × 10^2, 1.72 × 10^1
MVMO | 1.01 × 10^2, 2.01 × 10^2, 1.94 × 10^2, 2.25 × 10^1 | 2.00 × 10^2, 2.71 × 10^2, 2.51 × 10^2, 9.39 × 10^0 | 3.00 × 10^2, 3.50 × 10^2, 3.26 × 10^2, 1.13 × 10^1
IPOG-MVMO | 7.09 × 10^0, 7.57 × 10^0, 7.37 × 10^0, 4.88 × 10^−2 | 2.09 × 10^2, 2.75 × 10^2, 2.55 × 10^2, 2.84 × 10^1 | 2.36 × 10^2, 2.77 × 10^2, 2.57 × 10^2, 8.88 × 10^1
F26
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 4.04 × 10^0, 4.29 × 10^0, 4.14 × 10^0, 1.41 × 10^−1 | 1.55 × 10^2, 3.26 × 10^2, 2.42 × 10^2, 7.92 × 10^1 | 1.95 × 10^2, 4.84 × 10^2, 3.43 × 10^2, 1.42 × 10^2
PVADE | 1.04 × 10^2, 2.00 × 10^2, 1.84 × 10^2, 3.33 × 10^1 | 2.00 × 10^2, 3.11 × 10^2, 2.18 × 10^2, 4.01 × 10^1 | 2.00 × 10^2, 3.96 × 10^2, 3.47 × 10^2, 6.01 × 10^1
MVMO | 1.02 × 10^2, 1.15 × 10^2, 1.08 × 10^2, 3.52 × 10^0 | 2.00 × 10^2, 2.00 × 10^2, 2.00 × 10^2, 4.47 × 10^−3 | 2.00 × 10^2, 2.00 × 10^2, 2.00 × 10^2, 3.21 × 10^−3
IPOG-MVMO | 3.94 × 10^0, 4.46 × 10^0, 4.12 × 10^0, 1.29 × 10^−1 | 1.89 × 10^2, 3.11 × 10^2, 2.12 × 10^2, 2.29 × 10^1 | 1.94 × 10^2, 2.96 × 10^2, 2.12 × 10^2, 2.29 × 10^1
F27
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 1.13 × 10^1, 1.26 × 10^1, 1.22 × 10^1, 5.70 × 10^−1 | 4.00 × 10^2, 5.85 × 10^2, 4.75 × 10^2, 6.75 × 10^1 | 4.00 × 10^2, 2.15 × 10^3, 1.48 × 10^3, 9.01 × 10^2
PVADE | 2.95 × 10^2, 5.35 × 10^2, 3.64 × 10^2, 8.98 × 10^1 | 3.06 × 10^2, 3.61 × 10^2, 3.26 × 10^2, 1.14 × 10^1 | 8.27 × 10^2, 1.36 × 10^3, 1.11 × 10^3, 1.85 × 10^2
MVMO | 2.48 × 10^2, 2.88 × 10^2, 2.60 × 10^2, 4.03 × 10^0 | 3.43 × 10^2, 6.89 × 10^2, 4.74 × 10^2, 9.19 × 10^1 | 9.09 × 10^2, 1.31 × 10^3, 1.08 × 10^3, 1.10 × 10^2
IPOG-MVMO | 5.75 × 10^0, 1.11 × 10^1, 7.34 × 10^0, 2.45 × 10^−1 | 3.95 × 10^2, 5.17 × 10^2, 4.34 × 10^2, 2.45 × 10^1 | 5.95 × 10^2, 1.17 × 10^3, 7.34 × 10^2, 2.45 × 10^2
F28
Algorithms | D = 10: Best, Worst, Mean, Std. | D = 30: Best, Worst, Mean, Std. | D = 50: Best, Worst, Mean, Std.
NBIPOPaCMA | 3.60 × 10^0, 1.11 × 10^1, 9.63 × 10^0, 3.31 × 10^0 | 3.00 × 10^2, 3.00 × 10^2, 3.00 × 10^2, 0 | 4.00 × 10^2, 4.00 × 10^2, 4.00 × 10^2, 0
PVADE | 8.45 × 10^1, 3.01 × 10^2, 2.38 × 10^2, 4.52 × 10^1 | 3.00 × 10^2, 3.00 × 10^2, 3.00 × 10^2, 2.23 × 10^−5 | 4.00 × 10^2, 3.54 × 10^5, 4.64 × 10^2, 4.39 × 10^2
MVMO | 9.87 × 10^1, 2.90 × 10^2, 1.88 × 10^2, 1.00 × 10^2 | 3.00 × 10^2, 3.00 × 10^2, 3.00 × 10^2, 5.36 × 10^−4 | 4.00 × 10^2, 4.00 × 10^2, 4.00 × 10^2, 1.29 × 10^−2
IPOG-MVMO | 3.00 × 10^0, 8.72 × 10^0, 4.49 × 10^0, 2.37 × 10^−1 | 3.00 × 10^2, 3.00 × 10^2, 3.00 × 10^2, 2.37 × 10^−5 | 4.00 × 10^2, 4.00 × 10^2, 4.00 × 10^2, 1.13 × 10^−2
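For readers tabulating results in the same way, the sketch below illustrates how the Best, Worst, Mean, and Std. error statistics reported in the tables above can be collected from repeated independent runs. It is a minimal illustration under stated assumptions: the random-search optimizer, the sphere test function, and the run budget are placeholders chosen only to make the example executable, not the IPOG-MVMO implementation or the actual benchmark suite used in this study.

import numpy as np

def error_statistics(run_optimizer, f, f_star, n_trials=50, seed=0):
    # Collect the absolute error to the known optimum over independent trials
    # and summarize it as the Best/Worst/Mean/Std. entries used in the tables.
    rng = np.random.default_rng(seed)
    errors = np.array([abs(f(run_optimizer(f, rng)) - f_star) for _ in range(n_trials)])
    return {"Best": errors.min(), "Worst": errors.max(),
            "Mean": errors.mean(), "Std.": errors.std()}

def random_search(f, rng, dim=10, budget=1000, bounds=(-100.0, 100.0)):
    # Placeholder optimizer (plain random search), standing in for any of the
    # compared algorithms purely for demonstration purposes.
    candidates = rng.uniform(bounds[0], bounds[1], size=(budget, dim))
    values = np.apply_along_axis(f, 1, candidates)
    return candidates[np.argmin(values)]

sphere = lambda x: float(np.sum(x ** 2))   # toy test function with known optimum f* = 0
print(error_statistics(random_search, sphere, f_star=0.0))

In the tables above these four statistics are reported separately for D = 10, 30, and 50, so the same summary would simply be repeated once per problem dimension.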
Table 9. Likelihood of obtaining zero error in 50 trials.
Algorithms | Unimodal Functions: D = 10, D = 30, D = 50 | Multimodal Functions: D = 10, D = 30, D = 50 | Composition Functions: D = 10, D = 30, D = 50
NBIPOPaCMA | 100, 80, 60 | 33.33, 13.33, 20 | 0, 0, 0
PVADE | 100, 60, 20 | 26.67, 6.67, 0 | 0, 0, 0
MVMO | 100, 20, 40 | 13.33, 6.67, 13.33 | 0, 0, 0
YYPO | 40, 40, 20 | 6.67, 6.67, 6.67 | 0, 0, 0
IPOG-MVMO | 100, 100, 80 | 80, 33.33, 13.33 | 0, 0, 0
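Table 9 reports how often an algorithm reaches an error of exactly zero. The short sketch below shows one plausible way such a rate can be computed, assuming the reported figure is the percentage of recorded outcomes in a function category whose final error falls below a small numerical tolerance; the tolerance, the pooling of outcomes, and the example numbers are illustrative assumptions, not the authors' exact protocol.

import numpy as np

def zero_error_rate(final_errors, tol=1e-8):
    # Percentage of outcomes whose final error is numerically zero (below tol).
    final_errors = np.asarray(final_errors, dtype=float)
    return 100.0 * float(np.mean(final_errors < tol))

# Hypothetical pooled outcomes for one algorithm on one function category at a
# fixed dimension (values invented purely for demonstration).
pooled = [0.0, 0.0, 3.1e-12, 0.0, 2.4e-3, 0.0]
print(f"zero-error rate: {zero_error_rate(pooled):.2f}%")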
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
