Article

Teaching–Learning-Based Optimization Algorithm with Stochastic Crossover Self-Learning and Blended Learning Model and Its Application

School of Mathematics and Computer Science, Shaanxi University of Technology, Hanzhong 723001, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(10), 1596; https://doi.org/10.3390/math12101596
Submission received: 18 April 2024 / Revised: 12 May 2024 / Accepted: 17 May 2024 / Published: 20 May 2024

Abstract

This paper presents a novel variant of the teaching–learning-based optimization (TLBO) algorithm, termed BLTLBO, which draws inspiration from the blended learning model and is specifically designed to tackle high-dimensional multimodal complex optimization problems. First, the perturbation conditions in the “teaching” and “learning” stages of the original TLBO algorithm are interpreted geometrically, and on this basis the search capability of TLBO is enhanced by widening the range of values of the random numbers. Second, the algorithm is restructured into three distinct phases: pre-course self-study, classroom blended learning, and post-course consolidation; this structural reorganization, together with the random crossover strategy in the self-learning phase, effectively enhances the global optimization capability of TLBO. To evaluate its performance, the BLTLBO algorithm was tested alongside seven distinguished variants of the TLBO algorithm on thirteen multimodal functions from the CEC2014 suite. Furthermore, two excellent high-dimensional optimization algorithms were added to the comparison set and tested in high-dimensional mode on five scalable multimodal functions from the CEC2008 suite. The empirical results illustrate the BLTLBO algorithm’s superior efficacy in handling high-dimensional multimodal challenges. Finally, a high-dimensional portfolio optimization problem was successfully addressed using the BLTLBO algorithm, validating the practicality and effectiveness of the proposed method.

1. Introduction

Global optimization plays a key role in solving many real-world scientific computing and engineering applications. However, the increasing complexity of these problems often renders traditional approaches like linear and dynamic programming insufficient for identifying optimal solutions efficiently. This limitation has spurred the development of metaheuristics, which aim to enhance global optimization performance. Among these metaheuristics, the teaching–learning-based optimization (TLBO) algorithm, introduced by Rao R.V. in 2011, stands out [1]. The TLBO algorithm, inspired by the classroom teaching process, boasts a simple structure, straightforward implementation, high exploration capability, and rapid convergence. Its effectiveness is evidenced by successful applications across various fields, including battery model parameter identification [2] and neural network training [3].
TLBO is a population-based metaheuristic that mimics the teaching and learning process in a classroom. However, it may suffer from stagnation and premature convergence when dealing with complex optimization problems, especially when the global optimum is shifted. To enhance the applicability of the TLBO algorithm, many researchers have proposed various modifications and extensions in recent years. These improvements can be classified into three main categories. The first category involves adding extra phases or operators to the original TLBO framework. For example, Chen and Zou introduced local learning and self-learning mechanisms to the TLBO algorithm and developed an improved teaching–learning-based optimization (ITLBO) method in which each learner learns from both global and local information according to a given probability; its self-learning approach, based on gradient information and random ranges, improves TLBO's ability to escape local optima but increases the number of evaluations per iteration [4]. Taheri et al. proposed a balanced teaching–learning-based optimization (BTLBO) that adds tutoring and restart phases to the basic TLBO algorithm, uses a weighted average of the learners in the modified teaching phase, and employs two strategies to influence the learners, successfully maintaining a balance between exploitation and exploration [5]. Dong and Xu combined a gradient-based mutation strategy with the TLBO algorithm to construct a mutation-restart phase and developed an improved TLBO algorithm (LMRTLBO) [6]. Sun and Cai added a teacher self-learning phase to the TLBO algorithm and applied a feedback TLBO (FTLBO) to solve the L-R fuzzy flexible assembly job shop scheduling problem with batch splitting [7]. Zeng et al. presented an improved TLBO combined with a dynamic ring neighborhood topology (DRCMTLBO), in which a dynamic neighborhood strategy driven by fitness-based evaluation and a new cross-search mechanism driven by Euclidean-distance-based evaluation are embedded into the TLBO algorithm, assisted by improved search approaches that achieve a balance between exploitation and exploration [8]. Xing et al. presented a unique golden-sine and multi-population TLBO (GMTLBO) that utilizes a good point-set strategy to improve the accuracy of the initial solutions, uses a golden-sine search model to address the origin offset issue during the teaching phase, and incorporates a multi-population learner phase following the learner phase [9]. The second category involves modifying the original “teaching” and “learning” phases of TLBO. For example, Bi and Wang introduced a differential evolution mutation strategy into the “learning” phase and developed a teaching–learning-based optimization algorithm based on hybrid learning strategies and disturbance (DSTLBO) [10]. Wu et al. presented an improved teaching–learning-based optimization algorithm (RLTLBO) that incorporates reinforcement learning and random opposition-based learning strategies: a new learning mode considering the effect of the teacher was introduced, and the Q-learning method from reinforcement learning was used to build a switching mechanism between the two learning modes in the learner phase [11]. Yu et al. proposed a variant of TLBO with reinforcement learning (PLPS-TLBO), modified the structure of TLBO into a parallel one, and randomly selected three individuals for information exchange during the learning phase [12].
Shukla and Singh proposed an improved teaching–learning-based optimization algorithm using an adaptive exponential-distribution inertia weight, in which the logistic map generates a uniformly distributed population to enhance the quality of the initial population [13]. Eirgash et al. used a modified dynamic oppositional learning strategy and proposed a modified dynamic oppositional learning TLBO (MDOLTLBO) algorithm [14]. Ram et al. redefined the learning strategies and proposed a multi-teacher TLBO (MT-TLBO) [15]. The third category consists of hybrid algorithms that combine TLBO with other optimization algorithms or search techniques. For instance, Chen and Liu constructed an adaptive framework and fused the original TLBO algorithm with a Gaussian distribution algorithm, resulting in a self-adaptive hybrid self-learning-based TLBO (SHSLTLBO) algorithm; the empirical results demonstrated the superior ability of the SHSLTLBO to balance the evolutionary stages among the TLBO variants [16]. Tang and Fang solved the distributed sand casting job-shop scheduling problem by combining the tabu search technique with TLBO, creating a hybrid TLBO (HTLBO) [17]. Li et al. proposed a hybrid adaptive teaching–learning-based optimization with differential evolution (ATLDE) to address the problem of PV-model parameter identification [18]. Tanmay and Harish combined the slime mould algorithm (SMA) with TLBO and proposed a hybrid LSMA-TLBO algorithm based on Lévy-flight mutation, which was used to solve numerical and engineering design optimization problems [19].
Although these enhanced TLBO algorithms have demonstrated satisfactory outcomes on the problems studied, there is still potential for further improvement in their global optimization capability when tackling high-dimensional complex multimodal problems.
The blended learning model combines traditional face-to-face teaching with online learning, drawing on the strengths of both teaching methods. Students can acquire a wider range of knowledge through pre-class self-study with rich online resources, and they can gain a deeper understanding of difficult points through blended teaching in the classroom and consolidation after class. By simulating a blended learning model, the exploitation and exploration of an algorithm can be effectively balanced.
To tackle complex high-dimensional optimization problems, this paper presents a novel variant of the TLBO algorithm, called BLTLBO, that draws inspiration from the blended learning model. First, the perturbation conditions in the “teaching” and “learning” stages of the original TLBO algorithm are analyzed geometrically, and the range of the random numbers is widened accordingly. Then, to enhance the global search ability of the BLTLBO algorithm, exploration and exploitation are balanced through three stages: pre-course learning, classroom learning, and post-course consolidation. The performance of BLTLBO is evaluated on 18 multimodal test functions and compared both with seven prominent variants of the TLBO algorithm and with two exceptional high-dimensional optimization algorithms. Additionally, BLTLBO is applied to a high-dimensional portfolio optimization problem to assess its practicality. In summary, this paper contributes in the following ways:
  • The perturbation conditions in the “teaching” and “learning” stages of the original TLBO algorithm are interpreted geometrically and then improved on this basis.
  • A random crossover self-study phase is established, and the structure of TLBO is modified into three stages: pre-course self-study, classroom learning, and post-course consolidation.
  • The BLTLBO algorithm is validated through comparisons with other algorithms on 18 multimodal functions from the CEC2008 and CEC2014 suites.
  • This paper applies BLTLBO to a high-dimensional portfolio optimization problem to test its practicality.
The rest of this paper is structured as follows: Section 2 describes the original TLBO algorithm. Section 3 analyzes the TLBO algorithm geometrically. Section 4 presents the BLTLBO algorithm. Section 5 discusses and analyzes the simulation results. Section 6 applies the BLTLBO algorithm to a portfolio optimization problem. Section 7 concludes the paper and suggests future research directions.

2. Original TLBO

The TLBO algorithm is a population-based stochastic optimization algorithm that simulates the classroom teaching process. Its optimization process consists of two phases: first, the “teaching” phase, in which learners improve their knowledge by learning from the teacher; second, the “learning” phase, in which each learner interacts with another randomly selected learner to further improve their knowledge. The algorithm aims to improve the average knowledge level of the class.

2.1. Initialization

For a minimization problem with D-dimensional decision variables, let $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ denote the i-th learner (individual), $f(X_i)$ denote academic performance (fitness value), and $NP$ denote the class size (population size). Each learner $X_i$ ($i = 1, 2, \ldots, NP$) in the initial class is randomly initialized as follows:

$$x_{ij} = x_j^L + rand \cdot (x_j^U - x_j^L) \tag{1}$$

Here, $x_j^L$ and $x_j^U$ denote the lower and upper bounds of the j-th course (decision variable), respectively, and $rand$ is a random number in the range [0, 1].

2.2. “Teaching” Phase

At this stage, learners improve their grades through the difference between the class average and the teacher ($Difference$). The specific teaching mechanism is denoted below:

$$NewX_i = OldX_i + Difference \tag{2}$$

$$Difference = r_i \cdot (Teacher - T_F \cdot X_{mean}) \tag{3}$$

Here, $OldX_i$ and $NewX_i$ denote the state of learner $X_i$ before and after instruction, respectively; $Teacher$ is the learner with the best performance (smallest fitness value) in the class; $X_{mean} = \frac{1}{NP}\sum_{i=1}^{NP} OldX_i$ denotes the average state of the class; $T_F$ is the teaching factor, which takes the value 1 or 2; and $r_i$ is a random vector in which each element is a random number in the range [0, 1].
At the end of the “teaching” phase, the better of the pre- and post-teaching states of the learner is accepted and enters the “learning” phase.

2.3. “Learning” Phase

At this stage, learners improve themselves by interacting with other learners in the class. The specific “learning” mechanism is denoted thus:

$$NewX_i = \begin{cases} OldX_i + r_i \cdot (OldX_i - OldX_k), & \text{if } f(OldX_i) < f(OldX_k) \\ OldX_i + r_i \cdot (OldX_k - OldX_i), & \text{otherwise} \end{cases} \tag{4}$$

Here, $OldX_k$ is another randomly selected learner in the class ($k \neq i$), and $r_i$ is a random vector in which each element takes a value in [0, 1].
Similar to the “teaching” phase, the better of the learner’s pre- and post-learning states will be selected, and the “learning” phase will be followed by the next “teaching” phase.
If the termination condition is satisfied (the maximum number of evaluations or the maximum number of iterations is reached), the optimization ends and the results are output.
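To make the two phases concrete, the following is a minimal Python sketch of the original TLBO loop (initialization, “teaching” phase, “learning” phase, each followed by greedy selection); the function names and the Sphere objective used in the demo are illustrative, not from the original paper.

```python
import numpy as np

def tlbo(f, lb, ub, dim, pop_size=30, max_iters=1000, seed=0):
    """Minimal sketch of the original TLBO (Equations (1)-(4))."""
    rng = np.random.default_rng(seed)
    # Eq. (1): random initialization of the class within [lb, ub]
    X = lb + rng.random((pop_size, dim)) * (ub - lb)
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(max_iters):
        # "Teaching" phase, Eqs. (2)-(3)
        teacher = X[fit.argmin()]
        x_mean = X.mean(axis=0)
        for i in range(pop_size):
            tf = rng.integers(1, 3)            # teaching factor: 1 or 2
            r = rng.random(dim)                # each component of r_i in [0, 1]
            new = np.clip(X[i] + r * (teacher - tf * x_mean), lb, ub)
            fn = f(new)
            if fn < fit[i]:                    # greedy selection
                X[i], fit[i] = new, fn
        # "Learning" phase, Eq. (4)
        for i in range(pop_size):
            k = rng.choice([j for j in range(pop_size) if j != i])
            r = rng.random(dim)
            if fit[i] < fit[k]:
                new = X[i] + r * (X[i] - X[k])
            else:
                new = X[i] + r * (X[k] - X[i])
            new = np.clip(new, lb, ub)
            fn = f(new)
            if fn < fit[i]:
                X[i], fit[i] = new, fn
    best = fit.argmin()
    return X[best], fit[best]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))   # illustrative objective
    x_best, f_best = tlbo(sphere, -100.0, 100.0, dim=10, max_iters=200)
    print(f_best)
```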
In the “Teaching” phase, all individuals acquire knowledge based on the difference between the teacher’s performance and the class average. The teacher then steers the students towards enhancing their performance, thereby accelerating convergence rates. Nevertheless, all students inevitably experience premature convergence when the teacher reaches a locally optimal solution. In the “Learning” phase, students augment the diversity of the group by learning from one another, thus bolstering their exploratory capabilities. However, the absence of new knowledge infusion during the learner phase results in a restricted search space, hindering the further enhancement of population diversity. Consequently, the TLBO algorithm often converges to a local optimum when tackling multimodal complex optimization problems.

3. Geometric Analysis of TLBO

3.1. Analysis of the “Teaching” Phase

In the “teaching” phase, the TLBO algorithm searches by adding the perturbation vector $Difference$ to each individual. Since each component of the random vector $r_i$ is a random number in the range [0, 1], the effect of $r_i$ on $Difference$ is essentially a per-dimension scaling. In two dimensions, the limiting condition of the random perturbation can be analyzed geometrically: $Difference$ can be represented as an arbitrary vector within the rectangular box (the perturbation box) of Figure 1a.
The literature [20] found experimentally that the behavior of a converged population depends on the teaching factor and the random vector $r_i$. When $T_F = 2$ and each component of $r_i$ is close to 0, the convergence property is the same as that of $T_F = 1$; when each component of $r_i$ is close to 1 and the whole population has fully converged, the corner of the $Difference$ perturbation box applied to individual $X_i$ tends to the origin, as shown in Figure 1b. This reveals that the original TLBO algorithm allows a converged population to continue searching for a better solution in the direction of the origin during the “teaching” phase, and this convergence property creates a bias that makes problems whose optimal solution is located at the origin easier to solve.
However, in practical optimization applications, the optimal solution of the problem is usually unknown or uncertain, which requires the algorithm to handle optimal solutions located at arbitrary positions; the origin bias thus greatly limits the practical applicability of the original TLBO algorithm. If each component of $r_i$ in Equation (3) is allowed to take a value in [−1, 1], the range of the $Difference$ perturbation box increases exponentially with dimension (by a factor of $2^D$ in D dimensions) when the population converges, and the probability of obtaining the globally optimal solution is greatly increased; the geometric analysis is shown in Figure 1c.

3.2. Analysis of the “Learning” Phase

The perturbation condition in the “learning” phase can be analyzed geometrically in the same way as in the “teaching” phase, as shown in Figure 2a. If each component of $r_i$ in Equation (4) takes a value in [−1, 1], the perturbation region in the learning phase also increases exponentially; the geometric analysis is shown in Figure 2b.
From the geometric interpretation, it can be seen that the enlarged perturbation region contains the original one; for this reason, the original “learning” mechanism can be simplified as follows:

$$NewX_i = OldX_i + r_i \cdot (OldX_i - OldX_k) \tag{5}$$

Here, each component of $r_i$ takes a value in [−1, 1]. Compared with the original “learning” mechanism, this removes the fitness comparison from the update, making the algorithm structure more concise and improving computational efficiency.
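As a sketch, the simplified update of Equation (5) takes only a few lines: widening each component of $r_i$ to [−1, 1] removes the branch on the fitness comparison (the function name is illustrative).

```python
import numpy as np

rng = np.random.default_rng()

def learn_step(x_i, x_k, dim):
    # Each component of r_i now lies in [-1, 1]: 2*rand - 1
    r = 2.0 * rng.random(dim) - 1.0
    # Eq. (5): no branch on f(x_i) vs f(x_k) is needed, since the
    # symmetric perturbation box already covers both directions
    return x_i + r * (x_i - x_k)
```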

3.3. Confirmatory Experiment

To verify the above analysis, three multimodal standard test functions (Ackley, Rastrigin, Griewank) were selected for testing, with dimension D = 100, a maximum of 5000 × D evaluations, and population size NP = 10. TLBO1 and TLBO2 denote variants in which the random vector takes values in [−1, 1] in the “teaching” and the “learning” phase, respectively; TLBO3 denotes the variant in which the random vector takes values in [−1, 1] in both phases; TLBO4 denotes TLBO3 with the simplified Formula (5) used in the “learning” phase. The statistical results are shown in Table 1, including Mean, Std, and average execution time (in seconds), where Mean and Std denote the mean and standard deviation of the function error value over 30 independent runs; the best results are in bold. The algorithms' convergence curves are depicted in Figure 3.
From the simulation data in Table 1 and the convergence curves in Figure 3, it is evident that the original TLBO algorithm converges well on the standard test functions (optimal solution at the origin) because of the inherent origin bias of the “teaching” stage, but on the shifted test functions (optimal solutions away from the origin) the modified algorithms perform better, especially TLBO4, which achieves the highest convergence accuracy in the shortest time. This verifies the geometric analysis of the perturbations in the “teaching” and “learning” stages.

4. TLBO Based on Blended Learning Model

Currently, blended learning that combines online and offline components has become a common mode of education. In this approach, students typically engage in pre-course learning using online resources before participating in classroom discussions and question-and-answer sessions led by the teacher. By simulating pre-course learning, blended classroom learning, and post-course consolidation, the exploration and exploitation of the algorithm are balanced over the iterative process, which can improve optimization performance on high-dimensional multimodal complex problems.

4.1. Pre-Course Random Crossover Self-Study Phase

In blended learning, the teacher provides course resources and tasks before class, allowing learners to study at their own pace. Learners may choose to discuss certain courses in depth with other students, conduct self-study, and expand on a few courses outside of class. The simulation process is outlined below:
$$Newx_{ij} = \begin{cases} Oldx_{kj}, & \text{if } rand < rand \\ Oldx_{ij} + (2 \cdot rand - 1) \cdot bw, & \text{else if } rand > 0.01 \\ x_j^L + rand \cdot (x_j^U - x_j^L), & \text{otherwise} \end{cases} \tag{6}$$

$$bw = bw_{max} \cdot \exp\left(\frac{t}{T_{max}} \cdot \ln\frac{bw_{min}}{bw_{max}}\right) \tag{7}$$

Here, $i = 1, \ldots, NP$; $j = 1, \ldots, D$; $k$ is a random integer in $[1, NP]$ that is different from $i$; each occurrence of $rand$ is an independent random number in the range [0, 1]; $\ln(\cdot)$ denotes the natural logarithm; $bw_{max}$ and $bw_{min}$ are the maximum and minimum self-learning steps; $t$ is the current iteration number; and $T_{max}$ is the maximum iteration number. The process of learning from other students is essentially a random crossover mechanism. The pre-course random crossover self-study is illustrated in Figure 4.
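A sketch of the pre-course random crossover self-study phase of Equations (6) and (7) follows, applied course by course; each call to rng.random() plays the role of an independent rand, and the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng()

def bandwidth(t, t_max, bw_max, bw_min):
    # Eq. (7): self-learning step shrinks exponentially from bw_max to bw_min
    return bw_max * np.exp(t / t_max * np.log(bw_min / bw_max))

def self_study(X, i, lb, ub, t, t_max, bw_max, bw_min):
    """Eq. (6): per-course crossover / local self-learning / reinitialization."""
    pop_size, dim = X.shape
    bw = bandwidth(t, t_max, bw_max, bw_min)
    new = X[i].copy()
    for j in range(dim):
        if rng.random() < rng.random():
            # crossover: copy course j from a random classmate k != i
            k = rng.choice([m for m in range(pop_size) if m != i])
            new[j] = X[k, j]
        elif rng.random() > 0.01:
            # local self-learning within the shrinking step bw
            new[j] = X[i, j] + (2.0 * rng.random() - 1.0) * bw
        else:
            # rare reinitialization keeps exploration alive
            new[j] = lb + rng.random() * (ub - lb)
    return np.clip(new, lb, ub)
```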

4.2. Classroom Blended Learning

4.2.1. Adjustment of Class Average Status

In the “teaching” phase of the original TLBO, the average state of the group is fixed, so the perturbation vectors $Difference$ added to different individuals are similar; the geometric analysis is shown in Figure 5a. In reality, teachers not only consider the overall level of the class but also provide targeted teaching to different learners. The perturbation vectors $Difference$ added to different learners should therefore differ, which helps maintain the diversity of the population.
To this end, for learner $X_i$, $X_{mean}$ in Equation (3) can be replaced by

$$MX_i = \frac{X_{mean} + X_i}{2} \tag{8}$$

The geometric analysis of $MX_i$ is shown in Figure 5b.

4.2.2. Blended Classroom Instruction

Unlike the traditional classroom teaching mode, the offline part of blended learning mainly consists of classroom teaching activities such as the teacher's key lectures, problem solving, and group discussions among learners, which allow the learners to check, consolidate, and flexibly apply the knowledge learned before class. In this model, some learners follow the teacher while others interact with their classmates; for each learner, the following is used to simulate the learning process:
$$NewX_i = \begin{cases} OldX_i + r_i \cdot (Teacher - T_F \cdot MX_i), & \text{if } rand < rand \\ \dfrac{t}{T_{max}} \cdot Teacher + \left(1 - \dfrac{t}{T_{max}}\right) \cdot OldX_i + r_i \cdot (OldX_i - OldX_k), & \text{otherwise} \end{cases} \tag{9}$$
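A sketch of the classroom blended-learning update of Equations (8) and (9) follows; the surrounding greedy selection and bound handling are assumed to follow the TLBO skeleton above, and the names are illustrative.

```python
import numpy as np

rng = np.random.default_rng()

def classroom_step(X, fit, i, t, t_max):
    """Eq. (9): follow the teacher or interact with a classmate."""
    pop_size, dim = X.shape
    teacher = X[fit.argmin()]
    # Eq. (8): individualized mean replaces the global class mean
    mx_i = (X.mean(axis=0) + X[i]) / 2.0
    r = rng.random(dim)
    if rng.random() < rng.random():
        tf = rng.integers(1, 3)                # teaching factor: 1 or 2
        return X[i] + r * (teacher - tf * mx_i)
    w = t / t_max                              # weight on the teacher grows over the run
    k = rng.choice([m for m in range(pop_size) if m != i])
    return w * teacher + (1.0 - w) * X[i] + r * (X[i] - X[k])
```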

4.3. Post-Course Consolidation Phases

After the lesson, learners consolidate and reinforce what they have learned. There are usually few problems in the early stages of learning, but as learning progresses into the middle and late stages, learners need to ask the teacher for focused tutoring; i.e., when $t \ge T_{max}/2$,

$$NewX_i = Teacher + rand \cdot \exp\left(-\frac{2t}{T_{max}}\right) \cdot (Teacher - OldX_i) \tag{10}$$
When the number of iterations surpasses half, the majority of the population has converged towards the optimal solution region. At this juncture, a detailed search around the current optimal solution becomes necessary. Incorporating this reinforcement operation can direct the population to learn from the current optimal solution while introducing a minor perturbation. This perturbation progressively diminishes as the iterations advance, thereby ensuring the convergence of the population while maintaining the capability to explore the convergence region.
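A sketch of the consolidation update of Equation (10) is given below; note that the exact decay factor had to be reconstructed from the garbled source ("a minor perturbation [that] progressively diminishes"), so the exponential form used here is an assumption.

```python
import numpy as np

rng = np.random.default_rng()

def consolidation_step(x_i, teacher, t, t_max):
    """Eq. (10): fine search around the current best, late in the run."""
    if t < t_max // 2:
        return x_i  # consolidation only runs in the second half of the run
    # Decaying perturbation pulls x_i toward the teacher (assumed decay form)
    scale = rng.random() * np.exp(-2.0 * t / t_max)
    return teacher + scale * (teacher - x_i)
```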

4.4. Flow of BLTLBO Algorithm

The flow chart of the BLTLBO algorithm is shown in Figure 6.
BLTLBO adjusts the structure of the original TLBO algorithm: it randomly divides the learned courses into three categories and applies different learning strategies in the added pre-class learning stage, while the original “teaching” and “learning” stages are applied to different groups of students during classroom learning; the exploration and exploitation of the algorithm are balanced through the integration of these strategies in the search process. In the middle and later stages of the iteration, the exploitation ability of the algorithm is strengthened through after-class consolidation. Compared with the original TLBO, BLTLBO introduces only two additional parameters, the maximum and minimum self-learning step sizes, and does not increase the number of evaluations per iteration.

4.5. Time Complexity Analysis of BLTLBO

Computational complexity is a crucial metric for assessing the performance of algorithms. The time complexity (TC) of an algorithm is primarily determined by the frequency of fitness function evaluations and the frequency of solution updates, which are largely influenced by the problem dimension $D$, the population size $NP$, and the number of iterations $T_{max}$. The time complexity of the basic teaching–learning-based optimization (TLBO) algorithm can be represented as $O(T_{max} \times NP \times D)$.
In BLTLBO, the pre-course random crossover self-study phase adds $O(T_{max} \times NP \times D)$, the classroom blended-learning phase adds $O(T_{max} \times NP \times D)$, and the post-course consolidation phase adds $O(\frac{1}{2} T_{max} \times NP \times D)$. Therefore, the overall time complexity of the proposed BLTLBO is as follows:

$$O(BLTLBO) = O(T_{max} \times NP \times D) + O(T_{max} \times NP \times D) + O\left(\tfrac{1}{2} T_{max} \times NP \times D\right) = O(T_{max} \times NP \times D) \tag{11}$$

5. Simulation Experiments

To evaluate the performance of BLTLBO on high-dimensional multimodal problems, we tested it against seven TLBO variants (ITLBO [4], BTLBO [5], DSTLBO [10], GMTLBO [9], RLTLBO [11], SHSLTLBO [16], ATLDE [18]) and two high-dimensional optimization algorithms (DECCG [21], CSO [22]) using multimodal test functions selected from the CEC suites. Table 2 describes the benchmark functions used to evaluate the search capability and performance of the algorithms. Specifically, F1–F13 are taken from CEC2014 and focus on multimodal complex problems, while F14–F18 are taken from CEC2008 and focus on high-dimensional complex problems.

5.1. Experimental Environment and Parameter Settings

The simulation environment comprised a Dell PC with an Intel(R) Core(TM) i7-9700 3.00 GHz CPU and 16 GB RAM, running the Windows 10 operating system and MATLAB R2016b.
The parameter settings of all comparison algorithms follow their original literature; the parameter values of the proposed algorithm were experimentally tuned and set as follows: $NP = 30$, $bw_{max} = (x^U - x^L)/10$, and $bw_{min} = (x^U - x^L)/10^{15}$.

5.2. Simulation Results and Analysis

To ensure a fair comparison, we tested the various algorithms using the same number of fitness function evaluations (MaxFEs).

5.2.1. Simulation of Complex Multimodal Problems

Table 3 presents the statistical results of 30 independent runs on F1–F13 with MaxFEs = 10,000 × D, including the mean and standard deviation of the fitness error and the average execution time (in seconds) for each algorithm, with the best mean in bold. Error values less than 10^−10 were taken as zero. The Friedman test results for BLTLBO and the other excellent TLBO variants are shown in Table 4. The algorithms were compared using the Wilcoxon signed-rank test at a significance level of 0.05, where ‘+’, ‘−’, and ‘≈’ denote performance better than, worse than, and similar to that of BLTLBO, respectively; the p-values of the Wilcoxon signed-rank tests are shown in Table 5. Figure 7 shows the average error convergence curves and box plots of the distribution of the average errors over 30 independent runs on three test functions, providing further analysis of the experimental results.
As can be seen from Table 3, Table 4 and Table 5, the BLTLBO algorithm performs better than or comparably to the other algorithms on most functions, except for one function on which it is slightly worse than the BTLBO algorithm. The Friedman test results show that BLTLBO ranks first among all algorithms, which indicates that the BLTLBO algorithm can effectively optimize complex multimodal problems with its random crossover strategy and structural reconstruction. In addition, the running time of BLTLBO is not the shortest but is within an acceptable range; it is only about half that of the second-ranked BTLBO algorithm, demonstrating its efficiency and scalability. Figure 7 shows the superiority and stability of the proposed BLTLBO over the other algorithms on three challenging functions: F4 (Shifted and Rotated Griewank's Function), F5 (Shifted Rastrigin's Function), and F6 (Shifted and Rotated Rastrigin's Function).

5.2.2. Scalable Multimodal Problem Simulation

To further evaluate the performance of the BLTLBO algorithm on high-dimensional complex problems, two state-of-the-art large-scale optimization algorithms (DECCG, CSO) were added to the comparison set for testing on F14–F18 with MaxFEs = 5000 × D. Table 6 reports the mean and standard deviation of the function fitness error (for F18, the fitness value itself) and the average running time per run for the different algorithms when D = 100, 200, and 500, with the best mean in bold. Values less than 10^−10 were taken as zero. The Friedman test results for BLTLBO and the other comparison algorithms are shown in Table 7. The algorithms were compared using the Wilcoxon signed-rank test at a significance level of 0.05, where ‘+’, ‘−’, and ‘≈’ denote performance better than, worse than, and similar to that of BLTLBO, respectively; the p-values of the Wilcoxon signed-rank tests are shown in Table 8.
As can be seen from Table 6, Table 7 and Table 8, BLTLBO outperforms all the other algorithms on the five functions when D = 100, and only the CSO algorithm achieves better results, on two functions, when D = 200 and D = 500. According to the Friedman test results, the BLTLBO algorithm achieves the highest ranking among all the algorithms considered when D = 100 and D = 200. When D = 500, BLTLBO ties for first place with the DECCG algorithm. This statistical evidence suggests that the BLTLBO algorithm consistently maintains competitive performance in higher-dimensional spaces, although it has a higher time complexity than the DECCG and CSO algorithms. Figure 8 illustrates the convergence curves of the proposed algorithm for different dimensions; BLTLBO shows consistent convergence behavior and obtains competitive optimization results as the problem dimension increases.

5.2.3. Impact Analysis of the Three Improvement Phases

Tests were conducted on functions F4–F6 to analyze the impact of the three phases proposed in this paper (pre-course learning, classroom learning, and post-course consolidation) on the BLTLBO algorithm when solving high-dimensional optimization problems. BLTLBO1 and BLTLBO3 excluded the pre-course learning and the post-course consolidation phases, respectively, while BLTLBO2 and BLTLBO4 replaced the classroom blended-learning phase with the original “teaching” and “learning” phases, respectively. Figure 9 shows the average error convergence curves of the variants over 20 runs on these functions.
As can be seen from Figure 9, the pre-course and classroom learning phases had the greatest impact on F4, the pre-course learning phase on F5 and F7, and the post-course consolidation phase on F6; collectively, the pre-course learning phase provided the largest enhancement of algorithmic performance.

5.2.4. Population Diversity Analysis of BLTLBO Algorithm

To assess the population diversity during the iterative process of the BLTLBO algorithm, we conducted experiments on two complex multimodal functions (Shifted and Rotated Griewank's Function and Shifted Rastrigin's Function) and recorded the variation in population diversity throughout the search process, as shown in Figure 10. Following [12], we define the population diversity as follows:
$$\text{Population diversity} = \frac{1}{D}\sum_{j=1}^{D}\frac{1}{NP}\sum_{i=1}^{NP}\left(x_{ij} - \bar{x}_j\right)^2, \quad \bar{x}_j = \frac{1}{NP}\sum_{i=1}^{NP} x_{ij} \tag{12}$$
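Equation (12) reduces to averaging the per-dimension population variance, which takes two NumPy calls; this sketch assumes one row per learner and one column per course.

```python
import numpy as np

def population_diversity(X):
    """Eq. (12): mean over dimensions of the population variance.

    X has shape (NP, D): one row per learner, one column per course.
    """
    return float(np.mean(np.var(X, axis=0)))
```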
Figure 10 shows that the diversity of the original TLBO algorithm decreases sharply in the early stage of the iteration and oscillates throughout, especially in the later stage, which is not conducive to population convergence and results in low solution accuracy. The BLTLBO algorithm maintains good diversity in the initial iterations, giving it good global search ability; as the iteration progresses, the diversity declines, which favors fine search and thus improves solution accuracy.

6. Application of BLTLBO in Portfolio Optimization

It is challenging to forecast the securities market accurately due to various uncertain factors. Therefore, finding the investment allocation that maximizes return while minimizing risk is of great interest to many investors. Markowitz proposed the mean-variance model (M-V model) in 1952 [23], which established the foundation of modern portfolio theory. To better balance risk and return, the M-V portfolio model introduces a risk aversion coefficient that transforms the dual-objective optimization problem into a single-objective one. The resulting model is as follows [24]:
$$\min \; \lambda \sum_{i=1}^{D}\sum_{j=1}^{D} x_i x_j \sigma_{ij} - (1 - \lambda) \sum_{i=1}^{D} x_i \mu_i \quad \text{s.t.} \quad \sum_{i=1}^{D} x_i = 1, \quad 0 \le x_i \le 1, \; i = 1, \ldots, D \tag{13}$$

Here, $D$ denotes the number of investable securities, $x_i$ denotes the investment proportion of the i-th security, $\mu_i$ denotes the expected return of the i-th security, $\sigma_{ij}$ denotes the covariance between the returns of the i-th and j-th securities, and $\lambda \in [0, 1]$ is the risk aversion coefficient.
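To plug the model into BLTLBO, or any box-constrained optimizer, one convenient device (an assumption here, not a detail spelled out in the paper) is to search in $[0, 1]^D$ and normalize the weights onto the unit simplex before evaluating Equation (13):

```python
import numpy as np

def mv_objective(x, mu, sigma, lam=0.02):
    """Eq. (13): risk-aversion-weighted mean-variance objective.

    x is normalized so the weights sum to 1, which handles the equality
    constraint inside a box-constrained search in [0, 1]^D (assumed device).
    """
    w = np.clip(x, 0.0, 1.0)
    w = w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / len(w))
    risk = w @ sigma @ w          # portfolio variance
    ret = w @ mu                  # expected portfolio return
    return lam * risk - (1.0 - lam) * ret
```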
To evaluate the performance of BLTLBO on the portfolio optimization problem, four sets of test data (DAX100, FTSE100, S&P100, and Nikkei) were selected (http://people.brunel.ac.uk/~mastjjb/jeb/orlib/portinfo.html, accessed on 10 January 2023); we set λ = 0.02 and used a maximum of 1000 × D evaluations. Three evaluation indicators were selected: Mean Euclidean Distance (MED) to the optimal frontier, Variance of Return Error (VRE), and Mean of Return Error (MRE). Table 9 compares the results of the proposed algorithm with those of GA, PSO, HSDS [25], and HS-TLBO [26] on the four data sets, and Figure 11 plots the standard efficient frontier curves and the efficient frontier curves obtained with BLTLBO.
It is clear from Table 9 that the mean Euclidean distance and mean return error of the BLTLBO algorithm are the smallest among the five algorithms except on FTSE100. Figure 11 shows that the efficient frontier obtained via the BLTLBO algorithm is consistent with the standard efficient frontier and has a more uniform distribution, which shows that the proposed algorithm is effective for solving the high-dimensional portfolio optimization problem. Please refer to the Supplementary Materials for other numerical simulation results.

7. Conclusions

This paper has presented a variant of the TLBO algorithm (called BLTLBO) that incorporates a blended learning model for solving high-dimensional multimodal optimization problems. A geometric analysis of the perturbation mechanisms of the standard TLBO algorithm in both the “teaching” and “learning” phases has been presented, and the algorithm structure has been adapted accordingly to achieve a trade-off between exploration and exploitation during the search process. The proposed changes improve the standard TLBO algorithm in several important respects: population diversity, the balance between global search and local exploitation, search accuracy, and convergence speed. BLTLBO has been tested on several high-dimensional multimodal problems and compared with several excellent optimization algorithms. The results show that BLTLBO provides superior optimization performance and stability, especially on complex problems with shifted optima, at the cost of a slightly higher computational burden. This paper has also applied BLTLBO to a large-scale portfolio optimization problem and demonstrated its effectiveness.
Future work will include further theoretical analysis of the algorithm and its application to more practical optimization problems such as multi-objective optimization, image processing, feature selection, and task scheduling. In addition, we will explore the possibility of integrating multi-task evolutionary optimization and membrane computing parallel search techniques to improve the search speed and balance of the algorithm and further reduce the computational cost.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/math12101596/s1, Convergence curve and distribution box plot of test function on different dimensions.

Author Contributions

Conceptualization, Y.M. and L.Y.; methodology, Y.M.; software, Y.L.; validation, Y.M., Y.L., and L.Y.; formal analysis, Y.M.; investigation, L.Y.; writing—original draft preparation, Y.M.; writing—review and editing, L.Y.; supervision, L.Y.; funding acquisition, L.Y. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Shaanxi Province (grant number 2024JC-YBMS-014) and the Science and Technology Innovation Team of the Shaanxi Provincial Education Department (grant number 23JP024).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  2. Zhou, Y.; Wang, B.; Li, H. A Surrogate-Assisted Teaching-Learning-Based Optimization for Parameter Identification of The Battery Model. IEEE Trans. Ind. Inform. 2021, 17, 5909–5918. [Google Scholar] [CrossRef]
  3. Yang, Z.; Li, K.; Guo, Y.; Ma, H.; Zheng, M. Compact Real-valued Teaching-Learning Based Optimization with the Applications to Neural Network Training. Knowl.-Based Syst. 2018, 159, 51–62. [Google Scholar] [CrossRef]
  4. Chen, D.; Zou, F.; Li, Z.; Wang, J.; Li, S. An improved teaching–learning-based optimization algorithm for solving global optimization problems. Inf. Sci. 2015, 297, 171–190. [Google Scholar] [CrossRef]
  5. Taheri, A.; Rahimizadeh, K.; Rao, R.V. An efficient Balanced Teaching-Learning-Based Optimization Algorithm with Individual Restarting Strategy for Solving Global Optimization Problems. Inf. Sci. 2021, 6, 68–104. [Google Scholar] [CrossRef]
  6. Dong, H.; Xu, Y.; Cao, D.; Zhang, W.; Yang, Z.; Li, X. An improved teaching-learning-based optimization algorithm with a modified learner phase and a new mutation-restarting phase. Knowl.-Based Syst. 2022, 10, 109989. [Google Scholar] [CrossRef]
  7. Sun, M.; Cai, Z.; Zhang, H. A teaching-learning-based optimization with feedback for L-R fuzzy flexible assembly job shop scheduling problem with batch splitting. Expert Syst. Appl. 2023, 224, 120043. [Google Scholar] [CrossRef]
  8. Zeng, Z.; Dong, H.; Xu, Y.; Zhang, W.; Yu, H.; Li, X. Teaching-learning-based optimization algorithm with dynamic neighborhood and crossover search mechanism for numerical optimization. Appl. Soft Comput. 2024, 154, 111332. [Google Scholar] [CrossRef]
  9. Xing, A.; Chen, Y.; Suo, J.; Zhang, J. Improving teaching-learning-based optimization algorithm with golden-sine and multi-population for global optimization. Math. Comput. Simul. 2024, 221, 94–134. [Google Scholar] [CrossRef]
  10. Bi, X.; Wang, J. Teaching–learning-based optimization algorithm with hybrid learning strategy. J. Zhejiang Univ. Eng. Sci. 2017, 51, 1024–1031. [Google Scholar]
  11. Wu, D.; Wang, S.; Liu, Q.; Abualigah, L.; Jia, H. An Improved Teaching-Learning-Based Optimization Algorithm with Reinforcement Learning Strategy for Solving Optimization Problems. Comput. Intell. Neurosci. 2022, 2022, 1535957. [Google Scholar] [CrossRef] [PubMed]
  12. Yu, Y.; Zhang, W. A teaching-learning-based optimization algorithm with reinforcement learning to address wind farm layout optimization problem. Appl. Soft Comput. 2024, 151, 111135. [Google Scholar] [CrossRef]
  13. Shukla, A.K.; Singh, P.; Vardhan, M. An adaptive inertia weight teaching-learning-based optimization algorithm and its applications. Appl. Math. Model. 2020, 77, 309–326. [Google Scholar] [CrossRef]
  14. Eirgash, M.A.; Toğan, V.; Dede, T.; Başağa, H.B. Modified dynamic opposite learning assisted TLBO in solving Time-Cost optimization in generalized construction projects. Structures 2023, 53, 806–821. [Google Scholar] [CrossRef]
  15. Ram, S.D.K.; Srivastava, S.; Mishra, K.K. Redefining teaching-and-learning-process in TLBO and its application in the cloud. Appl. Soft Comput. 2023, 135, 110017. [Google Scholar]
  16. Chen, Z.; Liu, Y.; Yang, Z.; Fu, X.; Tan, J.; Yang, X. An enhanced teaching-learning-based optimization algorithm with self-adaptive and learning operators and its search bias towards origin. Swarm Evol. Comput. 2021, 60, 100766. [Google Scholar] [CrossRef]
  17. Tang, H.; Fang, B.; Liu, R.; Li, Y.; Guo, S. A hybrid teaching and learning-based optimization algorithm for distributed sand casting job-shop scheduling problem. Appl. Soft Comput. 2022, 120, 108694. [Google Scholar] [CrossRef]
  18. Li, S.; Gong, W.; Wang, L.; Yan, X.; Hu, C. A hybrid adaptive teaching–learning-based optimization and differential evolution for parameter identification of photovoltaic models. Energy Convers. Manag. 2020, 225, 113474. [Google Scholar] [CrossRef]
  19. Tanmay, K.; Harish, G. LSMA-TLBO: A hybrid SMA-TLBO algorithm with lévy flight based mutation for numerical optimization and engineering design problems. Adv. Eng. Softw. 2022, 172, 103185. [Google Scholar]
  20. Pickard, J.K.; Carretero, J.A.; Bhavsar, V.C. On the convergence and origin bias of the Teaching-Learning-Based-Optimization algorithm. Appl. Soft Comput. 2016, 46, 115–127. [Google Scholar] [CrossRef]
  21. Yang, Z.; Tang, K.; Yao, X. Large-scale evolutionary optimization using cooperative coevolution. Inf. Sci. 2008, 178, 2985–2999. [Google Scholar] [CrossRef]
  22. Cheng, R.; Jin, Y. A Competitive Swarm Optimizer for Large Scale Optimization. IEEE Trans. Cybern. 2015, 45, 191–204. [Google Scholar] [CrossRef] [PubMed]
  23. Chang, T.J.; Meade, N.; Beasley, J.E.; Sharaiha, Y.M. Heuristics for cardinality constrained portfolio optimization. Comput. Oper. Res. 2000, 27, 1271–1302. [Google Scholar] [CrossRef]
  24. Cura, T. A rapidly converging artificial bee colony algorithm for portfolio optimization. Knowl.-Based Syst. 2021, 233, 107505. [Google Scholar] [CrossRef]
  25. Tuo, S. A Modified Harmony Search Algorithm for Portfolio Optimization Problems. Econ. Comput. Econ. Cybern. Stud. Res. 2016, 52, 3111–3326. [Google Scholar]
  26. Tuo, S.; He, H. Solving complex cardinality-constrained mean-variance portfolio optimization problems using hybrid HS and TLBO algorithm. Econ. Comput. Econ. Cybern. Stud. Res. 2018, 52, 231–248. [Google Scholar]
Figure 1. Geometric analysis of the teaching phase: (a) perturbation box in the “teaching” phase; (b) perturbation box when the original algorithm converges; (c) perturbation box when the improved algorithm converges.
Figure 2. Geometric analysis of the learning phase: (a) perturbation box of the original algorithm; (b) perturbation box of the improved algorithm.
Figure 3. The convergence curves of the algorithms.
Figure 4. Pre-course random crossover self-study.
Figure 5. Geometric analysis of population mean positions: (a) original algorithm; (b) improved algorithm.
Figure 6. Flow chart of the BLTLBO algorithm.
Figure 7. Convergence plots and box plots of benchmark functions F4–F6.
Figure 8. Convergence plots of benchmark function F18: (a) D = 100; (b) D = 200; (c) D = 500.
Figure 9. Convergence plots of benchmark functions F4–F6.
Figure 10. Population diversity curves.
Figure 11. Efficient frontier curves of the four data sets.
Table 1. Experimental results on test functions (standard and shifted).

Function | Indicator | TLBO | TLBO1 | TLBO2 | TLBO3 | TLBO4
Ackley | Mean | 6.04 × 10^−15 | 2.05 | 2.29 × 10^−3 | 4.78 × 10^−4 | 4.84 × 10^−4
Ackley | Std | 1.81 × 10^−15 | 2.99 | 1.02 × 10^−2 | 6.20 × 10^−4 | 5.44 × 10^−4
Ackley | Time (s) | 1.91 | 2.87 | 2.50 | 3.00 | 2.86
Rastrigin | Mean | 0 | 2.42 × 10^2 | 2.24 × 10^2 | 2.88 × 10^2 | 2.18 × 10^2
Rastrigin | Std | 0 | 9.29 × 10^1 | 5.23 × 10^1 | 1.96 × 10^2 | 1.35 × 10^2
Rastrigin | Time (s) | 2.30 | 3.40 | 3.55 | 3.60 | 3.80
Griewank | Mean | 0 | 2.33 × 10^−16 | 8.32 × 10^−10 | 4.20 × 10^−9 | 4.94 × 10^−4
Griewank | Std | 0 | 3.38 × 10^−16 | 3.72 × 10^−9 | 1.88 × 10^−8 | 2.20 × 10^−3
Griewank | Time (s) | 2.19 | 2.62 | 3.09 | 3.32 | 3.39
Shifted Ackley | Mean | 1.92 × 10^1 | 1.93 × 10^1 | 9.87 × 10^−1 | 1.43 × 10^−2 | 4.22 × 10^−3
Shifted Ackley | Std | 2.09 × 10^−1 | 3.36 × 10^−1 | 1.49 | 4.44 × 10^−2 | 7.90 × 10^−3
Shifted Ackley | Time (s) | 6.59 | 6.50 | 6.29 | 6.20 | 6.01
Shifted Rastrigin | Mean | 8.60 × 10^2 | 8.93 × 10^2 | 2.32 × 10^2 | 2.35 × 10^2 | 2.10 × 10^2
Shifted Rastrigin | Std | 5.39 × 10^1 | 1.06 × 10^2 | 4.48 × 10^1 | 6.73 × 10^1 | 4.24 × 10^1
Shifted Rastrigin | Time (s) | 6.35 | 6.22 | 6.16 | 6.35 | 6.12
Shifted Griewank | Mean | 4.79 × 10^1 | 5.00 × 10^1 | 3.22 × 10^−1 | 1.85 × 10^−3 | 9.86 × 10^−4
Shifted Griewank | Std | 5.95 × 10^1 | 3.97 × 10^1 | 9.44 × 10^−1 | 4.98 × 10^−3 | 3.14 × 10^−3
Shifted Griewank | Time (s) | 7.31 | 7.07 | 7.04 | 6.83 | 6.77
Table 2. Test functions.

Function | Name | Characteristics
F1 | Shifted and Rotated Rosenbrock's Function | Shifted, rotated, non-separable
F2 | Shifted and Rotated Ackley's Function | Shifted, rotated, non-separable
F3 | Shifted and Rotated Weierstrass Function | Shifted, rotated, non-separable
F4 | Shifted and Rotated Griewank's Function | Shifted, rotated, non-separable
F5 | Shifted Rastrigin's Function | Shifted, separable
F6 | Shifted and Rotated Rastrigin's Function | Shifted, rotated, separable
F7 | Shifted Schwefel's Function | Shifted, separable
F8 | Shifted and Rotated Schwefel's Function | Shifted, rotated, non-separable
F9 | Shifted and Rotated Katsuura Function | Shifted, rotated, non-separable
F10 | Shifted and Rotated HappyCat Function | Shifted, rotated, non-separable
F11 | Shifted and Rotated HGBat Function | Shifted, rotated, non-separable
F12 | Shifted and Rotated Expanded Griewank's plus Rosenbrock's Function | Shifted, rotated, non-separable
F13 | Shifted and Rotated Expanded Scaffer's F6 Function | Shifted, rotated, non-separable
F14 | Shifted Rosenbrock's Function | Shifted, non-separable, expandable
F15 | Shifted Rastrigin's Function | Shifted, separable, expandable
F16 | Shifted Griewank's Function | Shifted, non-separable, expandable
F17 | Shifted Ackley's Function | Shifted, separable, expandable
F18 | FastFractal "DoubleDip" Function | Non-separable, expandable
Table 3. Statistical results of 30 independent experiments (D = 100).

Function | Indicator | BTLBO | SHSLTLBO | ITLBO | DSTLBO | RLTLBO | ATLDE | GMTLBO | BLTLBO
F1 | Mean | 2.32 × 10^2 | 2.77 × 10^2 | 1.25 × 10^4 | 2.19 × 10^2 | 4.72 × 10^2 | 4.72 × 10^2 | 4.72 × 10^2 | 1.85 × 10^2
F1 | Std | 4.75 × 10^1 | 4.65 × 10^1 | 2.55 × 10^3 | 3.51 × 10^1 | 6.56 × 10^1 | 4.78 × 10^1 | 8.36 × 10^1 | 4.10 × 10^1
F1 | Time (s) | 41.28 | 14.03 | 11.61 | 18.03 | 32.59 | 73.30 | 13.63 | 20.80
F2 | Mean | 2.02 × 10^1 | 2.13 × 10^1 | 2.12 × 10^1 | 2.01 × 10^1 | 2.13 × 10^1 | 2.13 × 10^1 | 2.13 × 10^1 | 2.00 × 10^1
F2 | Std | 1.71 × 10^−1 | 2.83 × 10^−2 | 4.30 × 10^−2 | 7.56 × 10^−2 | 1.58 × 10^−2 | 2.35 × 10^−2 | 2.47 × 10^−2 | 4.28 × 10^−6
F2 | Time (s) | 46.50 | 20.86 | 13.73 | 19.53 | 36.95 | 75.49 | 15.34 | 20.75
F3 | Mean | 8.79 × 10^1 | 8.78 × 10^1 | 1.34 × 10^2 | 1.06 × 10^2 | 1.20 × 10^2 | 1.20 × 10^2 | 1.20 × 10^2 | 1.23 × 10^1
F3 | Std | 6.42 | 4.05 | 5.33 | 6.73 | 4.13 | 6.54 | 4.53 | 4.22
F3 | Time (s) | 220.60 | 187.35 | 183.97 | 197.65 | 462.64 | 240.25 | 184.71 | 188.78
F4 | Mean | 7.12 × 10^−2 | 2.46 × 10^−2 | 9.22 × 10^2 | 3.30 × 10^−1 | 1.61 | 1.61 | 1.61 | 0
F4 | Std | 1.36 × 10^−1 | 4.55 × 10^−2 | 1.39 × 10^2 | 1.10 | 1.82 | 5.77 × 10^−2 | 2.13 × 10^−1 | 0
F4 | Time (s) | 42.18 | 16.26 | 14.71 | 19.67 | 37.84 | 75.19 | 15.96 | 20.55
F5 | Mean | 2.01 × 10^2 | 3.75 × 10^2 | 8.85 × 10^2 | 4.69 × 10^2 | 5.68 × 10^2 | 5.68 × 10^2 | 5.68 × 10^2 | 0
F5 | Std | 2.77 × 10^1 | 4.21 × 10^1 | 1.39 × 10^2 | 3.24 × 10^1 | 2.67 × 10^1 | 3.11 × 10^1 | 2.73 × 10^1 | 0
F5 | Time (s) | 33.05 | 8.76 | 6.07 | 11.06 | 15.83 | 72.64 | 7.24 | 11.79
F6 | Mean | 3.57 × 10^2 | 3.80 × 10^2 | 1.28 × 10^3 | 5.19 × 10^2 | 7.92 × 10^2 | 7.92 × 10^2 | 7.92 × 10^2 | 1.38 × 10^2
F6 | Std | 8.47 × 10^1 | 3.18 × 10^1 | 1.74 × 10^2 | 4.45 × 10^1 | 4.60 × 10^1 | 4.99 × 10^1 | 5.79 × 10^1 | 2.22 × 10^1
F6 | Time (s) | 41.13 | 16.21 | 14.16 | 19.28 | 36.20 | 79.18 | 15.13 | 20.48
F7 | Mean | 7.66 × 10^2 | 1.01 × 10^4 | 2.47 × 10^4 | 1.18 × 10^4 | 1.43 × 10^4 | 1.43 × 10^4 | 1.43 × 10^4 | 2.95 × 10^−1
F7 | Std | 3.49 × 10^2 | 2.19 × 10^3 | 1.21 × 10^3 | 1.23 × 10^3 | 1.06 × 10^3 | 9.83 × 10^2 | 1.76 × 10^3 | 1.81 × 10^−1
F7 | Time (s) | 41.12 | 15.84 | 12.36 | 17.99 | 32.57 | 78.31 | 14.82 | 18.71
F8 | Mean | 1.09 × 10^4 | 3.01 × 10^4 | 2.46 × 10^4 | 1.42 × 10^4 | 1.47 × 10^4 | 1.47 × 10^4 | 1.47 × 10^4 | 1.26 × 10^4
F8 | Std | 1.33 × 10^3 | 4.50 × 10^2 | 1.12 × 10^3 | 1.08 × 10^3 | 1.48 × 10^3 | 1.09 × 10^3 | 1.65 × 10^3 | 1.21 × 10^3
F8 | Time (s) | 48.95 | 28.49 | 20.13 | 26.51 | 52.80 | 84.28 | 21.79 | 28.12
F9 | Mean | 5.09 × 10^−1 | 4.02 | 1.94 | 1.83 | 3.89 | 3.89 | 3.89 | 3.55 × 10^−1
F9 | Std | 1.64 × 10^−1 | 1.82 × 10^−1 | 3.80 × 10^−1 | 4.26 × 10^−1 | 3.29 × 10^−1 | 1.40 | 4.53 × 10^−1 | 7.24 × 10^−2
F9 | Time (s) | 79.45 | 53.51 | 46.10 | 52.87 | 118.50 | 111.27 | 49.69 | 54.23
F10 | Mean | 4.49 × 10^−1 | 5.95 × 10^−1 | 4.85 | 5.88 × 10^−1 | 6.76 × 10^−1 | 6.76 × 10^−1 | 6.76 × 10^−1 | 4.28 × 10^−1
F10 | Std | 6.22 × 10^−2 | 5.77 × 10^−2 | 3.82 × 10^−1 | 6.77 × 10^−2 | 8.66 × 10^−2 | 5.66 × 10^−2 | 8.40 × 10^−2 | 4.79 × 10^−2
F10 | Time (s) | 38.27 | 18.05 | 12.19 | 18.01 | 32.60 | 76.59 | 14.17 | 19.03
F11 | Mean | 2.98 × 10^−1 | 3.31 × 10^−1 | 2.62 × 10^2 | 3.46 × 10^−1 | 3.17 × 10^−1 | 3.17 × 10^−1 | 3.17 × 10^−1 | 2.96 × 10^−1
F11 | Std | 2.16 × 10^−2 | 3.03 × 10^−2 | 3.03 × 10^1 | 1.14 × 10^−1 | 4.88 × 10^−2 | 3.13 × 10^−2 | 1.48 × 10^−1 | 2.87 × 10^−2
F11 | Time (s) | 39.58 | 19.11 | 12.50 | 18.34 | 33.55 | 78.65 | 14.30 | 19.54
F12 | Mean | 3.21 × 10^1 | 1.46 × 10^2 | 1.07 × 10^6 | 5.19 × 10^2 | 4.22 × 10^3 | 4.22 × 10^3 | 4.22 × 10^3 | 1.68 × 10^1
F12 | Std | 6.11 | 4.48 × 10^1 | 3.44 × 10^5 | 3.20 × 10^2 | 1.83 × 10^3 | 3.12 × 10^1 | 1.53 × 10^3 | 2.60
F12 | Time (s) | 42.96 | 16.70 | 14.67 | 19.91 | 37.73 | 79.83 | 15.84 | 20.81
F13 | Mean | 3.95 × 10^1 | 4.63 × 10^1 | 4.31 × 10^1 | 4.22 × 10^1 | 4.49 × 10^1 | 4.49 × 10^1 | 4.49 × 10^1 | 3.91 × 10^1
F13 | Std | 1.13 | 3.46 × 10^−1 | 5.31 × 10^−1 | 1.01 | 4.74 × 10^−1 | 2.69 | 1.05 | 1.11
F13 | Time (s) | 44.03 | 21.17 | 14.22 | 20.57 | 39.17 | 78.56 | 16.32 | 21.96
Table 4. Friedman test compared with other excellent variants of TLBO.

Algorithm | BTLBO | SHSLTLBO | ITLBO | DSTLBO | RLTLBO | ATLDE | GMTLBO | BLTLBO
Ranking | 2.23 | 4.77 | 7.00 | 3.62 | 5.77 | 5.77 | 5.77 | 1.08
Table 5. p-values of Wilcoxon signed-rank tests (D = 100).

Function | BTLBO | SHSLTLBO | ITLBO | DSTLBO | RLTLBO | ATLDE | GMTLBO
F1 | 3.44 × 10^−3 | 3.20 × 10^−6 | 6.58 × 10^−8 | 1.09 × 10^−2 | 6.59 × 10^−8 | 4.09 × 10^−2 | 1.61 × 10^−7
F2 | 9.08 × 10^−7 | 7.43 × 10^−10 | 4.18 × 10^−9 | 2.01 × 10^−3 | 4.68 × 10^−10 | 7.43 × 10^−10 | 1.10 × 10^−9
F3 | 6.78 × 10^−8 | 6.78 × 10^−8 | 6.43 × 10^−8 | 6.74 × 10^−8 | 6.58 × 10^−8 | 6.79 × 10^−8 | 6.70 × 10^−8
F4 | 6.33 × 10^−8 | 6.33 × 10^−8 | 6.33 × 10^−8 | 6.33 × 10^−8 | 6.30 × 10^−8 | 6.33 × 10^−8 | 6.33 × 10^−8
F5 | 5.47 × 10^−8 | 5.46 × 10^−8 | 5.48 × 10^−8 | 5.47 × 10^−8 | 5.47 × 10^−8 | 5.48 × 10^−8 | 5.47 × 10^−8
F6 | 6.78 × 10^−8 | 6.79 × 10^−8 | 6.75 × 10^−8 | 6.79 × 10^−8 | 6.79 × 10^−8 | 6.80 × 10^−8 | 6.79 × 10^−8
F7 | 6.63 × 10^−8 | 6.68 × 10^−8 | 6.68 × 10^−8 | 6.65 × 10^−8 | 6.64 × 10^−8 | 6.70 × 10^−8 | 6.68 × 10^−8
F8 | 3.72 × 10^−4 | 6.62 × 10^−8 | 6.75 × 10^−8 | 1.98 × 10^−4 | 1.90 × 10^−5 | 5.11 × 10^−2 | 3.90 × 10^−5
F9 | 1.63 × 10^−3 | 6.80 × 10^−8 | 6.80 × 10^−8 | 6.80 × 10^−8 | 6.80 × 10^−8 | 7.20 × 10^−2 | 6.80 × 10^−8
F10 | 2.50 × 10^−1 | 6.78 × 10^−8 | 6.77 × 10^−8 | 1.32 × 10^−7 | 6.78 × 10^−8 | 5.19 × 10^−7 | 6.77 × 10^−8
F11 | 9.46 × 10^−1 | 9.21 × 10^−4 | 6.80 × 10^−8 | 1.79 × 10^−2 | 2.18 × 10^−1 | 2.56 × 10^−2 | 5.63 × 10^−4
F12 | 6.74 × 10^−8 | 6.73 × 10^−8 | 6.73 × 10^−8 | 6.74 × 10^−8 | 6.75 × 10^−8 | 6.75 × 10^−8 | 6.75 × 10^−8
F13 | 1.55 × 10^−1 | 6.56 × 10^−8 | 6.63 × 10^−8 | 1.05 × 10^−7 | 6.64 × 10^−8 | 1.29 × 10^−5 | 6.63 × 10^−8
+ | 1 | 0 | 0 | 0 | 0 | 0 | 0
− | 9 | 13 | 13 | 13 | 12 | 11 | 13
≈ | 3 | 0 | 0 | 0 | 1 | 2 | 0
Table 6. Statistical results of 30 independent experiments (D = 100, D = 200, D = 500).

Function | Algorithm | Mean (D = 100) | Std | Time (s) | Mean (D = 200) | Std | Time (s) | Mean (D = 500) | Std | Time (s)
F14 | BTLBO | 2.21 × 10^2 | 1.10 × 10^2 | 22.06 | 9.40 × 10^2 | 6.44 × 10^2 | 45.22 | 2.01 × 10^5 | 4.28 × 10^5 | 138.28
F14 | CSO | 7.43 × 10^2 | 1.01 × 10^3 | 1.83 | 3.72 × 10^2 | 2.55 × 10^2 | 10.04 | 1.04 × 10^3 | 3.11 × 10^2 | 56.30
F14 | DECCG | 4.34 × 10^2 | 1.56 × 10^2 | 1.61 | 8.89 × 10^2 | 2.01 × 10^2 | 9.67 | 2.17 × 10^3 | 1.70 × 10^3 | 22.71
F14 | SHSLTLBO | 9.19 × 10^2 | 6.21 × 10^2 | 8.51 | 6.82 × 10^6 | 9.37 × 10^6 | 17.45 | 2.80 × 10^9 | 1.45 × 10^9 | 70.67
F14 | ITLBO | 2.86 × 10^10 | 5.79 × 10^9 | 7.10 | 2.01 × 10^11 | 2.88 × 10^10 | 18.55 | 1.00 × 10^12 | 5.18 × 10^10 | 55.43
F14 | DSTLBO | 2.48 × 10^2 | 3.35 × 10^2 | 11.61 | 6.16 × 10^3 | 1.30 × 10^4 | 14.70 | 2.77 × 10^8 | 3.71 × 10^8 | 89.60
F14 | RLTLBO | 1.32 × 10^8 | 4.30 × 10^8 | 18.13 | 7.63 × 10^8 | 5.90 × 10^8 | 31.75 | 7.22 × 10^9 | 3.13 × 10^9 | 143.64
F14 | ATLDE | 1.32 × 10^8 | 6.81 × 10^7 | 33.87 | 7.63 × 10^8 | 7.64 × 10^8 | 79.48 | 7.22 × 10^9 | 2.25 × 10^9 | 774.00
F14 | GMTLBO | 1.32 × 10^8 | 4.01 × 10^2 | 7.75 | 7.63 × 10^8 | 2.16 × 10^3 | 19.15 | 7.22 × 10^9 | 7.04 × 10^8 | 65.98
F14 | BLTLBO | 1.78 × 10^2 | 6.39 × 10^1 | 11.14 | 4.72 × 10^2 | 7.75 × 10^1 | 27.79 | 3.24 × 10^3 | 4.09 × 10^2 | 138.27
F15 | BTLBO | 2.82 × 10^2 | 3.75 × 10^1 | 20.62 | 1.23 × 10^3 | 1.49 × 10^2 | 41.42 | 4.01 × 10^3 | 7.16 × 10^1 | 140.55
F15 | CSO | 4.86 × 10^1 | 6.79 | 1.57 | 1.54 × 10^2 | 2.09 × 10^1 | 11.09 | 6.72 × 10^2 | 4.55 × 10^1 | 56.34
F15 | DECCG | 2.48 × 10^2 | 1.84 × 10^1 | 1.49 | 3.10 × 10^2 | 1.17 × 10^1 | 7.59 | 3.70 × 10^2 | 1.73 × 10^1 | 23.46
F15 | SHSLTLBO | 5.11 × 10^2 | 4.76 × 10^1 | 6.50 | 1.29 × 10^3 | 7.10 × 10^1 | 19.46 | 3.88 × 10^3 | 1.26 × 10^2 | 81.38
F15 | ITLBO | 1.08 × 10^3 | 1.59 × 10^2 | 6.12 | 3.27 × 10^3 | 6.71 × 10^1 | 20.53 | 9.11 × 10^3 | 2.52 × 10^2 | 60.22
F15 | DSTLBO | 6.86 × 10^2 | 5.34 × 10^1 | 8.32 | 1.52 × 10^3 | 5.10 × 10^1 | 16.91 | 4.04 × 10^3 | 8.04 × 10^1 | 96.72
F15 | RLTLBO | 8.51 × 10^2 | 5.35 × 10^1 | 14.82 | 1.71 × 10^3 | 4.35 × 10^1 | 23.67 | 4.29 × 10^3 | 7.01 × 10^1 | 172.50
F15 | ATLDE | 8.51 × 10^2 | 6.40 × 10^1 | 32.94 | 1.71 × 10^3 | 6.73 × 10^1 | 81.39 | 4.29 × 10^3 | 1.48 × 10^2 | 772.48
F15 | GMTLBO | 8.51 × 10^2 | 5.45 × 10^1 | 6.41 | 1.71 × 10^3 | 7.09 × 10^1 | 12.62 | 4.29 × 10^3 | 6.58 × 10^1 | 68.92
F15 | BLTLBO | 0 | 0 | 6.95 | 0 | 0 | 27.37 | 1.17 × 10^2 | 9.83 | 132.97
F16 | BTLBO | 3.24 × 10^−2 | 5.19 × 10^−2 | 22.02 | 1.10 | 2.10 | 49.10 | 1.14 | 1.73 | 171.90
F16 | CSO | 0.00 | 0.00 | 1.93 | 0.00 | 0.00 | 12.03 | 3.94 × 10^−3 | 6.79 × 10^−3 | 65.09
F16 | DECCG | 1.11 × 10^−3 | 4.95 × 10^−3 | 2.00 | 4.19 × 10^−3 | 7.61 × 10^−3 | 12.88 | 2.46 × 10^−3 | 5.68 × 10^−3 | 39.02
F16 | SHSLTLBO | 3.14 × 10^−1 | 4.29 × 10^−1 | 8.30 | 5.75 | 5.86 | 22.93 | 1.65 × 10^2 | 3.46 × 10^1 | 111.80
F16 | ITLBO | 8.80 × 10^2 | 9.56 × 10^1 | 7.46 | 3.25 × 10^3 | 2.63 × 10^2 | 24.35 | 1.22 × 10^4 | 4.07 × 10^2 | 106.12
F16 | DSTLBO | 1.65 | 4.93 | 10.12 | 3.81 × 10^−1 | 3.63 × 10^−1 | 18.13 | 6.59 | 1.09 × 10^1 | 124.97
F16 | RLTLBO | 1.34 × 10^1 | 1.89 × 10^1 | 18.85 | 5.84 × 10^1 | 4.17 × 10^1 | 29.87 | 3.68 × 10^2 | 1.53 × 10^2 | 244.16
F16 | ATLDE | 1.34 × 10^1 | 3.92 × 10^−1 | 34.20 | 5.84 × 10^1 | 1.52 | 82.90 | 3.68 × 10^2 | 2.80 × 10^1 | 802.68
F16 | GMTLBO | 1.34 × 10^1 | 8.47 × 10^−1 | 7.70 | 5.84 × 10^1 | 5.87 × 10^−1 | 13.40 | 3.68 × 10^2 | 8.15 × 10^1 | 107.63
F16 | BLTLBO | 0 | 0 | 8.11 | 0 | 0 | 30.26 | 3.28 × 10^−3 | 7.33 × 10^−3 | 171.65
F17 | BTLBO | 2.25 | 2.55 × 10^−1 | 21.36 | 8.86 | 8.43 × 10^−1 | 45.47 | 1.88 × 10^1 | 1.11 × 10^−1 | 141.07
F17 | CSO | 0 | 0 | 1.63 | 0.00 | 0.00 | 9.98 | 9.78 × 10^−1 | 3.92 × 10^−1 | 54.82
F17 | DECCG | 4.00 × 10^−4 | 1.98 × 10^−4 | 1.56 | 9.56 × 10^−2 | 3.02 × 10^−1 | 9.62 | 2.16 × 10^−7 | 6.79 × 10^−7 | 28.36
F17 | SHSLTLBO | 1.16 × 10^1 | 1.42 | 7.28 | 1.64 × 10^1 | 5.62 × 10^−1 | 18.93 | 1.90 × 10^1 | 1.20 × 10^−1 | 82.26
F17 | ITLBO | 2.00 × 10^1 | 2.29 × 10^−1 | 6.76 | 2.09 × 10^1 | 5.93 × 10^−2 | 21.14 | 2.12 × 10^1 | 3.71 × 10^−2 | 62.18
F17 | DSTLBO | 1.69 × 10^1 | 8.75 × 10^−1 | 9.75 | 1.88 × 10^1 | 2.17 × 10^−1 | 15.09 | 1.92 × 10^1 | 2.72 × 10^−2 | 96.71
F17 | RLTLBO | 1.89 × 10^1 | 5.82 × 10^−1 | 16.01 | 1.94 × 10^1 | 2.70 × 10^−1 | 24.86 | 1.94 × 10^1 | 8.38 × 10^−2 | 184.21
F17 | ATLDE | 1.89 × 10^1 | 1.99 | 33.61 | 1.94 × 10^1 | 3.88 × 10^−1 | 82.45 | 1.94 × 10^1 | 8.42 × 10^−2 | 792.81
F17 | GMTLBO | 1.89 × 10^1 | 1.45 × 10^−1 | 6.57 | 1.94 × 10^1 | 4.17 × 10^−2 | 10.94 | 1.94 × 10^1 | 1.47 × 10^−1 | 76.02
F17 | BLTLBO | 0 | 0 | 8.28 | 0 | 0 | 25.18 | 1.67 | 1.05 × 10^−1 | 134.67
F18 | BTLBO | −1.43 × 10^3 | 2.06 × 10^1 | 44.73 | −2.71 × 10^3 | 3.59 × 10^1 | 129.15 | −6.21 × 10^3 | 1.64 × 10^2 | 502.70
F18 | CSO | −1.48 × 10^3 | 2.55 × 10^1 | 9.20 | −2.88 × 10^3 | 1.51 × 10^1 | 84.66 | −6.96 × 10^3 | 1.12 × 10^1 | 419.17
F18 | DECCG | −1.20 × 10^3 | 1.02 × 10^1 | 9.16 | −2.12 × 10^3 | 9.30 × 10^1 | 82.05 | −5.52 × 10^3 | 3.81 × 10^1 | 388.11
F18 | SHSLTLBO | −9.51 × 10^2 | 8.20 × 10^1 | 32.16 | −1.88 × 10^3 | 1.91 × 10^2 | 104.06 | −4.93 × 10^3 | 4.48 × 10^1 | 442.04
F18 | ITLBO | −1.00 × 10^3 | 3.29 × 10^1 | 23.85 | −1.76 × 10^3 | 4.58 × 10^1 | 93.57 | −3.81 × 10^3 | 6.04 × 10^1 | 360.80
F18 | DSTLBO | −1.21 × 10^3 | 4.55 × 10^1 | 28.18 | −2.21 × 10^3 | 8.29 × 10^1 | 65.23 | −5.00 × 10^3 | 1.41 × 10^2 | 434.37
F18 | RLTLBO | −1.11 × 10^3 | 7.41 × 10^1 | 62.96 | −2.02 × 10^3 | 1.33 × 10^2 | 129.53 | −5.10 × 10^3 | 3.72 × 10^2 | 947.70
F18 | ATLDE | −1.11 × 10^3 | 2.38 × 10^1 | 52.81 | −2.02 × 10^3 | 2.87 × 10^1 | 124.31 | −5.10 × 10^3 | 8.36 × 10^1 | 1052.60
F18 | GMTLBO | −1.11 × 10^3 | 4.83 × 10^1 | 25.26 | −2.02 × 10^3 | 1.02 × 10^2 | 63.40 | −5.10 × 10^3 | 3.79 × 10^2 | 387.87
F18 | BLTLBO | −1.55 × 10^3 | 2.88 | 25.56 | −3.02 × 10^3 | 6.87 | 93.81 | −7.26 × 10^3 | 1.89 × 10^1 | 456.15
Table 7. Friedman test compared with the other comparison algorithms.

Dimension | BTLBO | CSO | DECCG | SHSLTLBO | ITLBO | DSTLBO | RLTLBO | ATLDE | GMTLBO | BLTLBO
D = 100 | 3.4 | 2.4 | 3.6 | 6.2 | 9.8 | 5.0 | 7.8 | 7.8 | 7.8 | 1.2
D = 200 | 4 | 1.6 | 3.4 | 6.2 | 10 | 5 | 7.8 | 7.8 | 7.8 | 1.4
D = 500 | 4 | 2.2 | 2 | 6 | 10 | 6 | 7.6 | 7.6 | 7.6 | 2
Table 8. p-values of Wilcoxon signed-rank tests (D = 100, D = 200, D = 500).

Dimension | Function | BTLBO | CSO | DECCG | SHSLTLBO | ITLBO | DSTLBO | RLTLBO | ATLDE | GMTLBO
D = 100 | F14 | 1.59 × 10^−1 | 2.23 × 10^−1 | 2.75 × 10^−7 | 6.80 × 10^−8 | 6.80 × 10^−8 | 6.95 × 10^−1 | 6.80 × 10^−8 | 4.22 × 10^−7 | 6.79 × 10^−8
D = 100 | F15 | 6.71 × 10^−8 | 6.67 × 10^−8 | 6.72 × 10^−8 | 6.72 × 10^−8 | 6.73 × 10^−8 | 6.74 × 10^−8 | 6.72 × 10^−8 | 6.73 × 10^−8 | 6.70 × 10^−8
D = 100 | F16 | 6.67 × 10^−5 | NaN | 8.01 × 10^−9 | 7.99 × 10^−9 | 7.99 × 10^−9 | 8.01 × 10^−9 | 7.99 × 10^−9 | 1.05 × 10^−7 | 7.99 × 10^−9
D = 100 | F17 | 8.01 × 10^−9 | NaN | 8.01 × 10^−9 | 8.01 × 10^−9 | 8.01 × 10^−9 | 8.01 × 10^−9 | 8.01 × 10^−9 | 8.01 × 10^−9 | 8.01 × 10^−9
D = 100 | F18 | 3.48 × 10^−8 | 3.33 × 10^−8 | 2.60 × 10^−8 | 3.66 × 10^−8 | 3.58 × 10^−8 | 3.53 × 10^−8 | 3.60 × 10^−8 | 3.46 × 10^−8 | 3.63 × 10^−8
D = 200 | F14 | 2.11 × 10^−2 | 4.57 × 10^−3 | 3.28 × 10^−4 | 1.82 × 10^−4 | 1.81 × 10^−4 | 1.82 × 10^−4 | 1.82 × 10^−4 | 1.82 × 10^−4 | 1.81 × 10^−4
D = 200 | F15 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.82 × 10^−4 | 1.83 × 10^−4 | 1.82 × 10^−4 | 1.79 × 10^−4 | 1.82 × 10^−4 | 1.81 × 10^−4 | 1.81 × 10^−4
D = 200 | F16 | 6.39 × 10^−5 | NaN | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5
D = 200 | F17 | 6.39 × 10^−5 | NaN | 6.39 × 10^−5 | 6.16 × 10^−5 | 4.73 × 10^−5 | 5.47 × 10^−5 | 6.25 × 10^−5 | 5.94 × 10^−5 | 3.32 × 10^−5
D = 200 | F18 | 1.45 × 10^−4 | 1.39 × 10^−4 | 1.43 × 10^−4 | 1.44 × 10^−4 | 1.45 × 10^−4 | 1.44 × 10^−4 | 1.45 × 10^−4 | 1.41 × 10^−4 | 1.45 × 10^−4
D = 500 | F14 | 1.83 × 10^−4 | 1.83 × 10^−4 | 2.83 × 10^−3 | 1.83 × 10^−4 | 1.78 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4
D = 500 | F15 | 1.79 × 10^−4 | 1.83 × 10^−4 | 1.82 × 10^−4 | 1.82 × 10^−4 | 1.83 × 10^−4 | 1.78 × 10^−4 | 1.80 × 10^−4 | 1.83 × 10^−4 | 1.77 × 10^−4
D = 500 | F16 | 5.83 × 10^−4 | 6.95 × 10^−2 | 1.11 × 10^−2 | 1.83 × 10^−4 | 1.79 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 2.46 × 10^−4 | 1.83 × 10^−4
D = 500 | F17 | 1.59 × 10^−4 | 1.82 × 10^−4 | 1.82 × 10^−4 | 1.71 × 10^−4 | 1.29 × 10^−4 | 6.39 × 10^−5 | 1.31 × 10^−4 | 1.58 × 10^−4 | 1.59 × 10^−4
D = 500 | F18 | 1.55 × 10^−4 | 1.49 × 10^−4 | 1.43 × 10^−4 | 1.44 × 10^−4 | 1.42 × 10^−4 | 1.41 × 10^−4 | 1.44 × 10^−4 | 1.41 × 10^−4 | 1.46 × 10^−4
+ | | 0 | 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0
− | | 14 | 8 | 13 | 15 | 15 | 15 | 15 | 15 | 15
≈ | | 1 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Table 9. Test results on the four data sets.

Data Set | Indicator | GA | PSO | HSDS | HS-TLBO | BLTLBO
DAX100 (D = 85) | MED | 1.20 × 10^−3 | 1.40 × 10^−3 | 3.39 × 10^−6 | 1.80 × 10^−6 | 1.40 × 10^−6
DAX100 (D = 85) | VRE | 3.10 × 10^−1 | 3.90 × 10^−1 | 2.01 × 10^−1 | 9.60 × 10^−2 | 7.85 × 10^−2
DAX100 (D = 85) | MRE | 1.20 × 10^−1 | 1.30 × 10^−1 | 2.17 × 10^−2 | 9.96 × 10^−3 | 1.03 × 10^−2
FTSE100 (D = 89) | MED | 3.00 × 10^−4 | 3.30 × 10^−4 | 3.64 × 10^−6 | 4.75 × 10^−7 | 4.94 × 10^−7
FTSE100 (D = 89) | VRE | 5.00 × 10^−1 | 5.40 × 10^−1 | 2.57 × 10^−1 | 2.37 × 10^−2 | 2.27 × 10^−2
FTSE100 (D = 89) | MRE | 5.70 × 10^−2 | 6.40 × 10^−2 | 3.19 × 10^−2 | 5.86 × 10^−3 | 7.00 × 10^−3
S&P100 (D = 98) | MED | 6.20 × 10^−4 | 7.90 × 10^−4 | 3.86 × 10^−6 | 1.56 × 10^−6 | 1.55 × 10^−6
S&P100 (D = 98) | VRE | 6.10 × 10^−1 | 6.90 × 10^−1 | 2.88 × 10^−1 | 7.28 × 10^−2 | 7.13 × 10^−2
S&P100 (D = 98) | MRE | 2.10 × 10^−1 | 2.50 × 10^−1 | 2.68 × 10^−2 | 1.05 × 10^−2 | 1.10 × 10^−2
Nikkei (D = 225) | MED | 1.50 × 10^−3 | 2.90 × 10^−4 | 1.01 × 10^−5 | 8.33 × 10^−7 | 7.10 × 10^−7
Nikkei (D = 225) | VRE | 2.10 × 10^−1 | 4.30 × 10^−1 | 1.84 × 10^−1 | 6.36 × 10^−2 | 5.24 × 10^−2
Nikkei (D = 225) | MRE | 9.30 × 10^−1 | 1.40 × 10^−1 | 5.90 × 10^−2 | 1.34 × 10^−2 | 1.30 × 10^−2