4.1. Benchmark Functions and Performance Metrics
To comprehensively evaluate the proposed MST-DMOA, we adopt benchmark functions DF1–DF14 from the CEC 2018 dynamic multiobjective optimization benchmark test suite [32] in the experiments. Unlike other problem suites that represent only one or a few simple scenarios, the 14 DF benchmark functions cover diverse properties and represent a variety of real-world scenarios well, including time-dependent POF/PS geometries, irregular POF shapes, disconnectivity, knee regions/points and long tails. The DF test suite can therefore be used to effectively verify the performance of algorithms on a wide range of problems.
As a performance metric that reflects both diversity and convergence, the inverted generational distance (IGD) measures the average distance between the obtained POF and the corresponding real POF as follows:

$$\mathrm{IGD}(POF_t^*, POF_t) = \frac{\sum_{x \in POF_t^*} d(x, POF_t)}{\left| POF_t^* \right|},$$

where $POF_t^*$ and $POF_t$ are the real POF and the POF obtained by a DMOA at time $t$, respectively, $d(x, POF_t)$ is the minimum Euclidean distance between $x$ and the individuals in $POF_t$, and $\left| POF_t^* \right|$ is the cardinality of $POF_t^*$.
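As a concrete illustration, the IGD computation can be sketched in a few lines of NumPy (the function name `igd` is ours, not part of any compared algorithm):

```python
import numpy as np

def igd(obtained_pof, true_pof):
    """IGD: average, over the real POF, of the minimum Euclidean
    distance to the obtained POF (lower is better)."""
    obtained = np.asarray(obtained_pof, dtype=float)
    true = np.asarray(true_pof, dtype=float)
    # Pairwise distances between every real-POF point and every obtained point.
    dists = np.linalg.norm(true[:, None, :] - obtained[None, :, :], axis=2)
    return dists.min(axis=1).mean()

# A perfect approximation of the real POF yields an IGD of zero.
front = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(igd(front, front))  # 0.0
```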
The mean inverted generational distance (MIGD) is a variant of the IGD, defined as the average of the IGD values over the number of environmental changes:

$$\mathrm{MIGD} = \frac{1}{\left| T \right|} \sum_{t \in T} \mathrm{IGD}(POF_t^*, POF_t),$$

where $T$ is the set of discrete time points in a run and $\left| T \right|$ is the number of environmental changes. A smaller MIGD value indicates that the DMOA has better performance in terms of diversity and convergence.
4.2. Compared Algorithms and Experimental Settings
In order to verify the performance of the proposed MST-DMOA, five popular DMOAs are selected for comparison, namely KT-DMOA [8], KTS-DMOA [6], Tr-DMOA [3], SVR-DMOA [17] and KF-DMOA [20]. Among them, KT-DMOA, KTS-DMOA and Tr-DMOA are currently popular transfer learning-based DMOAs, while SVR-DMOA and KF-DMOA are classical prediction-based DMOAs. These algorithms are of a similar type to our method and perform well on DMOPs, so comparing against them demonstrates the performance of MST-DMOA objectively. For a fair comparison, the specific parameters of these algorithms are set according to the original references, and all compared methods use MOEA/D [33] as the static MOEA optimizer.
The time variable $t$ of DF1–DF14 is defined as

$$t = \frac{1}{n_t} \left\lfloor \frac{\tau}{\tau_t} \right\rfloor,$$

where $n_t$ and $\tau_t$ are the severity and frequency of environmental changes, respectively, and $\tau$ denotes the iteration counter. A low $n_t$ implies a severely changing environment and a low $\tau_t$ denotes a rapidly changing environment. For all benchmarks, three groups of environmental parameters $(n_t, \tau_t)$ are adopted [3], and the total number of generations is fixed to $50\tau_t$ to ensure that there are 50 environmental changes in each run. In addition, the dimension of the search space is set to 10, and the population size $N$ is fixed at 100 for bi-objective problems or 105 for three-objective problems [34]. Therefore, the stopping condition in Algorithm 1 is that all evaluations are exhausted, with a total of $50 N \tau_t$ evaluations.
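The update of the time variable can be written directly from the definition above (the parameter values below match the configuration used later for Figure 2):

```python
import math

def time_variable(tau, n_t, tau_t):
    """t = (1/n_t) * floor(tau / tau_t): the environment advances once
    every tau_t generations, in steps of size 1/n_t (severity)."""
    return math.floor(tau / tau_t) / n_t

# With (n_t, tau_t) = (10, 10): t stays at 0.0 for the first 10 generations,
# then increases by 0.1 after every further 10 generations.
print(time_variable(9, 10, 10), time_variable(10, 10, 10))  # 0.0 0.1
```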
4.3. Comparisons with Other DMOAs
In this section, the proposed MST-DMOA is compared with five other advanced DMOAs. The MIGD values of the six algorithms on the 14 benchmarks are reported in Table 2, where the best MIGD value for each test instance is highlighted in bold. The Wilcoxon rank-sum test [35] is performed at the 0.05 significance level to assess the statistical significance of the differences between MST-DMOA and the other algorithms. In addition, (+), (−) and (=) in Table 2 indicate that the performance of MST-DMOA is better than, worse than or similar to that of the corresponding competitor, respectively.
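The (+)/(−)/(=) labelling can be reproduced with SciPy's rank-sum test; the per-run MIGD samples below are synthetic stand-ins for two algorithms, not results from the paper:

```python
import numpy as np
from scipy.stats import ranksums  # two-sided Wilcoxon rank-sum test

rng = np.random.default_rng(0)
migd_ours = rng.normal(0.05, 0.01, 30)    # 30 independent runs, lower is better
migd_rival = rng.normal(0.10, 0.01, 30)

stat, p = ranksums(migd_ours, migd_rival)
if p >= 0.05:
    sign = "="                             # no significant difference
else:
    sign = "+" if migd_ours.mean() < migd_rival.mean() else "-"
print(sign)  # "+": the first algorithm is significantly better here
```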
As can be observed from Table 2, MST-DMOA achieves the best performance on 32 out of 42 test instances in terms of the MIGD. More precisely, MST-DMOA outperforms all competitors on all three test instances of DF1, DF2, DF3, DF4, DF5, DF7, DF10 and DF14. Moreover, MST-DMOA is slightly inferior on only one instance each of DF9, DF11 and DF12, which still indicates the competitive performance of our method in solving dynamic problems.
MST-DMOA is slightly inferior to the compared algorithms on DF6; this problem is recognized as challenging due to its knee regions/points and long tails [32]. Some edge solutions may be treated as noise and discarded by DBSCAN, even though they carry valuable information about the environment, which impairs the correct transfer of historical knowledge and slightly reduces the performance of MST-DMOA. It is worth noting that KT-DMOA focuses on selecting knee points as the knowledge for transfer learning, which makes it particularly effective in knee regions and leads to its better performance than MST-DMOA on DF6.
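This effect can be reproduced with scikit-learn's DBSCAN on a toy front (the `eps` and `min_samples` values are illustrative, not the settings used by MST-DMOA):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# A dense middle region of a front plus two far-away "long tail" solutions.
dense = np.linspace(0.4, 0.6, 20).reshape(-1, 1)
tails = np.array([[0.0], [1.0]])
points = np.vstack([dense, tails])

labels = DBSCAN(eps=0.05, min_samples=3).fit_predict(points)
print(labels[-2:])  # [-1 -1]: the tail solutions are labelled as noise
```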
As for DF8, although the POSs change over time, they have a stationary centroid. This centroid invariance can make the knowledge representations of significantly different environments nearly identical, which hinders the matching of similar environments and the reuse of appropriate historical knowledge.
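A small numerical illustration of this failure mode (the rotation here is an arbitrary change, not DF8's actual dynamics): two clearly different POSs can share exactly the same centroid, so a centroid-based signature cannot tell the environments apart.

```python
import numpy as np

# A line segment and its 90-degree rotation about its midpoint.
pos_a = np.stack([np.linspace(-1.0, 1.0, 50), np.zeros(50)], axis=1)
center = pos_a.mean(axis=0)
rot = np.array([[0.0, -1.0],
                [1.0,  0.0]])        # 90-degree rotation matrix
pos_b = (pos_a - center) @ rot.T + center

# The sets differ, yet their centroids coincide.
print(np.allclose(pos_a.mean(axis=0), pos_b.mean(axis=0)))  # True
```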
MST-DMOA performs more poorly than its competitors on the three instances of DF13. DF13 has continuous but disconnected POS geometries, which leaves some solutions on isolated segments with few neighbors; hence, they cannot be captured as knowledge by clustering. Such circumstances may lead to incorrect search directions due to insufficient knowledge, decreasing the performance of MST-DMOA through negative transfer.
In general, MST-DMOA is well suited to most DMOPs, because it obtains rich and valuable knowledge from multiple similar historical environments, and the high-quality target domain helps this knowledge adapt well to the current environment. However, when POSs with fixed centroids are clustered, the knowledge from different environments is difficult to distinguish, which may mislead the environment matching process. In addition, for scenarios in which the POSs are sparsely distributed, some key solutions may be treated as noise, resulting in insufficient acquired knowledge. These issues may cause MST-DMOA to fall slightly behind the compared algorithms.
Moreover, Figure 2 plots the IGD values obtained by the six algorithms after each environmental change for the 14 DF problems in the configuration $(n_t, \tau_t) = (10, 10)$. The horizontal coordinate represents the changing environment, while the vertical coordinate represents the IGD value of the algorithm in that environment, i.e., the average distance between the obtained POF and the corresponding real POF. We can observe from Figure 2 that most of the curves obtained by MST-DMOA lie at the bottom and are smoother than the other curves, indicating that, compared with the other algorithms, our method produces stable and accurate predictions in frequently changing environments. This verifies the effectiveness and stability of MST-DMOA in tracking varying POFs.
Figure 2. IGD values of the six algorithms on the fourteen DF problems with $(n_t, \tau_t) = (10, 10)$.
4.4. Effectiveness of Transfer Learning
In order to verify the effectiveness of transfer learning in MST-DMOA, a compared variant, MST-DMOA w/o TL, is constructed, where "w/o" stands for "without" and "TL" denotes transfer learning. More specifically, MST-DMOA w/o TL directly reuses the selected historical knowledge, i.e., it uses the solutions in the source domain as the initial population of the new environment without executing the transfer learning process.
Table 3 shows the MIGD values of MST-DMOA and MST-DMOA w/o TL on the 14 test problems in the configuration $(n_t, \tau_t) = (10, 10)$. MST-DMOA achieves the best performance on all test instances, indicating that transfer learning is effective for our method. This is because transfer learning can align the distributional differences between environments, adapting the historical knowledge to the current environment.
4.5. Adaptation Study
MST-DMOA is designed to generate a high-quality initial population for an SMOA when an environmental change occurs. To explore whether different SMOAs affect the performance of MST-DMOA, we select three popular SMOAs, namely MOEA/D [33], SPEA2 [36] and PAES [37], as baseline optimizers for MST-DMOA, yielding MST-MOEA/D, MST-SPEA2 and MST-PAES, respectively. In addition, to enable the three SMOAs to run independently in dynamic environments, they are modified into DMOAs by randomly reinitializing part of the population when the environment changes, thus obtaining DA-MOEA/D, DA-SPEA2 and DA-PAES. The average MIGD values of these six algorithms on the fourteen DF problems are shown in Table 4.
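The partial random reinitialization used to obtain the DA- variants can be sketched as follows (the replacement ratio and function name are illustrative choices of ours):

```python
import numpy as np

def partial_random_reinit(population, lower, upper, ratio=0.2, rng=None):
    """On an environmental change, replace a random `ratio` of the
    population with uniform random solutions to restore diversity."""
    rng = np.random.default_rng() if rng is None else rng
    pop = np.array(population, dtype=float)       # work on a copy
    n, dim = pop.shape
    k = max(1, int(round(ratio * n)))
    idx = rng.choice(n, size=k, replace=False)    # individuals to replace
    pop[idx] = rng.uniform(lower, upper, size=(k, dim))
    return pop
```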
We can observe from Table 4 that MST-MOEA/D, MST-SPEA2 and MST-PAES are significantly better than DA-MOEA/D, DA-SPEA2 and DA-PAES, respectively. Table 4 effectively demonstrates the outstanding performance of MST-DMOA, as it generates a high-quality initial population and accelerates convergence to the true POFs in the new environment. The effectiveness of MST-DMOA thus stems from the prediction of the initial population rather than from the particular SMOA. Therefore, the proposed MST-DMOA possesses high adaptability and outstanding performance and can collaborate with various SMOAs.