Article

A Metaheuristic Framework with Experience Reuse for Dynamic Multi-Objective Big Data Optimization

1 School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
2 Software College, Northeastern University, Shenyang 110169, China
3 National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Northeastern University, Shenyang 110819, China
4 Key Laboratory of Data Analytics and Optimization for Smart Industry (Northeastern University), Ministry of Education, Shenyang 110169, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2024, 14(11), 4878; https://doi.org/10.3390/app14114878
Submission received: 10 May 2024 / Revised: 29 May 2024 / Accepted: 31 May 2024 / Published: 4 June 2024

Abstract
Dynamic multi-objective big data optimization problems (DMBDOPs) are challenging because of the difficulty of dealing with large-scale decision variables and continuous problem changes. In contrast to classical multi-objective optimization problems, DMBDOPs have not yet been intensively explored by researchers in the optimization field. At the same time, there is no software framework that provides algorithmic examples for solving DMBDOPs and categorizes benchmarks for relevant studies. This paper presents a metaheuristic software framework for DMBDOPs to remedy these issues. The proposed framework has a lightweight architecture and a decoupled design between modules, ensuring that it is easy to use and flexible enough to be extended and modified. Specifically, the framework currently integrates four basic dynamic metaheuristic algorithms, eight test suites of different types of optimization problems, as well as performance indicators and data visualization tools. In addition, we propose an experience reuse method that speeds up algorithm convergence. Moreover, we implement parallel computing with Apache Spark to enhance computing efficiency. In the experiments, the algorithms integrated into the framework are tested on the test suites for DMBDOPs on an Apache Hadoop cluster with three nodes, and the experience reuse method is compared to two restart strategies for dynamic metaheuristics.

1. Introduction

According to research by the International Data Corporation (IDC) [1], by 2025 approximately six billion consumers, or 75% of the world's population, will interact with data every day. Furthermore, the global datasphere will grow from 45 zettabytes in 2019 to 175 zettabytes by 2025, and nearly 30% of the world's data will require real-time processing. Optimization problems also face big data challenges, so matching solutions are urgently needed. In many real-world situations, the objective functions, decision space, or constraints of multi-objective optimization problems (MOPs) may change over time, giving MOPs dynamic features. Such problems are dynamic multi-objective optimization problems (DMOPs). The DMOP was first systematically described by Farina et al. [2] in 2004, and it can be formally defined as a minimization problem:
$$
\begin{aligned}
\min\quad & f(x, t) = \big(f_1(x, t), \ldots, f_M(x, t)\big) \\
\text{s.t.}\quad & x \in \Omega \\
& g_i(x, t) \le 0, \quad i = 1, \ldots, m \\
& h_j(x, t) = 0, \quad j = 1, \ldots, n
\end{aligned}
\qquad (1)
$$
where $x = (x_1, \ldots, x_D) \in \Omega$ is a solution vector with $D$ decision variables and $\Omega \subseteq \mathbb{R}^D$ is the decision space; $f \in \mathbb{R}^M$ consists of $M$ conflicting objectives; $g_i(x, t)$ and $h_j(x, t)$ represent the $m$ inequality constraints and $n$ equality constraints, respectively. The variable $t$ represents time and captures the dynamic nature of the problem.
Zhou et al. [3] illustrated the challenges of big data optimization and the application of metaheuristic algorithms to solve such problems, which lays a good foundation for the emergence of DMBDOPs. In some real-world big data applications, the volume of data and streaming features introduce new challenges for optimization tasks. We could roughly classify these new challenges in optimization problems into two aspects:
  • The tremendous volume of data may give the optimization problem a high-dimensional decision space, which is hard to formulate and tackle using traditional methods and which demands greater computing power. Furthermore, because of the veracity characteristic of big data, the convergence and fitness evaluations of optimization algorithms can be rather time-consuming on noisy data. Designing efficient algorithms for high-dimensional, noisy problems is therefore challenging.
  • In many real-world big data optimization tasks, beyond the features above, data may arrive quickly in streaming form from different sources, and streaming data are inherently dynamic; examples include the ever-changing traffic signal data in transportation planning and the constantly updated investment data in quantitative investing. Given the transmission speed and dynamic nature of streams, relevant algorithms must process streaming data quickly and react efficiently to the resulting changes in the problem. Consequently, managing streaming data, detecting rapid problem changes, and then restarting the algorithm to solve the new problem effectively is a significant challenge.
Considering all of the above, DMBDOPs arise when DMOPs are combined with the characteristics of big data. The DMBDOP is a particular case of the DMOP, so a DMBDOP can also be expressed mathematically using Equation (1). The difference is that, in a DMBDOP, the dynamic feature of the problem is brought about by streaming data; in addition, the decision space Ω may contain hundreds, thousands, or even millions of variables. Therefore, a DMBDOP can be defined as a DMOP with the characteristics of streaming data, possibly combined with a high-dimensional decision space.
Inspired by biological mechanisms such as evolution and social interaction, metaheuristic algorithms can adapt to environmental changes much like living things, which makes them reasonable candidates for dynamic optimization problems. Generally, metaheuristics can be divided into evolutionary algorithms (EAs) and swarm intelligence (SI), and these algorithms have achieved significant success in combinatorial optimization problems [4,5,6], constrained optimization problems [7,8,9], and many other complex optimization problems under stationary environments [10,11,12]. In contrast, there are few studies applying metaheuristics to DMOPs [13] and even fewer on DMBDOPs, though these problems are of great practical importance. It is therefore necessary to investigate how metaheuristics can be applied to solve DMBDOPs.
Moreover, although metaheuristics have strong adaptation capabilities, a conventional metaheuristic struggles to track a changing optimum once it has converged on a solution in a dynamic optimization problem [14]. One way to resolve this is to treat every environmental change as a new optimization problem that the algorithm solves from scratch, ensuring solution diversity. However, this approach may not suit a DMBDOP given the efficiency and real-time requirements of big data tasks. Hence, designing more efficient change-response strategies for dynamic algorithms, tailored to the characteristics of DMBDOPs, is another essential task.
In this paper, we present a metaheuristic software framework named JDBDO, written in Java 11 and designed for solving and researching DMBDOPs. Our motivation is to support research on DMBDOPs, to attempt to solve them with metaheuristic algorithms, and to provide test suites that are as comprehensive as possible. Since DMBDOPs are closely related to big data and the optimization process involves many data operations, Java has natural advantages when working with big data processing engines, and as a compiled language it executes algorithms relatively quickly. In addition, to solve DMBDOPs more efficiently and cope with problem changes, this work proposes an experience reuse method that accelerates the optimization process after the environment changes. This method helps metaheuristics balance exploration and exploitation when constructing the initial population for the new problem. The source code for a preliminary version of JDBDO is available at: https://github.com/NEUCS/JDBDO (accessed on 1 May 2023).
On the whole, the main new contributions of this work are summarized as follows:
  • A metaheuristic framework based on Java for solving DMBDOPs is proposed, which contains a metaheuristic set including both EAs and SI algorithms. The framework utilizes Apache Spark for implementing parallel computing and stream processing. The proposed framework remedies the current lack of a software framework for DMBDOPs and supports subsequent research on these problems.
  • Some test suites are developed and embedded in the proposed framework, covering two kinds of DMBDOPs: one extends traditional DMOPs by introducing the streaming feature, and the other comprises DMOPs with high-dimensional decision spaces. These test suites aim to offer broader and relatively comprehensive experimental use cases for studies in this field.
  • An experience reuse method is proposed to enhance computing efficiency while maintaining the diversity and quality of solutions. This method measures the degree of change between problems when the optimization algorithm needs to restart and then decides whether to exploit historical experience or re-explore the new situation.
The remainder of this paper is organized as follows. Section 2 introduces the related work. Section 3 describes the architecture and components of the proposed framework in detail. Section 4 describes the uses of the framework and presents experiments on different DMBDOPs to validate the performance of the framework as well as the effectiveness of the experience reuse method. Finally, conclusions and future work are given in Section 5.

2. Related Work

In this section, some existing metaheuristic applications for DMOPs, big data optimization, and DMBDOPs are introduced. Moreover, some existing metaheuristic frameworks are introduced at the end of this section.

2.1. Metaheuristics in DMOPs and DMBDOPs

In MOPs, several conflicting objectives must be optimized simultaneously, and the problem remains unchanged during the optimization process. Metaheuristic algorithms have been applied to MOPs with remarkable success. However, many other real-world problems are dynamic because objective functions, constraints, and other parameters may change over time [13]; these problems are defined as DMOPs. A key challenge in a DMOP is responding properly when a problem change is detected, because the convergence of the algorithm may reduce diversity, which directly affects the algorithm's adaptability to problem changes and the quality of new solutions. In [15], a diversity-based strategy is designed that uses the new problem to evaluate both parent and offspring solutions; in each optimization run, ζ% of the new population is replaced by randomly created or mutated solutions to increase diversity. Prediction-based methods are also widely used for DMOPs, in which the Pareto front (PF) or Pareto set (PS) differs after a change occurs. Different prediction strategies are used in EAs to predict the new location of the PF or PS. For instance, Hatzakis and Wallace [16] combined a queuing multi-objective optimizer with an autoregressive (forecasting) model to predict the location of the optimal PS and help find the following PS. More recently, researchers have applied artificial intelligence techniques to prediction. Ref. [17] presented an ensemble learning-based prediction strategy that exploits historical information to train four base prediction models, helping EAs reinitialize a new population after a change is detected. In [18], a regression transfer learning prediction-based EA is proposed to generate an excellent initial population, accelerating the evolutionary process and improving performance in solving DMOPs.
The field of dynamic multi-objective optimization is closely tied to EAs; however, there has been growing interest in applying SI algorithms to DMOPs. Research on this aspect is still limited, and some current works are reviewed here. In [19], different multi-objective ant colony optimization (ACO) algorithms were applied to a dynamic multi-objective railway junction rescheduling problem and performed well when the problem changes were large and frequent. In [20], a dynamic multi-objective particle swarm optimization (PSO) algorithm based on adversarial decomposition and neighborhood evolution was proposed; it utilizes a novel particle update strategy to guide evolution and improve solution diversity. Takano et al. [21] proposed a multi-agent-based artificial bee colony (ABC) algorithm for adjusting cooperation among autonomous rescue agents in dynamic disaster environments. The work in [22] presented a multi-objective cat swarm optimization (CSO) algorithm with the Borda count method for DMOPs, which can effectively track the time-varying optimal PFs.
In recent years, research in big data optimization has mainly aimed at solving stationary problems with many objectives and numerous decision variables. From this perspective, some works have improved traditional metaheuristics and realized parallel computing using big data analytics tools. Ref. [23] gives a detailed review of the application of metaheuristic algorithms to big data optimization. In [24], a hybrid multi-objective firefly algorithm (FA) is proposed, which performed well on the BigOpt2015 benchmark in the CEC2015 Optimization of Big Data Competition. Yi et al. [25] presented an improved NSGA-III algorithm with an adaptive mutation operator for big data optimization problems, which outperforms many NSGA-III variants. In [26], parallel hybrid metaheuristics are presented for large-scale optimization in big data statistical analysis. Mishra et al. [27] applied PSO-based and BAT-based algorithms to sustainable service allocation on a fog server. More recently, in [28], a new solution-encoding scheme was introduced to PSO, which helps the evolutionary computation evolve faster. Xu et al. [29] presented an MOEA/D algorithm based on an enhanced adaptive neighborhood adjustment strategy, which achieves promising performance on the BigOpt2015 test problems.
Currently, there are only a few studies on DMBDOPs. Representative examples are [30,31], in which a framework is proposed and a dynamic bi-objective traveling salesman problem based on real-world streaming data is solved with the dynamic NSGA-II algorithm. These studies provide good insights into the study of DMBDOPs.

2.2. Framework Review

Developing software frameworks with metaheuristics for researching or solving real-world optimization problems has long been a research hotspot in the optimization community, and many metaheuristic-based frameworks have been presented over the last two decades. For example, jMetal [32] is a widely used Java-based framework for multi-objective optimization with metaheuristics, including state-of-the-art EAs as well as MOP and DMOP benchmarks. Based on jMetal, researchers have developed further frameworks, such as jMetalPy [33], which replicates jMetal in Python to take advantage of Python's full feature set, and jMetalMSA [34], which targets multiple sequence alignment in biology. ECJ is another excellent unified metaheuristic framework, now more than 20 years old; it began as a genetic programming and evolutionary computation library, and from 2015 to 2019 it was improved in many ways, integrating components such as estimation-of-distribution algorithms, an ACO framework, and the neuroevolution of augmenting topologies. More of ECJ's new features are described in [35]. Many new optimization frameworks have emerged in recent years, such as PlatEMO [36], Pymoo [37], EvoCluster [38], the MOEA Framework [39], and KDT-MOEA [40]. The software framework most relevant to DMBDOPs is jMetalSP [30], which combines the features of jMetal with Apache Spark and focuses on the challenges that streaming data introduces to dynamic optimization.

2.3. Overview

More and more studies apply metaheuristics to DMOPs, and metaheuristics are increasingly applied to static big data optimization. However, research on DMBDOPs, which consider both the scale and the dynamic characteristics of the optimization problem, remains scarce. Research exploring how different types of metaheuristic algorithms can be applied to DMBDOPs is therefore needed. In addition, there is a lack of software frameworks for designing and testing metaheuristic algorithms for DMBDOPs. Although jMetalSP considers the streaming challenge in DMBDOPs, it does not address the challenges that large-scale decision variables pose to dynamic optimization. jMetalSP accounts for the large amount of data arriving from different streaming sources, but these data only increase the difficulty of processing the streams, not the difficulty of solving the optimization problem itself. Therefore, a software framework is needed that considers both the streaming properties of the data and the large number of decision variables an optimization problem may contain. In this way, the covered DMBDOPs are more comprehensive, and the optimization problems are more tightly tied to the concept of big data.

3. The Proposed Framework

Motivated by the new challenges brought by big data, JDBDO is implemented to apply metaheuristics to solving DMBDOPs. JDBDO is designed modularly to ensure sufficient flexibility and provide a clear architecture. Researchers can use the algorithms and problems integrated in the framework to achieve their own research goals efficiently, or quickly extend modules to develop new algorithms and add new problems.
Dynamic multi-objective big data optimization-oriented framework. JDBDO integrates metaheuristic algorithms so that researchers can try different ways to solve DMBDOPs and verify their ideas efficiently. Moreover, JDBDO provides different test suites for DMBDOPs. Thus, researchers could choose different problems to experiment with according to the type of their algorithms. JDBDO is now developed as a metaheuristic framework or toolkit for DMBDOPs, providing practical tools for developing algorithms and big data analysis. In addition, performance indicators, cooperating with algorithms, are provided for experimental studies. In the future, we hope to continuously enrich and extend JDBDO and make it serve as a mature platform for both big data applications and optimization research.
Optimization components. A dynamic multi-objective big data optimization process comprises a dynamic metaheuristic algorithm, operators, a termination criterion, and a restart strategy. Four dynamic metaheuristic algorithms are available in the framework, covering EAs and SI algorithms, providing users with algorithm templates that are easy to modify and imitate. In addition, different operators are implemented to meet different evolutionary needs. Furthermore, when the problem changes, the algorithm has to terminate the current run and restart, so termination criteria and restart strategies are also provided in JDBDO. Some of these components are implemented based on our own design and on useful features of jMetal [32] version 5.10.
Experience reuse and parallel computing. The complexity of dynamic multi-objective big data optimization is not just about the scale of the data; the frequency and magnitude of changes define its dynamic nature. When dealing with this kind of problem, computing speed and efficiency must be considered, so this work adopts two ways to improve computational efficiency. JDBDO integrates an experience reuse method based on problem similarity comparison to obtain a better initial population faster after each problem change. In addition, JDBDO implements parallel computing on Apache Spark, further improving computing efficiency.
Problems. JDBDO now integrates a test suite containing three sets of problems of different types and difficulties, comprising 13 DMBDOPs, and several methods for updating problems are supplied. In addition, two extra test suites containing some classical MOPs and DMOPs are provided to facilitate testing at different stages of algorithm design and implementation; some of these problems also provide essential support for the implementation of the DMBDOPs. Note that all the test suites contain only problems with two objectives.
Analytic tools. Analyzing the solutions obtained by the algorithm is a vital issue in the research, so JDBDO has implemented different analysis tools for researchers to select, including performance indicators and data visualization tools. Users can select specific indicators to evaluate solutions obtained by their algorithms and monitor the nearly optimal PFs obtained by an algorithm in real-time during the dynamic optimization process. Moreover, a tool for analyzing the running state of the system is under development.

3.1. Dynamic Multi-Objective Big Data Optimization-Oriented Framework

As a metaheuristic software framework for dynamic multi-objective big data optimization, this framework aims to (1) design and apply modern dynamic metaheuristics for solving DMBDOPs, (2) provide a platform for optimization researchers to conduct experimental research while quickly building the whole research workflow, and (3) offer researchers the maximal flexibility to utilize or extend the integrated modules so that they can focus on their concerns without being distracted by other factors or details of implementation.
Figure 1 shows the overall framework of JDBDO. JDBDO is designed in a modularized way to maintain flexibility, and the framework is split into several modules from the perspective of the modern research workflow. Whether for algorithms, test problems, or other tools, each module provides one or more interfaces and implementations, offering researchers sufficient flexibility to use the integrated algorithms or override them to try their own ideas. Thus, researchers can focus only on the modules of interest and ignore the rest of the framework. In the figure, modules with oblique lines in the background are under development and will be released in the future, and modules with dotted borders support user customization and extension.
From a vertical perspective, the framework consists of three layers. The infrastructure layer at the bottom of Figure 1 provides research support. The Hadoop Distributed File System (HDFS) provides a primary data storage environment for the framework, and a fully distributed Hadoop cluster and Spark cluster form the distributed and parallel computing environment. The function of the Data Server is to provide a data engine to organize raw data and manage streaming data for problem updates. In addition, test suites for DMBDOPs, DMOPs, and MOPs are embedded in the Problem Set. The Algorithm Set contains different types of metaheuristics composed of dynamic EAs and dynamic SI algorithms.
The second layer shows the basic workflow for solving DMBDOPs by JDBDO. The serial numbers in the figure represent the steps for solving a DMBDOP. The Data Preprocessor module extracts update information from the Data Server and updates the problem, and then the dynamic algorithm restarts for the next optimization run. The Experience Reuse Method uses historical information to help the algorithm construct the initial population when restarting. Then, the algorithm solves the new problem. After that, the nearly optimal PS and problem features obtained at the end of every optimization run are used for updating the archive in the Experience Reuse Method. It should be noted that the Problem Change Information and the Solving Process Information in Figure 1 are two abstract concepts intended to provide the user with an idea for constructing an experience reuse method. In concrete implementation, we adopt the problem feature information and the final population information of this problem as historical experience for updating the archive in the Experience Reuse Method. However, users can develop different experience reuse methods based on this pattern without being limited to using the two types of information we adopt.
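The four workflow steps above might be sketched as a single solving pass; the interface names, method signatures, and use of plain arrays below are illustrative stand-ins, not the framework's actual API.

```java
// Illustrative stand-ins for the modules in the second layer of Figure 1.
interface DataPreprocessor { double[] nextProblemUpdate(); }
interface ExperienceReuse {
    double[][] initialPopulation(double[] problemFeatures);
    void updateArchive(double[] problemFeatures, double[][] finalPopulation);
}
interface DynamicAlgorithm { double[][] solve(double[][] initialPopulation); }

class SolvingLoop {
    // One pass of the workflow: update the problem from streaming data, build
    // the initial population from experience, run the metaheuristic, and
    // archive the results for future reuse.
    static double[][] step(DataPreprocessor pre, ExperienceReuse er, DynamicAlgorithm alg) {
        double[] features = pre.nextProblemUpdate();       // (1) problem update
        double[][] init = er.initialPopulation(features);  // (2) experience reuse
        double[][] ps = alg.solve(init);                   // (3) optimization run
        er.updateArchive(features, ps);                    // (4) archive update
        return ps;
    }
}
```

In the framework itself, this loop repeats until the stream ends, with the Experience Reuse Method deciding per restart whether to reuse an archived PS.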
The interface layer tries to provide a user-friendly interface for the integrated methods in the framework. Users can build the whole workflow by configuring the Optimizer Runner module, in which the desired algorithm and test problem can be selected and set. The Analytic Tools module is designed to meet the analytic demands of users in the process of problem optimization. When solving a dynamic problem, users can monitor the nearly optimal PF curve for the current problem with the Dynamic Visualizer. Alternatively, users can obtain the nearly optimal PF of a static problem using the Static Visualizer after the algorithm is terminated. Moreover, users can apply different performance indicators to the resulting solutions for assessing algorithms’ performance. The visualizer of system status during algorithm execution will be added to this module soon. Furthermore, a Big Data Analyzer is under development to provide more functions like systematic big data storage and management, fundamental statistical analysis, and data mining.

3.2. Optimization Components

The optimization components of JDBDO cover the complete process of solving DMBDOPs, which includes solutions, dynamic multi-objective metaheuristic algorithms, different operators, DMBDOPs, algorithm termination criteria, and algorithm restart strategies. These components are described in detail in this section.

3.2.1. Solutions

The framework provides the Individual class; each object of this class corresponds to a feasible solution in the problem-solving process. An Individual object contains four properties: variables, objectives, constraints, and otherProperties. variables stores the list of decision variables; objectives and constraints store the objective function values and constraint values of the problem; and otherProperties is used by some algorithms to store personalized properties, such as "Trial Number" in ABC. Furthermore, the framework provides the Population class, which acts as an archive of Individual objects and further encapsulates the Individual class. Users can manipulate the whole set of solutions, or any one of them, through the methods of this class.
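A minimal sketch of the two classes described above might look as follows; the four property names come from the text, while the constructor, initial values, and accessor methods are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a feasible solution with the four properties named in the text.
class Individual {
    final List<Double> variables = new ArrayList<>();   // decision variables
    final List<Double> objectives = new ArrayList<>();  // objective function values
    final List<Double> constraints = new ArrayList<>(); // constraint values
    final Map<String, Object> otherProperties = new HashMap<>(); // e.g. "Trial Number" in ABC

    Individual(int numVariables, int numObjectives) {
        for (int i = 0; i < numVariables; i++) variables.add(0.0);
        for (int i = 0; i < numObjectives; i++) objectives.add(Double.MAX_VALUE);
    }
}

// A Population wraps a list of Individuals and exposes whole-set operations.
class Population {
    private final List<Individual> individuals = new ArrayList<>();

    void add(Individual ind) { individuals.add(ind); }
    Individual get(int i) { return individuals.get(i); }
    int size() { return individuals.size(); }
}
```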

3.2.2. Metaheuristic Algorithms

The dynamic multi-objective algorithms available in JDBDO include Dynamic NSGA-II [15], Dynamic NSGA-III [41,42], Dynamic MOEA/D-DE [43,44], and Dynamic ABC [45,46]. These algorithms cover population-based EA, decomposition-based EA, and the SI algorithm. All of the algorithms are based on classical static algorithms modified to solve DMBDOPs. The framework defines clear interfaces and inheritance relationships so that users can use, learn, and improve the original algorithms, or can implement new algorithms by implementing and inheriting these integrated interfaces and classes. The framework first defines the Metaheuristic interface to provide basic support for running the algorithm in a thread. Furthermore, the framework provides two abstract classes, AbstractEA and AbstractSI, which implement the Metaheuristic interface to provide basic templates for two types of metaheuristic algorithms. Users can choose which abstract class to inherit according to the type of algorithm they want to implement. Many EAs have identical basic run steps, so a unified run method is implemented in the AbstractEA class. It should be noted that when implementing some algorithms that run in a different way, such as MOEA/D, the run method needs to be overridden. Since different SI algorithms have different mechanisms, the run method is not implemented in the AbstractSI class, which only abstracts the basic functions required by some SI algorithms, so users need to override the run method in concrete algorithm implementation.
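The interface and abstract-class hierarchy described above might be sketched as follows; the names Metaheuristic, AbstractEA, and AbstractSI come from the text, while the Runnable parent, the method signatures, and the demo subclass are assumptions for illustration.

```java
// Metaheuristic provides basic support for running an algorithm in a thread.
interface Metaheuristic extends Runnable {
    void run(); // thread entry point for the optimization loop
}

// Many EAs share identical basic run steps, so AbstractEA supplies a
// unified template run method; algorithms like MOEA/D would override it.
abstract class AbstractEA implements Metaheuristic {
    @Override
    public void run() {
        initializePopulation();
        while (!terminated()) {
            mating();            // crossover + mutation
            evaluate();
            survivorSelection();
        }
    }
    protected abstract void initializePopulation();
    protected abstract boolean terminated();
    protected abstract void mating();
    protected abstract void evaluate();
    protected abstract void survivorSelection();
}

// SI algorithms differ too much to share one loop, so run() stays abstract.
abstract class AbstractSI implements Metaheuristic {
    @Override
    public abstract void run();
    protected abstract void evaluate(); // common building block
}

// Tiny demo subclass showing the EA template in action (illustrative only).
class DemoEA extends AbstractEA {
    int generations = 0;
    protected void initializePopulation() {}
    protected boolean terminated() { return generations >= 5; }
    protected void mating() {}
    protected void evaluate() {}
    protected void survivorSelection() { generations++; }
}
```

A concrete algorithm only fills in the abstract steps; the template in AbstractEA drives the loop.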

3.2.3. Operators

Different operators are implemented in the framework to cope with different algorithms. Table 1 shows all the operators that users can choose according to their needs. For example, when using the MOEA/D algorithm, simulated binary crossover (SBX) can be replaced by differential evolution (DE) crossover to obtain a variant of the MOEA/D algorithm, MOEA/D-DE. Polynomial mutation or random mutation can be used depending on the demand of users. When performing the selection operation, users can use simple binary tournament selection and make a selection based on crowding distance or select randomly. Note that, since crowding distance selection is used for elite preservation in NSGA-II, it is named NSGA-IISelection in the framework.
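Swappable operators of the kind described above can be expressed behind a small interface; the interface name, signature, and the 1/D mutation probability below are assumptions, with random mutation shown because it is the simplest operator in Table 1.

```java
import java.util.List;
import java.util.Random;

// A pluggable operator interface, so one mutation (or crossover) operator
// can be exchanged for another without touching the algorithm code.
interface MutationOperator {
    void mutate(List<Double> variables);
}

// Random mutation: each variable is resampled uniformly in [lower, upper]
// with probability 1/D, a common default for D decision variables.
class RandomMutation implements MutationOperator {
    private final double lower, upper;
    private final Random rng;

    RandomMutation(double lower, double upper, long seed) {
        this.lower = lower;
        this.upper = upper;
        this.rng = new Random(seed);
    }

    @Override
    public void mutate(List<Double> variables) {
        double p = 1.0 / variables.size();
        for (int i = 0; i < variables.size(); i++) {
            if (rng.nextDouble() < p) {
                variables.set(i, lower + rng.nextDouble() * (upper - lower));
            }
        }
    }
}
```

An algorithm configured with this interface can switch between, say, polynomial and random mutation by injecting a different implementation.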

3.2.4. Termination Criterion

Certain criteria determine when the algorithm terminates during a run. For static algorithms, the algorithm usually terminates after the objective functions have been evaluated, or the algorithm has iterated, a specific number of times. For dynamic algorithms, the situation is more complicated: pre-determining a fixed number of evaluations may not work well because the problem is constantly changing. Therefore, the termination criterion in this framework considers both problem updates and the number of function evaluations. If the problem has changed, the current run is terminated (i.e., the algorithm pursues nearly optimal solutions); otherwise, if the problem has not changed and the objective functions have been evaluated a pre-defined number of times, the algorithm is also terminated.
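The dual criterion just described, terminate on a problem change or on an exhausted evaluation budget, might be sketched like this; the class and method names are illustrative, not the framework's actual API.

```java
// Termination criterion combining problem updates with an evaluation budget.
class DynamicTermination {
    private final int maxEvaluations;
    private int evaluations = 0;
    private volatile boolean problemChanged = false; // set by the change detector

    DynamicTermination(int maxEvaluations) {
        this.maxEvaluations = maxEvaluations;
    }

    void countEvaluation() { evaluations++; }

    void signalProblemChange() { problemChanged = true; }

    // Terminate the current run if the problem changed OR the budget ran out.
    boolean shouldTerminate() {
        return problemChanged || evaluations >= maxEvaluations;
    }

    // Reset for the next optimization run after a restart.
    void reset() { evaluations = 0; problemChanged = false; }
}
```

The flag is volatile so a change detector running in another thread can stop the optimization loop promptly.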

3.2.5. Restart Strategies

In solving DMBDOPs, the algorithm is terminated when the problem changes, and subsequently, the algorithm needs to restart to solve the new problem. Therefore, an appropriate restart strategy needs to be designed to enable the algorithm to sense problem changes and adapt to them quickly. Two restart strategies are implemented in the framework for algorithms. The first one is that after the algorithm restarts, ζ % of the previous population is replaced with randomly generated solutions, and the obtained population is then used as the initial population to start a new round of optimization. This approach is easy to implement and works well in cases where the problem does not change severely or has a specific pattern of change; more details on this can be found in the literature [15]. Another strategy is to use the experience reuse method proposed in this paper when constructing the initial population, which is experimentally proven to ensure faster convergence of the algorithm while maintaining population diversity in both situations, whether the problem is updated in an orderly manner or randomly. The specific operation mechanism of this strategy will be further described in the subsection introducing experience reuse.
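The first restart strategy, replacing ζ% of the previous population with randomly generated solutions, might be sketched as follows; representing solutions as plain double arrays and the class and parameter names are assumptions for brevity.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Restart strategy: replace a zeta fraction of the previous population with
// randomly generated solutions before the next optimization run.
class ZetaRestart {
    private final double zeta;         // fraction in [0, 1] to replace
    private final double lower, upper; // variable bounds for random solutions
    private final Random rng;

    ZetaRestart(double zeta, double lower, double upper, long seed) {
        this.zeta = zeta;
        this.lower = lower;
        this.upper = upper;
        this.rng = new Random(seed);
    }

    List<double[]> restart(List<double[]> previous) {
        int n = previous.size();
        int replaced = (int) Math.round(zeta * n);
        List<double[]> next = new ArrayList<>(previous);
        for (int i = 0; i < replaced; i++) {
            int idx = rng.nextInt(n); // position to overwrite with a fresh solution
            double[] fresh = new double[previous.get(idx).length];
            for (int d = 0; d < fresh.length; d++) {
                fresh[d] = lower + rng.nextDouble() * (upper - lower);
            }
            next.set(idx, fresh);
        }
        return next;
    }
}
```

The returned population then seeds the next run, keeping most of the converged solutions while injecting fresh diversity.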

3.3. Experience Reuse and Parallel Computing

Solving big data optimization problems is very time-consuming and demands both a high-performing algorithm and substantial computing power. Solving DMBDOPs becomes even more challenging when the problem changes rapidly. Obtaining the true optimal PS for each problem is unrealistic in terms of time, so strategies are needed to obtain the best possible approximate optimal PS efficiently. One common approach when solving a dynamic problem is to randomly replace some solutions in the PS of the previous problem and then use the resulting set as the initial population for the new problem. This approach is effective because, in most dynamic multi-objective optimization test problems, the changes are driven by periodic functions, so the difference between the new PS and the old PS is minimal; the strategy therefore ensures fast convergence. However, in many situations, the change in the problem may be random, and the distribution of its solutions may vary significantly, in which case the above strategy may not speed up the computation. Therefore, this work proposes an experience reuse method based on experience accumulation to accelerate optimization.
A fixed-size external archive supports the experience reuse method by storing historical information as quadruples (f, p, iteration, trial), where f represents the features of the time series data of a problem, p is the current best approximate optimal PS for that problem, iteration records the number of times the algorithm has been run since the quadruple was added to the archive, and trial indicates how many times the item has been used by the experience reuse method. Before introducing the experience reuse method, the update mechanism of the archive is explained first. No historical information exists when the algorithm is run for the first time, so its initial population is randomly generated according to the requirements of the problem at hand. The archive is updated every time the algorithm restarts and invokes the experience reuse method. As shown in Algorithm 1, the inputs of the update algorithm are the features f_last of the previous problem and its approximate optimal PS p. If the archive is empty, the information on the previous problem is added directly, with iteration and trial initialized to zero. Otherwise, the similarity between the input problem features and each set of problem features stored in the archive is computed with the dynamic time warping (DTW) algorithm. A threshold named MinimumSimilarity must be set: if all the computed similarities are greater than or equal to this threshold, the problem is considered a new problem. If the archive is not yet full, the information about the new problem is stored directly. If the archive has reached its fixed size, the least-used record is deleted and the new problem information is added; if no least-used record can be found, the archive is not updated.
The specific judgment is shown in lines 12–18 of Algorithm 1: a record that has existed for a long time yet is used least frequently is deleted. If some similarity is below the threshold, the archive already contains a problem that is highly similar or identical to the input problem, and the two must be compared. If the hypervolume of the new solutions is greater than that of the old solutions, the old PS is replaced with the new PS. Note that the stored problem features are not updated; this simple rule ensures that all the information stored in the archive belongs to different problems. Since the experience reuse method only draws on historical experience approximately, it is unnecessary to store all information exactly.
Algorithm 1 UpdateArchive: update the external archive
Input: last problem's features and nearly optimal PS f_last, p; external archive Archive
Output: updated archive A
1:  if Archive = ∅ then
2:      Add((f_last, p, iteration = 0, trial = 0))
3:  else
4:      for all (f_i, p_i) ∈ Archive do
5:          s_i = DTW(f_i, f_last)
6:      end for
7:      if all s_i ≥ MinimumSimilarity then
8:          if Archive.size() < fixedSize then
9:              Add((f_last, p, iteration = 0, trial = 0))
10:         else
11:             RS = ∅
12:             for all (f_i, p_i, iteration_i, trial_i) ∈ Archive do
13:                 if iteration_i > PreDefinedNumber then
14:                     if trial_i < MinimumTrialNumber then
15:                         RS = RS ∪ {(f_i, p_i, iteration_i, trial_i)}
16:                     end if
17:                 end if
18:             end for
19:             if RS = ∅ then
20:                 Do not update the archive
21:             else
22:                 Select the quadruple with the smallest trial value from RS, and mark its index as m
23:                 Remove((f_m, p_m, iteration_m, trial_m))
24:                 Add((f_last, p, iteration = 0, trial = 0))
25:             end if
26:         end if
27:     else
28:         Mark the index of the minimal s_i value as idx
29:         if HV(p) > HV(p_idx) then
30:             Remove((f_idx, p_idx, iteration_idx, trial_idx))
31:             Add((f_last, p, iteration = 0, trial = 0))
32:         end if
33:     end if
34: end if
35: A = Archive
36: return A
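The similarity computation in line 5 of Algorithm 1 relies on dynamic time warping. A generic textbook DTW distance, not necessarily the exact variant used in the framework, can be sketched as:

```java
import java.util.Arrays;

// Classic dynamic programming implementation of the DTW distance between two
// time series. Smaller values mean the series are more similar, matching the
// way Algorithm 1 compares problem features.
class Dtw {
    static double distance(double[] a, double[] b) {
        int n = a.length, m = b.length;
        double[][] d = new double[n + 1][m + 1];
        for (double[] row : d) Arrays.fill(row, Double.POSITIVE_INFINITY);
        d[0][0] = 0.0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                double cost = Math.abs(a[i - 1] - b[j - 1]); // local mismatch
                d[i][j] = cost + Math.min(d[i - 1][j - 1],   // match
                           Math.min(d[i - 1][j], d[i][j - 1])); // insertion/deletion
            }
        }
        return d[n][m];
    }
}
```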
The experience reuse method mainly helps to reconstruct the initial population after an algorithm restart. As shown in Algorithm 2, the inputs are the time series data features of the current problem, the features of the previous problem, and the approximate optimal PS of the previous problem. The archive is first updated with the information on the previous problem. The DTW algorithm then computes the similarity between the features of the current problem and all features in the archive, and the iteration attribute of every archived record involved in the similarity calculation is increased by one. Next, the record with the smallest similarity is selected from the archive. If that similarity is less than the threshold MinimumSimilarity, the new problem is considered the same as the selected one, and ζ% of the individuals in the selected PS are replaced with randomly generated individuals. This step ensures the diversity of the population after initializing it from a historical PS of the same problem, avoiding entrapment in local optima and seeking better solutions while improving computational speed. If no record in the archive has a similarity below MinimumSimilarity, the archive contains no problem identical to the new one, so the PS of the most similar problem is used directly as the initial population. Since the selected problem is the most similar to, but not the same as, the new problem, directly using its PS as the initial population maintains population diversity in the same way as the random replacement strategy.
Algorithm 2 Experience Reuse Method Based on Problem Similarity
Input: current problem's features f_new; last problem's features and nearly optimal PS f_last, p; external archive Archive
Output: initial population of the new problem p*
1:  A = UpdateArchive(f_last, p, Archive)
2:  for all (f_i, p_i) ∈ A do
3:      s_i = DTW(f_i, f_new)
4:      iteration_i += 1
5:  end for
6:  mindex = 0
7:  Select the minimal s_i, and set mindex = i
8:  if s_mindex < MinimumSimilarity then
9:      p* = RandomRemoveNIndividuals(p_mindex)
10:     p* = RandomAddNIndividuals(p*)
11:     trial_mindex += 1
12: else
13:     p* = p_mindex
14: end if
15: return p*
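The decision step of Algorithm 2 can be sketched as follows, assuming the DTW similarities have already been computed; the ArchiveEntry and ReuseDecision classes are illustrative stand-ins for the framework's quadruples, not its actual API:

```java
import java.util.List;

// Sketch of Algorithm 2's selection logic. similarities[i] holds the DTW value
// between the new problem's features and archive entry i (smaller = more similar).
class ExperienceReuseSketch {
    static class ArchiveEntry {               // mirrors the (f, p, iteration, trial) quadruple
        double[][] ps;                        // stored nearly optimal PS
        int iteration = 0;                    // runs since the entry was archived
        int trial = 0;                        // times this entry has been reused
        ArchiveEntry(double[][] ps) { this.ps = ps; }
    }

    static class ReuseDecision {
        final double[][] basePopulation;      // PS to build the initial population from
        final boolean randomizeZeta;          // true: replace zeta% with random individuals
        ReuseDecision(double[][] p, boolean r) { basePopulation = p; randomizeZeta = r; }
    }

    static ReuseDecision decide(List<ArchiveEntry> archive, double[] similarities,
                                double minimumSimilarity) {
        for (ArchiveEntry e : archive) e.iteration++;   // age every record (line 4)
        int best = 0;                                   // index of the smallest similarity
        for (int i = 1; i < similarities.length; i++)
            if (similarities[i] < similarities[best]) best = i;
        ArchiveEntry chosen = archive.get(best);
        if (similarities[best] < minimumSimilarity) {   // same problem: reuse + randomize
            chosen.trial++;
            return new ReuseDecision(chosen.ps, true);
        }
        return new ReuseDecision(chosen.ps, false);     // merely most similar: reuse as-is
    }
}
```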
To further enhance the computational speed of dynamic algorithms, the algorithms in the framework can utilize Apache Spark for parallel computation when evaluating the population. Thanks to resilient distributed datasets (RDDs) and Spark's Java-friendly API, implementing parallel computation on a Spark cluster is relatively easy. As shown in Figure 2, a population of type List can be transformed into an RDD by the parallelize method of JavaSparkContext. The data are then loaded into memory by RDD transformations (e.g., map), and the take operator is called to execute the computation and re-persist the data to external storage. The collect operator is not used here because it is prone to memory overflow, and its operation mechanism conflicts with Spark's distributed design. Readers can refer to the official documentation for more details on RDD operations.
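The load-transform-collect pattern described above can be illustrated locally with Java parallel streams as a stand-in for the Spark pipeline: parallelStream plays a role analogous to parallelize on a JavaSparkContext, and the stream map mirrors the RDD map transformation. This sketch requires no Spark cluster and is not the framework's actual code:

```java
import java.util.List;
import java.util.function.ToDoubleFunction;
import java.util.stream.Collectors;

// Local stand-in for the Spark evaluation pipeline: each individual in the
// population is evaluated in parallel, and the objective values are gathered
// back in encounter order (analogous to parallelize -> map -> take on an RDD).
class ParallelEvaluation {
    static List<Double> evaluate(List<double[]> population,
                                 ToDoubleFunction<double[]> objective) {
        return population.parallelStream()                 // ~ parallelize(...)
                .map(ind -> objective.applyAsDouble(ind))  // ~ RDD map transformation
                .collect(Collectors.toList());             // gather results on the driver
    }
}
```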

3.4. Problems

The test suites in the framework contain several types of test problems, making it easy for researchers to evaluate the performance of their algorithms. Different problems could help researchers improve the robustness and generalization capability of their algorithms. The problems included in test suites are listed in Table 2.
The test suites contain a variety of MOPs, DMOPs, and DMBDOPs. The MOPs are the classic ZDT1-ZDT4, ZDT6, UF1-UF7, and BigOpt2015 from the Big Data 2015 Competition. The DMOPs are FDA1-FDA3 and DF1-DF9. The DMBDOPs, 13 in total, are developed from some of the above problems: the big data FDA (BDFDA) problems derived from the FDA series, the big data DF (BDDF) problems derived from the DF series, and Dynamic BigOpt2015 (DBigOpt2015), a dynamic version of BigOpt2015. BigOpt2015 is abstracted from the electroencephalographic (EEG) signal processing problem; EEG signal processing is a dynamic problem with high real-time requirements, and the competition intercepted only fragments of data to turn it into an MOP containing a large number of variables. This big data MOP can be mathematically defined as
$$
\begin{aligned}
\min f_1 &= \frac{1}{N \times M}\sum_{i}\sum_{j}\left(S_{ij} - S1_{ij}\right)^{2}\\
\min f_2 &= \frac{1}{N^{2}-N}\sum_{i,j,\,i\neq j} C_{ij}^{2} + \frac{1}{N}\sum_{i}\left(1 - C_{ii}\right)^{2}
\end{aligned}
$$
with
$$
\begin{aligned}
S &= S1 + S2\\
X &= A \times S1 + A \times S2\\
C &= \frac{\mathrm{covar}(X,\, A \times S1)}{\mathrm{var}(X) \times \mathrm{var}(A \times S1)}\\
S1_{ij} &\in [-8, 8], \quad i = 1, \dots, N,\; j = 1, \dots, M
\end{aligned}
$$
where A, X, and S are three matrices. A is a linear transformation matrix of dimension N × N. Matrix X contains N inter-dependent time series, and matrix S contains N independent time series; the length of each time series is M. covar(·) is the covariance matrix, var(·) is the variance, and C is the matrix of Pearson correlation coefficients between X and A × S1. The goal is to find an S1 that is as similar as possible to S, maximizing the diagonal elements of C while driving the off-diagonal elements toward zero.
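For concreteness, the first objective f1, the mean squared difference between S and the candidate S1, can be sketched as follows (a straightforward reading of the formula, with matrices stored row-wise as N series of length M):

```java
// Sketch of the first BigOpt2015 objective: f1 = (1 / (N*M)) * sum over i,j of
// (S[i][j] - S1[i][j])^2, i.e., the mean squared difference between S and S1.
class BigOptObjectives {
    static double f1(double[][] s, double[][] s1) {
        int n = s.length, m = s[0].length;
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                double diff = s[i][j] - s1[i][j];
                sum += diff * diff;
            }
        }
        return sum / (n * m);
    }
}
```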
In DBigOpt2015, the time variable t is introduced to simulate dynamic changes in the time series data, turning the original static problem into a DMBDOP. Specifically, the original S dataset is made to vary with t according to Equation (4), yielding different S* matrices and, since the matrix A remains unchanged, correspondingly different X* matrices. The definition of the time variable t follows [2,51]: n_t is the number of distinct steps in t, representing the severity of change; τ_t is the number of generations during which t remains fixed, reflecting the frequency of change; and τ is the number of information updates sent.
$$
S^{*} = S \times \left|\sin(0.5\pi t)\right|, \qquad t = \frac{1}{n_t}\left\lfloor \frac{\tau}{\tau_t} \right\rfloor
$$
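A small sketch of Equation (4), computing the time variable t from the update counter τ and the scaling factor applied to every entry of S (method names are illustrative):

```java
// Sketch of Equation (4): t steps through n_t distinct values as the counter
// tau advances, staying fixed for tau_t updates at a time; the dataset S is
// then scaled by |sin(0.5 * pi * t)|.
class DynamicTime {
    static double t(int tau, int nT, int tauT) {
        return (1.0 / nT) * Math.floor((double) tau / tauT);
    }
    static double scaleFactor(double t) {
        return Math.abs(Math.sin(0.5 * Math.PI * t)); // multiplies every entry of S
    }
}
```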
In the concrete implementation, dataset D4 and dataset D4N are used to model the DBigOpt2015 problem. According to Equation (4), the datasets S* and X* are continuously updated by a Kafka producer, which sends the updated S*, X*, and A as a stream at regular intervals. A Kafka consumer receives the messages, updates the problem, and determines whether the problem has changed; if so, the flag indicating that the problem has been updated is modified.
The construction of the BDFDA and BDDF series problems is different from that of DBigOpt2015 because the FDA series and DF series are originally dynamic problems. In both sets of problems, the dynamic changes in a problem are provided by a sinusoidal function with a time variable t. Therefore, the Kafka producer is responsible for updating the time variable t and sending it out, and the Kafka consumer receives the messages and updates the problem. The other problems in the BDFDA series and BDDF series are constructed the same way as in the above example.
In addition to the update methods of DMBDOPs in the above examples, the framework contains several other methods that users can try during the use of the framework.

3.5. Analytic Tools

Comparing the performance between algorithms and analyzing the quality of the solutions are very common in the development of optimization algorithms. Therefore, several analytic tools have been integrated into the framework.
The performance indicators integrated into the proposed framework include generational distance (GD), inverted generational distance (IGD) [52], IGD+ [53], and hypervolume (HV) [54]. Moreover, performance indicators for DMOPs are also provided, namely mean IGD (MIGD) [55] and mean HV (MHV) [51]. The MIGD indicator computes the average of the IGD values over selected time steps, measuring the convergence and diversity of solutions; the IGD value at each selected time step is calculated just before the next change occurs, and the MIGD value is then computed. MHV averages the HV values over time steps in the same way. Following [56], (z_1 + 0.5, z_2 + 0.5, …, z_M + 0.5) is used as the reference point for computing the HV value, where z_j is the maximum value of the jth objective of the true optimal PF and M is the number of objectives.
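As an illustration, IGD and MIGD can be sketched from their standard definitions (a generic implementation using Euclidean distances in objective space, not the framework's own code):

```java
import java.util.List;

// IGD averages, over the reference front, each reference point's distance to
// its nearest obtained solution; MIGD averages IGD over the sampled time steps.
// Points are arrays of objective values.
class Indicators {
    static double igd(double[][] referenceFront, double[][] obtained) {
        double total = 0.0;
        for (double[] ref : referenceFront) {
            double best = Double.POSITIVE_INFINITY;
            for (double[] sol : obtained) {
                double d = 0.0;
                for (int k = 0; k < ref.length; k++) {
                    double diff = ref[k] - sol[k];
                    d += diff * diff;
                }
                best = Math.min(best, Math.sqrt(d)); // nearest obtained solution
            }
            total += best;
        }
        return total / referenceFront.length;
    }

    // One IGD value per sampled time step, computed just before each change.
    static double migd(List<Double> igdPerTimeStep) {
        return igdPerTimeStep.stream()
                .mapToDouble(Double::doubleValue)
                .average().orElse(Double.NaN);
    }
}
```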
Moreover, the framework integrates PF visualization components based on the XChart library, a lightweight and convenient Java plotting library. Users can thus observe PF changes in real time while solving DMBDOPs or obtain the final distribution of solutions in the objective space after a static metaheuristic terminates. Figure 3 shows an example of the real-time visualization tool, where the nearly optimal PFs of BDDF1 obtained by Dynamic ABC are shown at t = 0.0, t = 0.3, t = 0.7, and t = 0.9.

4. Usages and Test Cases

4.1. Running the Framework

In the simplest case, users only need to initialize an OptimizerRunner object and then set a specific algorithm name and problem name to use the framework. Users can also set a different Operator and its parameters in OptimizerRunner according to their needs, and configure a different KafkaProducer and KafkaConsumer to achieve different ways of updating the problem. The Population class stores solutions (i.e., Individual objects) during the optimization process, and each Individual object contains more detailed information, such as the set of variables and the values of all objective functions. Users can set the parameters of the Population object in more detail when initializing the algorithm object in OptimizerRunner. The documentation provides more details on how to use the framework, and Figure 4 shows the UML class diagram related to the basic operation of the proposed metaheuristic framework.
The sequence diagram in Figure 5 shows the basic process of solving a DMBDOP using the dynamic metaheuristic algorithms in the framework. Firstly, the algorithm is invoked, and the population is initialized according to the problem requirements. Then, the optimization process of the algorithm loops until the problem is updated. When the problem is updated, the population obtained by the algorithm is used by SimpleVisualizer to visualize the nearly optimal PF, while the ExperienceReuse object uses the nearly optimal PS to update the external archive. Finally, the algorithm is restarted with the help of the ExperienceReuse object so that it can continue solving the new problem.

4.2. Comparisons of Different DMBDOPs

In order to demonstrate the feasibility of the proposed framework, we conducted experiments on solving different DMBDOPs using the algorithms integrated into the framework. In this section, two types of problems were selected for testing: DBigOpt2015, which represents problems containing a large number of decision variables, and the BDDF series of problems, which represents problems derived from classical DMOPs.
It should be noted that all the experiments in this subsection and the following one were conducted on a workstation with an AMD Ryzen Threadripper PRO 3995WX CPU (2.7 GHz, 64 cores, 128 threads), 512 GB of RAM, and 4 TB of storage. The operating system is Manjaro Linux XFCE 20.1 64-bit, on which three virtual machines running CentOS Linux 7.5 64-bit form the distributed environment. The cluster includes Apache Hadoop 3.2.0, Apache Spark 3.0.3, and Apache Kafka 2.7.1.

4.2.1. Parameter Settings

When solving the DBigOpt2015 problem, the parameters of the four algorithms are set as follows. The population size is 20 for all problems. The maximum number of function evaluations is 20,000 for the D4/D4N dataset. The number of decision variables D is decided by the dataset: D = 4 × 256 for D4/D4N. For Dynamic NSGA-II with the SBX operator and the polynomial mutation operator, the crossover rate is p_c = 1.0, the mutation rate is p_m = 1/D, and the distribution index is η = 20. Dynamic NSGA-III uses the same operators, with p_c = 1.0, p_m = 1/D, and η = 30. In the Dynamic MOEA/D-DE algorithm, the control parameters of the DE operator are CR = 1.0 and F = 0.5, and those of the polynomial mutation operator are p_m = 1/D and η = 20. The MaxTrial parameter used by scout bees in the Dynamic ABC algorithm is set to 200. For problem changes, the Kafka producer sends updated data once per second, n_t is set to 10, and τ_t is set to 60, so the DBigOpt2015 problem changes once per minute. For the BDDF problems, D = 10 and the population size is 100 for all problems. The maximum number of function evaluations is 200, and the control parameters of the four algorithms are the same as in the DBigOpt2015 settings. In each BDDF problem, n_t = 10, τ_t = 10, and the Kafka producer sends updates twice per second, so the problem changes every 5 s. In this experiment, the experience reuse method starts with an empty archive, which is continuously updated and exploited during the optimization process.

4.2.2. Experimental Results on DBigOpt2015 and BDDF Problems

Figure 6 plots the approximate optimal PFs with the median HV value over 30 runs obtained by the four algorithms in one time period. The sub-figures are sampled when the optimization process starts and after the problem has changed 2, 5, and 10 times. It can be observed from Figure 6 that the position of the PFs changes as the problem is updated, and all four algorithms adapt to the changes to obtain different approximate optimal solutions. The four algorithms perform consistently on datasets D4 and D4N: Dynamic NSGA-III performs best on the first objective, Dynamic MOEA/D-DE performs best on the second objective, and Dynamic NSGA-II has the best diversity of non-dominated solution sets and performs most stably during problem updates. In contrast, the Dynamic ABC algorithm has the least stable performance and the lowest solution quality of the four. Figure 7 shows the HV-based convergence curves corresponding to Figure 6, illustrating the convergence speed of the four algorithms on the DBigOpt2015 problem; to calculate the HV values, the reference points are set to (1, 0.1) on the D4 dataset and (1.5, 0.1) on the D4N dataset. Dynamic NSGA-III and Dynamic MOEA/D-DE converge faster than the other two algorithms. Figure 8 shows the distribution in the objective space of the non-dominated solutions obtained by Dynamic ABC on the BDDF1-BDDF9 problems. Table 3 reports the MIGD and MHV values obtained after 30 independent runs of the four algorithms on the BDDF family of problems. All metric values are computed at an interval Δt = 0.2 over one cycle of the problem variation. The reference fronts of all problems at each time step t were sampled at 100 reference points using the method presented in [57].
Then, the reference points used to calculate HV values were generated based on the maximum values of the two objectives of the reference fronts. According to the MIGD values, it can be seen that, for each problem, all four algorithms can converge well and show little difference in performance, with Dynamic MOEA/D-DE having the best convergence performance. From the perspective of MHV values, there are some differences between the diversity of non-dominated solutions obtained by the four algorithms. Dynamic ABC obtains the best diversity of non-dominated solutions, while Dynamic NSGA-III obtains the worst performance. Dynamic MOEA/D-DE and Dynamic NSGA-II perform similarly.

4.3. Performance of the Experience Reuse Method

In this subsection, experiments are set up to test the performance of the experience reuse method, with Dynamic NSGA-II adopted throughout. Test problems in which both the nearly optimal PS and the nearly optimal PF change are chosen, because the difference between using the experience reuse method and not using it is most visible on this kind of problem. In addition, to compare the experience reuse method with the strategy that initializes the new population from a modified PS of the previous problem in an uncertain environment, a scenario simulating random problem updates is designed.
Two comparison experiments are designed here to test the experience reuse method. The first is conducted on the BDFDA3 problem, and the dynamic algorithm uses the randomly generated solution as the initial population directly after the algorithm restart, while the problem is updated according to the problem definition. The second is conducted on the BDDF6 problem, and the restart strategy in [15] is adopted by the algorithm for comparison. The time variable t is generated randomly within a certain range, simulating the random update scenario. Two external archives are obtained by running the Dynamic NSGA-II algorithm for 10 periods of each problem before the experiments begin. These two external archives are then passed into the experience reuse method for the formal experiments.

4.3.1. Parameter Settings

In the BDFDA3 problem, the number of variables is D = 30, and the control parameters of the time variable t are τ_t = 5 and n_t = 10. In the BDDF6 problem, D = 10, and t changes randomly within the range [0.0, 4.0]. For both problems, the Kafka producer sends updates twice per second. Dynamic NSGA-II adopts the same settings for both problems: SBX and polynomial mutation are used, with SBX probability p_c = 1, polynomial mutation probability p_m = 1/D, and a distribution index of 20 for both operators. The population size is set to 100, and the maximum number of function evaluations is set to 200.

4.3.2. Experimental Results

Figure 9 shows the changes in the IGD and HV values of the non-dominated solutions for BDFDA3 at time steps t = 1.0, t = 2.0, and t = 3.0. Compared with a randomly initialized population, the initial population obtained with the experience reuse method has considerably higher quality at the start of the optimization and lies very close to the non-dominated solutions obtained after convergence; the method also lets the algorithm converge significantly faster. The subgraphs in Figure 10 are sampled from the random update process of BDDF6 at time steps t = 1.0, t = 2.0, and t = 3.0. When the problem is updated randomly, the initial population modified from the previous PS is far from the approximate optimal PS of the new problem, and the quality and diversity of the non-dominated solutions obtained after convergence fall short of the best case. In contrast, the algorithm using the experience reuse method still performs well in this situation: its convergence speed, the quality of its final non-dominated solutions, and the diversity of its final population are all better than those of the comparison strategy.

5. Conclusions and Future Work

This paper presented a metaheuristic framework for solving DMBDOPs, aiming to provide a software framework for studying DMBDOPs and a basic platform for subsequent, more in-depth studies. The proposed framework has a lightweight architecture, and the modules are decoupled as much as possible, making each component easy to set up, modify, or extend. Four basic dynamic multi-objective metaheuristic algorithms have been implemented in the framework, all of which have undergone relatively comprehensive unit testing. Moreover, the framework contains various test suites, and most importantly, 13 DMBDOPs are integrated into it, covering problems with large-scale decision variables as well as problems derived from classical DMOPs. In addition, an experience reuse method based on the accumulation of historical approximate optimal PSs is proposed, which helps the restarted algorithm construct its initial population even when the problem changes irregularly.
In the experiments, four algorithms in the framework are compared on two types of DMBDOPs, and the performance of these algorithms in solving DMBDOPs is demonstrated. The experimental results show that the four algorithms differ in performance, but all of them are able to solve the test problems in the experiments effectively. Then, two sets of comparison experiments, using two initial population construction strategies and different problem updating strategies, are set up for the experience reuse method. Experimental results indicate that the experience reuse model proposed in this paper can achieve better performance in either case.
The proposed framework not only facilitates theoretical studies of DMBDOPs but also allows users to customize and implement real-world scenarios swiftly, constructing corresponding metaheuristics to meet specific application needs. For instance, consider the dynamic multi-objective portfolio optimization problem: users can easily construct a multi-objective portfolio optimization problem by utilizing the static problem interface provided by the framework based on a chosen investment model. Subsequently, by invoking the dynamic problem interface, users can introduce changing characteristics and integrate specialized data into the problem, thereby creating a standard DMBDOP. Furthermore, users could address the customized real-world problem using the integrated algorithms within the framework. For enhancing performance on specific problems, users can easily tune the integrated algorithms, extend them according to some good methods, or develop new metaheuristics utilizing the offered interfaces.
Since the framework has not yet been widely used, bugs and vulnerabilities may remain undiscovered, and we plan to keep improving the framework as it sees wider use. In addition, we would like to integrate more dynamic metaheuristics and more test suites, such as DMBDOPs with more than two objectives and real-life DMBDOPs.

Author Contributions

Conceptualization, X.Z. and C.Z.; methodology, X.Z.; software, X.Z. and Y.A.; validation, X.Z. and Y.A.; formal analysis, X.Z.; investigation, X.Z.; resources, C.Z.; data curation, B.Z.; writing—original draft preparation, X.Z.; writing—review and editing, X.Z., C.Z. and B.Z.; visualization, X.Z. and Y.A.; supervision, C.Z.; project administration, B.Z.; funding acquisition, B.Z. and C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the 111 project (B16009).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data involved in this research can be accessed through the link in the main text, and the corresponding author can be contacted with any questions.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Reinsel, D.; Gantz, J.; Rydning, J. Data Age 2025: The Evolution of Data to Life-Critical. Don’t Focus on Big Data; Focus on the Data That’s Big; International Data Corporation (IDC) White Paper; IDC: Needham, MA, USA, 2017. [Google Scholar]
  2. Farina, M.; Deb, K.; Amato, P. Dynamic multiobjective optimization problems: Test cases, approximations, and applications. IEEE Trans. Evol. Comput. 2004, 8, 425–442. [Google Scholar] [CrossRef]
  3. Zhou, Z.H.; Chawla, N.V.; Jin, Y.; Williams, G.J. Big data opportunities and challenges: Discussions from data analytics perspectives [discussion forum]. IEEE Comput. Intell. Mag. 2014, 9, 62–74. [Google Scholar] [CrossRef]
  4. Peres, F.; Castelli, M. Combinatorial optimization problems and metaheuristics: Review, challenges, design, and development. Appl. Sci. 2021, 11, 6449. [Google Scholar] [CrossRef]
  5. Corus, D.; Oliveto, P.S.; Yazdani, D. Fast Immune System-Inspired Hypermutation Operators for Combinatorial Optimization. IEEE Trans. Evol. Comput. 2021, 25, 956–970. [Google Scholar] [CrossRef]
  6. Zhang, F.; Mei, Y.; Nguyen, S.; Zhang, M. Correlation coefficient-based recombinative guidance for genetic programming hyperheuristics in dynamic flexible job shop scheduling. IEEE Trans. Evol. Comput. 2021, 25, 552–566. [Google Scholar] [CrossRef]
  7. Tian, Y.; Zhang, T.; Xiao, J.; Zhang, X.; Jin, Y. A coevolutionary framework for constrained multiobjective optimization problems. IEEE Trans. Evol. Comput. 2020, 25, 102–116. [Google Scholar] [CrossRef]
  8. Yuan, J.; Liu, H.L.; Ong, Y.S.; He, Z. Indicator-based evolutionary algorithm for solving constrained multi-objective optimization problems. IEEE Trans. Evol. Comput. 2021, 26, 379–391. [Google Scholar] [CrossRef]
  9. Qiao, K.; Yu, K.; Qu, B.; Liang, J.; Song, H.; Yue, C. An evolutionary multitasking optimization framework for constrained multiobjective optimization problems. IEEE Trans. Evol. Comput. 2022, 26, 263–277. [Google Scholar] [CrossRef]
  10. Deng, W.; Xu, J.; Gao, X.Z.; Zhao, H. An enhanced MSIQDE algorithm with novel multiple strategies for global optimization problems. IEEE Trans. Syst. Man Cybern. Syst. 2020, 52, 1578–1587. [Google Scholar] [CrossRef]
  11. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  12. Parouha, R.P.; Verma, P. An innovative hybrid algorithm for bound-unconstrained optimization problems and applications. J. Intell. Manuf. 2022, 33, 1273–1336. [Google Scholar] [CrossRef]
  13. Azzouz, R.; Bechikh, S.; Ben Said, L. Dynamic multi-objective optimization using evolutionary algorithms: A survey. In Recent Advances in Evolutionary Multi-Objective Optimization; Springer: Cham, Switzerland, 2017; pp. 31–70. [Google Scholar]
  14. Mavrovouniotis, M.; Li, C.; Yang, S. A survey of swarm intelligence for dynamic optimization: Algorithms and applications. Swarm Evol. Comput. 2017, 33, 1–17. [Google Scholar] [CrossRef]
  15. Deb, K.; Rao N, U.B.; Karthik, S. Dynamic multi-objective optimization and decision-making using modified NSGA-II: A case study on hydro-thermal power scheduling. In Evolutionary Multi-Criterion Optimization, Proceedings of the 4th International Conference, EMO 2007, Matsushima, Japan, 5–8 March 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 803–817. [Google Scholar]
  16. Hatzakis, I.; Wallace, D. Dynamic multi-objective optimization with evolutionary algorithms: A forward-looking approach. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, Seattle, WA, USA, 8–12 July 2006; pp. 1201–1208. [Google Scholar]
  17. Wang, F.; Li, Y.; Liao, F.; Yan, H. An ensemble learning based prediction strategy for dynamic multi-objective optimization. Appl. Soft Comput. 2020, 96, 106592. [Google Scholar] [CrossRef]
  18. Zhenzhong, W.; Jiang, M.; Xing, G.; Liang, F.; Weizhen, H.; Tan, K.C. Evolutionary dynamic multi-objective optimization via regression transfer learning. In Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence (SSCI), Xiamen, China, 6–9 December 2019; pp. 2375–2381. [Google Scholar]
  19. Eaton, J.; Yang, S.; Gongora, M. Ant colony optimization for simulated dynamic multi-objective railway junction rescheduling. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2980–2992. [Google Scholar] [CrossRef]
  20. Zheng, J.; Zhang, Z.; Zou, J.; Yang, S.; Ou, J.; Hu, Y. A dynamic multi-objective particle swarm optimization algorithm based on adversarial decomposition and neighborhood evolution. Swarm Evol. Comput. 2022, 69, 100987. [Google Scholar] [CrossRef]
  21. Takano, R.; Yamazaki, D.; Ichikawa, Y.; Hattori, K.; Takadama, K. Multiagent-based ABC algorithm for autonomous rescue agent cooperation. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 585–590. [Google Scholar]
  22. Orouskhani, M.; Teshnehlab, M.; Nekoui, M.A. Evolutionary dynamic multi-objective optimization algorithm based on Borda count method. Int. J. Mach. Learn. Cybern. 2019, 10, 1931–1959. [Google Scholar] [CrossRef]
  23. Emrouznejad, A. Big Data Optimization: Recent Developments and Challenges; Springer: Cham, Switzerland, 2016; Volume 18. [Google Scholar]
  24. Wang, H.; Wang, W.; Cui, L.; Sun, H.; Zhao, J.; Wang, Y.; Xue, Y. A hybrid multi-objective firefly algorithm for big data optimization. Appl. Soft Comput. 2018, 69, 806–815. [Google Scholar] [CrossRef]
  25. Yi, J.H.; Deb, S.; Dong, J.; Alavi, A.H.; Wang, G.G. An improved NSGA-III algorithm with adaptive mutation operator for Big Data optimization problems. Future Gener. Comput. Syst. 2018, 88, 571–585. [Google Scholar] [CrossRef]
  26. Cho, W.K.T.; Liu, Y.Y. Parallel hybrid metaheuristics with distributed intensification and diversification for large-scale optimization in big data statistical analysis. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 3312–3320. [Google Scholar]
  27. Mishra, S.K.; Puthal, D.; Rodrigues, J.J.; Sahoo, B.; Dutkiewicz, E. Sustainable service allocation using a metaheuristic technique in a fog server for industrial applications. IEEE Trans. Ind. Inform. 2018, 14, 4497–4506. [Google Scholar] [CrossRef]
  28. Jian, J.R.; Chen, Z.G.; Zhan, Z.H.; Zhang, J. Region encoding helps evolutionary computation evolve faster: A new solution encoding scheme in particle swarm for large-scale optimization. IEEE Trans. Evol. Comput. 2021, 25, 779–793. [Google Scholar] [CrossRef]
  29. Xu, M.; Chen, Y.; Wang, D.; Chen, J. An Enhanced Adaptive Neighbourhood Adjustment Strategy on MOEA/D for EEG Signal Decomposition-Based Big Data Optimization. In Frontier Computing; Springer: Singapore, 2022; pp. 52–62. [Google Scholar]
  30. Barba-González, C.; García-Nieto, J.; Nebro, A.J.; Cordero, J.A.; Durillo, J.J.; Navas-Delgado, I.; Aldana-Montes, J.F. jMetalSP: A framework for dynamic multi-objective big data optimization. Appl. Soft Comput. 2018, 69, 737–748. [Google Scholar] [CrossRef]
  31. Barba-González, C.; Nebro, A.J.; Benítez-Hidalgo, A.; García-Nieto, J.; Aldana-Montes, J.F. On the design of a framework integrating an optimization engine with streaming technologies. Future Gener. Comput. Syst. 2020, 107, 538–550. [Google Scholar] [CrossRef]
  32. Durillo, J.J.; Nebro, A.J. jMetal: A Java framework for multi-objective optimization. Adv. Eng. Softw. 2011, 42, 760–771. [Google Scholar] [CrossRef]
  33. Benitez-Hidalgo, A.; Nebro, A.J.; Garcia-Nieto, J.; Oregi, I.; Del Ser, J. jMetalPy: A Python framework for multi-objective optimization with metaheuristics. Swarm Evol. Comput. 2019, 51, 100598. [Google Scholar] [CrossRef]
  34. Zambrano-Vega, C.; Nebro, A.J.; García-Nieto, J.; Aldana-Montes, J.F. A multi-objective optimization framework for multiple sequence alignment with metaheuristics. In Bioinformatics and Biomedical Engineering, Proceedings of the 5th International Work-Conference, IWBBIO 2017, Granada, Spain, 26–28 April 2017; Springer: Cham, Switzerland, 2017; pp. 245–256. [Google Scholar]
  35. Scott, E.O.; Luke, S. ECJ at 20: Toward a general metaheuristics toolkit. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Prague, Czech Republic, 13–17 July 2019; pp. 1391–1398. [Google Scholar]
  36. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput. Intell. Mag. 2017, 12, 73–87. [Google Scholar] [CrossRef]
  37. Blank, J.; Deb, K. Pymoo: Multi-objective optimization in python. IEEE Access 2020, 8, 89497–89509. [Google Scholar] [CrossRef]
  38. Qaddoura, R.; Faris, H.; Aljarah, I.; Castillo, P.A. Evocluster: An open-source nature-inspired optimization clustering framework in python. In Applications of Evolutionary Computation, Proceedings of the 23rd European Conference, EvoApplications 2020 (Part of EvoStar), Seville, Spain, 15–17 April 2020; Springer: Cham, Switzerland, 2020; pp. 20–36. [Google Scholar]
  39. Hadka, D. MOEA Framework—A Free and Open Source Java Framework for Multiobjective Optimization. Version 2.11. 2015. Available online: http://www.moeaframework.org (accessed on 30 June 2015).
  40. Lacerda, A.S.; Batista, L.S. KDT-MOEA: A multiobjective optimization framework based on KD trees. Inf. Sci. 2019, 503, 200–218. [Google Scholar] [CrossRef]
  41. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2013, 18, 577–601. [Google Scholar] [CrossRef]
  42. Jain, H.; Deb, K. An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: Handling constraints and extending to an adaptive approach. IEEE Trans. Evol. Comput. 2013, 18, 602–622. [Google Scholar] [CrossRef]
  43. Zhang, Q.; Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  44. Li, H.; Zhang, Q. Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Trans. Evol. Comput. 2008, 13, 284–302. [Google Scholar] [CrossRef]
  45. Akbari, R.; Hedayatzadeh, R.; Ziarati, K.; Hassanizadeh, B. A multi-objective artificial bee colony algorithm. Swarm Evol. Comput. 2012, 2, 39–52. [Google Scholar] [CrossRef]
  46. Aslan, S.; Karaboga, D. A genetic Artificial Bee Colony algorithm for signal reconstruction based big data optimization. Appl. Soft Comput. 2020, 88, 106053. [Google Scholar] [CrossRef]
  47. Deb, K.; Sindhya, K.; Okabe, T. Self-adaptive simulated binary crossover for real-parameter optimization. In Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, London, UK, 7–11 July 2007; pp. 1187–1194. [Google Scholar]
  48. Price, K.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  49. Deb, K.; Deb, D. Analysing mutation schemes for real-parameter genetic algorithms. Int. J. Artif. Intell. Soft Comput. 2014, 4, 1–28. [Google Scholar] [CrossRef]
  50. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  51. Jiang, S.; Yang, S.; Yao, X.; Tan, K.C.; Kaiser, M.; Krasnogor, N. Benchmark Problems for CEC2018 Competition on Dynamic Multiobjective Optimisation. In Proceedings of the CEC2018 Competition on Dynamic Multiobjective Optimisation, Rio de Janeiro, Brazil, 8–13 July 2018. [Google Scholar]
  52. Van Veldhuizen, D.A. Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations; Air Force Institute of Technology: Dayton, OH, USA, 1999. [Google Scholar]
  53. Ishibuchi, H.; Masuda, H.; Tanigaki, Y.; Nojima, Y. Modified distance calculation in generational distance and inverted generational distance. In Evolutionary Multi-Criterion Optimization, Proceedings of the 8th International Conference, EMO 2015, Guimarães, Portugal, 29 March–1 April 2015; Springer: Cham, Switzerland, 2015; pp. 110–125. [Google Scholar]
  54. Zitzler, E.; Thiele, L. Multiobjective optimization using evolutionary algorithms—A comparative case study. In Parallel Problem Solving from Nature, Proceedings of the 5th International Conference, Amsterdam, The Netherlands, 27–30 September 1998; Springer: Berlin/Heidelberg, Germany, 1998; pp. 292–301. [Google Scholar]
  55. Zhou, A.; Jin, Y.; Zhang, Q. A population prediction strategy for evolutionary dynamic multiobjective optimization. IEEE Trans. Cybern. 2013, 44, 40–53. [Google Scholar] [CrossRef]
  56. Jiang, S.; Yang, S. A steady-state and generational evolutionary algorithm for dynamic multiobjective optimization. IEEE Trans. Evol. Comput. 2016, 21, 65–82. [Google Scholar] [CrossRef]
  57. Tian, Y.; Xiang, X.; Zhang, X.; Cheng, R.; Jin, Y. Sampling reference points on the Pareto fronts of benchmark multi-objective optimization problems. In Proceedings of the 2018 IEEE World Congress on Computational Intelligence (WCCI 2018), Rio de Janeiro, Brazil, 8–13 July 2018. [Google Scholar]
Figure 1. Software architecture of the proposed metaheuristic framework.
Figure 2. Source code for parallelized function evaluation.
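The figure's Spark-based source code is not reproduced in this extraction. As a rough illustration of the idea, parallelized function evaluation distributes the population across workers so each candidate solution is evaluated concurrently. The sketch below is a stand-in using Python's `concurrent.futures` rather than the paper's actual Spark code; the `sphere` objective and the population layout are assumptions for illustration only.

```python
from concurrent.futures import ThreadPoolExecutor

def sphere(solution):
    # Hypothetical objective for illustration: sum of squares.
    return sum(x * x for x in solution)

def evaluate_population(population, objective, workers=4):
    # Evaluate every candidate solution concurrently, mirroring the
    # Spark pattern parallelize(population).map(objective).collect();
    # pool.map preserves the population order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(objective, population))

pop = [[0.0, 0.0], [1.0, 2.0], [3.0, 4.0]]
print(evaluate_population(pop, sphere))  # [0.0, 5.0, 25.0]
```

In the framework itself, the same map-over-the-population step is what Spark distributes across the Hadoop cluster nodes.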
Figure 3. Example of the real-time visualization tool: part of the nearly optimal PFs of BDDF1 obtained by Dynamic ABC. (a) t = 0.0. (b) t = 0.2. (c) t = 0.7. (d) t = 0.9.
Figure 4. Class diagram related to the basic running of the proposed metaheuristic framework.
Figure 5. Sequence diagram of running an algorithm in the framework.
Figure 6. Nearly optimal PFs with the median HV value among 30 runs obtained by Dynamic NSGA-II, Dynamic NSGA-III, Dynamic MOEA/D-DE, and Dynamic ABC on DBigOpt2015 with datasets D4/D4N at time steps 0.0, 0.2, 0.5, 1.0. (a) D4, t = 0.0. (b) D4, t = 0.2. (c) D4, t = 0.5. (d) D4, t = 1.0. (e) D4N, t = 0.0. (f) D4N, t = 0.2. (g) D4N, t = 0.5. (h) D4N, t = 1.0.
Figure 7. Convergence curves when solving DBigOpt2015 with datasets D4/D4N at time steps 0.0, 0.2, 0.5, 1.0. (a) D4, t = 0.0. (b) D4, t = 0.2. (c) D4, t = 0.5. (d) D4, t = 1.0. (e) D4N, t = 0.0. (f) D4N, t = 0.2. (g) D4N, t = 0.5. (h) D4N, t = 1.0.
Figure 8. Nearly optimal PFs obtained by Dynamic ABC on the BDDF problems. (a) BDDF1. (b) BDDF2. (c) BDDF3. (d) BDDF4. (e) BDDF5. (f) BDDF6. (g) BDDF7. (h) BDDF8. (i) BDDF9.
Figure 9. Convergence curves of Dynamic NSGA-II with the experience reuse method or with restart strategy 1 on BDFDA3 at time steps 1.0, 2.0, 3.0. (a) IGD, t = 1.0. (b) IGD, t = 2.0. (c) IGD, t = 3.0. (d) HV, t = 1.0. (e) HV, t = 2.0. (f) HV, t = 3.0.
Figure 10. Convergence curves of Dynamic NSGA-II with the experience reuse method or with restart strategy 2 on BDDF6 at time steps 1.0, 2.0, 3.0. (a) IGD, t = 1.0. (b) IGD, t = 2.0. (c) IGD, t = 3.0. (d) HV, t = 1.0. (e) HV, t = 2.0. (f) HV, t = 3.0.
Table 1. Operators in the proposed framework.

Operator    Implementation
Crossover   Two-point crossover, simulated binary crossover [47], differential evolution crossover [48]
Mutation    Polynomial mutation [49], random mutation
Selection   Binary tournament selection [50], random selection, crowding distance selection [50]
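As a small illustration of one listed operator, binary tournament selection [50] draws two distinct individuals uniformly at random and keeps the fitter one. This sketch assumes a minimisation setting and a plain list-of-floats fitness representation rather than the framework's actual solution classes.

```python
import random

def binary_tournament(population, fitness, rng=random):
    # Binary tournament selection: draw two distinct individuals
    # uniformly at random and keep the fitter one (here, the one
    # with the smaller fitness value, assuming minimisation).
    i, j = rng.sample(range(len(population)), 2)
    return population[i] if fitness[i] <= fitness[j] else population[j]

# With only two individuals, both always enter the tournament, so the
# better one is chosen deterministically:
print(binary_tournament(["x", "y"], [1.0, 5.0]))  # x
```

Selection pressure stays mild because a weak individual can still win whenever both sampled competitors happen to be weak, which helps preserve diversity.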
Table 2. Problems integrated into the proposed framework.

Problem           Description

Multi-Objective Optimization Problems
ZDT1–ZDT4, ZDT6   Unconstrained MOPs
UF1–UF7           Unconstrained MOPs for the CEC2009 competition
BigOpt2015        MOP of the Big Data 2015 Competition

Dynamic Multi-Objective Optimization Problems
FDA1–FDA3         DMOPs with continuous search space
DF1–DF9           DMOPs for CEC2018

Dynamic Multi-Objective Big Data Optimization Problems
BDFDA1–BDFDA3     DMBDOPs derived from FDA1–FDA3
BDDF1–BDDF9       DMBDOPs derived from DF1–DF9
DBigOpt2015       Dynamic version of BigOpt2015
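For orientation, FDA1 [2], from which BDFDA1 is derived, is defined by f1 = x1, g = 1 + Σ(xi − G(t))², h = 1 − √(f1/g), f2 = g·h, with G(t) = sin(0.5πt). The sketch below illustrates the evaluation; the mapping from generation counters to the continuous time t, and the variable bounds, are simplified away relative to the original benchmark definition.

```python
import math

def fda1(x, t):
    # FDA1 (Farina, Deb, Amato [2]): the optimal tail variables track
    # G(t), so the Pareto set moves with time while the Pareto front
    # f2 = 1 - sqrt(f1) itself stays fixed.
    G = math.sin(0.5 * math.pi * t)
    f1 = x[0]
    g = 1.0 + sum((xi - G) ** 2 for xi in x[1:])
    h = 1.0 - math.sqrt(f1 / g)
    return f1, g * h

# At t = 0, G(0) = 0; a solution whose tail variables all equal 0 lies
# on the Pareto front, where f2 = 1 - sqrt(f1):
print(fda1([0.25, 0.0, 0.0], 0.0))  # (0.25, 0.5)
```

The big-data variants (BDFDA, BDDF) keep this structure but scale the number of decision variables, which is what stresses the parallel evaluation layer.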
Table 3. The MIGD values and MHV values of Dynamic NSGA-II, Dynamic NSGA-III, Dynamic MOEA/D-DE, and Dynamic ABC for 30 independent runs on BDDF problems. The best result in each row is marked with an asterisk (*).

Problem  Indicator  A1               A2               A3               A4
BDDF1    MIGD       1.8500 × 10⁻⁴    1.4809 × 10⁻³    6.4360 × 10⁻⁵ *  1.0036 × 10⁻³
         MHV        4.3401 × 10⁻¹    4.1502 × 10⁻¹    4.4289 × 10⁻¹    4.4732 × 10⁻¹ *
BDDF2    MIGD       2.2304 × 10⁻⁴    1.7448 × 10⁻³    1.0527 × 10⁻⁴ *  1.3398 × 10⁻³
         MHV        6.5972 × 10⁻¹    6.4354 × 10⁻¹    6.5491 × 10⁻¹    6.6033 × 10⁻¹ *
BDDF3    MIGD       1.8964 × 10⁻⁴ *  1.2837 × 10⁻²    1.4779 × 10⁻³    1.3903 × 10⁻³
         MHV        4.0194 × 10⁻¹    2.4583 × 10⁻¹    3.9074 × 10⁻¹    4.2186 × 10⁻¹ *
BDDF4    MIGD       1.3389 × 10⁻²    1.4031 × 10⁻²    3.6891 × 10⁻³ *  1.3702 × 10⁻²
         MHV        8.1468 × 10⁻¹    7.1039 × 10⁻¹    8.2782 × 10⁻¹ *  8.2764 × 10⁻¹
BDDF5    MIGD       3.7284 × 10⁻³    3.2760 × 10⁻³    4.7790 × 10⁻⁴ *  2.7026 × 10⁻³
         MHV        7.0020 × 10⁻¹    6.3933 × 10⁻¹    7.6172 × 10⁻¹    7.9461 × 10⁻¹ *
BDDF6    MIGD       4.2031 × 10⁻²    4.1738 × 10⁻²    4.0031 × 10⁻²    4.1093 × 10⁻³ *
         MHV        5.1172 × 10⁻¹    4.3911 × 10⁻¹    5.4137 × 10⁻¹    5.5103 × 10⁻¹ *
BDDF7    MIGD       4.6651 × 10⁻⁴ *  6.9730 × 10⁻³    5.7113 × 10⁻³    4.2090 × 10⁻³
         MHV        7.1815 × 10⁻¹ *  6.0376 × 10⁻¹    6.5238 × 10⁻¹    6.7601 × 10⁻¹
BDDF8    MIGD       2.7726 × 10⁻³    2.3101 × 10⁻³    4.6662 × 10⁻⁴ *  6.1093 × 10⁻⁴
         MHV        6.9368 × 10⁻¹    6.1332 × 10⁻¹    7.2841 × 10⁻¹    7.4483 × 10⁻¹ *
BDDF9    MIGD       3.1520 × 10⁻³    2.9101 × 10⁻⁴ *  1.1152 × 10⁻³    2.7406 × 10⁻³
         MHV        7.1287 × 10⁻¹    6.9002 × 10⁻¹    7.8224 × 10⁻¹    8.0122 × 10⁻¹ *
A1: Dynamic NSGA-II. A2: Dynamic NSGA-III. A3: Dynamic MOEA/D-DE. A4: Dynamic ABC.
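The MIGD values above are the mean IGD over the sampled time steps of a dynamic run; IGD averages, over a set of reference points on the true PF, the distance from each reference point to its nearest obtained solution (lower is better). A minimal sketch of the standard formulation follows; the framework may use the modified distance calculation of [53], so treat this as illustrative rather than the exact indicator code.

```python
import math

def igd(reference_front, obtained_front):
    # Inverted generational distance: for each point on the reference
    # (true) Pareto front, take the Euclidean distance to the closest
    # obtained solution, then average over all reference points.
    return sum(
        min(math.dist(r, s) for s in obtained_front) for r in reference_front
    ) / len(reference_front)

def migd(reference_fronts, obtained_fronts):
    # MIGD: mean IGD over the sampled time steps of one dynamic run.
    pairs = list(zip(reference_fronts, obtained_fronts))
    return sum(igd(r, o) for r, o in pairs) / len(pairs)

print(igd([(0.0, 0.0)], [(3.0, 4.0)]))  # 5.0
```

Because IGD is computed against reference points that cover the whole true PF, it penalises both poor convergence and poor spread, which is why it is paired with hypervolume (MHV) in Table 3.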
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Zheng, X.; Zhang, C.; An, Y.; Zhang, B. A Metaheuristic Framework with Experience Reuse for Dynamic Multi-Objective Big Data Optimization. Appl. Sci. 2024, 14, 4878. https://doi.org/10.3390/app14114878
