Article

An Adaptive Surrogate-Assisted Particle Swarm Optimization Algorithm Combining Effectively Global and Local Surrogate Models and Its Application

School of Computer Science and Engineering, Xi’an Technological University, Xi’an 710021, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(17), 7853; https://doi.org/10.3390/app14177853
Submission received: 21 July 2024 / Revised: 20 August 2024 / Accepted: 22 August 2024 / Published: 4 September 2024
(This article belongs to the Section Robotics and Automation)

Abstract

Numerous surrogate-assisted evolutionary algorithms have been proposed for expensive optimization problems. However, each surrogate model has its own characteristics and applicable situations, which poses a serious challenge for model selection. To alleviate this challenge, this paper proposes an adaptive surrogate-assisted particle swarm optimization (ASAPSO) algorithm that effectively combines global and local surrogate models and utilizes the uncertainty level of the current population state to evaluate the approximation ability of the surrogate model in its predictions. In ASAPSO, the transition between the local and global surrogate models is controlled by an adaptive Gaussian distribution parameter that gauges the advisability of switching, improving the search process with better local exploration and greater diversity among uncertain solutions. Four expensive optimization benchmark functions and an airfoil aerodynamic real-world engineering optimization problem are utilized to validate the effectiveness and performance of ASAPSO. Experimental results demonstrate that ASAPSO is superior in terms of solution accuracy compared with state-of-the-art algorithms.

1. Introduction

With the expansion of computational capabilities and the surge in data accumulation, optimization challenges in various scientific and industrial domains have become increasingly complex. Evolutionary algorithms (EAs) have emerged as a powerful optimization technique in this area, capable of handling the aforementioned complexities [1]. However, EAs are rooted in population-based iteration, which requires an extensive iterative phase to determine the fitness values of a large population. In many engineering fields, evaluating these fitness functions for complex optimization tasks incurs a significant computational overhead; such a task is classified as an expensive optimization problem (EOP) [2,3,4]. In aerodynamic design optimization, for example, verifying that a design's aerodynamic metrics in computational fluid dynamics simulations are consistent with its real-world aerodynamic performance can be time-consuming, taking anywhere from minutes to hours for a single evaluation.
Due to the challenges mentioned, the academic community has shifted towards the surrogate-assisted evolutionary algorithm (SAEA) [5], which has gained significant scholarly interest [6,7,8]. The SAEA has proven effective for EOPs [9,10], as it can reduce computational overheads. Based on the available dataset, either singular [11] or composite models [12] are used to assess the feasibility of potential solutions. Contemporary methodologies, such as polynomial regression [13], support vector machines [14], radial basis function networks [14,15], artificial neural networks [16], and Gaussian processes (also known as the Kriging model) [17,18], are commonly used as surrogate models.
The SAEA commonly uses global surrogate models to improve search capability [19,20,21]. Liu et al. employed Gaussian process surrogate models to strengthen the SAEA's ability to build a robust global surrogate [18]. Sample mapping techniques have also been used to reduce the dimensionality of the decision variables [22]. To redress the algorithm's declining exploratory vigor in its later phases, Wang et al. proposed a global model management scheme [23], in which the optimal fitness values associated with ambiguous solutions are identified through an ensemble surrogate model.
However, in recent years there has been a growing focus on local surrogate models, which explore the vicinity of the global optimal solution. Exploiting the latent potential of historical datasets, Sun et al. proposed a semi-supervised learning strategy that harnesses both labeled and unlabeled data to improve algorithmic precision [24]. Yu et al. provided a social learning particle swarm optimization method for fine-grained, localized spatial exploration [25]. Additionally, cooperation between global and local surrogate models has come into prominence as a new optimization method. Yu et al. also used archived samples to train both global and local surrogate models, carefully balancing the two to minimize the number of evaluations of the genuine objective function.
Despite the widespread use of global and local surrogate models, there is a lack of effective control mechanisms during global and local surrogate model transitions. Additionally, due to constraints in single evaluation metrics, it is difficult to precisely describe the state of surrogate models, which can impact their effectiveness during transformation. Both of these reduce the effectiveness of the local surrogate model and its ability to explore locally.
Based on the above considerations, this paper firstly presents a new methodology to address situations with high levels of uncertainty. To navigate these complexities, we record points with significant uncertainty and use them to formulate a normal distribution function, which enables the self-adaptive modulation of parameters. This adaptation improves the algorithm’s ability to explore new areas and avoid becoming stuck in local optima. An archive of the region with uncertain solutions is created, guided by a normal distribution function. Archival data serve as the basis for recalibrating adaptive parameters to determine whether to transition to the local surrogate model. The composite model is used for evaluation within this surrogate space to guide the particle swarm optimization (PSO) algorithm in its optimization efforts. Its effectiveness is continuously evaluated against the adaptive parameters within the local surrogate model to determine if exiting this mode is necessary. This cyclical optimization process can balance exploration and exploitation. The main contributions of this paper can be concluded as follows.
(1) An effective evaluation criterion is proposed to estimate the confidence of a surrogate model in its predictions. The criterion combines the effectiveness of the surrogate model with the uncertainty level of the current state and can effectively direct the optimization behavior according to the exact state of the surrogate model.
(2) An adaptive Gaussian distribution parameter is used to select between the surrogate models, serving as a gauge of the advisability of switching to the local surrogate model. Local exploration is enhanced, which promotes the ability to search the entire feasible solution space more effectively.
(3) The combination of the confidence criterion and the parameter-controlled method forms a comprehensive strategy, which improves the overall performance and reliability of our algorithm in solving complex expensive optimization problems.
The remainder of the paper is organized as follows. Section 2 introduces the surrogate model techniques used in our algorithm. Section 3 provides a detailed description of the proposed ASAPSO algorithm. In Section 4, we present the experimental results of ASAPSO on both benchmark functions and an intricate engineering problem, comparing them with other state-of-the-art algorithms. Finally, Section 5 provides conclusions and discusses future work.

2. Related Techniques

2.1. Surrogate Models

In the last few decades, many construction methods for surrogate models have been introduced. A surrogate model involves three primary facets: data preprocessing, the initialization of the surrogate model, and the process of its ongoing updates and oversight. To refine the accuracy of fitness value forecasts, a variety of model formulation techniques are available, including the polynomial regression model, the Gaussian process model, the radial basis function model, and several similar approaches. More information on these methodologies is provided in the subsequent discussions.

2.2. Strategies for Surrogate Model

2.2.1. Polynomial Regression Model

Polynomial regression (PR) [26] is prominently recognized as a favored surrogate model in the domain of multidisciplinary design optimization. Its core mathematical articulation is presented as follows:
$$f(x) = \beta_0 + \sum_{i=1}^{m} \beta_i x_i + \sum_{i=1}^{m} \sum_{j \ge i}^{m} \beta_{ij} x_i x_j \quad (i, j = 1, 2, \ldots, m)$$
where the unknown parameters $\beta_0$, $\beta_i$, and $\beta_{ij}$ are methodically organized into a column vector, $x_i$ represents the i-th design variable, and $f(x)$ symbolizes the predicted response. The encompassing equation can be structured in matrix form as follows:
$$\hat{f} = \mathbf{x} \hat{\boldsymbol{\beta}}$$
To tackle this intricate scenario, the least squares approach is employed, culminating in the subsequent equation.
$$\hat{\boldsymbol{\beta}} = \left(\mathbf{x}^{\mathrm{T}} \mathbf{x}\right)^{-1} \mathbf{x}^{\mathrm{T}} \mathbf{f}$$
The polynomial response method stands out for its robust continuity and consistency, making it adept at diminishing the effects of numerical noise. Moreover, it enhances the efficiency of the optimization search trajectory. Through the analysis of the coefficients of individual components in the polynomial, one can gauge the impact of each parameter on the cumulative system response. Notably, while higher polynomial orders can provide more nuanced fits, they may lead to overfitting. This occurs when models become excessively intricate in the pursuit of accommodating data, potentially leading to misrepresentative assumptions. For example, in the context of a Taylor series expansion, while integrating more terms can refine the model's approximation, it simultaneously runs the risk of overfitting to anomalous data, thereby jeopardizing the fidelity of genuine response values.
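As a concrete illustration, the following is a minimal Python sketch of fitting the quadratic PR surrogate above by least squares; the helper names and the toy test function are our own, not taken from the authors' implementation.

import numpy as np
from itertools import combinations_with_replacement

def quadratic_design_matrix(X):
    # Columns: [1, x_i, x_i * x_j (i <= j)] for each sample row of X
    n, m = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(m)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(m), 2)]
    return np.column_stack(cols)

def fit_pr(X, f):
    # Least squares solution of beta_hat = (x^T x)^(-1) x^T f
    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, f, rcond=None)
    return beta

def predict_pr(beta, X):
    return quadratic_design_matrix(X) @ beta

# Usage: fit a 2-D quadratic test function from 30 samples and predict.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 2))
f = X[:, 0]**2 + 0.5 * X[:, 0] * X[:, 1] + 3.0
beta = fit_pr(X, f)
print(predict_pr(beta, np.array([[1.0, 1.0]])))  # close to 4.5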

2.2.2. Gaussian Process

The Gaussian process (GP) [27,28], alternatively referred to as the Kriging model, is represented by the following expression:
$$Y(x) = f(x) + z(x)$$
where $Y(x)$ is the function under consideration, $f(x)$ is a set of selected basis functions, and $z(x)$ denotes a stochastic process with a mean of 0, variance $\sigma^2$, and non-zero covariance. The covariance matrix is $\operatorname{cov}(Y, Y) = \sigma^2 \varphi$, which is used to measure the correlation between two or more sets of random variables:
$$\operatorname{cov}(X, Y) = E\left[(X - \mu_X)(Y - \mu_Y)\right] = E[XY] - \mu_X \mu_Y$$
Covariance fundamentally quantifies the interrelation between multiple sets of random variables. GP models are esteemed for their proficiency in predicting elusive functions, coupled with assessments of the associated prediction uncertainties. Such attributes have catalyzed their widespread integration into stochastic evolutionary approaches. Nonetheless, the GP requires elements like the covariance matrix, which, in high-dimensional contexts, presents complexities and might culminate in less-than-optimal outcomes.
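The following is a minimal sketch of GP prediction with a squared-exponential covariance, showing how the model returns both a mean and a predictive variance; the fixed hyperparameters (length scale, noise) are illustrative assumptions rather than tuned Kriging settings.

import numpy as np

def sq_exp_kernel(A, B, length=1.0, sigma2=1.0):
    # Squared-exponential covariance between two sample sets
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return sigma2 * np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, Xq, length=1.0, noise=1e-8):
    K = sq_exp_kernel(X, X, length) + noise * np.eye(len(X))
    Kq = sq_exp_kernel(X, Xq, length)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Kq.T @ alpha
    v = np.linalg.solve(L, Kq)
    var = np.diag(sq_exp_kernel(Xq, Xq, length) - v.T @ v)  # prediction uncertainty
    return mean, var

# Usage: the variance grows for query points far from the samples.
X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X).ravel()
print(gp_predict(X, y, np.array([[1.5], [5.0]])))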

2.2.3. Radial Basis Function

The radial basis function (RBF) model [14,29,30,31] is classified as a local approximation neural network. With modifications to a limited set of weights, it can proficiently approximate any non-linear function. This model adeptly addresses patterns that are intricate to discern in systems, resting on the presumption that data integrity persists notwithstanding noise perturbations. As a multivariate discrete data interpolation technique, the RBF model interpolates across intricate design spaces via the weighted summation of elementary functions, designated as kernel functions. Intrinsically, these kernel functions are radially symmetric functions centered on the sample points. Given a set of n sample points, the RBF model is formulated as follows:
$$\hat{f}(x) = \hat{\mathbf{w}}^{\mathrm{T}} \boldsymbol{\varphi} = \sum_{i=1}^{n} w_i \, \varphi\left(\lVert x - c_i \rVert\right)$$
The RBF distinguishes itself as a non-parametric modeling approach, seamlessly fitting an extensive array of functions, from discrete to pronouncedly non-linear variants. A pivotal merit lies in its conservative computational expenditure during model construction, paired with a stalwart convergence propensity. Such attributes have catalyzed the widespread incorporation of RBF models across diverse applications.
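A minimal interpolation sketch of the RBF model above, using a Gaussian kernel; the weights are obtained by enforcing exact interpolation at the n sample points (Phi w = f), an assumption consistent with, but not copied from, the authors' setup.

import numpy as np

def rbf_fit(X, f, eps=1.0):
    # Solve Phi w = f, where Phi[i, j] = phi(||x_i - c_j||)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * r)**2)      # Gaussian kernel
    return np.linalg.solve(Phi, f)

def rbf_predict(X, w, Xq, eps=1.0):
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-(eps * r)**2) @ w

# Usage: interpolate the 2-D sphere function from 20 samples.
X = np.random.default_rng(1).uniform(-1, 1, (20, 2))
w = rbf_fit(X, (X**2).sum(axis=1))
print(rbf_predict(X, w, np.array([[0.2, -0.3]])))  # close to 0.13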

3. Adaptive Surrogate-Assisted Particle Swarm Optimization (ASAPSO)

In real-world applications, challenges often arise from small datasets, which demand swift problem-solving strategies due to the constraints of limited data. To achieve this, a harmonious blend of precision and expeditious optimization is necessary. The recent methodology of cherry-picking only the optimal fitness value for integration into the local surrogate model raises concerns.
The benefits of integrating surrogate models with PSO in optimization problems are widely recognized. Firstly, in terms of simplicity and ease of implementation, PSO is known for its straightforward implementation compared to other evolutionary algorithms such as genetic algorithms (GAs). It requires fewer parameters to tune, which makes it easier to adapt and apply to different problems. Secondly, in terms of convergence speed, PSO tends to converge faster to optimal or near-optimal solutions, especially in high-dimensional search spaces. This can be particularly advantageous in applications where computational efficiency is crucial. Finally, in terms of flexibility and adaptability, PSO is highly flexible and can be easily adapted to different optimization problems by adjusting the velocity and position update rules. This flexibility allows for customization to specific problem domains without requiring significant modifications to the core algorithm.
Specifically, there is a risk of neglecting areas of uncertainty where multiple solutions exist but the optimal solution remains elusive. This vulnerability may inadvertently direct the algorithm towards a local optimum. The ASAPSO algorithm proposed in this paper introduces two key innovations. Firstly, an ensemble surrogate model is implemented within the global surrogate model, supporting a dual evaluation criterion that emphasizes uncertainty. Secondly, during the transition between the global and local models, adaptive parameters that follow a normal distribution are introduced.
The ensemble surrogate model is formed by melding the RBF, PR, and KRG surrogates under the optimal weighted surrogate (OWS) management paradigm. The evaluation of uncertain solutions hinges on computing the root mean square error across the triad of surrogate models, as illustrated in Figure 1.
As illustrated in Figure 1, the red region represents an area of uncertainty, hinting at a more extensive unexplored zone nearby. Within this zone lies the domain tied to the optimal fitness value. The red arrow points to an uncertain solution guiding the algorithm in pursuit of the best result. This method encourages the algorithm to traverse both the landscape of uncertain solutions and the area containing the peak fitness value, prompting it to delve into and refine these areas after each assessment. Such a repetitive approach amplifies the algorithm's local probing prowess.
The ensemble surrogate model is the aggregate result obtained through a weighted summation of the outputs generated by all individual members. This relationship is formulated as follows:
$$f_{en} = w_1 \hat{f}_1 + w_2 \hat{f}_2 + w_3 \hat{f}_3$$
where $\hat{f}_i$ stands for the output of the i-th member, 1 ≤ i ≤ 3, and $w_i$ represents the weight assigned to the i-th output, defined as:
$$w_i = 0.5 - \frac{e_i}{2\left(e_1 + e_2 + e_3\right)}$$
where $e_i$ signifies the root mean square error of the i-th model; this weighted aggregation method follows [32].
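Under our reading of the two equations above, the ensemble weighting can be sketched as follows; the RMSE values e_i are illustrative and would in practice come from cross-validating each member surrogate.

import numpy as np

def ensemble_weights(e):
    # w_i = 0.5 - e_i / (2 * (e_1 + e_2 + e_3)); the three weights sum to 1,
    # and a smaller RMSE yields a larger weight
    e = np.asarray(e, dtype=float)
    return 0.5 - e / (2.0 * e.sum())

def ensemble_predict(member_preds, e):
    return ensemble_weights(e) @ np.asarray(member_preds, dtype=float)

e = [0.8, 0.3, 0.5]           # illustrative RMSEs of RBF, PR, KRG
preds = [1.10, 0.95, 1.02]    # member predictions at one point
print(ensemble_weights(e), ensemble_predict(preds, e))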
In transitioning between the global and local surrogate models, the specific procedure is delineated as follows. A new k-value is drawn during each assessment from a normal distribution whose mean is derived from ustep and whose standard deviation is fixed at 0.1. This is realized through the given equation:
$$k \sim \mathcal{N}\left(ustep,\ 0.1^2\right)$$
We draw from the first half of the normal distribution curve, where the function is increasing. We postulate that, amid a plethora of uncertain solutions, the probability of pinpointing the optimal fitness value intensifies within this segment of the region. As a result, the ensuing equation is employed to chronicle the uncertainty information over iterations and to modulate the amplitude of the k-value based on these archived data:
$$un = \operatorname{size}(u) / 1000$$
where un symbolizes the normalized count of uncertain solutions archived by the PSO algorithm in each iteration. The tally of uncertain solutions is monitored per iteration via Goodstep. The initial step size is set at 0.001, in accordance with the recommendations set forth in reference [33], and is subsequently recalibrated after the number of uncertain solutions within the global surrogate model has been determined:
$$ustep = (1 - c) \cdot ustep + c \cdot \operatorname{mean}\left(A_{Sstep}\right)$$
where ustep signifies the parameter archive storing the uncertainty values and their mean for each iteration, and c is a value between 0 and 1. In a typical setting, c is initialized at 0.5 to begin with balanced probabilities.
The primary objective of this adaptive adjustment is to identify whether a transition to either a local or a global surrogate model is warranted. Such adaptability broadens the exploration spectrum across the entire population, thereby reducing the potential of the algorithm becoming ensnared in a local optimum. Additionally, this methodology mitigates the suboptimal convergence observed when decision-making criteria depend solely on a static value. In the final analysis, ustep is assigned its expected value through the application of a normal distribution function with a fixed standard deviation of 0.1; this deliberate decision ensures the procurement of values from the first half of the normal distribution curve. In tandem, this study harnesses three distinct surrogate models, RBF, PR, and KRG, which underpin the dual evaluation benchmarks of the ensemble surrogate model and the uncertainty resolution via the OWS methodology.
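The adaptive transition parameter can be sketched as follows, under our reading of the three equations above: k is resampled at each assessment from a normal distribution centered on ustep, and ustep itself is blended with the archived mean uncertainty; the helper names are ours, not from the authors' code.

import numpy as np

rng = np.random.default_rng()

def sample_k(ustep, std=0.1):
    # k ~ N(ustep, 0.1^2), resampled at every assessment
    return rng.normal(ustep, std)

def uncertainty_level(archive):
    # un = size(u) / 1000, a normalized count of archived uncertain solutions
    return len(archive) / 1000.0

def update_ustep(ustep, archived_means, c=0.5):
    # ustep = (1 - c) * ustep + c * mean(archive), with c = 0.5 by default
    return (1.0 - c) * ustep + c * float(np.mean(archived_means))

ustep = 0.01                          # initial value, as in Table 1
ustep = update_ustep(ustep, [0.020, 0.015])
print(sample_k(ustep))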
The selected individuals not only possess optimal fitness metrics but also exhibit pronounced uncertainty. Such a discerning selection mechanism steers the PSO algorithm towards pinpointing the apex value. The local surrogate model integrates the foremost 75% of the ordered fitness values extracted from the global surrogate model and the latter 25% from the sorted uncertain solutions. Subsequently, the population is subjected to a de-emphasis procedure, drawing from the global surrogate model, thus shaping the population intended for the local surrogate model. Within this localized model, the ensemble model becomes pivotal for predicting response values, thereby forecasting the fitness values that the surrogate model anticipates for points situated within this domain. If the fitness value derived from the local surrogate model, less the minimum initial population value, falls below 0.001, or if said fitness value is inferior to the initial population's minimum, the algorithm persists within the local surrogate model's ambit; otherwise, the course of action redirects to the global surrogate model. A visual representation of the algorithm's architectural blueprint can be viewed in Figure 2.
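The assembly of the local-model population and the exit test can be sketched as follows; this is a sketch under our reading of the 75%/25% split and the 0.001 threshold described above, with helper names of our own choosing, and assuming minimization.

import numpy as np

def build_local_population(pop, fit_pred, uncertainty, size):
    # 75% best predicted fitness values + 25% most uncertain solutions
    n_fit = int(0.75 * size)
    best = np.argsort(fit_pred)[:n_fit]
    most_uncertain = np.argsort(-uncertainty)[:size - n_fit]
    return pop[np.unique(np.concatenate([best, most_uncertain]))]

def keep_local_model(local_best_fit, init_pop_min, tol=1e-3):
    # Stay local while the local model undercuts the initial population
    # minimum or sits within tol of it; otherwise return to the global model
    return local_best_fit < init_pop_min or (local_best_fit - init_pop_min) < tol

# Usage on dummy data:
pop = np.arange(20, dtype=float).reshape(10, 2)
print(build_local_population(pop, np.linspace(0, 1, 10), np.linspace(1, 0, 10), 8))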
The whole process of ASAPSO can be represented as Algorithm 1.
Algorithm 1 The pseudo-code of ASAPSO
Output: the global solution with its real fitness
begin
t = 1 ;
 Use Latin hypercube sampling to initialize the population POP and evaluate it
 Save POP with its fitness in DB
while the computational budget has not reached its limit do
   t = t + 1 ;
  Construct the PR model according to Section 2.2.1, the GP model according to Section 2.2.2, and the RBF model according to Section 2.2.3;
  Combine the models by the optimal weighted surrogate (OWS);
  Construct the local surrogate model by the optimal weighting approach;
  Evaluate and select uncertain solutions;
  Switch between the local and global surrogate models via the transformation mechanism;
  Update the global best solution;
end while
end
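A high-level Python skeleton of Algorithm 1 is sketched below, assuming SciPy's qmc module for Latin hypercube sampling. Steps 1-4 of the loop body (surrogate construction, OWS combination, PSO search with uncertain-solution archiving, and adaptive model switching) are only summarized in comments, with a simple perturbation standing in for the surrogate-guided candidate; the sketches above flesh out those pieces.

import numpy as np
from scipy.stats import qmc   # Latin hypercube sampling

rng = np.random.default_rng(0)

def asapso_sketch(expensive_f, dim, lb, ub, pop_size=50, budget=120):
    sampler = qmc.LatinHypercube(d=dim, seed=0)
    pop = qmc.scale(sampler.random(pop_size), lb, ub)
    db_x = pop.copy()                                   # database DB
    db_f = np.array([expensive_f(x) for x in pop])      # exact evaluations
    evals = pop_size
    while evals < budget:
        # 1) build PR, GP, and RBF surrogates from (db_x, db_f)
        # 2) combine them by the optimal weighted surrogate (OWS)
        # 3) run PSO on the ensemble and archive uncertain solutions
        # 4) sample k ~ N(ustep, 0.1^2) and switch local/global models
        best = db_x[np.argmin(db_f)]
        x_new = np.clip(best + rng.normal(0.0, 0.05, dim), lb, ub)
        db_x = np.vstack([db_x, x_new])
        db_f = np.append(db_f, expensive_f(x_new))
        evals += 1
    return db_x[np.argmin(db_f)], db_f.min()

# Usage on a toy 10-D sphere function:
lb, ub = np.full(10, -5.0), np.full(10, 5.0)
print(asapso_sketch(lambda x: float((x**2).sum()), 10, lb, ub)[1])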

4. Experiments and Analysis

4.1. Comparative Experiment on Benchmark Functions

(1)
Experimental environment
In this paper, the source code for both the comparative and the proposed algorithms was implemented in MATLAB 2018b. For experimental evaluations, we employed a computational platform with the following specifications: an Intel CPU operating at 3.2 GHz, 16 GB of RAM, and the Windows 10 operating system.
(2)
Benchmark Function
For the analysis, the test function suite was derived from the CEC2014 series of benchmark test functions. Numerous problems within this suite have been widely used in various stochastic evolutionary approaches [28].

4.2. Comparative Algorithm

For the experimentation, we selected one foundational swarm intelligence algorithm and two surrogate-assisted algorithms, each embodying a unique surrogate model management methodology: particle swarm optimization (PSO), the Gaussian process surrogate model-assisted evolutionary algorithm (GPEME) [18], and committee-based active learning for surrogate-assisted PSO (CALSAPSO). This curated selection aimed to facilitate comprehensive comparative scrutiny among these algorithms.
(1) Particle Swarm Optimization (PSO). PSO is a computational approach that emulates the collective behavior observed in bird flocks or fish schools. It was introduced by Kennedy and Eberhart in 1995 [34,35]; scholars have since proposed variations that either accelerate convergence [36] or amplify population diversity, the latter being a strategy to diminish the likelihood of confinement to local optima. A minimal sketch of the PSO update step is given after this list.
(2) GPEME. This represents a Kriging-based SAEA that incorporates the lower confidence bound as its pivotal criterion for selecting solutions worthy of evaluation, specifically tailored for medium-scale problems.
(3) CALSAPSO. Within the domain of the SAEA, the strategy of active learning is integrated using a triad of models. An integrated surrogate model arises from the evaluation of the root mean square error. Moreover, a localized surrogate model is crafted by scrutinizing the samples marked by the highest uncertainty, harmonized with the identification of potential optimal solutions as inferred from the integrated surrogate model. It is noteworthy that PSO is the driving force behind the optimization, and the algorithm delineated in this paper functions in harmony with the PSO framework. The parameter configuration of the compared algorithms is shown in Table 1.
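For reference, here is the minimal sketch of the PSO velocity and position update mentioned above, using c1 = c2 = 1.49445 from Table 1 and an inertia weight that we read as decreasing linearly from 0.9, i.e., ω = 0.9 − 0.5 · (FES/FESMAX); this reading is our assumption.

import numpy as np

rng = np.random.default_rng(2)

def pso_step(x, v, pbest, gbest, fes, fes_max, c1=1.49445, c2=1.49445):
    w = 0.9 - 0.5 * (fes / fes_max)                 # decreasing inertia
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Usage: one update of a 5-D particle pulled towards its bests.
x, v = np.zeros(5), np.zeros(5)
x, v = pso_step(x, v, pbest=np.ones(5), gbest=np.ones(5), fes=10, fes_max=120)
print(x)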

4.3. Comparison Experiment

Table 2 shows the results of the ten-dimensional tests derived from the CEC2014 test sets. Every algorithm was executed individually on each test function on the same machine, completing a total of 20 runs, while addressing the resource-intensive optimization challenge. Evaluations were confined to a peak of 120 for the 10D tasks and 240 for the 20-dimensional (20D) tasks, starting with a population size of 50. These numerical trials took place with decision variable dimensions set at D = 10 and D = 20. The convergence profiles of ASAPSO, PSO, GPEME, and CALSAPSO on the four functions are illustrated in Figure 3, Figure 4, Figure 5, and Figure 6, respectively. From Figure 3, Figure 4, Figure 5 and Figure 6, it can be further substantiated that ASAPSO exhibits faster convergence on almost all functions compared to the other algorithms. In all tables, the best ranks are shown in bold. Table 3 and Table 4 present the Friedman test of all algorithms on 10D and 20D, respectively.
Additionally, to further validate the effectiveness of the proposed algorithm, comparative experiments were conducted on the CEC2015 expensive optimization problem test set. From Figure 7 and Table 5, it can be observed that the proposed algorithm exhibits optimal advantages in both convergence speed and accuracy.
The experimental results reveal that ASAPSO, due to its effective transition between local and global surrogate models, can more successfully explore the locally uncertain regions where potential optimal solutions may exist. Therefore, ASAPSO exhibits optimal search performance. Moreover, the successful operation of the transition mechanism also benefits from the accurate evaluation of the models.

4.4. Adaptive Parameter Behavioral Analysis

To further substantiate the efficacy of adaptive parameters grounded in a normal distribution, a side-by-side analysis is undertaken between the algorithm that employs adaptive parameters and its counterpart without such modulation. This evaluative process spans four test functions, holding the dimensions at D = 10 and D = 20 and imposing an evaluation ceiling of 12D. The comparison charts are illustrated in Figure 8, Figure 9, Figure 10 and Figure 11. It is worth noting that within these figures, ASAPSO-WGP denotes the version of the proposed algorithm that forgoes the use of adaptive parameters. From Figure 8, Figure 9, Figure 10 and Figure 11, it can be further substantiated that the proposed algorithm exhibits faster convergence on almost all functions compared to its non-adaptive counterpart.
Comparing the proposed algorithm against a pair of classical algorithms (PSO and GPEME) and a state-of-the-art algorithm (CALSAPSO) reveals discernible performance gains, especially at D = 10. An in-depth examination of the convergence trajectories shows that our proposed algorithm not only attains a more rapid initial convergence but also preserves a measure of local exploratory capability in the later phases. When extended to D = 20, its efficacy exceeds that of both PSO and GPEME, marginally eclipses CALSAPSO, and underscores its dominance over the variant devoid of an adaptive framework. In a further series of trials, bifurcated by the presence and absence of adaptive parameters, it is evident that ASAPSO, when equipped with the adaptive parameter regimen, registers superior outcomes across a multitude of functions at D = 10, albeit with differences that are less conspicuous. It merits particular emphasis that, at D = 20, ASAPSO shows pronounced refinements on three functions, namely the Elliptic, Griewank, and Rastrigin functions. While the best fitness value obtained for the Rosenbrock function does not particularly dazzle when juxtaposed with its parameter-absent counterpart, a deeper examination of the convergence trajectory illuminates that ASAPSO, when synergized with adaptive parameters, indeed fares better on the Rosenbrock function, evincing more pronounced oscillatory patterns. This observation accentuates the algorithm's amplified exploratory finesse, thereby confirming the efficacy of the strategy proposed in this paper.

4.5. Real-World Application of Airfoil Design

Recent research on airfoil aerodynamic optimization has focused on various methods and technologies to enhance the performance of airfoil designs in different applications. Researchers have implemented genetic algorithms to optimize airfoil geometries, focusing on improving aerodynamic efficiency and lift-to-drag ratios; one study detailed the use of a genetic algorithm model to optimize airfoil geometry specifically for low Reynolds number conditions [37]. Feed-forward artificial neural networks (ANNs) have been employed as surrogate models to predict the lift and drag coefficients of NACA airfoils, reducing computational time while maintaining optimization accuracy [38]. Advances in aerodynamic shape optimization include the use of physics-informed methods and machine learning to improve the performance of airfoils under various flow conditions; for instance, one study proposed a modified metric-based proper orthogonal decomposition approach for enhancing airfoil designs [39].
In this study, an aerodynamic design optimization problem pertaining to airfoil geometries is employed to critically assess the efficacy of ASAPSO in practical applications. Recognizing that the evaluation of prospective airfoil configurations demands rigorous computational simulations rooted in fluid dynamics, the XFOIL platform, developed by Professor Mark Drela of the Massachusetts Institute of Technology, is harnessed as an instrumental tool for the analysis and design of subsonic, isolated, single-segment airfoils. The airfoil aerodynamic optimization problem emerges as a quintessential application for the SAEA, attributable to the intricate computational fluid dynamics models imperative for prognosticating the performance metrics of potential airfoil configurations. Intrinsically, the optimization challenge revolves around enhancing the airfoil's geometry to maximize the lift-to-drag coefficient ratio while curtailing the number of evaluations. Delving into the technical nuances, the airfoil's geometric intricacies are encapsulated by the Hicks–Henne function. Introduced by Hicks and Henne in the late 1970s, this function leverages parametric techniques to modulate airfoil morphologies by characterizing the extent of variances in curvature and thickness; the derived variations are then superposed onto the foundational curvature and thickness dimensions of the archetypal airfoil. The Hicks–Henne parameterization [32] amalgamates a foundational airfoil with a set of ten decision variables. The NACA0012 airfoil is a commonly used symmetrical airfoil, widely applied in aviation fields such as the main wings of low-speed stunt aircraft and high-performance helicopter rotors. Due to its specific geometric shape and aerodynamic characteristics, precise computational simulations of it require the consideration of multiple factors, including fluid dynamics, the pressure distribution on the airfoil surface, and the lift and drag under different flight conditions. These calculations not only require accuracy but must also consider the effective utilization of computing resources to ensure efficiency. Thus, the optimization design of the NACA0012 airfoil is quite difficult to solve. The problem is articulated as follows:
$$\max_{\theta_1, \theta_2, \ldots, \theta_{10}} \; C_{L/D}\big(M, Re, AoA, Hicks(\theta_1, \theta_2, \ldots, \theta_{10})\big)$$
$$\text{s.t.} \quad -0.001 \le \theta_i \le 0.001, \quad i = 1, 7$$
$$-0.006 \le \theta_i \le 0.006, \quad i = 2, 5$$
$$-0.009 \le \theta_i \le 0.009, \quad i = 3, 4$$
$$-0.002 \le \theta_i \le 0.002, \quad i = 6, 10$$
$$-0.007 \le \theta_i \le 0.007, \quad i = 8, 9$$
In the provided formulation, $C_{L/D}$ signifies the fitness value of the lift-to-drag ratio, while $Hicks(\cdot)$ maps the ten decision variables through the Hicks–Henne function. The parameters $M$, $Re$, and $AoA$ represent predefined values for the Mach number, Reynolds number, and angle of attack. Within this study, two algorithms, CALSAPSO and GPEME, are evaluated alongside the newly proposed algorithm, with the lift-to-drag ratio as the primary performance metric. The results of this analysis can be found in Table 6. It is noteworthy that the algorithm presented in this paper outperforms the others, particularly in terms of the maximum, median, and average values of the lift-to-drag ratio.
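The decision-variable bounds and the objective wrapper can be sketched as follows; xfoil_lift_drag is a placeholder for the actual XFOIL evaluation (not a real API), and the M, Re, and AoA defaults are illustrative values, not the paper's settings.

import numpy as np

# |theta_i| limits for i = 1..10, from the constraints above
BOUNDS = np.array([0.001, 0.006, 0.009, 0.009, 0.006,
                   0.002, 0.001, 0.007, 0.007, 0.002])

def clip_to_bounds(theta):
    return np.clip(theta, -BOUNDS, BOUNDS)

def objective(theta, xfoil_lift_drag, M=0.3, Re=3e6, AoA=2.0):
    # Maximizing C_L/C_D is cast as minimizing its negative
    return -xfoil_lift_drag(clip_to_bounds(theta), M, Re, AoA)

# Usage with a dummy stand-in for XFOIL, showing the calling convention:
dummy = lambda theta, M, Re, AoA: 100.0 - 1e4 * float((theta**2).sum())
print(objective(np.zeros(10), dummy))   # -100.0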
The symmetric NACA0012 airfoil is globally renowned and frequently employed as a benchmark for theoretical analyses and wind tunnel comparisons. Its geometry is parameterized at 0.5 before being simulated via the XFOIL software (0.0.8). It then undergoes an optimization process using ASAPSO, capped at 120 evaluations. In Figure 12, the initial airfoil is shown in green and its optimized counterpart in red, emphasizing the airfoil's smooth and inherently symmetrical contour.
The comparison of the original airfoil geometry with the ASAPSO airfoil geometry is given in Figure 12, which illustrates that the proposed ASAPSO algorithm produces a better-optimized airfoil geometry than the original NACA0012.
The integration of surrogate models with PSO can significantly enhance the efficiency of engineering design processes, particularly in fields like aerospace, automotive, and renewable energy. By optimizing complex systems such as airfoils or turbine blades, managers can achieve better performance outcomes, a reduced time-to-market, and lower development costs. By reducing the computational cost of simulations and improving the convergence speed of optimization algorithms, the proposed method can lower the overall cost of engineering projects. This makes advanced optimization techniques more accessible to smaller firms and industries with limited resources. The optimization of aerodynamic designs, such as airfoils, directly contributes to energy efficiency. For example, optimizing wind turbine blades can lead to increased energy capture, while optimizing aircraft wing designs can reduce fuel consumption. These improvements have significant environmental benefits, contributing to the reduction of carbon emissions and promoting sustainability.

5. Conclusions

This paper presents an adaptive surrogate-assisted particle swarm optimization (ASAPSO) algorithm, which enables the dynamic calibration of the switching conditions between the localized and globalized surrogate models via a normal distribution adaptive technique. ASAPSO is guided towards optimal solution spaces by the ensemble surrogate model and an uncertainty resolution mechanism. ASAPSO outperforms other PSO paradigms and SAEAs in identifying viable solutions within limited computational resources. The superiority is particularly evident in multi-modal functions, where the improvements are clearly noticeable. However, it is important to note that there are inherent limitations to the algorithm. Specifically, its optimization effectiveness is somewhat reduced when applied to unimodal functions, and the adaptive parameter strategy is less effective in scenarios with lower dimensionality. These findings lead to two important considerations. Firstly, multi-modal problems are characterized by an abundance of local optima; although our surrogate model can achieve partial optimization by switching modalities as explained above, navigating towards the global optimum remains a complex task. Secondly, the limited number of local optima in unimodal functions, compared to their multi-modal counterparts, suggests that our adaptive parameter strategy may not demonstrate the same level of resilience when faced with the former type of function.

Author Contributions

Writing and editing, S.Q.; software and methodology, F.L.; supervision, Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the National Foreign Expert Program of the Ministry of Science and Technology (Grant No. G2023041037L) and the Shaanxi Natural Science Basic Research Project (Grant No. 2024JC-YBMS-502).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

All authors declare no conflict of interest. All authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Bangyal, W.H.; Shakir, R.; Rehman, N.U.; Ashraf, A.; Ahmad, J. An Improved Seagull Algorithm for Numerical Optimization Problem. In Proceedings of the International Conference on Swarm Intelligence, Shenzhen, China, 14–18 July 2023; Springer Nature: Cham, Switzerland, 2023; pp. 297–308. [Google Scholar]
  2. Jin, Y. A comprehensive survey of fitness approximation in evolutionary computation. Soft Comput. 2005, 9, 3–12. [Google Scholar] [CrossRef]
  3. Shan, S.; Wang, G.G. Survey of modeling and optimization strategies to solve high-dimensional design problems with computationally-expensive black-box functions. Struct. Multidiscip. Optim. 2010, 41, 219–241. [Google Scholar] [CrossRef]
  4. Shilane, D.; Liang, R.H.; Dudoit, S. Computational Intelligence in Expensive Optimization Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  5. Chugh, T.; Jin, Y.; Miettinen, K.; Hakanen, J.; Sindhya, K. A surrogate-assisted reference vector guided evolutionary algorithm for computationally expensive many-objective optimization. IEEE Trans. Evol. Comput. 2016, 22, 129–142. [Google Scholar] [CrossRef]
  6. Tian, J.; Hou, M.; Bian, H.; Li, J. Variable surrogate model-based particle swarm optimization for high-dimensional expensive problems. Complex Intell. Syst. 2023, 9, 3887–3935. [Google Scholar] [CrossRef]
  7. Bangyal, W.H.; Nisar, K.; Soomro, T.R.; Ag Ibrahim, A.A.; Mallah, G.A.; Hassan, N.U.; Rehman, N.U. An improved particle swarm optimization algorithm for data classification. Appl. Sci. 2022, 13, 283. [Google Scholar] [CrossRef]
  8. Bangyal, W.H.; Malik, Z.A.; Saleem, I.; Rehman, N.U. An analysis of initialization techniques of particle swarm optimization algorithm for global optimization. In Proceedings of the 2021 International Conference on Innovative Computing (ICIC), Lahore, Pakistan, 9–10 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–7. [Google Scholar]
  9. Jin, Y.; Olhofer, M.; Sendhoff, B. A framework for evolutionary optimization with approximate fitness functions. IEEE Trans. Evol. Comput. 2002, 6, 481–494. [Google Scholar]
  10. Wang, H.; Jin, Y.; Jansen, J.O. Data-driven surrogate-assisted multiobjective evolutionary optimization of a trauma system. IEEE Trans. Evol. Comput. 2016, 20, 939–952. [Google Scholar] [CrossRef]
  11. Jin, Y.; Sendhoff, B. Reducing fitness evaluations using clustering techniques and neural network ensembles. In Proceedings of the Genetic and Evolutionary Computation Conference, Seattle, WA, USA, 26–30 June 2004; pp. 688–699. [Google Scholar]
  12. Lim, D.; Ong, Y.-S.; Jin, Y.; Sendhoff, B. A study on metamodeling techniques, ensembles, and multi-surrogates in evolutionary computation. In Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, London, UK, 7–11 July 2007; pp. 1288–1295. [Google Scholar]
  13. Zhou, Z.; Ong, Y.S.; Nguyen, M.H.; Lim, D. A study on polynomial regression and Gaussian process global surrogate model in hierarchical surrogate-assisted evolutionary algorithm. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; Volume 3, pp. 2832–2839. [Google Scholar]
  14. Regis, R.G. Evolutionary programming for high-dimensional constrained expensive black-box optimization using radial basis functions. IEEE Trans. Evol. Comput. 2014, 18, 326–347. [Google Scholar] [CrossRef]
  15. Sun, C.; Jin, Y.; Zeng, J.; Yu, Y. A two-layer surrogate-assisted particle swarm optimization algorithm. Soft Comput. 2015, 19, 1461–1475. [Google Scholar] [CrossRef]
  16. Li, F.; Shen, W.; Cai, X.; Gao, L.; Wang, G.G. A fast surrogate-assisted particle swarm optimization algorithm for computationally expensive problems. Appl. Soft Comput. 2020, 92, 106303. [Google Scholar] [CrossRef]
  17. Guo, D.; Chai, T.; Ding, J.; Jin, Y. Small data driven evolutionary multi-objective optimization of fused magnesium furnaces. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–8. [Google Scholar]
  18. Liu, B.; Zhang, Q.; Gielen, G.G.E. A Gaussian process surrogate model assisted evolutionary algorithm for medium scale expensive optimization problems. IEEE Trans. Evol. Comput. 2013, 18, 180–192. [Google Scholar] [CrossRef]
  19. Lim, D.; Jin, Y.; Ong, Y.S.; Sendhoff, B. Generalizing surrogate-assisted evolutionary computation. IEEE Trans. Evol. Comput. 2009, 14, 329–355. [Google Scholar] [CrossRef]
  20. Sun, C.; Zeng, J.; Pan, J.; Xue, S.; Jin, Y. A new fitness estimation strategy for particle swarm optimization. Inf. Sci. 2013, 221, 355–370. [Google Scholar] [CrossRef]
  21. Zhou, Z.; Ong, Y.S.; Nair, P.B.; Keane, A.J.; Lum, K.Y. Combining global and local surrogate models to accelerate evolutionary optimization. IEEE Trans. Syst. Man Cybern. Part C 2006, 37, 66–76. [Google Scholar] [CrossRef]
  22. Sammon, J., Jr. A nonlinear mapping for data structure analysis. IEEE Trans. Comput. 1969, 100, 401–409. [Google Scholar] [CrossRef]
  23. Wang, H.; Jin, Y.; Doherty, J. Committee-based active learning for surrogate-assisted particle swarm optimization of expensive problems. IEEE Trans. Cybern. 2017, 47, 2664–2677. [Google Scholar] [CrossRef] [PubMed]
  24. Sun, C.; Jin, Y.; Tan, Y. Semi-supervised learning assisted particle swarm optimization of computationally expensive problems. In Proceedings of the Genetic and Evolutionary Computation Conference, Kyoto, Japan, 15–19 July 2018; pp. 45–52. [Google Scholar]
  25. Yu, H.; Tan, Y.; Zeng, J.; Sun, C.; Jin, Y. Surrogate-assisted hierarchical particle swarm optimization. Inf. Sci. 2018, 454, 59–72. [Google Scholar] [CrossRef]
  26. Lv, Z.; Wang, L.; Han, Z.; Zhao, J.; Wang, W. Surrogate-assisted particle swarm optimization algorithm with Pareto active learning for expensive multi-objective optimization. IEEE/CAA J. Autom. Sin. 2019, 6, 838–849. [Google Scholar] [CrossRef]
  27. Pan, J.; Liu, N.; Chu, S.; Lai, T. An efficient surrogate-assisted hybrid optimization algorithm for expensive optimization problems. Inf. Sci. 2021, 561, 304–325. [Google Scholar] [CrossRef]
  28. Buche, D.; Schraudolph, N.N.; Koumoutsakos, P. Accelerating evolutionary algorithms with Gaussian process fitness function models. IEEE Trans. Syst. Man Cybern. Part C 2005, 35, 183–194. [Google Scholar] [CrossRef]
  29. Zapotecas Martínez, S.; Coello Coello, C.A. MOEA/D assisted by RBF networks for expensive multi-objective optimization problems. In Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, Amsterdam, The Netherlands, 6–10 July 2013; pp. 1405–1412. [Google Scholar]
  30. Branke, J.; Schmidt, C. Faster convergence by means of fitness estimation. Soft Comput. 2005, 9, 13–20. [Google Scholar] [CrossRef]
  31. Sun, C.; Jin, Y.; Cheng, R.; Ding, J.; Zeng, J. Surrogate-assisted cooperative swarm optimization of high-dimensional expensive problems. IEEE Trans. Evol. Comput. 2017, 21, 644–660. [Google Scholar] [CrossRef]
  32. Goel, T.; Haftka, R.T.; Shyy, W.; Queipo, N.V. Ensemble of surrogates. Struct. Multidiscipl. Optim. 2007, 33, 199–216. [Google Scholar] [CrossRef]
  33. Chugh, T.; Chakraborti, N.; Sindhya, K.; Jin, Y. A data-driven surrogate-assisted evolutionary algorithm applied to a many-objective blast furnace optimization problem. Mater. Manuf. Process. 2017, 32, 1172–1178. [Google Scholar] [CrossRef]
  34. Bonyadi, M.R.; Michalewicz, Z. Particle swarm optimization for single objective continuous space problems: A review. Evol. Comput. 2017, 25, 1–54. [Google Scholar] [CrossRef]
  35. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS'95), Nagoya, Japan, 4–6 October 1995; IEEE: Piscataway, NJ, USA, 1995; pp. 39–43. [Google Scholar]
  36. Clerc, M.; Kennedy, J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef]
  37. Ümütlü, H.C.A.; Kiral, Z. Airfoil shape optimization using Bézier curve and genetic algorithm. Aviation 2022, 26, 32–40. [Google Scholar] [CrossRef]
  38. Kieszek, R.; Majcher, M.; Syta, B.; Kozakiewicz, A. Feed-forward artificial neural network as surrogate model to predict lift and drag coefficient of NACA airfoil and searching of maximum lift-to-drag ratio. J. Theor. Appl. Mech. 2024, 62, 521–534. [Google Scholar] [CrossRef]
  39. Zhang, C.; Chen, H.; Xu, X.; Duan, Y.; Wang, G. Aerodynamic shape optimization using a physics-informed hot-start method combined with modified metric-based proper orthogonal decomposition. Phys. Fluids 2024, 36, 084106. [Google Scholar] [CrossRef]
Figure 1. Diagram of uncertain solution- and optimum fitness value-guided optimization.
Figure 2. Structure diagram of the ASAPSO algorithm.
Figure 3. Convergence of ASAPSO, PSO, GPEME, and CALSAPSO on the D = 10 and D = 20 Ellipsoid function.
Figure 4. Convergence of ASAPSO, PSO, GPEME, and CALSAPSO on the D = 10 and D = 20 Griewank function.
Figure 5. Convergence of ASAPSO, PSO, GPEME, and CALSAPSO on the D = 10 and D = 20 Rastrigin function.
Figure 6. Convergence of ASAPSO, PSO, GPEME, and CALSAPSO on the D = 10 and D = 20 Rosenbrock function.
Figure 7. Convergence of ASAPSO, PSO, GPEME, and CALSAPSO on the CEC2015 expensive problems.
Figure 8. D = 10, D = 20 Elliptic function comparison chart.
Figure 9. D = 10, D = 20 Rosenbrock function comparison chart.
Figure 10. D = 10, D = 20 Griewank function comparison chart.
Figure 11. D = 10, D = 20 Rastrigin function comparison chart.
Figure 12. Comparison of the original airfoil geometry with the ASAPSO airfoil geometry.
Table 1. Parameters of the compared algorithms.

Algorithm   Parameters
ASAPSO      k = 0.001, ustep = 0.01, theta = FES/FESMAX, ω = 0.9 − 0.5 × theta
PSO         c1 = c2 = 1.49445, ω = 0.9 − 0.5 × theta
GPEME       Omega = 2, CR = 0.8, F = 0.8, n_S = 2 × D
CALSAPSO    c1 = c2 = 1.49445, theta = FES/FESMAX, ω = 0.9 − 0.5 × theta
Table 2. Results of algorithms run independently 20 times on the test functions.

Problem      Dim   Value   ASAPSO         PSO            GPEME          CALSAPSO
Ellipsoid    10    Mean    6.42 × 10^−1   2.21 × 10^2    7.66 × 10^1    7.99 × 10^0
                   Std     3.4 × 10^0     4.65 × 10^1    4.99 × 10^1    4.63 × 10^0
             20    Mean    2.95 × 10^0    4.31 × 10^2    3.39 × 10^1    8.96 × 10^0
                   Std     5.42 × 10^−1   8.66 × 10^1    2.77 × 10^0    1.54 × 10^−1
Rosenbrock   10    Mean    2.55 × 10^1    4.31 × 10^2    8.02 × 10^1    3.19 × 10^1
                   Std     1.93 × 10^0    2.86 × 10^1    9.89 × 10^0    3.63 × 10^0
             20    Mean    6.12 × 10^1    3.50 × 10^2    1.85 × 10^2    1.27 × 10^2
                   Std     2.66 × 10^1    9.33 × 10^1    6.87 × 10^1    8.28 × 10^1
Griewank     10    Mean    9.02 × 10^−2   1.66 × 10^2    4.20 × 10^1    1.48 × 10^−1
                   Std     1.76 × 10^−3   4.36 × 10^1    8.51 × 10^0    6.11 × 10^−2
             20    Mean    1.11 × 10^−1   2.98 × 10^2    4.32 × 10^1    1.52 × 10^0
                   Std     3.64 × 10^−2   6.75 × 10^1    5.23 × 10^0    5.85 × 10^−1
Rastrigin    10    Mean    2.15 × 10^1    1.30 × 10^2    2.58 × 10^1    4.35 × 10^1
                   Std     8.03 × 10^0    5.84 × 10^1    9.36 × 10^0    3.87 × 10^0
             20    Mean    6.15 × 10^1    2.58 × 10^2    1.69 × 10^2    1.24 × 10^2
                   Std     1.09 × 10^1    8.76 × 10^1    7.53 × 10^1    3.87 × 10^1
Table 3. Friedman test of algorithms on 10D.

Algorithm   Ranking
ASAPSO      1.00
PSO         4.00
GPEME       2.75
CALSAPSO    2.25
Table 4. Friedman test of algorithms on 20D.

Algorithm   Ranking
ASAPSO      1.00
PSO         4.00
GPEME       3.00
CALSAPSO    2.00
Table 5. Friedman test of algorithms on the CEC2015 expensive problems.

Algorithm   Ranking
ASAPSO      1.67
PSO         3.87
GPEME       2.40
CALSAPSO    2.06
Table 6. Results of the lift-to-drag ratio on the aerodynamic airfoil design problem after 25 runs.

Algorithm   Best     Median   Mean
ASAPSO      105.33   99.98    99.62
CALSAPSO    100.27   98.65    99.01
GPEME       96.38    94.86    94.35
