Article

Mechanical Identification of Materials and Structures with Optical Methods and Metaheuristic Optimization

by Elisa Ficarella 1, Luciano Lamberti 1,* and Sadik Ozgur Degertekin 2

1 Dipartimento di Meccanica, Matematica e Management, Politecnico di Bari, 70126 Bari, Italy
2 Department of Civil Engineering, Dicle University, 21280 Diyarbakır, Turkey
* Author to whom correspondence should be addressed.
Materials 2019, 12(13), 2133; https://doi.org/10.3390/ma12132133
Submission received: 27 May 2019 / Revised: 22 June 2019 / Accepted: 24 June 2019 / Published: 2 July 2019

Abstract: This study presents a hybrid framework for mechanical identification of materials and structures. The inverse problem is solved by combining experimental measurements performed by optical methods and non-linear optimization using metaheuristic algorithms. In particular, we develop three advanced formulations of Simulated Annealing (SA), Harmony Search (HS) and Big Bang-Big Crunch (BBBC) including enhanced approximate line search and computationally cheap gradient evaluation strategies. The rationale behind the new algorithms—denoted as Hybrid Fast Simulated Annealing (HFSA), Hybrid Fast Harmony Search (HFHS) and Hybrid Fast Big Bang-Big Crunch (HFBBBC)—is to generate high quality trial designs lying on a properly selected set of descent directions. Besides hybridizing SA/HS/BBBC metaheuristic search engines with gradient information and approximate line search, HS and BBBC are also hybridized with an enhanced 1-D probabilistic search derived from SA. The results obtained in three inverse problems regarding composite and transversely isotropic hyperelastic materials/structures with up to 17 unknown properties clearly demonstrate the validity of the proposed approach, which allows the number of structural analyses to be significantly reduced with respect to previous SA/HS/BBBC formulations and improves the robustness of the metaheuristic search engines.

1. Introduction and Theoretical Background

An important class of inverse problems is the identification of material properties involved in constitutive equations or of stiffness properties that drive the mechanical response to applied loads. Since displacements represent the direct solution of the general mechanics problem for a body subject to some loads and kinematic constraints, the inverse solution of the problem is to identify the structural properties corresponding to a given displacement field {u(x, y, z), v(x, y, z), w(x, y, z)}. The term “structural properties” covers material parameters (e.g., Young’s modulus, hyperelastic constants, viscosity, etc.) and stiffness terms including details on material constituents (e.g., tension/shear/bending terms in composite laminates, fiber orientation and ply thickness, etc.) evaluated for the region of the body under investigation.
The finite element model updating technique (FEMU) [1,2,3] and the virtual fields method (VFM) [2,3,4] are the most common approaches adopted in the literature for solving mechanical characterization problems. In general, FEMU is computationally more expensive than VFM, but the latter method may require special care in selecting specimen shape, virtual displacement fields and kinematic boundary conditions in order to simplify the computations entailed by the identification process and obtain realistic results.
In the FEMU method, experimentally measured displacement fields are compared with their counterparts predicted by a finite element model simulating the experiment. This comparison is made at a given set of control points. If boundary conditions and loads are properly simulated by the FE model, computed displacements match experimental data only when the actual structural properties are given in input to the numerical model. The difference between computed displacement values and measured target values may be expressed as an error functional Ω that depends on the unknown material/structural properties to be identified. Hence, the inverse problem of identifying NMP unknown mechanical properties may be stated as an optimization problem where the goal is to minimize the error functional Ω. That is:
$$\begin{cases}\text{Min}\left[\Omega(X_1,X_2,\ldots,X_{NMP})=\dfrac{1}{N_{CNT}}\displaystyle\sum_{j=1}^{N_{CNT}}\left(\dfrac{\delta_{FEM}^{\,j}-\bar{\delta}_j}{\bar{\delta}_j}\right)^2\right]\\[2mm] G_p(X_1,X_2,\ldots,X_{NMP})\le 0\\[1mm] X_i^L\le X_i\le X_i^U\end{cases}\qquad\begin{cases}i=1,\ldots,NMP\\ p=1,\ldots,NC\end{cases}\qquad(1)$$
where: XPROP (X1,X2,…,XNMP) is the design vector containing the NMP unknown properties ranging between the lower bounds “L” and the upper bounds “U”; NCNT is the number of control points at which FE results are compared with experimental data; δFEMj and δ̄j, respectively, are the computed and target displacement values at the jth control point; the Gp(XPROP) terms define a set of NC constraint functions depending on the unknown properties that must be satisfied in order to guarantee the existence of a solution for the inverse problem. Buckling loads or natural frequencies can also be taken as target quantities in the optimization process: in this case, the displacement field of the structure is described by the corresponding normalized mode shape.
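As a minimal illustration of the objective in Equation (1) (ignoring the constraints Gp and the side bounds), the error functional Ω can be evaluated as the mean squared relative deviation between FE-computed and optically measured displacements at the control points. The sketch below assumes plain NumPy arrays; in the actual framework each evaluation of the FE displacements requires a finite element analysis.

```python
import numpy as np

def error_functional(delta_fem, delta_target):
    """Relative least-squares error functional Omega of Eq. (1).

    delta_fem    : displacements computed by the FE model at the N_CNT control points
    delta_target : displacements measured with the optical method at the same points
    """
    delta_fem = np.asarray(delta_fem, dtype=float)
    delta_target = np.asarray(delta_target, dtype=float)
    residual = (delta_fem - delta_target) / delta_target
    return np.mean(residual ** 2)
```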
Non-contact optical techniques [5,6,7] such as moiré, holography, speckle and digital image correlation are naturally suited for material/structure identification because they can accurately measure displacements in real time and gather full-field information without altering specimen conditions. The full-field capability of optical techniques makes it possible to select the amount of experimental data necessary to make the results of the identification process reliable. Furthermore, their versatility allows the best experimental set-up to be chosen for the inverse problem at hand. Depending on the type of illumination, which may vary from coherent or partially coherent light to white light, the magnitude of measured displacements may range from fractions of a micron (using, for example, lasers and interferometry) to some millimetres (using, for example, grating projection, image correlation and white light), thus covering a wide spectrum of material and structural identification problems.
Regardless of the way displacement information is extracted from recorded images, all optical methods share a common basic principle. The light wave fronts hitting the specimen surface are modulated by the deformations undergone by the tested body. By comparing the wave fronts modulated by the body surface and recorded by a sensor before and after deformation, a system of fringes forms on the specimen surface; each fringe represents the locus of an iso-displacement region. The spatial frequency distribution of fringes can be used for recovering strain fields. Material anisotropy, the presence of local defects (e.g., dislocations in crystalline structures) and/or damage (e.g., delamination or cracks) produce fringe distortions or changes in the spatial frequency of fringe patterns.
The inverse problem (1) is in general highly nonlinear and in all likelihood non-convex, especially if there are many parameters to be identified. Furthermore, the error functional Ω is not explicitly defined and each new evaluation of Ω entails a new finite element analysis. Such a non-smooth optimization problem cannot be handled efficiently by gradient-based algorithms. In fact, their use has steadily decreased over the last 10 years. For example, in the case of soft materials (a rather complicated subject) just a few studies using Levenberg-Marquardt or Sequential Quadratic Programming techniques (see, for example, [8,9,10,11,12,13,14,15,16]) have earned at least one or two citations per year.
Global optimization methods can explore larger fractions of the design space than gradient-based algorithms. This results in better trial solutions and a higher probability of avoiding premature convergence to local minima. A purely random search in principle allows the whole design space to be explored, but it may be computationally unaffordable because the number of trial solutions yielding reductions of Ω rapidly decreases as the optimization process progresses. In order to rationalize the search process and improve the computational speed of global optimization, metaheuristic algorithms have been developed, inspired by evolution theory, medicine, biology and zoology, physics and astronomy, human sciences, etc. Trial designs are randomly generated according to the selected inspiring principle. Metaheuristic methods have been successfully utilized in practically every field of science and engineering.
Genetic algorithms (GA) [17,18], evolution strategies (ES) [19,20,21] and simulated annealing (SA) [22,23] were among the first metaheuristic optimization methods to be developed in the early 1980s and are still widely utilized nowadays. The basic difference between GA/ES and SA is that the former algorithms operate with a population of candidate designs while the latter algorithm, at least in its classical implementation, considers one trial design at a time and then further develops it.
Swarm intelligence algorithms, mostly inspired by the behavior of animals, are other population-based algorithms developed since the early 1990s. They still attract the attention of optimization experts, who continue to propose new algorithms: the most popular methods are particle swarm optimization (PSO) [24], ant colony optimization (ACO) [25], artificial bee colony (ABC) [26], firefly algorithm (FFA) [27], bat algorithm (BA) [28], and cuckoo search (CS) [29].
Social sciences and human activities have been, for almost 20 years, another important source of inspiration for metaheuristic algorithms, yet they are not as popular as swarm intelligence methods. Among others, we can mention tabu search (TS) [30], harmony search (HS) [31], imperialist competitive algorithm (ICA) [32], teaching-learning based optimization (TLBO) [33], search group algorithm (SGA) [34], and JAYA [35].
Astronomy, physics (electromagnetism, optics, classical mechanics, etc.) and natural phenomena have provided another prolific field of inspiration, especially in the last 10–15 years: for example, big bang-big crunch (BBBC) [36], gravitational search algorithm (GSA) [37], charged system search (CSS) [38], colliding bodies optimization (CBO) [39], ray optimization [40], water evaporation optimization (WEO) [41], thermal exchange optimization (TEO) [42], and cyclical parthenogenesis algorithm (CPA) [43], just to mention a few.
A rapid survey of the optimization literature produced over the last 15 years reveals that GA [44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62], DE [63,64,65,66,67,68,69,70,71,72,73,74,75], SA [76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97], HS [98,99,100,101,102,103,104,105,106,107,108,109,110,111,112] and PSO [113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133] are the most popular metaheuristic algorithms used in mechanical identification problems. In order to improve computational efficiency of identification process, GA and SA were often hybridized [134,135,136,137]. Similarly, PSO was hybridized with many other algorithms including, for example, GA [138,139], GA and ACO [140] and other swarm intelligence methods [141]. HS was hybridized with GA [142] and PSO/RO [143].
BBBC [144,145,146,147,148,149] was utilized more often than ICA [150,151], ACO [152,153], JAYA [154,155] and machine learning [156]. However, there are considerably fewer studies on inverse problems employing BBBC than GA, DE, SA, HS and PSO. Such a difference may be explained by the fact that BBBC was developed much later than GA, DE, SA, HS and PSO.
The large number of studies listed above is a direct consequence of the blooming of metaheuristic methods, favored by exponentially increasing computational power. Applications of metaheuristic algorithms to inverse problems with special emphasis on material characterization and structural damage detection are critically reviewed in [157,158,159,160,161]. From the standpoint of algorithmic formulation, it should be noted that SA is the only metaheuristic algorithm inherently capable of bypassing local optima. However, HS and BBBC include very important features that should be possessed by any population-based algorithm. In particular, HS stores all candidate designs (i.e., those forming the population and additional designs kept in memory from previous iterations) in a matrix called harmony memory. Values assigned to optimization variables can be extracted from this memory to form new trial designs. This allows one to carry out an adaptive search while avoiding stagnation. BBBC utilizes the concept of center of mass, which makes it possible to follow the evolution of the average characteristics of the population over the optimization process. GA and PSO instead may suffer from premature convergence, stagnation, and sensitivity to problem formulation. Furthermore, GA and PSO include more internal parameters than SA, HS and BBBC, which increases the amount of heuristics in the optimization process. The same arguments may be used for DE, whose performance is strongly dependent on the crossover/mutation scheme implemented in the algorithm. Based on these considerations, we decided to develop advanced formulations of SA, HS and BBBC for material/structural identification problems.
Lamberti et al. attempted to improve the convergence speed of SA (e.g., [77,81,83,84,162,163]), HS (e.g., [164,165,166]) and BBBC (e.g., [165,166]) in inverse and structural optimization problems. While these SA/HS/BBBC variants clearly outperformed the reference algorithms in weight minimization of skeletal structures, improvements in computational cost were less significant for inverse problems because, in spite of enriching the metaheuristic search with gradient information, those variants evaluated gradients of the error functional Ω with a “brute-force” approach based on finite differences. In order to overcome this limitation, this study presents new hybrid formulations of SA, HS and BBBC that are significantly more efficient and robust than the algorithms currently available in the literature. For that purpose, low-cost gradient evaluation and approximate line search strategies are introduced in order to generate higher quality trial designs and a very large number of descent directions. Populations of candidate designs are renewed very dynamically by replacing as many designs as possible. All algorithms use a very fast 1-D probabilistic search derived from simulated annealing.
The new algorithms—denoted as Hybrid Fast Simulated Annealing (HFSA), Hybrid Fast Harmony Search (HFHS) and Hybrid Fast Big Bang-Big Crunch (HFBBBC)—are tested in three inverse elasticity problems: (i) mechanical characterization of a composite laminate used as substrate in electronic boards (four unknown elastic constants); (ii) mechanical characterization and layup identification of a composite unstiffened panel for aeronautical use (four unknown elastic constants and three unknown layup angles); (iii) mechanical characterization of bovine pericardium patches used in biomedical applications (sixteen unknown hyperelastic constants and the fiber orientation). Sensitivity of inverse problem solutions and convergence behavior to population size and initial design/population is evaluated in statistical terms.
The rest of this article is structured as follows: Section 2, Section 3 and Section 4, respectively, describe the new SA, HS and BBBC formulations developed here, pointing out the theoretical aspects behind the proposed enhancements and critically comparing the new formulations with currently available SA/HS/BBBC variants, including those developed in [77,81,83,84,162,166]. Section 5 presents the results obtained in the inverse problems. Finally, Section 6 discusses the main findings of this study.

2. Hybrid Fast Simulated Annealing

The flow chart of the new HFSA algorithm developed in this study is shown in Figure 1. HFSA includes a multi-level and multi-point formulation combining global and local annealing, evaluation of multiple trial points, and line search strategies based on fast gradient computation. The hybrid nature of HFSA derives from the fact that the metaheuristic search is enriched by approximate line searches. Similar to classical SA, the proposed algorithm starts by setting an initial design vector X0 as the current best record XOPT. The corresponding cost function value ΩOPT = Ω(XOPT) is computed. The counter of cooling cycles is set to K = 1 and the maximum number of cooling cycles to KMAX = 100. The initial temperature T0 is set equal to 0.1, close to the target value of 0 for the error functional Ω.

2.1. Step 1: Generate A New Trial Design with “Global” Annealing by Perturbing All Design Variables

Since determination of sensitivities ∂Ω/∂xj entails new structural analyses, material parameters taken as optimization variables are perturbed as follows:
$$x_j=x_{OPT,j}-\left(x_{OPT,j}^{\,l}-x_{OPT,j}^{\,l-1}\right)\frac{\left|\Omega_{OPT}^{\,l}-\Omega_{OPT}^{\,l-1}\right|}{\left\|X_{OPT}^{\,l}-X_{OPT}^{\,l-1}\right\|}\times N_{RND,j}\times\frac{\Omega_{OPT,l-1}}{\Omega_{OPT,l}}\qquad(j=1,\ldots,NMP)\qquad(2)$$
where XOPTl and XOPTl−1 are the best records for the last two iterations; Ω(XOPTl) and Ω(XOPTl−1) are the corresponding values of the error functional; NRND,j is a random number in the interval (0,1).
If Ω(XOPTl) < Ω(XOPTl−1), (XOPTlXOPTl−1) is a descent direction with respect to the previous best record XOPTl−1 while −(XOPTlXOPTl−1) may be a descent direction with respect to the current best record XOPTl. The approximate gradient of Ω is computed as |ΩOPTl − ΩOPTl−1|/||XOPTlXOPTl−1||: the absolute value accounts for the “−” sign included in Equation (2). The NRND,j random number preserves the heuristic character of the SA search while the ΩOPT,l−1/ΩOPT,l ratio forces the optimizer to take a large step along a potentially descent direction. Using approximate gradient evaluation allows computational cost of the inverse problem to be drastically reduced with respect to other SA applications [76,77,81,83,84].
A trial design XTR(xOPT,1 + Δx1, xOPT,2 + Δx2,…,xOPT,NMP−1 + ΔxNMP−1, xOPT,NMP + ΔxNMP) is hence formed.
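A minimal sketch of this “global” annealing perturbation (Equation (2)) is given below; the function name and the random generator are assumptions, and the approximate gradient is built only from the last two best records, so no additional FE analyses are required.

```python
import numpy as np

def global_annealing_trial(x_opt, x_opt_prev, omega_opt, omega_opt_prev, rng):
    """Perturb all variables at once as in Eq. (2) to form the trial design X_TR."""
    step = x_opt - x_opt_prev                               # (X_OPT^l - X_OPT^(l-1))
    approx_grad = abs(omega_opt - omega_opt_prev) / np.linalg.norm(step)
    amplification = omega_opt_prev / omega_opt              # Omega_OPT,l-1 / Omega_OPT,l
    delta = -step * approx_grad * rng.random(x_opt.size) * amplification
    return x_opt + delta

# usage sketch (values are purely illustrative):
# rng = np.random.default_rng()
# x_trial = global_annealing_trial(x_opt, x_opt_prev, 0.08, 0.12, rng)
```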

2.2. Step 2: Evaluation of the New Trial Design

If Ω(XTR) < Ω(XOPT), XTR is set as the new best record XOPT. Step 5 is executed in order to check for convergence and reset parameters K and TK.
If Ω(XTR) > Ω(XOPT), a “mirroring strategy” is used to perturb the design along a descent direction. In fact, since Ω(XTR) > Ω(XOPT) yields (XTR − XOPT)T∇Ω(XOPT) > 0, the −(XTR − XOPT)T∇Ω(XOPT) < 0 condition is expected to be satisfied, thus defining the new descent direction (XTRnew − XOPT) ≡ −(XTR − XOPT). The new candidate design XTRnew is defined as:
XTRnew = 2XOPT − XTR    (3)
If the mirror trial point XTRnew still does not improve XOPT (i.e., if Ω(XTRnew) > Ω(XOPT)), the cost function Ω(X) is approximated by a 4th order polynomial that passes through the five trial points XTR, XINT′, XOPT, XINT″ and XTRnew, where XINT′ is randomly generated on the segment limited by XTR and XOPT while XINT″ is randomly generated on the segment limited by XOPT and XTRnew. A local 1-D coordinate system is set for the segment limited by XTR and XTRnew: the origin is located at XOPT and coordinates are normalized with respect to the distance from the origin. The trial point XOPT* at which the approximate error functional ΩAPP(X) takes its minimum value is determined. An exact analysis is performed at XOPT* and the real value of the error functional Ω(XOPT*) is computed.
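The two devices used so far, the mirroring of Equation (3) and the 4th order polynomial approximation of Ω along the segment XTR–XTRnew, can be sketched as follows. This is only a tentative illustration; the dense sampling used to locate the minimizer of the quartic is an implementation choice, not something prescribed by the paper.

```python
import numpy as np

def mirror_design(x_opt, x_tr):
    """Mirroring strategy of Eq. (3): reflect the rejected trial about the best record."""
    return 2.0 * x_opt - x_tr

def quartic_minimizer(coords_1d, omega_values):
    """Fit the 4th order polynomial through the five points X_TR, X_INT', X_OPT,
    X_INT'', X_TRnew (given in local 1-D coordinates) and return the coordinate
    of its minimum, i.e., the candidate X_OPT*."""
    poly = np.poly1d(np.polyfit(coords_1d, omega_values, deg=4))  # exact fit through 5 points
    s = np.linspace(min(coords_1d), max(coords_1d), 2001)         # sample the bracketed segment
    return s[np.argmin(poly(s))]
```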
The following cases may occur. If Ω(XOPT*) < Ω(XOPT), XOPT* is reset as the current best record. Hence, Step 5 is executed in order to check for convergence and reset K and TK.
If Ω(XOPT*) > Ω(XOPT), the trial designs XTR, XINT′, XOPT*, XINT″ and XTRnew are evaluated with the Metropolis’ criterion. The cost function variation ΔΩs = [Ω(Xs) − Ω(XOPT)] is computed for these designs (the s subscript denotes TR, INT′, OPT*, INT″ and TRnew, respectively). For the trial design yielding the smallest increment ΔΩs > 0 (in all likelihood XOPT*), the Metropolis’ probability function is defined as:
$$P(\Delta\Omega_s)=e^{\textstyle -\frac{\Delta\Omega_s}{\left(\sum_{r=1}^{N_{DW}}\Delta\Omega_r/N_{DW}\right)T_K}}\qquad(4)$$
where NDW is the number of trial points at which the error functional was higher than the previously found best records up to the current iteration. The ΔΩr terms are the corresponding cost penalties. The ratio ∑r=1,NDW ΔΩr/NDW accounts for the overall information carried by all previous trial designs and normalizes the probability function with respect to cost function changes.
The design Xs is provisionally accepted or certainly rejected according to the Metropolis’ criterion:
$$\begin{cases}X_s\ \text{provisionally accepted} & \text{if}\;N_{RD_s}\le P(\Delta\Omega_s)\\ X_s\ \text{rejected} & \text{if}\;N_{RD_s}>P(\Delta\Omega_s)\end{cases}\qquad(5)$$
where NRDs is a random number defined in the interval (0, 1).
If Xs may be accepted according to Equation (5), it is added to the database Π, which includes all trial designs that could not improve the current best record. Hence, Step 4 is executed.
If all of the Xs points are rejected according to Equation (5), Step 3 is executed.
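A compact sketch of the normalized Metropolis test of Equations (4) and (5) is given below; the cumulative list of previously rejected increments and the random generator are assumed to be managed by the caller.

```python
import numpy as np

def metropolis_accept(delta_omega_s, rejected_increments, temperature, rng):
    """Return True if the candidate with cost increment delta_omega_s > 0 may be
    provisionally accepted according to Eqs. (4)-(5)."""
    # normalization by the average penalty of the N_DW previously rejected trials
    normalization = np.mean(rejected_increments) if len(rejected_increments) else 1.0
    probability = np.exp(-delta_omega_s / (normalization * temperature))  # Eq. (4)
    return rng.random() <= probability                                    # Eq. (5)
```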

2.3. Step 3: Generate New Designs with “Local” Annealing by Perturbing One Variable at A Time

In classical SA, Mann annealing cycles are completed and a total of Mann·NMP analyses are performed. Here, derivatives ∂Ω/∂xj (j = 1,2,…,NMP) computed at XOPT are sorted in ascending order from the minimum value to the maximum value. Following this order, design variables are perturbed one by one as:
xjTR,1d = xOPT,j ± (xjU − xjL)·NRND,j   (j = 1,2,…,NMP)
where the sign “+” is used if ∂Ω/∂xj < 0 while the sign “−” is used if ∂Ω/∂xj > 0. If xjTR violates side constraints, it is reset as follows:
[Equation (7), rendered as an image in the original, resets xjTR within the admissible interval [xjL, xjU] whenever the perturbed value violates a side constraint.]
Each new design XTRj(xOPT,1, xOPT,2, …, xjTR, …, xOPT,NMP−1, xOPT,NMP) defined with Equations (6) and (7) is evaluated, and the current best record is updated if Ω(XTRj) < Ω(XOPT). Conversely, if Ω(XTRj) > Ω(XOPT), the mirror trial point XTRj,mirr(xOPT,1, xOPT,2, …, 2xOPT,j − xjTR, …, xOPT,NMP−1, xOPT,NMP) is evaluated. Two scenarios may occur: (i) if Ω(XTRj,mirr) < Ω(XOPT), XTRj,mirr is set as the new best record XOPT; (ii) if Ω(XTRj,mirr) > Ω(XOPT) as well, XTRj and XTRj,mirr are evaluated with the Metropolis criterion (5) and the best of them is possibly set as the current best record. The 1-D search lasts until no improvement in design is achieved over two consecutive cycles.
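A sketch of this 1-D perturbation scheme is given below. The clipping used to handle side-constraint violations is only a placeholder for Equation (7), which is rendered as an image in the original article.

```python
import numpy as np

def local_annealing_trials(x_opt, grad, x_lower, x_upper, rng):
    """Generate the one-variable-at-a-time trial designs of Eq. (6), perturbing the
    variables in ascending order of their derivatives dOmega/dx_j."""
    trials = []
    for j in np.argsort(grad):                       # ascending order of dOmega/dx_j
        sign = 1.0 if grad[j] < 0.0 else -1.0        # move opposite to the derivative sign
        x_trial = x_opt.copy()
        x_trial[j] = x_opt[j] + sign * (x_upper[j] - x_lower[j]) * rng.random()
        x_trial[j] = np.clip(x_trial[j], x_lower[j], x_upper[j])   # stands in for Eq. (7)
        trials.append(x_trial)
    return trials
```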
Similar to the “global” annealing strategy, the 1-D probabilistic search attempts to generate trial designs lying on descent directions. However, the perturbation starts from the most sensitive variables in order to capture the effect of each single variable in a more efficient way. In fact, in the global annealing search, the vector (XTR − XOPT) was defined so as to have (XTR − XOPT)T∇Ω(XOPT) < 0, thus forming a descent direction. However, the non-linearity of the cost function may make such a condition insufficient for improving the design. In view of this, “local” annealing selects the most important terms (XTR − XOPT)j∂Ω/∂xj that form the cost function variation and promptly corrects them should they not contribute effectively to the reduction of the cost function.

2.4. Step 4: Evaluation of Trial Designs that Satisfy Metropolis’ Criterion

If there are no trial designs for which the cost function decreases, HFSA extracts from the database Π (which includes the designs that satisfied the Metropolis’ criterion) the design XjBEST with the lowest cost function value, and sets this design as the current best record. Hence, the increase in cost is minimized each time all improvement routines fail and the 1-D local annealing search cannot improve the design.

2.5. Step 5: Check for Convergence and, If Necessary, Reset Parameters for A New Cooling Cycle

If the annealing cycles counter K > 3, HFSA utilizes the following convergence criterion:
$$\text{Max}\left\{\text{Max}\left[\frac{\left|\Omega_{OPT,K}-\Omega_{OPT,K-1}\right|}{\Omega_{OPT,K}};\frac{\left|\Omega_{OPT,K-1}-\Omega_{OPT,K-2}\right|}{\Omega_{OPT,K-1}};\frac{\left|\Omega_{OPT,K-2}-\Omega_{OPT,K-3}\right|}{\Omega_{OPT,K-2}}\right];\right.$$
$$\left.\text{Max}\left[\frac{\left\|X_{OPT,K}-X_{OPT,K-1}\right\|}{\left\|X_{OPT,K}\right\|};\frac{\left\|X_{OPT,K-1}-X_{OPT,K-2}\right\|}{\left\|X_{OPT,K-1}\right\|};\frac{\left\|X_{OPT,K-2}-X_{OPT,K-3}\right\|}{\left\|X_{OPT,K-2}\right\|}\right]\right\}\le\varepsilon_{CONV}\qquad(8)$$
where ΩOPT,K and XOPT,K, respectively, are the best record and corresponding design vector obtained in the Kth cooling cycle. The convergence parameter εCONV is set equal to 10^−7.
If the criterion (8) is satisfied or K = KMAX, go to Step 6.
Conversely, if K < 3 or the stopping criterion is not satisfied (also for K ≥ 3), the counter of cooling cycles is updated to K = K + 1. The temperature is adaptively reduced as TK+1 = βK TK where:
$$\beta_K=\left[\sum_{r=0}^{K-1}\beta_r/K\right]\times\text{Max}\left[\frac{0.95}{1+N_{REJE}/N_{TRIA}};\left(1-\frac{\Omega_{FIN,K-1}}{\Omega_{INIT,K-1}}\right)\right]\qquad(9)$$
where ΩINIT,K−1 and ΩFIN,K−1, respectively, are the cost function values at the beginning and at the end of the current annealing cycle. NREJE is the number of trial designs rejected out of the total number of trial designs NTRIA generated in the current cooling cycle.
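The convergence test of Equation (8) and the adaptive cooling schedule of Equation (9) can be sketched as follows, assuming the caller stores the histories of best records, best designs and previous cooling factors:

```python
import numpy as np

def hfsa_converged(omega_hist, x_hist, eps_conv=1.0e-7):
    """Convergence criterion of Eq. (8) over the last four cooling cycles."""
    rel_omega = [abs(omega_hist[-k] - omega_hist[-k - 1]) / omega_hist[-k] for k in (1, 2, 3)]
    rel_x = [np.linalg.norm(x_hist[-k] - x_hist[-k - 1]) / np.linalg.norm(x_hist[-k])
             for k in (1, 2, 3)]
    return max(max(rel_omega), max(rel_x)) <= eps_conv

def next_temperature(t_k, beta_history, n_reje, n_tria, omega_init, omega_fin):
    """Adaptive cooling of Eq. (9): returns beta_K and T_{K+1} = beta_K * T_K."""
    beta_k = np.mean(beta_history) * max(0.95 / (1.0 + n_reje / n_tria),
                                         1.0 - omega_fin / omega_init)
    return beta_k, beta_k * t_k
```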

2.6. Step 6: End Optimization Process

HFSA terminates the optimization process and writes the output data in the results file.

3. Hybrid Fast Harmony Search

The new HFHS algorithm developed in this research is now described in detail. Like HFSA, HFHS enriches the metaheuristic search with gradient information and approximate line searches. This is done at low computational cost. Furthermore, the HS engine is enhanced by a 1-D probabilistic search based on simulated annealing. Like many state-of-the-art HS variants, HFHS adaptively changes internal parameters such as the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR) during the optimization process based on the convergence history. Lastly, the classical harmony refinement process based on the bandwidth parameter (bw) is replaced by another random movement that forces HFHS to refine the new harmony by moving along a descent direction. The flow chart of the new algorithm is presented in Figure 2.
The initial population of NPOP solutions is randomly generated using the following equation:
$$x_j^k=x_j^L+\rho_j^k\left(x_j^U-x_j^L\right)\qquad(k=1,2,\ldots,N_{POP};\;j=1,2,\ldots,NMP)\qquad(10)$$
where ρjk is a random number uniformly generated in the (0,1) interval.
These designs are sorted in ascending order according to the values taken by the error functional Ω and stored in the harmony memory matrix [HM]:
$$[HM]=\begin{bmatrix}x_1^1 & x_1^2 & \cdots & x_1^{N_{POP}-1} & x_1^{N_{POP}}\\ x_2^1 & x_2^2 & \cdots & x_2^{N_{POP}-1} & x_2^{N_{POP}}\\ \vdots & \vdots & & \vdots & \vdots\\ x_{NMP-1}^1 & x_{NMP-1}^2 & \cdots & x_{NMP-1}^{N_{POP}-1} & x_{NMP-1}^{N_{POP}}\\ x_{NMP}^1 & x_{NMP}^2 & \cdots & x_{NMP}^{N_{POP}-1} & x_{NMP}^{N_{POP}}\end{bmatrix}\qquad(11)$$
As mentioned above, the present HFHS algorithm does not require initialization of internal parameters HMCR, PAR and bw.
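A sketch of the initialization stage described by Equations (10) and (11) is shown below; for convenience the designs are stored row-wise rather than column-wise as in the [HM] matrix of Equation (11), and `error_functional` is assumed to wrap one FE analysis per candidate design.

```python
import numpy as np

def initialize_harmony_memory(n_pop, x_lower, x_upper, error_functional, rng):
    """Random initial population (Eq. (10)) sorted by increasing error functional."""
    x_lower = np.asarray(x_lower, dtype=float)
    x_upper = np.asarray(x_upper, dtype=float)
    hm = x_lower + rng.random((n_pop, x_lower.size)) * (x_upper - x_lower)
    omega = np.array([error_functional(x) for x in hm])   # one FE analysis per design
    order = np.argsort(omega)                             # best (lowest Omega) design first
    return hm[order], omega[order]
```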

3.1. Step 1: Generation and Adjustment of a New Harmony with Adaptive Parameter Selection

Let XOPT = {xOPT,1,xOPT,2,…,xOPT,NMP} be the best design stored in the population, corresponding to ΩOPT. The gradient of the error functional with respect to the design variables, ∇Ω(XOPT), is computed at XOPT. For each variable, a random number NRND,j is extracted from the (0,1) interval.
If NRND,j > HMCR, the new value xTR,j assigned to the jth optimization variable (j = 1,2,…,NMP) currently perturbed is:
[Equation (12), rendered as an image in the original, assigns xTR,j by perturbing xOPT,j along the descent direction indicated by the sign of ∂Ω/∂xj, with a step weighted by the normalized sensitivity μj, the random number NRND,j and the distance (xOPT,j − xjL) or (xjU − xOPT,j) from the bound lying in that direction.]
where ∂Ω/∂xj is the cost function sensitivity with respect to the jth design variable currently perturbed, and μj = (∂Ω/∂xj)/‖∇Ω(XOPT)‖ is the sensitivity coefficient normalized with respect to the modulus of the gradient vector. Sensitivities ∂Ω/∂xj are computed with Equation (22), which will be described later on in this section. Using the ‘+’ sign if ∂Ω/∂xj < 0 and the ‘−’ sign if ∂Ω/∂xj > 0 allows trial points lying on descent directions to be generated.
If HMCR is small, Equation (12) is more likely to be used. The (xTR,j − xOPT,j) perturbations given to each design variable are weighted by the sensitivities to form the cost function variation ΔΩTR for the new harmony XTR. This variation is expressed by the scalar product ΔΩTR between the gradient ∇Ω(XOPT) and the search direction STR = (XTR − XOPT) formed by the new harmony and the current best record. If ΔΩTR < 0, STR is a descent direction. In order to make STR a descent direction, all increments (xTR,j − xOPT,j)∂Ω/∂xj must hence be negative. The following strategy is adopted to retain or adjust the perturbation given to the current design variable (j = 1,2,…,NMP):
[Equation (13), rendered as an image in the original: the perturbation (xTR,j − xOPT,j) is retained if (xTR,j − xOPT,j)∂Ω/∂xj < 0, otherwise it is mirrored to the opposite side of xOPT,j.]
Hence, Equations (12) and (13) randomly generate new trial designs that must lie on descent directions. Sensitivities are computed at the current best record to improve convergence speed. The mirroring strategy implemented by the second relationship of Equation (13) attempts to transform the non-descent direction STR into the descent direction −STR by perturbing design in the opposite direction.
The effect of the distance of the current best record XOPT from the side constraint boundaries is taken into account by perturbing design variables by the largest possible step (i.e., (xOPT,j − xjL) or (xjU − xOPT,j)) along the currently defined descent direction. This maximizes the improvement in the cost function.
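Since Equations (12) and (13) appear only as images in the source, the snippet below is a hedged reconstruction of the adjustment logic described in the text: a per-variable perturbation is kept if its contribution to ΔΩTR is negative and mirrored to the other side of xOPT,j otherwise. Side-constraint handling is left to the caller.

```python
def enforce_descent_increment(x_tr_j, x_opt_j, domega_dxj):
    """Keep or mirror the perturbation of one variable so that the product
    (x_TR,j - x_OPT,j) * dOmega/dx_j is non-positive (descent component)."""
    increment = x_tr_j - x_opt_j
    if increment * domega_dxj <= 0.0:
        return x_tr_j                    # already contributes to a descent direction
    return x_opt_j - increment           # mirror the step to the opposite side of x_OPT,j
```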
If NRND,j < HMCR, the new value xTR,j assigned to the jth variable is defined as (j = 1,2,…,NMP):
$$x_{TR,j}=\hat{x}_{TR,j}^{\,HM}+\left(N_{RND,j}-0.5\right)\times\text{Max}\left[\left(\hat{x}_{TR,j}^{\,HM}-\hat{x}_{TR,j}^{\,HM,less}\right),\left(\hat{x}_{TR,j}^{\,HM,more}-\hat{x}_{TR,j}^{\,HM}\right)\right]\qquad(14)$$
where x̂TR,jHM,less and x̂TR,jHM,more are the two values adjacent to the value x̂TR,jHM stored in [HM], such that x̂TR,jHM,less < x̂TR,jHM < x̂TR,jHM,more. Unlike classical HS and advanced formulations [163,164,167], HFHS does not select the value xTR,j from the jth column of the harmony memory storing the values of the corresponding variable for each design of the population. This enhances the diversity of the optimization process and allows stagnation to be avoided.
By considering the difference (NRND,j − 0.5), it is possible to increase or reduce the x̂TR,jHM value. The xTR,j value is then adjusted with Equation (13) to make the step (xTR,j − xOPT,j) also lie on a descent direction. Conversely, in classical HS and in [164,165,166], the pitch adjusting operation did not include any information on how sensitive the design may be to the currently analyzed variable.
If NRND,j < Min(HMCR,PAR) also holds, the xTR,j value is finally pitch adjusted as:
$$x_{TR,j}^{\,pitch,adj}=x_{TR,j}+\lambda_{scale}\times N_{RND,j}\times\left|x_{TR,j}-x_{OPT,j}\right|\times\frac{N_{G}^{\,pitch,adj}}{N_{G}^{\,tot}}\qquad(j=1,2,\ldots,NMP)\qquad(15)$$
where NGpitch,adj is the number of previously pitch adjusted trial designs; NGtot is the total number of trial designs generated in the optimization search. The NGpitch,adj parameter is reset as (NGpitch,adj + 1) if the number of pitch adjusted variables included in a new harmony is larger than the number of design variables perturbed with Equation (12) using gradient information.
The scale parameter λscale is set as:
$$\lambda_{scale}=\begin{cases}\;\;\,1 & \text{if}\;\left(x_{TR,j}-x_{OPT,j}\right)<0\\ -1 & \text{if}\;\left(x_{TR,j}-x_{OPT,j}\right)>0\end{cases}\qquad(j=1,2,\ldots,NMP)\qquad(16)$$
Equations (14)–(16) replace the bandwidth parameter bw usually adopted in many HS variants. The new harmony XTR(xTR,1,xTR,2,…,xTR,NMP) can be decomposed into NMP sub-harmonies XTR,j(xOPT,1,xOPT,2,…,xTR,j,…,xOPT,NMP), obtained by perturbing only one design variable at a time, that lie on descent directions. These movements are amplified by the scale factor defined by Equation (16). Furthermore, Equation (15) also accounts for the optimization history. In fact, since NGpitch,adj/NGtot decreases as the optimization progresses, the perturbation step defined to pitch adjust each design variable gets finer as the optimum is approached.
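A tentative sketch of the pitch-adjusting rules of Equations (14)–(16) is given below; the sign convention of λscale follows the reconstruction of Equation (16) above, and bound handling of the adjusted value is left to the caller.

```python
def pitch_adjust(x_hm, x_hm_less, x_hm_more, x_opt_j, n_rnd, ng_pitch_adj, ng_tot):
    """Pitch adjust one variable starting from the value x_hm picked from [HM]."""
    # Eq. (14): move x_hm towards one of its adjacent values stored in [HM]
    spread = max(x_hm - x_hm_less, x_hm_more - x_hm)
    x_tr_j = x_hm + (n_rnd - 0.5) * spread
    # Eq. (16): scale factor (sign convention assumed from the reconstruction above)
    lam = 1.0 if (x_tr_j - x_opt_j) < 0.0 else -1.0
    # Eq. (15): refinement step gets finer as NG_pitch,adj / NG_tot decreases
    return x_tr_j + lam * n_rnd * abs(x_tr_j - x_opt_j) * ng_pitch_adj / ng_tot
```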
The case NRND,j < HMCR and NRND,j > PAR is handled with Equation (13), possibly including the mirroring strategy. Hence, the present algorithm intrinsically pitch adjusts design variables and in any case tries to improve the current design.
As mentioned above, new values of HMCR and PAR parameters are randomly generated in each new iteration and adapted based on optimization history. In the qth iteration, HMCR and PAR are set as:
$$HMCR_q=HMCR_{extracted}^{\,q}\times\frac{\Omega_{aver,end}^{\,q-1}}{\Omega_{aver,init}^{\,q-1}}\times\frac{N_{G}^{\,pitch,adj}}{N_{G}^{\,gradient}}\qquad(17)$$
$$PAR_q=PAR_{extracted}^{\,q}\times\frac{\Omega_{aver,end}^{\,q-1}}{\Omega_{aver,init}^{\,q-1}}\times\frac{\left\|X_{OPT,end}-X_{WORST,end}\right\|^{\,q-1}}{\left\|X_{OPT,init}-X_{WORST,init}\right\|^{\,q-1}}\times\frac{N_{G}^{\,pitch,adj}}{N_{G}^{\,gradient}}\qquad(18)$$
In Equations (17) and (18), Ωaver,initq−1 and Ωaver,endq−1, respectively, are the average values of cost function for the trial designs included in the harmony memory at the beginning and the end of the previous optimization iteration (the Ωaver,endq−1/Ωaver,initq−1 ratio should always be smaller than 1). XOPT,init and XWORST,init, XOPT,end and XWORST,end, respectively, denote the best and worst designs at the beginning and the end of the previous iteration. NGgradient is the number of trial designs generated by including gradient information: this parameter is reset to (NGgradient + 1) if the number of design variables perturbed with Equation (12) is greater than NMP/2.
Random values HMCRextractedq and PARextractedq are defined as:
$$\begin{cases}HMCR_{extracted}^{\,q}=0.01+\xi_{HMCR}\times(0.99-0.01)\\ PAR_{extracted}^{\,q}=0.01+\xi_{PAR}\times(0.99-0.01)\end{cases}\qquad(19)$$
where ξHMCR and ξPAR are two random numbers in the interval (0,1). The bounds of 0.01 and 0.99 set in Equation (19) allow all possible values of internal parameters to be covered [168].
In the first iteration (q = 1), it obviously holds HMCRq = HMCRextractedq and PARq = PARextractedq.
Equations (17) and (18) rely on the following rationale: the error functional may decrease more rapidly if large perturbations are given to many variables. This is more likely to happen when gradient information is directly utilized, that is when it holds NRND,j > HMCR. In order to increase the probability of using Equation (12) for many design variables, the HMCR value randomly generated is scaled by the (Ωaver,endq−1/Ωaver,initq−1) ratio. The generation process of new harmonies is hence forced to be consistent with the current rate of reduction of Ω.
Furthermore, HMCRq is scaled by the NGpitch,adj/NGgradient ratio. If the number of new harmonies generated via pitch adjusting tends to be smaller than the number of new harmonies directly generated including gradient information (i.e., if NGpitch,adj/NGgradient < 1), it is more logical to keep following such a trend.
Similar arguments hold for the PARq parameter. Pitch adjustment is performed if NRND,j < Min(HMCR,PAR). Besides information on cost function reduction rate (Ωaver,endq−1/Ωaver,initq−1), Equation (18) accounts for population diversity. In fact, pitch adjusting is less effective as population becomes less sparse, that is when the ||XOPT,endXWORST,end||/||XOPT,initXWORST,init|| ratio decreases. Again, the NGpitch,adj/NGgradient ratio preserves the current trend of variation of the pitch adjusting rate parameter.
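The adaptive parameter update of Equations (17)–(19) may be sketched as follows; all ratios are assumed to be provided by the bookkeeping of the previous iteration (the `spread_*` arguments are the ‖XOPT − XWORST‖ distances at its beginning and end).

```python
def adaptive_hmcr_par(rng, omega_aver_init, omega_aver_end,
                      spread_init, spread_end, ng_pitch_adj, ng_gradient):
    """Return (HMCR_q, PAR_q) for the current iteration, Eqs. (17)-(19)."""
    hmcr_extracted = 0.01 + rng.random() * (0.99 - 0.01)     # Eq. (19)
    par_extracted = 0.01 + rng.random() * (0.99 - 0.01)
    reduction = omega_aver_end / omega_aver_init             # rate of reduction of Omega
    trend = ng_pitch_adj / ng_gradient                       # pitch-adjust vs. gradient trend
    hmcr_q = hmcr_extracted * reduction * trend              # Eq. (17)
    par_q = par_extracted * reduction * (spread_end / spread_init) * trend   # Eq. (18)
    return hmcr_q, par_q
```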

Determination of Sensitivities of Ω

Since the error functional is implicitly defined, gradients are determined by approximate line search. The cost function variation ΔΩk = [Ω(Xk) − Ω(XOPT)] that occurs by moving from the best design XOPT to the kth design Xk stored in the harmony memory is determined for all designs. The corresponding distance ΔSk = ||Xk − XOPT|| is computed. The approximate (i.e., “average”) gradient along each direction Sk = (Xk − XOPT) is computed as ΔΩk/ΔSk. Since the new harmony must lie on a descent direction, the Sk vectors must be transformed into descent directions Sdesck = (XOPT − Xk), that is, Sdesck = −Sk. Three descent directions are considered: (i) the best direction SBEST corresponding to the largest cost variation between candidate designs (this is the direction opposite to (XWORST − XOPT)); (ii) the steepest descent direction SFAST corresponding to the largest gradient ΔΩk/ΔSk; (iii) the second best direction S2ndBEST corresponding to the second largest cost variation between candidate designs (this is the direction opposite to (X2ndWORST − XOPT)).
Figure 3 illustrates the formation of the descent directions and their mutual positions with respect to the gradient of the cost functional. Any descent direction should lie within the region limited by SBEST, S2ndBEST and SFAST. However, if the problem is highly nonlinear, it may happen that the cost function oscillates along these directions and exceeds the current optimum cost. For this reason, limited step sizes are taken along SBEST, S2ndBEST and SFAST. The scale factors βBEST, β2ndBEST and βFAST are defined so that SBESTunit, S2ndBESTunit and SFASTunit are unit vectors. If ||SBEST|| < 1 or ||S2ndBEST|| < 1 or ||SFAST|| < 1, the corresponding unit direction coincides with the original direction and remains a descent direction. In order to check whether the unit directions are descent directions, three new trial designs XGR(1), XGR(2) and XGR(3) are defined as:
$$\begin{cases}X_{GR}^{(1)}=X_{OPT}+S_{BEST}^{\,unit}\\ X_{GR}^{(2)}=X_{OPT}+S_{2ndBEST}^{\,unit}\\ X_{GR}^{(3)}=X_{OPT}+S_{FAST}^{\,unit}\end{cases}\qquad(20)$$
The SBESTunit, S2ndBESTunit and SFASTunit unit vectors are classified as descent directions if the following conditions hold true, respectively:
$$\begin{cases}\Omega\!\left(X_{GR}^{(1)}\right)-\Omega_{OPT}<0\;\Rightarrow\;S_{BEST}^{\,unit}\ \text{descent}\\ \Omega\!\left(X_{GR}^{(2)}\right)-\Omega_{OPT}<0\;\Rightarrow\;S_{2ndBEST}^{\,unit}\ \text{descent}\\ \Omega\!\left(X_{GR}^{(3)}\right)-\Omega_{OPT}<0\;\Rightarrow\;S_{FAST}^{\,unit}\ \text{descent}\end{cases}\qquad(21)$$
At this point, it is very likely that there will be between one and three unit descent directions in the neighborhood of the current best record. Sensitivities are hence defined as follows (j = 1,2,…NMP):
$$\frac{\partial\Omega}{\partial x_j}=\text{Min}\left\{\frac{\left[\Omega\!\left(X_{GR}^{(1)}\right)-\Omega_{OPT}\right]/\left\|S_{BEST}^{\,unit}\right\|}{\left(x_{GR,j}^{(1)}-x_{OPT,j}\right)/\left\|S_{BEST}^{\,unit}\right\|};\;\frac{\left[\Omega\!\left(X_{GR}^{(2)}\right)-\Omega_{OPT}\right]/\left\|S_{2ndBEST}^{\,unit}\right\|}{\left(x_{GR,j}^{(2)}-x_{OPT,j}\right)/\left\|S_{2ndBEST}^{\,unit}\right\|};\;\frac{\left[\Omega\!\left(X_{GR}^{(3)}\right)-\Omega_{OPT}\right]/\left\|S_{FAST}^{\,unit}\right\|}{\left(x_{GR,j}^{(3)}-x_{OPT,j}\right)/\left\|S_{FAST}^{\,unit}\right\|}\right\}\qquad(22)$$
Equation (22) shows that the directional derivative along a unit direction is scaled by the direction cosines in order to get sensitivities with respect to optimization variables. The minimum in Equation (22) accounts for the possibility of having non-descent unit directions. In the limit case of three non-descent directions, sensitivity is set equal to the minimum positive value so as to minimize the cost function increment in the neighborhood of the current best record. The approximate gradient evaluation strategy implemented by HFHS allows computational cost of the identification process to be significantly reduced with respect to previously developed HS variants. Once derivatives are computed, Step 1 is completed in the same way as described before.
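The sensitivity evaluation of Equations (20)–(22) can be sketched as follows. The population is assumed to be stored row-wise and to exclude the current best record, and `error_functional` wraps one FE analysis; only three extra analyses are needed, one per probed direction. The handling of the limit case with no descent direction is omitted here.

```python
import numpy as np

def approximate_sensitivities(x_opt, omega_opt, population, omega_pop, error_functional):
    """Estimate dOmega/dx_j from the best, second best and steepest descent directions."""
    diffs = population - x_opt                       # S_k = X_k - X_OPT
    d_omega = omega_pop - omega_opt                  # Delta Omega_k for each stored design
    order = np.argsort(d_omega)
    idx_best, idx_2nd = order[-1], order[-2]         # largest / second largest cost variation
    idx_fast = np.argmax(d_omega / np.linalg.norm(diffs, axis=1))   # steepest direction
    per_direction = []
    for idx in (idx_best, idx_2nd, idx_fast):
        s = -diffs[idx]                              # descent direction X_OPT - X_k
        norm = np.linalg.norm(s)
        s_unit = s / norm if norm > 1.0 else s       # normalize only if ||S|| > 1
        d_probe = error_functional(x_opt + s_unit) - omega_opt      # Eq. (20), one analysis
        safe = np.where(s_unit == 0.0, np.nan, s_unit)
        per_direction.append(d_probe / safe)         # directional derivative / direction cosine
    return np.nanmin(per_direction, axis=0)          # Eq. (22), component-wise minimum
```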

3.2. Step 2: Evaluation of The New Trial Design

The quality of the new harmony XTR defined in Step 1 is evaluated in this step. In classical HS, if the new trial design XTR is better than the worst design XWORST currently stored in the harmony memory, it replaces the worst design in [HM]. The sophisticated generation mechanism developed in this research gives the new trial design a high probability of improving also the current best record. The following cases may occur: (i) Ω(XTR) < ΩOPT; (ii) Ω(XTR) > ΩOPT.
If Ω(XTR) < ΩOPT, the worst design is removed and the new harmony XTR is set as the current best record. The former optimum design becomes the second best design stored in the population. The remaining (NPOP − 2) designs are analyzed. Let (XNPOP−2)r be a generic harmony of these (NPOP − 2) designs. For each remaining harmony (XNPOP−2)r, the approximate gradient with respect to the current optimum is determined as ΔΩr/ΔSr, where ΔΩr = [Ω((XNPOP−2)r) − ΩOPT] and ΔSr = ||(XNPOP−2)rXOPT||. Let XNPOP−2FAST be the harmony corresponding to the largest approximate gradient. Each (XNPOP−2)r harmony is tentatively updated using Equation (23), with r∈(NPOP − 2):
(XNPOP−2)r,new = (XNPOP−2)r + ηBEST × [XOPT − (XNPOP−2)r] + η2ndBEST × [XOPT − (XNPOP−2)r] + ηFAST × [(XOPTXNPOP−2FAST)]
where ηBEST, η2ndBEST and ηFAST are three random numbers extracted in the (0,1) interval.
If Ω((XNPOP−2)r,new) < Ω((XNPOP−2)r), the new harmony (XNPOP−2)r,new replaces the old harmony (XNPOP−2)r. Otherwise, the new harmony is discarded and the old harmony is kept in the population. The population is reordered based on the cost of each harmony. Equation (23) introduces a sort of social behavior that induces harmonies to approach the two best designs stored in the population and to reduce the cost function as fast as possible.
If Ω(XTR) > ΩOPT, the new harmony XTR is compared with the rest of the population. Let us assume that XTR ranks pth in the population of NPOP designs. The former worst design is removed from the population and the former second worst design becomes the new worst design. Hence, there are (p − 1) better designs than XTR and (NPOP − p) worse designs than XTR.
The (NPOP − p) designs are analyzed similarly to what is done for Ω(XTR) < ΩOPT. New harmonies are defined using Equation (24), with r∈(NPOP−p):
(XNPOP−p)r,new = (XNPOP−p)r + ηBEST·[XOPT − (XNPOP−p)r] + η2ndBEST·[XOPT − (XNPOP−p)r] + ηFAST·[(XOPTXNPOP−pFAST)]
where ηBEST, η2ndBEST and ηFAST are three random numbers in the (0,1) interval. The new harmony (XNPOP−p)r,new replaces the old harmony (XNPOP−p)r if it yields a lower value of error functional. The population is reordered based on the new values of Ω.
This strategy has the following rationale. Although XTR could not replace the optimum, it has a higher quality than the other designs of the population that rank below it. Hence, those individuals try to imitate its behavior, at least approaching the optimum and improving their positions.
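The social update of Equations (23) and (24), applied either to the remaining NPOP − 2 harmonies or to the NPOP − p harmonies ranked below XTR, can be sketched as a single helper; the candidate is retained by the caller only if it lowers the error functional.

```python
import numpy as np

def drag_towards_best(x_r, x_opt, x_fast, rng):
    """Tentative update of one harmony as in Eqs. (23)-(24): the design takes two random
    steps towards the best record and one step along the direction (X_OPT - X_FAST)."""
    eta_best, eta_2nd, eta_fast = rng.random(3)
    return (x_r
            + eta_best * (x_opt - x_r)
            + eta_2nd * (x_opt - x_r)
            + eta_fast * (x_opt - x_fast))
```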

3.3. Step 3: Perform 1-D “Local” Annealing Search

If Step 2 could not improve XOPT, the 1-D “local” annealing search mechanism described in Section 2 is utilized. Variables are perturbed in the neighborhood of XOPT based on the magnitude of sensitivities ∂Ω/∂xj. Trial designs that yield a positive increment ΔΩs > 0 with respect to ΩOPT and satisfy the Metropolis’ criterion replace the worst designs stored in the harmony memory [HM].

3.4. Step 4: Check for Convergence

As the optimization process proceeds towards the global optimum, the population sparsity must decrease. For this reason, the “average” design is defined as Xaver = (∑k=1,NPOP Xk)/NPOP. The average value of the error functional is defined as Ωaver = (∑k=1,NPOP Ω(Xk))/NPOP.
The following termination criterion is utilized in this research:
$$\text{Max}\left\{\frac{\text{STD}\left\{\left\|X_1-X_{aver}\right\|,\left\|X_2-X_{aver}\right\|,\ldots,\left\|X_{N_{POP}}-X_{aver}\right\|\right\}}{\left\|X_{aver}\right\|};\;\frac{\text{STD}\left\{\Omega_1,\Omega_2,\ldots,\Omega_k,\ldots,\Omega_{N_{POP}}\right\}}{\Omega_{aver}}\right\}\le\varepsilon_{CONV}\qquad(25)$$
where the convergence limit εCONV is set equal to 10^−15, smaller than the double precision limit used in computing technology. Steps 1 to 4 are repeated until the HFHS algorithm converges to the global optimum.
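The sparsity-based termination test of Equation (25) translates directly into a few lines; the population is again assumed to be stored row-wise.

```python
import numpy as np

def hfhs_converged(population, omega_values, eps_conv=1.0e-15):
    """Termination criterion of Eq. (25) based on population sparsity."""
    x_aver = population.mean(axis=0)                       # "average" design X_aver
    omega_aver = omega_values.mean()
    spread_x = np.std(np.linalg.norm(population - x_aver, axis=1)) / np.linalg.norm(x_aver)
    spread_omega = np.std(omega_values) / omega_aver
    return max(spread_x, spread_omega) <= eps_conv
```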

3.5. Step 5: End Optimization Process

The present HFHS algorithm terminates the optimization process and writes the output data in the results file.

4. Hybrid Fast Big Bang-Big Crunch

The new HFBBBC algorithm developed in this research is described in detail in this section. The strengths of the present formulation with respect to state-of-the-art BBBC variants can be summarized as follows. First, similar to HFSA and HFHS, the computation of sensitivities does not entail new structural analyses. Second, the descent directions from which XTR is generated are more accurately selected. Third, the population is dynamically updated so as to simulate an explosion about the center of mass, but with all new trial designs lying on potentially descent directions. The hybrid nature of the HFBBBC algorithm comes from the combination of the explosion/contraction process with 1-D local annealing and line search mechanisms. The flow chart of the algorithm is shown in Figure 4.
Like HFHS, the initial population of NPOP designs used by HFBBBC is generated with Equation (10). The present algorithm does not require any setting of internal parameters except for the population size NPOP. Error functional is evaluated for all candidate solutions. The best design XOPT corresponding to the lowest value of error functional ΩOPT is determined.

4.1. Step 1: Definition of The Center of Mass

The coordinates of the center of mass of the population XCM(xCM,1,xCM,2,…,xCM,NMP) are defined as:
$$x_{CM,j}=\left(\sum_{k=1}^{N_{POP}}\frac{x_j^k}{\Omega_k}\right)\Big/\left(\sum_{k=1}^{N_{POP}}\frac{1}{\Omega_k}\right)\qquad(j=1,\ldots,NMP)\qquad(26)$$
where xjk is the value of the jth optimization variable stored in the kth trial design and Ωk is the error functional value for the kth trial design. Penalty functions can be used to sort designs. The weighting coefficients 1/Ωk make the position of the center of mass more sensitive to the best designs stored in the population.
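Equation (26) amounts to a weighted average of the stored designs, as in the short sketch below (row-wise population, hypothetical function name):

```python
import numpy as np

def center_of_mass(population, omega_values):
    """Weighted center of mass of Eq. (26): each design weighs 1/Omega_k, so the
    lowest-cost designs pull X_CM the most."""
    weights = 1.0 / np.asarray(omega_values, dtype=float)
    return (population * weights[:, None]).sum(axis=0) / weights.sum()
```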

4.2. Step 2: Evaluation of The Center of Mass and Progressive Update of XCM As Current Best Record

Error functional is evaluated at XCM. As mentioned above, BBBC formulations usually converge to the optimum design by updating the position of XCM. However, there is no guarantee that the new XCM may be the center of a better population. Since XCM represents a weighted average of candidate designs, its quality will be somewhere in between the worst and best individuals included in the population. The present HFBBBC algorithm considers two cases: (i) XCM is better than XOPT; (ii) XCM is worse than XOPT.
In [164,165,167], XCM was reset as XOPT if case (i) occurred. The worst design included in the population was replaced by XCM and a new center of mass was defined. The same was done until case (ii) occurred. That approach avoids performing a new explosion about each new center of mass, thus saving NPOP structural analyses with respect to classical BBBC. However, only one design at a time was replaced, whereas classical BBBC renews the whole population each time XCM is updated. In order to overcome this limitation without increasing the computational cost, the following strategy has been implemented in this study.
If XCM is better than XOPT, it is reset as XOPT. The former best record becomes the second best design. The worst design is removed from the population. Any direction defined as (XOPTXk) is a descent direction with respect to Xk because Ω(Xk) > Ω(XOPT): the design improves as we move away from Xk. However, (XOPTXk) is also opposite to (XkXOPT), which is a non-descent direction with respect to XOPT. If cost functional gradient changes smoothly, a direction which was descent for Xk may remain descent also for XOPT. In view of this, HFBBBC tentatively updates designs as:
Xktentative = Xk + (1 + ξk)(XOPTXk)    (k = 1,…,NPOP − 1)
where ξk is a random number in the interval (−1,1). If ξk∈(−1,0), Xktentative lies between Xk and XOPT; if ξk∈(0,1), Xktentative lies beyond XOPT.
Since the Xktentative designs are potentially better than the Xk designs as they have been defined by moving towards the current best record or trying to improve XOPT itself, a population including the (NPOP − 1) designs Xktentative and the current best record XOPT should be of higher quality than the current population. Consequently, the new center of mass XCMtentative should be better than the former center of mass XCM defined for the original population and further improve XOPT.
In order to reduce computational cost, approximate values of error functional ΩAPP(Xktentative) are determined as:
$$\begin{cases}\Omega\!\left(X_k^{tentative}\right)=\Omega(X_k)\,\dfrac{\left\|X_k^{tentative}-X_k\right\|}{\left\|X_{OPT}-X_k\right\|} & \text{if}\;\xi_k\in(-1,0)\\[3mm] \Omega\!\left(X_k^{tentative}\right)=\Omega(X_k)\,\dfrac{\left\|X_k^{tentative}-X_{OPT}\right\|}{\left\|X_{OPT}-X_k\right\|} & \text{if}\;\xi_k\in(0,1)\end{cases}\qquad(k=1,2,\ldots,N_{POP}-1)\qquad(28)$$
The approximate position of the center of mass XCMtentative is determined with Equation (26) using the Xktentative vectors and the approximate values of the error functional ΩAPP(Xktentative). The real value of the error functional is evaluated at XCMtentative. If XCMtentative is better than XOPT, it is reset as XOPT. The Xktentative designs replace the original designs Xk and a new loop is performed using Equations (27) and (28). If XCMtentative no longer improves the current best record XOPT, a new center of mass (XCMtentative)′ is defined by changing only the weights of the designs that lie between Xk and XOPT: that is, Equation (27) is used only for ξk∈(−1,0). This is done because the Xktentative designs lying beyond XOPT could violate side constraints because of the very large perturbations given to the variables.
If (XCMtentative)’ improves the current best record, it is reset as XOPT. The Xktentative designs generated for ξk∈(−1,0) replace the corresponding Xk designs. A new loop is performed using Equations (27) and (28). This process is repeated until a new center of mass improves the current best record.
If neither XCMtentative nor (XCMtentative)′ improves XOPT, the Xktentative designs are moved back to the corresponding Xk designs and Step 3 is executed.
Similar to classical BBBC, the population is renewed each time the position of the center of mass is updated. However, the present algorithm requires only one or two structural analyses to evaluate XCMtentative and (XCMtentative)′, versus the 0.1·NPOP to NPOP analyses (a rather broad range, often sensitive to the optimization problem at hand) required by state-of-the-art BBBC algorithms (see, for example, [169]).
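A sketch of the tentative population update of Equations (27) and (28) is given below; the tentative center of mass can then be obtained by reusing the `center_of_mass` helper above with the interpolated costs, so that only its exact evaluation requires a new FE analysis.

```python
import numpy as np

def tentative_population_update(population, omega_values, x_opt, rng):
    """Move each design towards (or beyond) X_OPT as in Eq. (27) and estimate its
    cost by the linear interpolation of Eq. (28)."""
    xi = rng.uniform(-1.0, 1.0, size=len(population))                 # xi_k in (-1,1)
    tentative = population + (1.0 + xi)[:, None] * (x_opt - population)
    dist_opt_k = np.linalg.norm(x_opt - population, axis=1)           # ||X_OPT - X_k||
    moved = np.where(xi < 0.0,
                     np.linalg.norm(tentative - population, axis=1),  # design stops before X_OPT
                     np.linalg.norm(tentative - x_opt, axis=1))       # design overshoots X_OPT
    omega_app = omega_values * moved / dist_opt_k                     # Eq. (28)
    return tentative, omega_app
```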

4.3. Step 3: Evaluation of The Center of Mass and Formation of new Trial Designs Different from XCM

The case Ω(XCM) > Ω(XOPT) (i.e., XCM is worse than XOPT) is the most likely to occur because the center of mass averages the properties of the NPOP designs included in the population and hence it should rank between XWORST and XOPT. The present algorithm utilizes a computationally inexpensive approach. A new trial design XTRmirr is defined with the mirroring strategy. That is:
XTRmirr = (1 + ηMIRR)·XOPT − ηMIRR·XCM    (29)
where ηMIRR is a random number in the interval (0,1). The mirroring strategy attempts to turn the non-descent direction (XCMXOPT) into the descent direction (XTRmirrXOPT). Using a random number smaller than one limits the search in the neighborhood of the current best record.
If Ω (XTRmirr) < Ω (XOPT), this trial design replaces the current best record which becomes the second best design of the population. The worst design is removed from the population. The optimization process is continued with Step 2 to generate (NPOP − 1) Xktentative designs, renew population and update position of XCM; convergence is checked in Step 5.
If Ω(XTRmirr) > Ω(XOPT), the mirroring strategy (29) is judged ineffective and a new trial design must be generated by combining a set of descent directions. Since Ω(XCM) > Ω(XOPT), the (XCM − XOPT) vector is a non-descent direction with respect to the current best record. However, the opposite direction SOPTCM = −(XCM − XOPT) ≡ (XOPT − XCM) may be a descent direction, especially if the gradient of the cost function is smooth. Similar to the HFHS algorithm described in Section 3, the approximate gradient of Ω(X) is determined also for HFBBBC. All (XOPT − Xk) vectors are opposite to the directions (Xk − XOPT) that were non-descent with respect to XOPT. For each design Xk the approximate gradient ∇Ωkappr = [Ω(Xk) − Ω(XOPT)]/||Xk − XOPT|| is calculated. The SBEST = (XOPT − XWORST) direction corresponding to the largest variation of the cost function between two candidate designs, the SFAST = (XOPT − XFAST) direction corresponding to the largest ∇Ωkappr, and the S2ndBEST = (XOPT − X2ndBEST) direction corresponding to the second best design are considered. A new trial design XTR is defined as:
XTR = XOPT + ηOPT−CMSOPTCM + ηBESTSBEST + η2ndBESTS2ndBEST + ηFASTSFAST
where ηOPT–CM, ηBEST, η2ndBEST and ηFAST are four random numbers generated in the (0,1) interval.
The generation of a new trial solution XTR with Equation (30) is illustrated in Figure 5 for an inverse problem with two variables. If the error functional gradient is smooth enough, a descent direction S will satisfy the condition ST∇Ω(XOPT) < 0, as appears to be the case for the SOPTCM, SBEST, SFAST and S2ndBEST directions in the figure. By summing up the steps taken on SOPTCM, SBEST, SFAST and S2ndBEST, a trial design XTR lying on a descent direction can be obtained.
The present HFBBBC algorithm directly perturbs variables with respect to the current best record and hence generates higher quality designs. The effect of the average properties of the population described by XCM is now taken into account by considering the SOPTCM direction.
The quality of XTR is evaluated as usual. If Ω(XTR) < Ω(XOPT), the trial design replaces the current best record and the worst design is removed from the population. Step 2 is performed to renew the population and update the position of XCM, if appropriate; the convergence check is performed in Step 5. Otherwise, the 1-D “local” annealing search of Step 4 is executed.
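The two generation mechanisms of this step, the mirroring of Equation (29) and the multi-direction combination of Equation (30), can be sketched as follows; which designs play the roles of XWORST, X2ndBEST and XFAST is decided by the caller as described in the text.

```python
import numpy as np

def hfbbbc_trial_designs(x_opt, x_cm, x_worst, x_2nd, x_fast, rng):
    """Return the mirror point of Eq. (29) and the combined trial design of Eq. (30)."""
    eta_mirr = rng.random()
    x_mirror = (1.0 + eta_mirr) * x_opt - eta_mirr * x_cm          # Eq. (29)
    eta = rng.random(4)
    x_trial = (x_opt
               + eta[0] * (x_opt - x_cm)       # S_OPT-CM
               + eta[1] * (x_opt - x_worst)    # S_BEST
               + eta[2] * (x_opt - x_2nd)      # S_2ndBEST (as defined in the text)
               + eta[3] * (x_opt - x_fast))    # S_FAST
    return x_mirror, x_trial
```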

4.4. Step 4: Perform 1-D “Local” Annealing Search

HFBBBC utilizes the same probabilistic search mechanism implemented in HFSA and HFHS. However, the position of the center of mass is updated each time a trial design improves the current best record. The new center of mass and its mirror point with respect to the current best record also are evaluated to check for further improvements in design or to define additional points satisfying the Metropolis criterion.

4.5. Step 5: Check for Convergence and Perform A New Explosion If Necessary

HFBBBC checks if the best design of the population has been improved in the current optimization cycle. The convergence check is performed after the operations entailed by Steps 3 and 4. Let (XOPT)init and (XOPT)final be the best designs at the beginning and at the end of the current optimization cycle. If (XOPT)final is better than (XOPT)init, HFBBBC checks for convergence using the same criterion, Equation (25), adopted for HFHS. If convergence is reached, Step 6 is executed. Otherwise, Steps 1 to 4 are repeated until HFBBBC converges to the global optimum.
If (XOPT)init is equal to (XOPT)final, the current optimization cycle did not improve design in spite of the numerous improvement routines available in HFBBBC. For this reason, a new explosion is performed about XOPT trying to generate a higher quality population. The following equation is utilized:
$$x_j^k=x_{OPT,j}-\rho_j^k\left(x_{OPT,j}-x_{CM,j}\right)\qquad(k=1,2,\ldots,N_{POP};\;j=1,2,\ldots,NMP)\qquad(31)$$
where ρjk is a random number in the interval (0,2) used to generate the jth variable of the kth design. The interval (0,2) is large enough to avoid stagnation near the current best record. If xjk < xjL or xjk > xjU, xjk is reset to (xjL + xOPT,j)/2 or (xOPT,j + xjU)/2, respectively. The new population is generated by perturbing the current best record XOPT along the direction −(XOPT − XCM), opposite to the non-descent direction (XCM − XOPT). The rationale of Equation (31) is to search for descent directions with respect to XOPT by decomposing a potentially descent direction into its components.
The new designs are compared with the previous population and only the best NPOP designs are retained in the new population. This elitist strategy allows the XOPT design to be kept in the population passed to the next optimization iteration should all of the new NPOP designs generated with Equation (31) be worse than XOPT. The optimization process is resumed from Step 1.
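The explosion of Equation (31), including the side-constraint reset rule quoted above, may be sketched as:

```python
import numpy as np

def new_explosion(x_opt, x_cm, x_lower, x_upper, n_pop, rng):
    """Generate a new population about X_OPT as in Eq. (31)."""
    rho = rng.uniform(0.0, 2.0, size=(n_pop, x_opt.size))
    pop = x_opt - rho * (x_opt - x_cm)
    pop = np.where(pop < x_lower, 0.5 * (x_lower + x_opt), pop)    # reset below lower bound
    pop = np.where(pop > x_upper, 0.5 * (x_opt + x_upper), pop)    # reset above upper bound
    return pop
```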

4.6. Step 6: End Optimization Process

The HFBBBC algorithm terminates the optimization process and writes the output data in the results file.

5. Test Problems and Results

The HFSA, HFHS and HFBBBC algorithms developed in this study for mechanical identification problems were tested on two composite structures and a hyperelastic biological membrane. They were compared with other SA/HS/BBBC variants (e.g., [77,81,83,84] and their successive enhancements [162,163,164,165,166]) including gradient information in the optimization search, as well as with adaptive harmony search [170,171], big bang-big crunch with upper bound strategy (BBBC-UBS) [172], JAYA [35], MATLAB Sequential Quadratic Programming (MATLAB-SQP) [173] and ANSYS built-in optimization routines [174]. The ANSYS built-in optimization routines (e.g., gradient-based, zero order and response surface approximation) were run in cascade or alternated in order to maximize their efficiency.
The abovementioned comparison should be considered very indicative for the following reasons:
  • Adaptive HS [170,171] and BBBC-UBS [172] represent state-of-the-art formulations of harmony search and big bang-big crunch, which have been successfully utilized in many optimization problems. In particular, the adaptive HS algorithm adaptively changes internal parameters without any intervention by the user: this approach is very similar to what is done by HFHS. The BBBC-UBS algorithm [172] immediately discards trial designs that certainly would not improve the current best design included in the population, thus saving computational cost; this elitist strategy is somehow consistent with the rationale followed by HFSA, HFHS and HFBBBC, which always try to generate trial designs lying on descent directions.
  • JAYA [35] is one of the most recently developed metaheuristic algorithms; it has quickly emerged as a very powerful method and gathered great attention from optimization experts. The basic idea of JAYA is very simple yet very effective: the search process tries to move toward the best design and away from the worst design of the population. Besides this, JAYA is very easy to implement and does not have internal parameters to be tuned. The basic formulation of JAYA was successfully used in the damage detection problems solved in [153,154]. In [175,176], JAYA's computational efficiency was improved by adding an elitist strategy, which is conceptually similar to that used by BBBC-UBS. However, such a strategy may become computationally ineffective for inverse problems as it entails a new finite element analysis each time a design of the population is updated. Nevertheless, it is interesting to compare HFSA, HFHS and HFBBBC with JAYA as well.
  • SQP is widely regarded by optimization experts as the best gradient-based method available in the literature. The method is globally convergent, does not require setting of move limits and does not suffer from premature convergence. The successful use of MATLAB-SQP in highly nonlinear inverse problems taken from very different fields (e.g., optical super-resolution with evanescent illumination, visco-hyperelasticity of cell membranes, damage detection etc.) is well documented in the literature (see, for example, [11,12,14,177,178,179]).
The structural analyses entailed by the optimization process to evaluate the error functional Ω were performed with the commercial finite element program ANSYS® [174]. Each metaheuristic search engine and MATLAB-SQP were properly interfaced with the finite element solver. Since the gradient of the error functional is not explicitly available and evaluating Ω entails structural analyses, the present algorithms computed approximate gradients as described in Section 2, Section 3 and Section 4. Partial derivatives ∂Ω/∂Xi (i = 1,…,NMP) and ∂δFEMj/∂Xi required by previously developed SA variants [77,81,83,84] were instead evaluated with centered finite differences (δXi = XOPT,i/10,000). Consequently, weighting coefficients μi = (∂Ω/∂Xi)/||∇Ω(XOPT)|| were determined as:
$$\mu_i = \frac{\sum_{j=1}^{N_{CNT}} \left[ \left| \left( \delta_{FEM}^{\,j} - \bar{\delta}_j \right) / \bar{\delta}_j \right|^{2} \, \partial \delta_j / \partial X_i \right] / N_{CNT}}{\Omega \left( X_{OPT,1}, X_{OPT,2}, \ldots, X_{OPT,i}, \ldots, X_{OPT,NMP} \right)}$$
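For illustration, the centered finite-difference gradient and the normalized weighting coefficients can be sketched as follows (Python/NumPy). The callable `omega` is a hypothetical placeholder for the FE-based evaluation of Ω, so each partial derivative costs two additional finite element analyses.

```python
import numpy as np

def centered_fd_gradient(omega, x_opt, rel_step=1.0e-4):
    """Centered finite-difference gradient of the error functional, with
    delta_Xi = X_OPT,i / 10,000 as used for the earlier SA variants."""
    grad = np.zeros_like(x_opt, dtype=float)
    for i, xi in enumerate(x_opt):
        h = rel_step * xi if xi != 0.0 else rel_step
        xp, xm = x_opt.copy(), x_opt.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (omega(xp) - omega(xm)) / (2.0 * h)   # two FE analyses per variable
    return grad

def weighting_coefficients(grad):
    """mu_i = (dOmega/dX_i) / ||grad Omega(X_OPT)||."""
    norm = np.linalg.norm(grad)
    return grad / norm if norm > 0.0 else grad
```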
SQP-MATLAB computed the ∇Ω(XOPT) gradient with forward finite differences and progressively updated the [B] matrix involved in the (S^T[B]S)/2 term of the quadratic approximation of the error functional Ω. The search direction S represents the solution of the approximate sub-problem built in each iteration. The [B] matrix is initially set equal to the unit matrix and finally converges to the Hessian matrix of the error functional Ω.
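As a reference for how [B] evolves, a minimal sketch of the classical (undamped) BFGS update is given below; MATLAB-SQP actually applies a damped/modified variant of this formula, so the sketch is only illustrative.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical BFGS update of the approximate Hessian [B], started from the
    identity matrix; s = X_{k+1} - X_k, y = grad(Omega)_{k+1} - grad(Omega)_k."""
    Bs = B @ s
    sBs = float(s @ Bs)
    ys = float(y @ s)
    if ys <= 1.0e-12 or sBs <= 1.0e-12:
        return B                         # skip the update if the curvature condition fails
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / ys
```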
Before running optimizations with the new algorithms HFSA, HFHS and HFBBBC, we tried to simplify the previously developed SA/HS/BBBC formulations [77,81,83,84,162,163,164,165,166] adapting them to inverse problems. The goal was to drastically reduce the number of structural analyses required by the identification process. For example, in the case of SA variants used in [77,81,83,84], the global annealing search equation involving sensitivities ∂Ω/∂Xi is replaced by:
XiTR = XOPT,i + (XiU − XiL) ρi × ΩOPT,l−1/ΩOPT,l    (i = 1,…,NMP)    (32)
The new trial design XTR thus obtained is evaluated against the current best record XOPT. If Ω(XTR) < Ω(XOPT), XTR is reset as the current best record and a new trial point is defined with Equation (32). Conversely, if Ω(XTR) > Ω(XOPT), a new trial point XTRnew = 2·XOPT − XTR is defined via the mirroring strategy and evaluated with respect to XOPT.
If Ω(XTRnew) < Ω(XOPT), XTRnew is reset as XOPT and a new trial design is generated with Equation (32). Conversely, if it still holds that Ω(XTRnew) > Ω(XOPT), the error functional Ω(X) is approximated by a 4th order polynomial ΩAPP(X) passing through the five trial points XTR, XINT′, XOPT, XINT″ and XTRnew, where XINT′ is a randomly generated trial point between XTR and XOPT while XINT″ is another randomly generated trial point between XOPT and XTRnew. If there is a point XOPT* minimizing the approximate error functional in the segment limited by XTR and XTRnew, a new exact structural analysis is performed to compute Ω(XOPT*). If Ω(XOPT*) > Ω(XOPT), the trial designs XTR, XINT′, XOPT*, XINT″ and XTRnew are accepted or rejected based on Metropolis' criterion. Finally, the trial design with the smallest value of the error functional is reset as XOPT. If ΩAPP(X) does not have any minima, the 1-D local annealing search is performed until the current best record is updated.
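The mirroring strategy and the 4th order polynomial approximation can be sketched as follows (Python/NumPy). The callable `omega` is a hypothetical placeholder for the FE-based evaluation of Ω; placing XINT′ and XINT″ at random positions in the first and second half of the segment is an assumption for illustration, and in the actual algorithm the values of Ω already available at XTR, XOPT and XTRnew would be reused rather than recomputed.

```python
import numpy as np

def quartic_line_search(omega, x_tr, x_opt, rng):
    """Approximate Omega along the segment X_TR -> X_TRnew = 2*X_OPT - X_TR with a
    4th order polynomial through five trial points and return the interior
    minimizer (if any) together with its exact cost."""
    x_trnew = 2.0 * x_opt - x_tr                    # mirrored trial design
    direction = x_trnew - x_tr                      # X_OPT sits at the midpoint t = 0.5
    t_int1 = rng.uniform(0.0, 0.5)                  # random point between X_TR and X_OPT
    t_int2 = rng.uniform(0.5, 1.0)                  # random point between X_OPT and X_TRnew
    ts = np.array([0.0, t_int1, 0.5, t_int2, 1.0])
    points = [x_tr + t * direction for t in ts]
    costs = np.array([omega(p) for p in points])    # Omega at XTR/XOPT/XTRnew could be cached
    coeffs = np.polyfit(ts, costs, deg=4)           # exact quartic interpolation of 5 points
    # Stationary points of the quartic that fall strictly inside the segment
    crit = np.roots(np.polyder(coeffs))
    crit = crit[np.isreal(crit)].real
    crit = crit[(crit > 0.0) & (crit < 1.0)]
    if crit.size == 0:
        return None, None                           # no interior minimum: fall back to 1-D annealing
    t_star = crit[np.argmin(np.polyval(coeffs, crit))]
    x_star = x_tr + t_star * direction
    return x_star, omega(x_star)                    # one exact FE analysis at the candidate minimum
```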
The above described SA strategy—denoted as SA-NGR in the rest of this article—does not require any exact gradient evaluation but includes a rather simple line search strategy, which does not ensure that trial designs lie on descent directions. The (XiU − XiL) step used in Equation (32) may result in larger perturbations and hence fewer optimization cycles. However, the total number of structural analyses required in the identification process does not change substantially with respect to the SA variants including gradient evaluations [77,81,83,84].
In the case of HS algorithm, any trial design XTR is defined as:
XTR = XOPT + ρFAST·SFAST + ρBEST·SBEST    (33)
where: SFAST and SBEST, respectively, are the steepest descent and the best directions moving from population designs towards the current best record XOPT (the same nomenclature used for the derivation of Equation (22) applies also in this case); ρFAST and ρBEST are two random numbers in the interval (0,1), respectively, generated for SFAST and SBEST. Unlike HS variants [164,166,167], Equation (33) directly utilizes approximate line search to define descent directions.
If the new trial design XTR is better than the worst design XWORST included in the harmony memory matrix [HM], it replaces XWORST and the updated population is re-ordered to determine the new current best record. Conversely, if Ω(XTR) > Ω(XWORST), a new trial point XTRnew = 2·XOPT − XTR is defined via the mirroring strategy. If again Ω(XTRnew) > Ω(XWORST) (this may be due to the nonlinearity and non-convexity of the inverse problem), the 4th order polynomial approximation of Ω described above is performed to find a point of minimum X* yielding Ω(X*) < Ω(XWORST). Should this search be unsuccessful, the mirroring strategy and the approximate line search are repeated until a trial design better than the worst design stored in the harmony memory is found.
Similar to SA-NGR, this simplified HS formulation—denoted as HS-NGR in the rest of this article—does not evaluate the gradient of the error functional. However, it considers only two potentially descent directions (albeit defined from approximate line search) and hence is forced to repeatedly perform mirroring of trial designs, polynomial approximation of the error functional and 1-D annealing search. While the classical HS strategy of replacing only the worst design is adopted also by HS-NGR without following any elitist criterion, the main architecture of HS based on the use of HMCR and bandwidth parameters is not retained. In fact, HS-NGR always uses the same Equation (33) to generate new trial designs regardless of whether the current trend of variation of the error functional would suggest performing exploitation rather than exploration. Consequently, HS-NGR may be unsuccessful in global search and attempts to correct this problem by carrying out a local search, which usually entails many structural analyses. Furthermore, replacing only the worst design results in an extra number of FE analyses. This counterbalances the reduction in the number of analyses achieved by not computing gradients via finite differences.
In the case of the BBBC algorithm, any trial design XTR is always defined as:
XTR = XCM + ρFAST,CM × SFAST,CM + ρBEST,CM × SBEST,CM    (34)
where: SFAST,CM and SBEST,CM, respectively, are the steepest descent and the best directions moving towards the center of mass of the population (definitions are the same as for Equation (22) but XOPT is replaced by XCM); ρFAST,CM and ρBEST,CM are two random numbers in the interval (0,1) generated for SFAST,CM and SBEST,CM, respectively. Unlike BBBC variants [165,166] and similar to Equation (33) used for HS-NGR, Equation (34) directly utilizes approximate line search to define descent directions.
The new trial design XTR is evaluated in the same way as in the SA-NGR algorithm and a new explosion in the neighborhood of the center of mass is performed only if the current XOPT could not be improved.
Since the above described simplified BBBC variant is a gradient-free algorithm, it will be denoted as BBBC-NGR in the rest of the article. At first glance, BBBC-NGR has to deal with two critical aspects: (i) the solution is perturbed with respect to the center of mass of the population rather than with respect to the current best record; (ii) a limited number of potentially descent directions are considered in the formation of a new trial solution. Consequently, the reduction of computational cost granted by the smaller number of explosions may be counterbalanced by the additional structural analyses performed in the attempt to improve the current best record with the mirroring strategy and the 4th order approximation of the error functional Ω.
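A compact sketch of how such trial designs could be formed is given below (Python/NumPy). The ranking rules used here to pick SFAST (largest decrease of Ω per unit step towards the anchor point) and SBEST (direction leaving the best population design) are assumptions made for illustration, since the exact definitions follow Equation (22); the anchor is XOPT for Equation (33) and XCM for Equation (34).

```python
import numpy as np

def trial_design(pop, costs, anchor, anchor_cost, rng):
    """Trial design of the form X_TR = anchor + rho_FAST*S_FAST + rho_BEST*S_BEST
    (Equation (33) with anchor = X_OPT; Equation (34) with anchor = X_CM)."""
    diffs = anchor - pop                              # candidate directions X_anchor - X_k
    lengths = np.linalg.norm(diffs, axis=1) + 1.0e-12
    slopes = (costs - anchor_cost) / lengths          # decrease of Omega per unit step (assumed rule)
    s_fast = diffs[np.argmax(slopes)]                 # "steepest descent" direction
    s_best = diffs[np.argmin(costs)]                  # direction leaving the best design (assumed rule)
    rho_fast, rho_best = rng.uniform(0.0, 1.0, size=2)
    return anchor + rho_fast * s_fast + rho_best * s_best
```

Designs coinciding with the anchor point would contribute a null direction and could simply be excluded from the ranking.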

5.1. Mathematical Optimization Benchmark: Random Minimum Square Problem

The inverse problem (1) is basically a least square problem. A randomized version of the problem can be stated in the general form for NMP design variables as:
$$\begin{cases} \text{Min} \; \Omega = \sum_{i=1}^{NMP} \left( x_i - \eta_i \right)^2 \\ -1 \le x_i \le 1 \end{cases} \qquad (35)$$
where ηi (i = 1,…,NMP) are random numbers generated in the (−1,1) interval. The cost function of this problem is unimodal and has a global minimum located at XTRG(η1, η2,…,ηNMP), leading to ΩMIN = 0. The random numbers ηi in Equation (35) introduce noise in the least square optimization process similar to the noise that may be caused by optical measurements. Here, the target vector XTRG(η1, η2,…,ηNMP) was selected by averaging five randomly generated vectors.
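For reference, the benchmark of Equation (35) can be set up in a few lines (Python/NumPy); the fixed seed is arbitrary and is used only to make the sketch reproducible.

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed for reproducibility
NMP = 500                        # number of design variables (100 or 500 in the tests)

# Target vector: average of five random vectors in (-1, 1), as described above
eta = rng.uniform(-1.0, 1.0, size=(5, NMP)).mean(axis=0)

def omega(x):
    """Randomized least-square cost of Equation (35); global minimum 0 at x = eta."""
    return float(np.sum((x - eta) ** 2))

# Bounds -1 <= x_i <= 1 used by all optimizers
lower, upper = -np.ones(NMP), np.ones(NMP)
```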
In order to carry out a preliminary comparison between the present algorithms and other optimizers, problem (35) was solved with NMP = 100 or NMP = 500 using HFSA, HFHS, HFBBBC, adaptive HS [170,171], BBBC-UBS [172], JAYA [35,175,176] and SQP-MATLAB. Using NMP = 500 variables made it possible to simulate the use of a fairly large number of control points at which the optically measured displacements are compared with finite element results.
The population size for HFHS, HFBBBC and JAYA was set as 20, 200, 500 and 1000 in order to analyze sensitivity of convergence behavior to NPOP. Because of the random nature of metaheuristic search engines, 20 independent optimization runs were carried out for each setting of NPOP and NMP. HFSA and SQP-MATLAB runs were started from the best point and center of mass of each initial population defined for HFHS, HFBBBC and JAYA. These points were very far from the target solution: in fact, initial values of Ω ranged between 26.33 and 34.15 with an average percent error on variables ranging between 541.4% and 1392.7%.
The present algorithms found designs very competitive with those of SQP-MATLAB but required between about 1.9 and 2.3 times as many function evaluations to complete the optimization process: on average, 3196 (HFSA), 3562 (HFHS) and 3813 (HFBBBC) vs. 1650 (SQP-MATLAB). However, the average optimized cost and the standard deviation on optimized cost were significantly smaller for the present algorithms, which converged to more precise solutions than SQP-MATLAB: in particular, (3.798 ± 1.149) × 10−12, (2.899 ± 2.623) × 10−12 and (7.816 ± 4.241) × 10−13, respectively, for HFSA, HFHS and HFBBBC vs. (1.044 ± 0.948) × 10−10 obtained by SQP-MATLAB. The higher precision of HFSA, HFHS and HFBBBC is confirmed by the larger deviation of optimized designs from the target solution XTRG seen in the case of SQP-MATLAB. It should be noted that, since the target optimum XTRG contains some very small values (for example, of the order of 7 × 10−4), even a small difference between some components of XOPT and XTRG may increase the average deviation by a large extent.
While the convergence behavior of the present SA/HS/BBBC variants was rather insensitive to population size, the efficiency of the gradient-based optimizer decreased for increasing population size due to the larger sparsity of design variable values. Adaptive HS variants [170,171] were outperformed by HFSA, HFHS and HFBBBC as they only found intermediate designs with an average cost of 0.174 after 10,000 function evaluations. BBBC-UBS [172] was much more efficient than adaptive HS and its convergence speed improved with population size. However, the cost function evaluated after 10,000 analyses for BBBC-UBS was still 1.467 × 10−9, three orders of magnitude higher than for the present HS/BBBC/SA algorithms. JAYA [35,175,176] was slightly more efficient than BBBC-UBS and arrived at a cost function value of 1.297 × 10−9 after 9850 analyses. However, its computational speed significantly decreased with population size.
Results gathered in this preliminary test confirmed the ability of the present algorithms to solve least square type problems including random noise. This conclusion will be proven true in the next sections also for the three identification problems solved in this study.

5.2. Woven Composite Laminate

The first inverse problem solved in this study regards the mechanical characterization of an 8-ply woven-reinforced fiberglass-epoxy composite laminate used as substrate for printed circuit boards (see Figure 6a). The error functional Ω to be minimized depends on four unknown elastic constants, Ex, Ey, Gxy and νxy. The target values of material properties were provided by the industrial partner involved in the project: Ex = 25000 MPa, Ey = 22000 MPa, Gxy = 5000 MPa and νxy = 0.280.
The optimization process entailed by this identification problem attempts to match the in-plane displacements u generated by a vertical load of 140 N that produces 3-point bending. A 46 mm long, 13 mm tall and 1.2 mm thick specimen was cut from the laminate and submitted to 3-point bending. Target displacements u included in the error functional Ω were measured with Phase Shifting Electronic Speckle Pattern Interferometry (PS-ESPI) [5,6,7]. The double-illumination interferometer used in the speckle measurements is schematized in Figure 6b; the symmetric illumination beams make the setup sensitive to u-displacements. Illumination is realized with a 35 mW He-Ne laser (λ = 632.8 nm). The angle of illumination θ is 20°. Hence, the sensitivity of the optical setup is λ/(2sinθ) = 925.1 nm. Fringe patterns were processed following the guidelines illustrated in [180]. More details on the ESPI measurements carried out for this identification problem can be found in [76,77].
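As a numerical check on the quoted sensitivity, the standard double-illumination relation converting the unwrapped ESPI phase into u-displacements can be sketched as follows (Python/NumPy); the function name is illustrative.

```python
import numpy as np

wavelength_nm = 632.8                                   # He-Ne laser
theta = np.deg2rad(20.0)                                # illumination angle

sensitivity_nm = wavelength_nm / (2.0 * np.sin(theta))  # ~925.1 nm per fringe

def u_displacement_nm(unwrapped_phase_rad):
    """Convert an unwrapped ESPI phase map (radians) into u-displacements (nm):
    one fringe (2*pi of phase) corresponds to one sensitivity step."""
    return unwrapped_phase_rad / (2.0 * np.pi) * sensitivity_nm
```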
The ESPI phase pattern containing displacement information is shown in Figure 6c while Figure 6d shows the finite element model including control paths parallel to the Y-axis of symmetry of the specimen. The specimen was modelled in ANSYS with 4-node plane elements under the assumption of plane stress. Element size was selected so as to have mesh-independent solutions and nodes located at the control points defined on the recorded image. The error functional Ω was built by comparing FE results and experimental data at 78 control points. The following bounds were taken for material parameters in the optimization process: 3000 ≤ Ex ≤ 50000 MPa, 2000 ≤ Ey ≤ 50000 MPa, 1000 ≤ Gxy ≤ 50000 MPa and 0.01 ≤ νxy ≤ 0.45. These bounds are large enough not to have any effect on the results of the identification problem.
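A minimal sketch of how the error functional could be assembled for this test case is given below (Python/NumPy). The wrapper `run_fe_model`, which launches the ANSYS analysis and returns the u-displacements at the 78 control points, is a hypothetical placeholder, and the mean squared relative residual is an assumed form of Ω consistent with the terms appearing in the weighting-coefficient expression reported above.

```python
import numpy as np

def omega_from_displacements(u_fem, u_meas):
    """Mean squared relative residual over the control points (assumed form of Omega).
    Control points where the measured displacement approaches zero would need
    special treatment to avoid dividing by nearly vanishing values."""
    return float(np.mean(((u_fem - u_meas) / u_meas) ** 2))

def omega(params, u_meas, run_fe_model):
    """Error functional for one candidate set of elastic constants
    params = [Ex, Ey, Gxy, nu_xy]; run_fe_model is a hypothetical wrapper that
    launches the FE analysis and returns u at the 78 control points."""
    u_fem = run_fe_model(Ex=params[0], Ey=params[1], Gxy=params[2], nu_xy=params[3])
    return omega_from_displacements(u_fem, u_meas)
```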
The population size of all HS and BBBC variants considered in this study was set equal to 10, hence 2.5 times as large as the number of unknown material parameters. The same was done for JAYA. Values of Ω corresponding to the best design, worst design and center of mass of the initial population are 0.180, 0.862 and 0.365, respectively. The corresponding average (maximum) deviations from target properties are 33.8% (56.1%), 37.4% (61.1%) and 41% (53%), respectively. HFSA, ANSYS and MATLAB-SQP optimizations were started from each of the three points mentioned above. Thirty independent optimization runs were carried out starting from different initial populations (yet keeping NPOP = 10) to statistically evaluate algorithms’ performance.
The results of the identification process are summarized in Table 1. The "SA-Grad" notation refers to the ISA algorithm developed in [77], which combined global and local annealing search strategies based on the finite difference evaluation of ∇Ω(XOPT), and was successfully applied to this test case. All HS/BBBC/SA variants determined material properties with a great deal of accuracy. In fact, the largest error, made on the Poisson's ratio, never exceeded 0.941%. The optimized solutions of HFSA, HFHS and HFBBBC correspond to the lowest errors on material properties. SA-Grad [77] was also very accurate but required up to 85% more FE analyses than the present algorithms.
The maximum residual error on displacements was always lower than 3% and localized on the control path closest to the applied load. This happened because the u-displacement field is symmetric about the Y-axis (i.e., the loading direction) and hence u-displacements approach zero near this axis. The average error on displacements evaluated for the identified material properties was about 0.6% for all algorithms.
Table 1 shows that HFHS and HFBBBC were faster than HFSA as they required, respectively, 210 and 222 structural analyses to complete the optimization process vs. 257 analyses required by HFSA. The proposed algorithms were between 15% and 23% faster than the simplified algorithms SA/HS/BBBC-NGR, and the number of explosions and 1-D local annealing searches was substantially reduced by the present formulations. The very small number of optimization variables (only four unknown parameters) defined for this inverse problem somehow limited the ability of HFHS and HFBBBC to build a large number of descent directions.
Remarkably, statistical dispersion on identified material properties, residual error on displacements and required number of finite element analyses evaluated over the thirty independent runs was less than 0.11% thus proving the robustness of the proposed algorithms.
For the sake of brevity, Table 1 does not report the results obtained by AHS [170,171], BBBC-UBS [172], JAYA [35,175,176], MATLAB-SQP [173] and ANSYS [174]. The gradient-based optimizer of ANSYS converged after 48 iterations and about 200 structural analyses to a solution (Ex = 24898 MPa; Ey = 22306 MPa; Gxy = 5225 MPa; νxy = 0.223) with about a 20.3% error on the Poisson's ratio. MATLAB-SQP was more accurate than ANSYS (Ex = 25006 MPa; Ey = 22026 MPa; Gxy = 4971 MPa; νxy = 0.288) but its solution still had a 2.8% error on the Poisson's ratio after 45 iterations and about 215 structural analyses. Adaptive HS [170,171] was the slowest algorithm overall: in fact, the average error on material properties for the solution Ex = 24767 MPa; Ey = 21777 MPa; Gxy = 5190 MPa; νxy = 0.279 was still higher than 1.5% after about 400 structural analyses. BBBC-UBS [172] found the solution Ex = 25045 MPa; Ey = 21991 MPa; Gxy = 4889 MPa; νxy = 0.277 after about 350 structural analyses: this solution is critical with respect to the Poisson's ratio, for which there is a 2.2% error. Finally, JAYA [35,175,176] obtained the properties Ex = 24991 MPa; Ey = 21979 MPa; Gxy = 5044 MPa; νxy = 0.257 after 25 iterations and 250 structural analyses; although the elastic moduli were identified very precisely, the error on the Poisson's ratio increased to 9.2%.
The above listed data confirm the superiority of the proposed SA/HS/BBBC formulations over metaheuristic algorithms that do not use line search to generate trial solutions belonging to descent directions. The elitist strategies of BBBC-UBS and JAYA are more heuristic and do not form descent directions in a direct way unlike HFHS and HFBBBC.
The convergence curves obtained for the best optimization runs of HS/BBBC/SA variants, JAYA and gradient-based optimizers are compared in Figure 7. HFBBBC was the fastest algorithm to significantly reduce the error functional but then went through a fairly long phase with little improvement in the Ω value. This allowed HFSA and HFHS to reach an average convergence rate similar to that of HFBBBC. The small number of design variables made it difficult for HFSA to recover the initial gap in cost function (i.e., 0.180 vs. 0.365). HS-NGR, BBBC-NGR, SA-NGR and SA-Grad showed oscillatory behavior because they formed trial designs considering only one or two potentially descent directions at a time. JAYA's best run started from a population including better designs than those of the other algorithms but the search process of this method clearly suffered from the lack of a direct generation of descent directions. In fact, JAYA achieved the last 30% of its total reduction in the error functional with respect to the best initial value of Ω = 0.136 over the final 22 of its 25 iterations.
MATLAB-SQP was faster than SA-NGR and SA-Grad, comparable in convergence rate with HFSA and HS-NGR for some iterations but definitely slower than HFHS and HFBBBC. Furthermore, its convergence curve became very similar to that of JAYA after 11 iterations. The extra iterations required by JAYA, ANSYS and MATLAB-SQP for completing optimization process were due to their difficulty in converging to the correct value of Poisson’s ratio.

5.3. Axially Compressed Composite Panel for Aeronautical Use

The goal of the second inverse problem solved in this study was to identify the mechanical properties and ply orientations of an IM7/977-2 graphite-epoxy composite laminate (43 cm long, 16.5 cm tall and 3 mm thick) for aeronautical use. The panel, subject to axial compression, was not reinforced by any stiffener. According to the manufacturer, the laminate included 1/3 of the layers oriented at 0° (i.e., in the axial direction Y), 1/3 oriented at 90° (i.e., in the transverse direction X) and 1/3 oriented at ±45°. The error functional Ω to be minimized depends on seven unknown structural parameters: four elastic constants Ex, Ey, Gxy and νxy and three ply orientations θ0, θ90 and θ45 of the laminate (corresponding, respectively, to the nominal angles 0°, 90° and ±45°). The target values of the elastic constants indicated by the industrial partner involved in the project are very typical for the IM7/977-2 material: Ex = 148300 MPa, Ey = 7450 MPa, Gxy = 4140 MPa and νxy = 0.01510.
The optimization process entailed by this identification problem attempts to match the fundamental buckling mode shape of the axially compressed composite panel. Mode shape is normalized with respect to the maximum out-of-plane displacement wmax occurring at the onset of buckling. Hence, the target quantity of the optimization process is the normalized out-of-plane displacement wnorm defined as w/wmax.
Figure 8a shows the experimental set-up used for this test case. The axial load is applied to the specimen by imposing a given end-shortening to the panel top edge while the bottom edge is fixed. The figure shows the MTS Alliance™ RT/30 testing machine and the grips that realize the loading and constraint conditions. In the experiments, the end-shortening was progressively increased to 1.5 mm by moving the testing machine crossbar downwards.
The buckling shape of the panel was measured with a white light double illumination projection moiré set-up [6,181,182]. The experimental setup included two slide projectors (Kodak Ektalite® 500, USA) and a standard digital camera (CANON® EOS 350, 8 Mpixel CMOS sensor, Japan) mounted on a tripod; the optical axis of the camera is orthogonal to the panel surface. The illumination angle θ limited by the optical axis of each projector and the optical axis of the camera (i.e., the angle between the direction of illumination and the viewing direction) is 18° while the nominal pitch of the grating is 317.5 μm (80 lines/inch). It can be seen from the figure that the projectors are placed symmetrically about the optical axis of the camera. Each projector projects a system of lines and the wave fronts carrying these lines interfere in space to form an equivalent grating, which is then modulated by the specimen surface. This condition is equivalent to projecting a grating from infinity. The optical set-up is sensitive to out-of-plane displacements, which modulate the projected lines and make them curve.
The pitch of the projected grating (pj) measured on the reference plane (i.e., the surface of the undeformed panel) was 3623.3 μm with a magnification factor of 10.85. Therefore, the sensitivity of the optical setup, pj/(2tanθ), was 5575.7 μm. The double illumination, together with the subtraction of the phases of the two systems of lines operated via software, produced a phase distribution on both the reference plane and the observed surface that is equivalent to the case of projection from infinity. The phase pattern corresponding to the onset of buckling is shown in Figure 8b. Image processing was done with the HoloStrain software developed by Sciammarella et al. [183].
The experiment was simulated by a finite element model including 8-node shell elements (Figure 8c): again, the selected mesh size guarantees mesh independent solutions and the correspondence between control nodes and image pixels. An eigenvalue buckling analysis was performed in order to determine the critical load of the panel and the corresponding buckled shape.
The error functional Ω was built by comparing finite element results and experimental data at 86 control points along the AB path sketched in Figure 8c. Such a path was chosen in view of the observed symmetry of the buckling mode, which resembles a typical Euler mode with one half-wave. The maximum out-of-plane displacement occurs at the center of the panel where the origin of the coordinate system X-Y is placed. The following bounds were taken for material parameters in the optimization process: 10000 ≤ Ex ≤ 1,000,000 MPa, 1000 ≤ Ey ≤ 1,000,000 MPa, 500 ≤ Gxy ≤ 20000 MPa and 0.001 ≤ νxy ≤ 0.1. Ply orientations were made to vary as follows: −50° ≤ θ0 ≤ 50°, 0° ≤ θ90 ≤ 91°, 10° ≤ θ45 ≤ 70°. Similar to the woven composite laminate problem, the bounds imposed on the unknown structural properties are large enough not to affect the results of the identification process.
The population size of all HS and BBBC variants considered in this study was set equal to 15, slightly more than two times the number of unknown structural parameters. JAYA also was executed with NPOP = 15. Values of Ω corresponding to the best design, worst design and center of mass of the initial population are 0.891, 5.149 and 2.303, respectively. The corresponding maximum deviations from target elastic properties are 130.8%, 521% and 730.6%, respectively. Similar to the previous test problem, HFSA, ANSYS and MATLAB-SQP optimization runs were started from the best and worst points as well as from the center of mass of the initial population generated for HS, BBBC and JAYA. Thirty independent optimization runs (keeping NPOP = 15) were carried out to statistically evaluate algorithms’ performance.
Table 2 presents the results obtained for this inverse problem. The "SA-Grad" notation now refers to the SA algorithm of Refs. [83,84], which evaluated gradients of the error functional via finite differences and used up to [(2·NMP + 1) + 2·NMP] descent directions—out of a total of (2^NMP − 1) + 2·NMP potentially available directions—per iteration to form new trial designs. It can be seen that structural properties were identified more accurately by HFSA, HFHS and HFBBBC, which obtained an average error on properties ranging between 0.197% (HFSA) and 0.266% (HFBBBC). The maximum residual error on buckling mode shape evaluated for the identified structural properties never exceeded 2.75% for the present algorithms while it was about 3.5% for SA/HS/BBBC-NGR and SA-Grad. The average error on mode shape was always lower than 1.9% for the present algorithms vs. about 2.1% for the other algorithms. The largest errors on w-displacements were localized near the control path boundaries A and B, that is where displacements tend to zero and numerical noise may occur.
HFHS and HFBBBC were again faster than HFSA as they required, respectively, 478 and 431 structural analyses to complete the optimization process vs. 596 analyses required by HFSA. The proposed algorithms were between 23% and 36% faster than the SA/HS/BBBC-NGR variants and even up to 117% faster than SA-Grad. Furthermore, they required far fewer explosions and 1-D local annealing searches. The fact that HFHS, HFBBBC and HFSA could reduce the number of finite element analyses with respect to HS/BBBC/SA-NGR and SA-Grad more significantly than for the woven composite problem confirms that increasing the number of design variables may allow the present algorithms to generate more descent directions and speed up the optimization search. Interestingly, the number of structural analyses required in the axially compressed panel identification problem was on average about two times as large as that required in the woven composite laminate identification problem, hence close to the ratio 7 to 4 existing between the numbers of unknown parameters.
Statistical dispersion on identified properties, residual error on displacements and required number of FE analyses evaluated over the thirty independent runs remained below 0.17%, thus confirming the robustness of HFSA, HFHS and HFBBBC. For the sake of brevity, Table 2 does not report the results obtained for AHS [170,171], BBBC-UBS [172], JAYA [35,175,176], MATLAB-SQP and ANSYS also for this problem. MATLAB-SQP and ANSYS converged to solutions (respectively, Ex = 148960 MPa; Ey = 7594 MPa; Gxy = 4189 MPa; νxy = 0.01488; θ0 = 0.07150°; θ90 = 86.328°; θ45 = 43.931° and Ex = 146550 MPa; Ey = 7255 MPa; Gxy = 4146 MPa; νxy = 0.01558; θ0 = 0.1205°; θ90 = 88.172°; θ45 = 41.681°) that still have, respectively, 4.1% and 7.4% errors on ply orientations. This happened after about 600 structural analyses, hence at a higher computational cost than that required by HFSA, HFHS and HFBBBC.
As expected, adaptive HS [170,171] and BBBC-UBS [172] were outperformed by HFSA, HFHS and HFBBBC. In particular, adaptive HS converged to the solution Ex = 146760 MPa; Ey = 7266 MPa; Gxy = 4092 MPa; νxy = 0.01534; θ0 = 0.08884°; θ90 = 87.632°; θ45 = 45.808° after about 1650 structural analyses, about 3.5 times slower than HFHS. BBBC-UBS found the solution Ex = 147275 MPa; Ey = 7562 MPa; Gxy = 3996 MPa; νxy = 0.01540; θ0 = 0.03891°; θ90 = 88.513°; θ45 = 46.002° after about 1250 structural analyses, about three times slower than HFBBBC. In spite of such a large computational cost, residual errors on structural properties still ranged between 2.6% and 3.5%.
JAYA [35,175,176] converged to the following solution: Ex = 149703 MPa; Ey = 7451 MPa; Gxy = 4999 MPa; νxy = 0.01525; θ0 = 0.01586°; θ90 = 85.102°; θ45 = 46.171° after 65 iterations and 975 structural analyses. Elastic moduli and Poisson’s ratio were identified very precisely (less than 1% error) but the largest error on layup angles was about 5.8%.
The convergence curves relative to the best optimization runs of SA/HS/BBBC variants, JAYA, ANSYS and MATLAB-SQP are compared in Figure 9. The present algorithms were definitely faster than SA/HS/BBBC-NGR and SA-Grad. In particular, HFSA and HFHS significantly reduced the error functional in the very first iterations but then conducted a fairly long exploitation phase. HFBBBC showed the most regular rate of reduction of Ω and finally required the lowest number of structural analyses overall. Although the HFSA optimization was started from an initial design corresponding to a much higher value of Ω than the best design included in the initial population of HFHS and HFBBBC (i.e., Ω = 2.303 vs. Ω = 0.891), it immediately recovered the gap in cost function and the optimization histories of the present SA/HS/BBBC variants became very similar after about 12 iterations. Similar to the previous test problem, JAYA's best optimization run started from a better population than those generated for the other algorithms: in fact, the best value of the error functional was only 0.361. HFHS, HFSA and HFBBBC recovered the initial gap from JAYA within only 3, 6 and 8 iterations, respectively. Furthermore, after the 7th iteration, JAYA's solutions improved slowly. The higher computational complexity of this test case hence made the inherent limitation of JAYA's formulation more evident: the absence of a mechanism for directly defining descent directions.
MATLAB-SQP and ANSYS had the slowest convergence rate, with a marked oscillatory behavior in the latter case. The response surface approximation strategy implemented by ANSYS to build sub-problems (this includes a random selection of the response surface base points) was more efficient than the first-order method used in the woven composite laminate problem. However, it suffered from the noise introduced in the response surface fitting by the very different scales of ply orientations and Poisson's ratio with respect to elastic moduli.

5.4. Bovine Pericardium Patch

The last identification problem solved in this study regarded the mechanical characterization of a glutaraldehyde treated bovine pericardium (GTBP) patch subject to inflation. The intensive experimental campaign conducted in [83,84] confirmed the indications of the industrial partner involved in the project that the GTBP patch behaves as a transversely isotropic hyperelastic material. The same conclusion was reached from both in-plane equibiaxial tension [83] and 3D inflation [84] tests. Since the bovine pericardium patch can be considered a fibrous hyperelastic material, the error functional Ω to be minimized depends on 17 unknown material parameters: 16 hyperelastic constants (a1, a2, a3 and b1, b2, b3 for the "isotropic" deviatoric term associated with matrix properties; c2, c3, c4, c5, c6 and d2, d3, d4, d5, d6 for the "anisotropic" deviatoric term associated with fiber properties) and the fiber orientation direction cosine cosθ. More details on the transversely isotropic hyperelastic constitutive model are given in [174].
The average values of material parameters and their corresponding standard deviations found in [83,84] were: a1 = 199.255 ± 0.111, a2 = 126.110 ± 0.341, a3 = 135.758 ± 1.216, b1 = 388.077 ± 1.978, b2 = 169.234 ± 2.276, b3 = 187.116 ± 1.704, c2 = 197.506 ± 1.750, c3 = 89.359 ± 0.633, c4 = 174.382 ± 0.732, c5 = 169.645 ± 0.115, c6 = 148.225 ± 0.936, d2 = 158.541 ± 0.629, d3 = 21.608 ± 0.514, d4 = 69.229 ± 0.508, d5 = 168.032 ± 2.462, d6 = 102.076 ± 0.302 kPa and cosθ = 0.6837 ± 0.000751. Since standard deviations are very small, average values of material properties can be taken as the target result of the identification process. A further proof of the validity of the assumption made above is that cosθ = 0.6837 matches well with the angle of rotation of the iso-displacement contours seen experimentally with respect to the coordinate axes X and Y.
The optimization process entailed by the identification problem attempts to match the total displacements utot = √(u² + v² + w²) of a circular membrane of diameter 40 mm and thickness 0.5 mm progressively inflated up to the maximum pressure of 12.22 kPa. An assembly view of the experimental set-up used in the inflation test is shown in Figure 10a. 3D displacement components were simultaneously measured by combining intrinsic moiré (IM) and projection moiré (PM) [5,6,7]: IM is sensitive to in-plane displacements while PM is sensitive to out-of-plane displacements.
Figure 10b shows a typical image of the inflated membrane illuminated by white light with the two gratings modulated by the deformed specimen: the printed square-dots grating (1 mm pitch) follows the evolution of the in-plane displacements u and v while the projected vertical lines grating (2 mm pitch) follows the evolution of the out-of-plane displacement w. The origin of the reference system X-Y is placed in the center of the tested membrane. Images were processed with the HoloStrain software [181]. The largest in-plane displacement measured in the experiments was about 0.5 mm while the largest out-of-plane displacement was about 5.3 mm. Each displacement component was extracted from the FFT pattern of the recorded image by properly selecting spatial frequencies. More details on the experimental tests performed for this identification problem are given in [83,84].
The inflation test was simulated by the finite element model shown in Figure 10c, including 1200 quadratic solid hyperelastic elements and 8603 nodes. The FE model shows the zero-displacement boundary condition imposed at the circular edge of the membrane as well as the uniformly distributed inflation pressure acting on the membrane. Mesh size was determined via convergence analysis again taking care to match control nodes and pixels of the recorded images. In the FE analysis, the NLGEOM geometric nonlinearity option was activated in order to account for the large deformations experienced by the hyperelastic membrane.
The error functional Ω of this test problem was built by comparing ANSYS results and moiré data at 81 control points located on the horizontal control path hc along the X-axis and the vertical control path vc along the Y-axis. All hyperelastic constants were made to vary between 10 kPa and 1 MPa while cosθ could range between 0.65 and 0.75. As for the previous two test problems, the bounds imposed on material properties were large enough not to affect the solution of the identification process.
The population size of all HS and BBBC variants considered in this study was set equal to 30, hence about two times the number of unknown material parameters. JAYA’s optimization also was run with NPOP = 30. The values of Ω functional corresponding to the best design, worst design and center of mass of the initial population are 0.125, 0.570 and 0.340, respectively. The corresponding average deviation from target properties ranges between 223% and 566%. The high nonlinearity of this identification problem is confirmed by the fact that the best four designs of the population present a larger deviation from target properties than the worst design: respectively, 417%, 345%, 350% and 432% vs. 282%. HFSA, ANSYS and MATLAB-SQP optimization runs were started from the best and worst points as well as from the center of mass of the initial population generated for HS and BBBC. Thirty independent runs were executed with different initial populations (keeping NPOP = 30) to statistically evaluate performance of different optimizers.
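As an illustration of the set-up of this last test case, the initial population and the quantities used to start the optimizers can be generated as sketched below (Python/NumPy). The callable `omega` is a hypothetical placeholder for the FE-based inflation analysis, and the fitness-weighted centre of mass is the classical BBBC definition, assumed here for illustration.

```python
import numpy as np

def initialize_population(omega, rng, n_pop=30):
    """Random initial population for the 17-parameter GTBP identification:
    16 hyperelastic constants in [10, 1000] kPa (1 MPa = 1000 kPa) and
    cos(theta) in [0.65, 0.75]. `omega` wraps the FE inflation analysis."""
    xl = np.array([10.0] * 16 + [0.65])
    xu = np.array([1000.0] * 16 + [0.75])
    pop = xl + rng.uniform(size=(n_pop, xl.size)) * (xu - xl)
    costs = np.array([omega(x) for x in pop])        # one FE analysis per candidate design

    x_best = pop[np.argmin(costs)]
    x_worst = pop[np.argmax(costs)]
    # Fitness-weighted centre of mass as in classical BBBC (assumed definition;
    # the plain average of the population is an alternative choice); requires costs > 0
    x_cm = (pop / costs[:, None]).sum(axis=0) / (1.0 / costs).sum()
    return pop, costs, x_best, x_worst, x_cm, xl, xu
```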
Table 3 presents the results obtained for this inverse problem. The "SA-Grad" notation again refers to the SA algorithm derived from [83,84]. The present SA/HS/BBBC variants were once again more accurate than the SA/HS/BBBC-NGR algorithms. In fact, the average and maximum errors on identified material properties, respectively, ranged between 0.196% (HFHS) and 0.256% (HFSA), and between 0.594% (HFBBBC) and 0.692% (HFHS) vs. about 0.49% (average) and 1.24% (maximum) errors seen for SA/HS/BBBC-NGR. The largest residual error on utot-displacements evaluated for the present algorithms never exceeded 2.9% and the average residual errors were about 30% lower than for SA/HS/BBBC-NGR. The analysis of error maps revealed that the largest errors are localized at X = ±11.5 mm and Y = ±8.5 mm, that is where the three displacement components become comparable in magnitude.
HFHS, HFBBBC and HFSA required, respectively, 1223, 1294 and 1373 structural analyses to complete the optimization process. Hence, the proposed algorithms were between 22.4% and 32.3% faster than the other algorithms and reduced the number of explosions and 1-D local annealing searches on average by a factor of 5 with respect to the previous formulations. The average number of FE analyses required by HFSA, HFHS and HFBBBC for this identification problem was about 2.5 times as large as its counterpart in the axially compressed panel problem. Hence, computational cost again changed almost linearly with the number of unknown parameters.
The HFSA, HFHS and HFBBBC algorithms were very robust also for this test problem. In fact, statistical dispersion on identified properties, residual error on displacements and required number of FE analyses evaluated over the thirty independent runs remained below 0.058%. The higher amount of design freedom introduced by the larger number of unknown parameters with respect to the first two test problems made it easier to generate high quality trial solutions regardless of the composition of the initial population.
AHS [170,171], BBBC-UBS [172], MATLAB-SQP and ANSYS were outperformed by HFSA, HFHS and HFBBBC also in this identification problem. In particular, the MATLAB-SQP solution (a1 = 196.6, a2 = 126.8, a3 = 130.58, b1 = 392.17, b2 = 169.72, b3 = 186.97, c2 = 196.52, c3 = 89.768, c4 = 173.46, c5 = 168.98, c6 = 148.79, d2 = 159.12, d3 = 21.173, d4 = 69.507, d5 = 165.86, d6 = 98.219 kPa and cosθ = 0.6882) was obtained after about 80 optimization iterations and 1550 structural analyses (i.e., 1.13 times the computational cost of HFSA) but still had a 3.8% error on hyperelastic constants a3 and d6. ANSYS converged to a slightly worse solution than MATLAB-SQP even though it required about 100 optimization iterations and 1700 finite element analyses.
BBBC-UBS [172] converged to the solution a1 = 199.368, a2 = 123.056, a3 = 135.963, b1 = 386.842, b2 = 170.251, b3 = 186.915, c2 = 198.753, c3 = 89.669, c4 = 175.105, c5 = 169.838, c6 = 147.961, d2 = 158.403, d3 = 21.681, d4 = 69.146, d5 = 166.405, d6 = 102.738 kPa and cosθ = 0.6833 after about 2800 finite element analyses, about 2.2 times slower than HFBBBC. However, the residual error made on the a2 hyperelastic constant was still 2.4% in spite of such a large computational cost. Adaptive HS [170,171] was again the worst optimizer overall as its solution (a1 = 197.213, a2 = 125.214, a3 = 130.744, b1 = 388.321, b2 = 174.278, b3 = 186.831, c2 = 197.451, c3 = 88.118, c4 = 172.409, c5 = 168.746, c6 = 151.330, d2 = 158.462, d3 = 20.845, d4 = 69.298, d5 = 168.764, d6 = 100.741 kPa and cosθ = 0.6776) yielded a residual error of 3.5% on hyperelastic constants a3 and d3, in spite of having performed about 3500 finite element analyses, about 2.9 times more than HFHS.
JAYA [35,175,176] obtained a solution very close to that of BBBC-UBS [172] (i.e., less than 0.25% difference on hyperelastic constants; same value of cosθ), completing the optimization process in 2380 finite element analyses (i.e., 140 iterations), still far more slowly than the present SA/HS/BBBC variants. The details of JAYA's solution are as follows: a1 = 199.325, a2 = 122.987, a3 = 136.094, b1 = 386.915, b2 = 170.351, b3 = 187.240, c2 = 198.259, c3 = 89.655, c4 = 174.948, c5 = 170.087, c6 = 148.022, d2 = 158.558, d3 = 21.634, d4 = 69.093, d5 = 166.083, d6 = 102.753 kPa and cosθ = 0.6833.
The convergence curves obtained for the best optimization runs of HS/BBBC/SA variants, MATLAB-SQP and ANSYS are compared in Figure 11. JAYA's best-run curve is not shown in the figure as the values of Ω recorded in the first 30 iterations are off-scale. Because of the high nonlinearity of this problem, intermediate designs were sorted also in terms of deviation from target properties. This explains why the convergence curves of the BBBC and HS variants start from higher values of Ω than those of the SA variants, ANSYS and SQP optimizers, which start from the Ω = 0.340 value corresponding to the center of mass of the population. HFBBBC was definitely the fastest algorithm throughout the optimization process, followed by HFSA and HFHS. However, the convergence curve of HFHS was almost monotonic and this explains why HFHS finally required fewer finite element analyses than HFBBBC and HFSA, which instead showed fairly long phases with small improvements in solution. The optimization histories of HFSA, HFHS and HFBBBC practically coincided after 20 iterations.
The present algorithms were definitely faster than the SA/HS/BBBC-NGR variants and SA-Grad. The "NGR" algorithms showed steps and oscillatory behavior in the optimization history because the number of descent directions (i.e., one or two) involved in the formation of new trial solutions was not large enough to deal with the high nonlinearity of the GTBP patch identification problem. Interestingly, SA-Grad considered a set of 69 (i.e., 4·NMP + 1) descent directions in each iteration to update the design. This allowed oscillatory behavior to be limited but about one half of these 69 directions were formed by perturbing one material parameter at a time and hence yielded little improvement in the error functional.
ANSYS again showed the slowest convergence rate and a marked oscillatory behavior. That happened because the high nonlinearity of this identification problem reduced the efficiency of the response surface approach used by ANSYS for building the approximate sub-problem in each optimization iteration. MATLAB-SQP was competitive with SA-Grad for about 20 iterations but it then started to cycle between intermediate designs characterized by Ω = 0.01 while trying to find proper values for the hyperelastic constants a3 and d6. Conversely, SA-Grad used its inherent exploitation capability to improve the solution in the final part of the optimization history.
JAYA (whose convergence curve is not shown in the figure) started its best optimization run from a population with ΩOPT = 0.476 and could not reduce the error functional value below 0.35 in the first 30 iterations. Such a behavior confirms that as problem size increases it becomes more important to update the population according to the rank held in the population by the currently perturbed design. This requirement is certainly satisfied by HFHS and HFBBBC, which generate search directions SFAST, SBEST, S2ndBEST, SOPT−CM etc. (or perform low cost evaluations of the sensitivities of the error functional), while JAYA simply perturbs designs following the initial order assigned to the NPOP individuals. For a given design, it is also important to tailor the perturbations of each variable to the sensitivities of the error functional. Whilst this is intrinsically done by the 1-D probabilistic search utilized by the present algorithms, JAYA updates variables just following the classical 1st to NMPth variable sequence. The latter approach may miss "good" values of some variable that would improve a given solution more than the values actually selected for other variables. The probability of missing good variable values clearly increases with the problem dimension as it becomes more difficult to reconstruct in the search space the path leading to the global optimum. This explains the "inertia" effect observed for JAYA, which became slower as the test problem size increased. Since the ratio between population size and number of optimization variables was very similar for all test cases, the "inertia" effect logically occurred also for larger population sizes (see the discussion on JAYA's results developed in Section 5.1).
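For comparison, one iteration of the basic JAYA scheme, as commonly formulated in the literature, can be sketched as follows (Python/NumPy); note that no descent direction is constructed explicitly, which is the limitation discussed above. The callable `omega` is again a hypothetical placeholder for the FE analysis.

```python
import numpy as np

def jaya_iteration(pop, costs, omega, xl, xu, rng):
    """One iteration of the basic JAYA scheme: every design moves towards the
    best and away from the worst design of the population, with a greedy
    acceptance test requiring one FE analysis per updated design."""
    x_best = pop[np.argmin(costs)]
    x_worst = pop[np.argmax(costs)]
    for i in range(pop.shape[0]):
        r1 = rng.uniform(size=pop.shape[1])
        r2 = rng.uniform(size=pop.shape[1])
        trial = pop[i] + r1 * (x_best - np.abs(pop[i])) - r2 * (x_worst - np.abs(pop[i]))
        trial = np.clip(trial, xl, xu)      # keep the trial design within the side bounds
        c = omega(trial)
        if c < costs[i]:                    # greedy replacement of the current design
            pop[i], costs[i] = trial, c
    return pop, costs
```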
In order to evaluate the sensitivity of the convergence behavior of the HFSA, HFHS and HFBBBC algorithms to the initial population and initial design, the bovine pericardium patch identification problem was also solved with a population including 90 candidate solutions. The new population of HS and BBBC included 60 additional "low quality" candidate solutions characterized by higher values of the error functional: the new worst design has Ω = 1.352 while the new center of mass has Ω = 0.709. Consequently, the average deviation from target properties varied between 223% and 1030%. The candidate solutions yielding the lowest values of Ω again did not show the smallest deviations from target properties.
Results of sensitivity analysis to population size and initial solutions are presented in Table 4. All of the present algorithms practically converged to the same material properties regardless of population size/initial design. Deviations from target material properties were slightly higher for NPOP = 90 but remained below 0.27% (average error) and 0.78% (maximum error). Residual errors on displacements also changed marginally with respect to those evaluated for NPOP = 30. HFHS and HFBBBC performed one more iteration than in the case NPOP = 30 while HFSA could eliminate one optimization cycle. The number of explosions and 1-D local annealing searches remained the same for the two populations. The number of finite element analyses required in the optimization process changed at most by 15%. This increase was due to the fact that the 90-designs population included lower quality solutions. The robustness of the present algorithms is confirmed by the convergence curves shown in Figure 12. It appears that each pair of optimization histories relative to a given algorithm practically coincided in the last 4–5 iterations.

6. Discussion and Conclusions

This study presented a hybrid framework for mechanical identification of materials and structures. The framework combined full-field measurements done with optical methods and global optimization based on metaheuristic algorithms. Such a choice was motivated by the fact that metaheuristic algorithms make it possible to deal efficiently with the inherent non-linearity and non-convexity of inverse problems. From the experimental point of view, using optical methods is the best approach to identification problems because these techniques provide full-field information, do not alter the state of the investigated specimen, and can precisely detect material anisotropy and the presence of local defects and/or damage.
However, the "no free lunch" theorem states that no metaheuristic algorithm can outperform all other algorithms in all optimization problems. Unlike gradient-based optimizers, which are still implemented in commercial software although their formulations have not changed much over the last 25–30 years, most of the newly developed metaheuristic algorithms added very little to optimization practice and their appeal quickly vanished after a few years. This suggests that, rather than proposing a new metaheuristic algorithm that improves available methods just marginally, it is better to significantly improve the most powerful algorithms. For this reason, we developed three advanced versions of simulated annealing (SA), harmony search (HS) and big bang-big crunch (BBBC). These algorithms were selected as they are very well established metaheuristic optimization methods for which many successful applications to inverse problems have been documented in the technical literature. Furthermore, these algorithms possess important features, which are very desirable in global optimization. In fact, SA is inherently able to bypass local optima, HS has a memory where good solutions may be stored, and BBBC employs the concept of center of mass including information on the average quality of the population of trial solutions.
The rationale behind the new algorithms developed in this study—denoted as Hybrid Fast Simulated Annealing (HFSA), Hybrid Fast Harmony Search (HFHS) and Hybrid Fast Big Bang-Big Crunch (HFBBBC)—was to generate high quality trial designs lying on a properly selected set of descent directions. For that purpose, enhanced approximate line search and computationally cheap gradient evaluation strategies were developed. Besides hybridizing SA/HS/BBBC metaheuristic search engines with gradient information and approximate line search, HS and BBBC were hybridized with an enhanced 1-D probabilistic search derived from SA.
The optimization framework was tested in three inverse elasticity problems: (i) mechanical characterization of a composite laminate used as substrate in electronic boards; (ii) mechanical characterization and layup identification of an axially compressed composite panel for aeronautical use; (iii) mechanical characterization of bovine pericardium patches used in biomedical applications. The largest test case (iii) included 17 unknown parameters. The sensitivity of inverse problem solutions and convergence behavior to population size and initial design/population was statistically evaluated. A preliminary mathematical optimization problem was solved in order to preliminarily test the algorithms. Remarkably, HFSA, HFHS and HFBBBC were very efficient and outperformed other SA, HS and BBBC formulations, the JAYA algorithm, as well as a state-of-the-art gradient-based optimizer like MATLAB-SQP. Furthermore, the present algorithms were always very robust.
An interesting fact observed from Table 1, Table 2, Table 3 and Table 4 is that the maximum residual error on displacements was about 3% for all identification problems. In order to check whether this is an inherent limitation of the HFSA, HFHS and HFBBBC algorithms, an in silico identification was carried out by computing displacement fields for the target material/structural properties provided by manufacturers (first two identification problems) or determined by averaging the results of [83,84] (last identification problem). Optimizations were then run to reconstruct the target displacement fields generated numerically. Remarkably, the present algorithms were always able to converge to the target material/structural properties, reproducing the displacement field with zero residual errors. The observed 3% maximum error hence falls within the level of uncertainty normally entailed by a FEMU-based characterization process, a very complicated task which attempts to match experimental data and finite element results. Optically measured displacements certainly are a good target because the accuracy of speckle and moiré techniques may go down to a very small fraction of the sensitivity of the experimental setup. However, in this study, control points were selected right at critical locations (i.e., near boundaries where displacements tend to zero or in transition regions where all displacement components are comparable in magnitude) where even small drifts in measured quantities may have an impact on the success of the identification process. Furthermore, a rather small set of control points was selected for building the error functional Ω. This was done in order to make the matching of experimental data and numerical results more difficult, thus testing the real ability of HFSA, HFHS and HFBBBC to find the global optimum or get very close to it. In view of this, the results presented in this article should be considered very satisfactory.
Based on the arguments discussed above, it can be concluded that the proposed hybrid framework is a powerful tool for solving inverse mechanical problems.

Author Contributions

All authors equally contributed to all phases leading to the preparation of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Marwala, T. Finite-Element Model Updating Using Computational Intelligence Techniques: Applications to Structural Dynamics; Springer: New York, NY, USA, 2010.
  2. Bruno, L. Mechanical characterization of composite materials by optical techniques: A review. Opt. Laser Eng. 2018, 104, 192–203.
  3. Martins, J.M.P.; Andrade-Campos, A.; Thuillier, S. Comparison of inverse identification strategies for constitutive mechanical models using full-field measurements. Int. J. Mech. Sci. 2018, 145, 330–345.
  4. Pierron, F.; Grediac, M. The Virtual Fields Method. Extracting Constitutive Mechanical Parameters from Full-Field Deformation Measurements; Springer: New York, NY, USA, 2012.
  5. Cloud, G.L. Optical Methods of Engineering Analysis; Cambridge University Press: New York, NY, USA, 1998.
  6. Sciammarella, C.A.; Sciammarella, F.M. Experimental Mechanics of Solids; Wiley: Chichester, UK, 2012.
  7. Sciammarella, C.A. A review: Optical methods that measure displacement. In Advancement of Optical Methods & Digital Image Correlation in Experimental Mechanics, Proceedings of the 2018 Annual Conference on Experimental and Applied Mechanics; Lamberti, L., Furlong, C., Lin, M.T., Sciammarella, C.A., Reu, P.L., Sutton, M.A., Eds.; Springer: New York, NY, USA, 2019; Volume 3, Chapter 3; pp. 23–52.
  8. Li, J.; Cui, Y.; English, R.E.; Noble, J.A. Ultrasound estimation of breast tissue biomechanical properties using a similarity-based non-linear optimization approach. J. Strain Anal. Eng. 2009, 33, 363–374.
  9. Goenezem, S.; Borbone, P.; Oberai, A.A. Solution of the nonlinear elasticity imaging inverse problem: The incompressible case. Comput. Methods Appl. Mech. Eng. 2011, 200, 1406–1420.
  10. Nunes, L.C.S. Mechanical characterization of hyperelastic polydimethylsiloxane by simple shear test. Mater. Sci. Eng. A 2011, 25, 1799–1804.
  11. Boccaccio, A.; Frassanito, M.C.; Lamberti, L.; Brunelli, R.; Maulucci, G.; Monaci, M.; Papi, M.; Pappalettere, C.; Parasassi, T.; Sylla, L.; et al. Nanoscale characterization of the biomechanical hardening of bovine zona pellucida. J. R. Soc. Interface 2012, 9, 2871–2882.
  12. Ju, M.L.; Mezghani, S.; Jmal, H.; Dupuis, R.; Aubry, E. Parameter estimation of a hyperelastic constitutive model for the description of polyurethane foam in large deformation. Cell. Polym. 2013, 32, 21–40.
  13. Klinge, S.; Steinmann, P. Inverse problems for heterogeneous materials and its application to viscoelastic curing polymers. Comput. Mech. 2015, 55, 603–615.
  14. Boccaccio, A.; Lamberti, L.; Papi, M.; De Spirito, M.; Pappalettere, C. Effect of AFM probe geometry on visco-hyperelastic characterization of soft materials. Nanotechnology 2015, 26, 325701.
  15. Wu, Y.; Wang, H.; Li, A. Parameter identification model for hyperelastic and hyper-viscoelastic models. Appl. Sci. 2016, 6, 386.
  16. Tan, L.; McGarry, M.D.; Van Houten, E.E.; Ji, M.; Solamen, L.; Weaver, J.B.; Paulsen, K.D. Gradient-based optimization for poroelastic and viscoelastic MR elastography. IEEE Trans. Med. Imaging 2017, 36, 236–250.
  17. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975.
  18. Goldberg, D.E. Genetic Algorithms in Search, Operation and Machine Learning; Addison-Wesley: Reading, MA, USA, 1989.
  19. Rechenberg, I. Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution; Frommann-Holzboog: Stuttgart, Germany, 1973.
  20. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces; International Computer Science Institute: Berkeley, CA, USA, 1995.
  21. Beyer, H.G.; Schwefel, H.P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52.
  22. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
  23. Van Laarhoven, P.J.M.; Aarts, E.H.L. Simulated Annealing: Theory and Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1987.
  24. Clerc, M. Particle Swarm Optimization; ISTE Publishing Company: London, UK, 2006.
  25. Dorigo, M.; Stutzle, T. Ant Colony Optimization; MIT Press: Cambridge, MA, USA, 2004.
  26. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
  27. Yang, X.S. Engineering Optimization: An Introduction with Metaheuristic Applications; John Wiley & Sons: Hoboken, NJ, USA, 2010.
  28. Yang, X.-S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483.
  29. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35.
  30. Glover, F.; Laguna, M. Tabu Search; Kluwer Academic Publishers: Boston, MA, USA, 1997. [Google Scholar]
  31. Geem, Z.W.; Kim, J.H.; Loganathan, G. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  32. Kaveh, A.; Talatahari, S. Optimum design of skeletal structures using imperialist competitive algorithm. Comput. Struct. 2010, 88, 1220–1229. [Google Scholar] [CrossRef]
  33. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Design 2011, 43, 303–315. [Google Scholar] [CrossRef]
  34. Gonçalves, M.S.; Lopez, R.H.; Miguel, L.F.F. Search group algorithm: A new metaheuristic method for the optimization of truss structures. Comput. Struct. 2015, 153, 165–184. [Google Scholar] [CrossRef]
  35. Rao, R.V. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 2016, 7, 19–34. [Google Scholar]
  36. Erol, O.K.; Eksin, I. A new optimization method: Big bang-big crunch. Adv. Eng. Softw. 2006, 37, 106–111. [Google Scholar] [CrossRef]
  37. Rashedi, E.; Nezamabadipour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar]
  38. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289. [Google Scholar] [CrossRef]
  39. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 2012, 112, 283–294. [Google Scholar] [CrossRef]
  40. Kaveh, A.; Mahdavi, V.R. Colliding bodies optimization: A novel meta-heuristic method. Comput. Struct. 2014, 139, 18–27. [Google Scholar] [CrossRef]
  41. Kaveh, A.; Bakhshpoori, T. A new metaheuristic for continuous structural optimization: Water evaporation optimization. Struct. Multidiscip. Optim. 2016, 54, 23–43. [Google Scholar] [CrossRef]
  42. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  43. Kaveh, A.; Zolghadr, A. Cyclical parthenogenesis algorithm for guided modal strain energy based structural damage detection. Appl. Soft Comput. 2017, 57, 250–264. [Google Scholar] [CrossRef]
  44. Kang, Y.L.; Lin, X.H.; Qin, Q.H. Inverse/genetic method and its application in identification of mechanical properties of interface in composite. Compos. Struct. 2004, 66, 449–458. [Google Scholar] [CrossRef]
  45. Lin, X.H.; Kang, Y.L.; Qin, Q.H.; Fu, D.H. Identification of interfacial parameters in a particle reinforced metal matrix composite Al6061–10%Al2O3 by hybrid method and genetic algorithm. Comput. Mater. Sci. 2005, 32, 47–56. [Google Scholar] [CrossRef]
  46. Juhas, P.; Cherba, D.M.; Duxbury, P.M.; Punch, W.F.; Billinge, S.J.L. Ab initio determination of solid-state nanostructure. Nature 2006, 440, 655–658. [Google Scholar] [CrossRef] [PubMed]
  47. Pagnotta, L. Determining elastic constants of materials with interferometric techniques. Inverse Probl. Sci. Eng. 2006, 8, 801–818. [Google Scholar] [CrossRef]
  48. Nair, A.; Taggart, D.; Vetter, F. Optimizing cardiac material parameters with a genetic algorithm. J. Biomech. 2007, 40, 1646–1650. [Google Scholar] [CrossRef] [PubMed]
  49. Bruno, L.; Felice, G.; Pagnotta, L.; Poggialini, A.; Stigliano, G. Elastic characterization of orthotropic plates via static testing. Int. J. Solids Structs. 2008, 45, 908–920. [Google Scholar] [CrossRef]
  50. Perera, R.; Ruiz, A. A multistage FE updating procedure for damage identification in large-scale structures based on multiobjective evolutionary optimization. Mech. Syst. Signal Process. 2008, 2, 970–991. [Google Scholar] [CrossRef]
  51. Monti, G.; Quaranta, G.; Marano, G.C. Genetic-algorithm-based strategies for dynamic identification of nonlinear systems with noise-corrupted response. J. Comput. Civil Eng. 2009, 24, 173–187. [Google Scholar] [CrossRef]
  52. Wang, J.; Qin, Q.H.; Kang, Y.L.; Li, X.Q. Viscoelastic adhesive interfacial model and experimental characterization for interfacial parameters. Mech. Mater. 2010, 42, 537–547. [Google Scholar] [CrossRef]
  53. Jin, Z.Y.; Cui, Z.S. Investigation on strain dependence of dynamic recrystallization behavior using an inverse analysis method. Mater. Sci. Eng. A 2010, 527, 3111–3119. [Google Scholar] [CrossRef]
  54. Caicedo, J.M.; Yun, G. A novel evolutionary algorithm for identifying multiple alternative solutions in model updating. Struct. Health Monit. 2011, 10, 491–501. [Google Scholar] [CrossRef]
  55. Chen, Z.; Diebels, S.; Peter, N.J.; Schneider, A.S. Identification of finite viscoelasticity and adhesion effects in nanoindentation of a soft polymer by inverse method. Comput. Mater. Sci. 2013, 72, 127–139. [Google Scholar] [CrossRef]
  56. Bosetti, P.; Bort, C.M.G.; Bruschi, S. Identification of Johnson-Cook and Tresca’s parameters for numerical modeling of AISI-304 machining processes. J. Manuf. Sci. Eng. 2013, 135, 051021. [Google Scholar] [CrossRef]
  57. Boonlong, K. Vibration-based damage detection in beams by cooperative coevolutionary genetic algorithm. Adv. Mech. Eng. 2014, 624949. [Google Scholar] [CrossRef]
  58. Chen, Z.; Diebels, S. Nanoindentation of soft polymers: modeling, experiments and parameter identification. Tech. Mech. 2014, 34, 166–189. [Google Scholar]
  59. Spranger, K.; Capelli, C.; Bosi, G.M.; Schievano, S.; Ventikos, Y. Comparison and calibration of a real-time virtual stenting algorithm using Finite Element Analysis and Genetic Algorithms. Comput. Methods Appl. Mech. Eng. 2015, 293, 462–480. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. De Tommasi, D.; Ferri, D.; Marano, G.C.; Puglisi, G. Material parameters identification and experimental validation of damage models for rubberlike materials. Eur. Polym. J. 2016, 78, 302–313. [Google Scholar] [CrossRef]
  61. Pathan, M.V.; Patsias, S.; Tagarielli, V.L. A real-coded genetic algorithm for optimizing the damping response of composite laminates. Comput. Struct. 2018, 198, 51–60. [Google Scholar] [CrossRef]
  62. Ogierman, W. Inverse identification of elastic properties of constituents of discontinuously reinforced composites. Materials 2018, 11, 2332. [Google Scholar] [CrossRef]
  63. Casciati, S. Stiffness identification and damage localization via differential evolution algorithms. Struct. Control Health Monit. 2008, 15, 436–449. [Google Scholar] [CrossRef]
  64. Tang, H.; Xue, S.; Fan, C. Differential evolution strategy for structural system identification. Comput. Struct. 2008, 86, 2004–2012. [Google Scholar] [CrossRef]
  65. Caseiro, J.F.; Andrade-Campos, A. An evolutionary-inspired optimisation algorithm suitable for solid mechanics engineering inverse problems. Int. J. Mechatron. Manuf. Syst. 2011, 4, 415–440. [Google Scholar] [CrossRef]
  66. Rao, A.R.M.; Lakshmi, K.; Venkatachalam, D. Damage diagnostic technique for structural health monitoring using POD and self-adaptive differential evolution algorithm. Comput. Struct. 2012, 106, 228–244. [Google Scholar]
  67. Vincenzi, L.; De Roeck, G.; Savoia, M. Comparison between coupled local minimizers method and differential evolution algorithm in dynamic damage detection problems. Adv. Eng. Softw. 2013, 65, 90–100. [Google Scholar] [CrossRef]
  68. Reed, H.M.; Nichols, J.M.; Earls, C.J. A modified differential evolution algorithm for damage identification in submerged shell structures. Mech. Syst. Signal Process. 2013, 39, 396–408. [Google Scholar] [CrossRef]
  69. Li, R.; Mita, A.; Zhou, J. Symbolization-based differential evolution strategy for identification of structural parameters. Struct. Control Health Monit. 2013, 20, 1255–1270. [Google Scholar] [CrossRef]
  70. Quaranta, G.; Marano, G.C.; Greco, R.; Monti, G. Parametric identification of seismic isolators using differential evolution and particle swarm optimization. Appl. Soft Comput. 2014, 22, 458–464. [Google Scholar] [CrossRef]
  71. Seyedpoor, S.M.; Shahbandeha, S.; Yazdanpanah, O. An efficient method for structural damage detection using a differential evolution algorithm-based optimisation approach. Civ. Eng. Environ. Syst. 2015, 32, 230–250. [Google Scholar] [CrossRef]
  72. Vo-Duy, T.; Ho-Huu, V.; Dang-Trung, H.; Nguyen-Thoi, T. A two-step approach for damage detection in laminated composite structures using modal strain energy method and an improved differential evolution algorithm. Compos. Struct. 2016, 147, 42–53. [Google Scholar] [CrossRef]
  73. Dinh-Cong, D.; Vo-Duy, T.; Ho-Huu, V.; Dang-Trung, H.; Nguyen-Thoi, T. An efficient multi-stage optimization approach for damage detection in plate structures. Adv. Eng. Softw. 2017, 112, 76–87. [Google Scholar] [CrossRef]
  74. Bureeat, S.; Pholdee, N. Inverse problem based differential evolution for efficient structural health monitoring of trusses. Appl. Soft Comput. 2018, 66, 462–472. [Google Scholar] [CrossRef]
  75. Seyedpoor, S.M.; Ahmadi, A.; Pahnabi, N. Structural damage detection using time domain responses and an optimization method. Inverse Probl. Sci. Eng. 2019, 27, 669–688. [Google Scholar] [CrossRef]
  76. Genovese, K.; Lamberti, L.; Pappalettere, C. A new hybrid technique for in-plane characterization of orthotropic materials. Exp. Mech. 2004, 44, 584–592. [Google Scholar] [CrossRef]
  77. Genovese, K.; Lamberti, L.; Pappalettere, C. Improved global-local simulated annealing formulation for solving non-smooth engineering optimization problems. Int. J. Solids Struct. 2005, 42, 203–237. [Google Scholar] [CrossRef]
  78. Kaimovitz, B.; Lanir, Y.; Kassab, G.S. Large-scale 3-D geometric reconstruction of the porcine coronary arterial vasculature based on detailed anatomical data. Ann. Biomed. Eng. 2005, 33, 1517–1535. [Google Scholar] [CrossRef] [PubMed]
  79. Moita, J.M.S.; Correia, V.M.F.; Martins, P.G.; Soares, C.M.M.; Soares, C.A.M. Optimal design in vibration control of adaptive structures using a simulated annealing algorithm. Compos. Struct. 2006, 75, 79–87. [Google Scholar] [CrossRef]
  80. Ryden, N.; Park, C.B. Fast simulated annealing inversion of surface waves on pavement using phase-velocity spectra. Geophysics 2006, 71, 49–58. [Google Scholar] [CrossRef]
  81. Genovese, K.; Lamberti, L.; Pappalettere, C. Mechanical characterization of hyperelastic materials with fringe projection and optimization techniques. Opt. Lasers Eng. 2006, 44, 423–442. [Google Scholar] [CrossRef]
  82. Genovese, K.; Lamberti, L.; Pappalettere, C. Identification of mechanical properties of bovine bones by combining ESPI and optimization. Proc. SPIE 2006, 6341, 634108. [Google Scholar]
  83. Cosola, E.; Genovese, K.; Lamberti, L.; Pappalettere, C. Mechanical characterization of biological membranes with moiré techniques and multi-point simulated annealing. Exp. Mech. 2008, 48, 465–478. [Google Scholar] [CrossRef]
  84. Cosola, E.; Genovese, K.; Lamberti, L.; Pappalettere, C. A general framework for identification of hyper-elastic membranes with moiré techniques and multi-point simulated annealing. Int. J. Solids Struct. 2008, 45, 6074–6099. [Google Scholar] [CrossRef]
  85. Bayissa, W.L.; Haritos, N. Structural damage identification using a global optimization technique. Int. J. Struct. Stab. Dyn. 2009, 9, 745–763. [Google Scholar] [CrossRef]
  86. Korostolev, A.; Laurberg, M.; Noller, H.F. Multistart simulated annealing refinement of the crystal structure of the 70S ribosome. Proc. Natl. Acad. Sci. USA 2009, 106, 18195–18200. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. Gilles, G.; Habraken, A.M.; Duchene, L. Material parameter identification of Cazacu’s model for Ti6Al4V using the simulated annealing algorithm. Mater. Sci. Forum 2010, 636–637, 1125–1130. [Google Scholar] [CrossRef]
  88. Genovese, K.; Cosola, E.; Lamberti, L.; Bux, M.V.; Grassi, F.R.; Pappalettere, C.; Carlaio, R.G. Experimental-numerical investigation on the biomimetic recovery of natural tooth structural response after porcelain veneer restoration. Strain 2012, 48, 30–48. [Google Scholar] [CrossRef]
  89. Kayikci, R.; Sonmez, F.O. Design of composite laminates for optimum frequency response. J. Sound Vib. 2012, 331, 1759–1776. [Google Scholar] [CrossRef]
  90. Lobato, F.S.; Gomes Assis, E.; Steffen Jr., V.; da Silva Nieto, A.J. Design and identification problems of rotor bearing systems using the simulated annealing algorithm. In Simulated Annealing—Single and Multiple Objective Problems; Guerra Tsuzuki, M.S., Ed.; Intech Open: Rijeka, Croatia, 2012; Chapter 10; pp. 197–216. [Google Scholar]
  91. Kourehli, S.S.; Bagheri, A.; Amiri, G.G.; Ghafory-Ashtiany, M. Structural damage detection using incomplete modal data and incomplete static response. KSCE J. Civ. Eng. 2013, 17, 216–223. [Google Scholar] [CrossRef]
  92. Green, P.L. Bayesian system identification of a nonlinear dynamical system using a novel variant of simulated annealing. Mech. Syst. Signal Process. 2015, 52–53, 133–146. [Google Scholar] [CrossRef]
  93. Astroza, R.; Nguyen, L.T.; Nestorovic, T. Finite element model updating using simulated annealing hybridized with unscented Kalman filter. Comput. Struct. 2016, 177, 176–191. [Google Scholar] [CrossRef]
  94. Keelan, J.; Chung, E.L.; Hague, J.P. Simulated annealing approach to vascular structure with application to the coronary arteries. R. Soc. Open Sci. 2016, 3, 150431–150445. [Google Scholar] [CrossRef]
  95. Cobo, P.; Francisco Simón, F. Using simulating annealing for the inverse estimation of the non-acoustical parameters of sound absorbers. Build. Acoust. 2017, 24, 295–306. [Google Scholar] [CrossRef]
  96. Seyfi, B.; Fatouraee, N.; Imeni, M. Mechanical modeling and characterization of meniscus tissue using flat punch indentation and inverse finite element method. J. Mech. Behav. Biomed. Mater. 2018, 77, 337–346. [Google Scholar] [CrossRef] [PubMed]
  97. Xiao, F.; Fan, J.; Chen, G.S.; Hulsey, J.L. Bridge health monitoring and damage identification of truss bridge using strain measurements. Adv. Mech. Eng. 2019, 11, 1–7. [Google Scholar] [CrossRef]
  98. Mun, S.; Geem, Z.W. Determination of viscoelastic and damage properties of hot mix asphalt concrete using a harmony search algorithm. Mech. Mater. 2009, 41, 339–353. [Google Scholar] [CrossRef]
  99. Suh, Y.; Mun, S.; Yeo, I. Fatigue life prediction of asphalt concrete pavement using a harmony search algorithm. KSCE J. Civ. Eng. 2010, 14, 725–730. [Google Scholar] [CrossRef]
  100. Mun, S.; Lee, S. Identification of viscoelastic functions for hot-mix asphalt mixtures using a modified harmony search algorithm. J. Comput. Civil Eng. 2011, 25, 139–148. [Google Scholar] [CrossRef]
  101. Shan, Q.; Chen, Y. Harmony search algorithm and its application to product module identification. Adv. Mat. Res. 2011, 225–226, 1100–1104. [Google Scholar] [CrossRef]
  102. Bekdas, G.; Nigdeli, S.M. Estimating optimum parameters of tuned mass dampers using harmony search. Eng. Struct. 2011, 33, 2716–2723. [Google Scholar] [CrossRef]
  103. Lee, C.; Kim, D.; Mun, S.; Yoo, P. Study on a prediction model of the tensile strain related to the fatigue cracking performance of asphalt concrete pavements through design of experiments and harmony search algorithm. Int. J. Highw. Eng. 2012, 14, 11–17. [Google Scholar] [CrossRef]
  104. Pashmforoush, F.; Fotouhi, M.; Ahmadi, M. Acoustic emission-based damage classification of glass/polyester composites using harmony search k-means algorithm. J. Reinf. Plast. Comp. 2012, 31, 671–680. [Google Scholar] [CrossRef]
  105. Fleck Fadel Miguel, L.; Fleck Fadel Miguel, L.; Kaminski, J.; Riera, J.D. Damage detection under ambient vibration by harmony search algorithm. Expert Syst. Appl. 2012, 39, 9704–9714. [Google Scholar] [CrossRef]
  106. Jahjouh, M.M.; Nackenhorst, U. Structural identification of two dimensional shear buildings using a modified adaptive harmony search algorithm. In Engineering Optimization IV; Rodrigues, H., Herskovits, J., Mota Soares, C., Miranda Guedes, J., Araujo, A., Folgado, J., Moleiro, F., Madeira, J.A., Eds.; CRC Press/Balkema: Leiden, The Netherlands, 2015; pp. 193–198. [Google Scholar]
  107. Lee, S.; Mun, S. Improving a model for the dynamic modulus of asphalt using the modified harmony search algorithm. Expert Syst. Appl. 2014, 41, 3856–3860. [Google Scholar] [CrossRef]
  108. Jin, H.; Xia, J.; Wang, Y. Optimal sensor placement for space modal identification of crane structures based on an improved harmony search algorithm. J. Zhejiang Univ. -Sci. A (Appl. Phys. Eng.) 2015, 16, 464–477. [Google Scholar] [CrossRef] [Green Version]
  109. Nhamage, I.A.; Lopez, R.H.; Fleck Fadel Miguel, L. An improved hybrid optimization algorithm for vibration based-damage detection. Adv. Eng. Softw. 2016, 93, 47–64. [Google Scholar] [CrossRef]
  110. Jahjouh, M.M.; Nackenhorst, U. A modified harmony search approach on structural identification and damage detection of wind turbine supporting structures. J. Vibroeng. 2016, 18, 103–118. [Google Scholar]
  111. Sun, P.M.; Bao, T.F.; Gu, C.S.; Jiang, M.; Wang, T.; Shi, Z.W. Parameter sensitivity and inversion analysis of a concrete faced rock-fill dam based on HS-BPNN algorithm. Sci. China Technol. Sci. 2016, 59, 1442–1451. [Google Scholar] [CrossRef]
  112. Artar, M. Damage detection on steel plane trusses via harmony search algorithm. Int. J. Eng. Technol. 2018, 4, 13–20. [Google Scholar]
  113. Schutte, J.F.; Koh, B.I.; Reinbolt, J.A.; Haftka, R.T.; George, A.D.; Fregly, B.J. Evaluation of a particle swarm algorithm for biomechanical optimization. J. Biomech. Eng. 2005, 127, 465–474. [Google Scholar] [CrossRef]
  114. Marwala, T. Finite element model updating using particle swarm optimization. Int. J. Eng. Simul. 2005, 6, 25–30. [Google Scholar]
  115. Ozel, T.; Karpat, Y. Identification of constitutive material model parameters for high-strain rate metal cutting conditions using evolutionary computational algorithms. Mater. Manuf. Process. 2007, 22, 659–667. [Google Scholar] [CrossRef]
  116. Begambre, O.; Laier, J. A hybrid particle swarm optimization-simplex algorithm (PSOS) for structural damage identification. Adv. Eng. Softw. 2009, 40, 883–891. [Google Scholar] [CrossRef]
  117. Sayedpoor, S.M. Structural damage detection using a multi-stage particle swarm optimization. Adv. Struct. Eng. 2011, 14, 533–549. [Google Scholar] [CrossRef]
  118. Xiang, J.; Liang, M. A two-step approach to multi-damage detection for plate structures. Eng. Fract. Mech. 2012, 91, 73–86. [Google Scholar] [CrossRef]
  119. Seyedpoor, S.M. A two stage method for structural damage detection using a modal strain energy based index and particle swarm optimization. Int. J. Nonlinear Mech. 2012, 47, 1–8. [Google Scholar] [CrossRef]
  120. Gökdağ, H.; Yildiz, A.R. Structural damage detection using modal parameters and particle swarm optimization. Int. J. Struct. Integr. 2012, 54, 416–420. [Google Scholar] [CrossRef]
  121. Kang, F.; Li, J.J.; Xu, Q. Damage detection based on improved particle swarm optimization using vibration data. Appl. Soft Comput. 2012, 12, 2329–2335. [Google Scholar] [CrossRef]
  122. Vaz, M.; Cardoso, E.L.; Stahlschmidt, J. Particle swarm optimization and identification of inelastic material parameters. Eng. Comput. 2013, 30, 936–960. [Google Scholar] [CrossRef]
  123. Mohan, S.C.; Maiti, D.K.; Maity, D. Structural damage assessment using FRF employing particle swarm optimization. Appl. Math. Comput. 2013, 219, 10387–10400. [Google Scholar] [CrossRef]
  124. Nandakumar, P.; Shankar, K. Identification of structural parameters using consistent mass transfer matrix. Inverse Probl. Sci. Eng. 2014, 22, 436–457. [Google Scholar] [CrossRef]
  125. Sankar, P.A.; Machavaram, R.; Shankar, K. System identification of a composite plate using hybrid response surface methodology and particle swarm optimization in time domain. Measurement 2014, 55, 499–511. [Google Scholar] [CrossRef]
  126. Nanda, B.; Maity, D.; Maiti, D. Crack assessment in frame structures using modal data and unified particle swarm optimization technique. Adv. Struct. Eng. 2014, 17, 747–766. [Google Scholar] [CrossRef]
  127. He, L.; Lian, J.; Ma, B. Intelligent damage identification method for large structures based on strain modal parameters. J. Vib. Control 2014, 20, 1783–1795. [Google Scholar] [CrossRef]
  128. Zhao, C.; Lavasan, A.A.; Barciaga, T.; Zarev, V.; Datcheva, M.; Schanz, T. Model validation and calibration via back analysis for mechanized tunnel simulations—The Western Scheldt tunnel case. Comp. Geotech. 2015, 69, 601–614. [Google Scholar] [CrossRef]
  129. Perera, R.; Fang, S.E.; Ruiz, A. Application of particle swarm optimization and genetic algorithms to multiobjective damage identification inverse problems with modelling errors. Meccanica 2015, 45, 723–734. [Google Scholar] [CrossRef]
  130. Barbieri, N.; Barbieri, R.; Tupan Silva, H.A. A methodology for identification of damage in beams. Inverse Probl. Sci. Eng. 2016, 24, 482–503. [Google Scholar] [CrossRef]
  131. Jiang, S.; Ma, S. Structural damage detection based on improved multi-particle swarm co-evolution optimization algorithm. In Structural Health Monitoring Measurement Methods and Practical Applications; Rivas-Lopez, M., Flores Fuentes, W., Sergiyenko, O., Eds.; Intech Open: Rijeka, Croatia, 2017; Chapter 3; pp. 47–64. [Google Scholar]
  132. Wei, Z.; Liu, J.; Lu, Z. Structural damage detection using improved particle swarm optimization. Inverse Probl. Sci. Eng. 2018, 26, 792–810. [Google Scholar] [CrossRef]
  133. Tran-Ngoc, H.; Khatir, S.; De Roeck, G.; Bui-Tien, T.; Nguyen-Ngoc, L.; Abdel Wahab, M. Model updating for Nam O bridge using particle swarm optimization algorithm and genetic algorithm. Sensors 2018, 18, 4131. [Google Scholar] [CrossRef]
  134. He, R.S.; Hwang, S.F. Damage detection by an adaptive real-parameter simulated annealing genetic algorithm. Comput. Struct. 2006, 84, 2231–2243. [Google Scholar] [CrossRef]
  135. Lin, Y.H.; Chang, C.L. Using modal analysis and optimization to determine elastic constants of thick composite plates. Key Eng. Mater. 2009, 419–420, 473–476. [Google Scholar] [CrossRef]
  136. Chen, C.; Huang, Y.; Li, G. Damage detection of self-anchored suspension bridge based on neural network model and genetic-simulated annealing algorithm. Adv. Mater. Res. 2011, 243–249, 1963–1967. [Google Scholar]
  137. Zhou, Y.; Zhang, Y.; Yi, W.; Jiang, Y.; Pan, Q. Structural identification of a concrete-filled steel tubular arch bridge via ambient vibration test data. J. Bridge Eng. 2017, 22, 04017049. [Google Scholar] [CrossRef]
  138. Feng, X.T.; Chen, B.R.; Yang, C.; Zhou, H.; Ding, X. Identification of visco-elastic models for rocks using genetic programming coupled with the modified particle swarm optimization algorithm. Int. J. Rock Mech. Min. Sci. 2006, 43, 789–801. [Google Scholar] [CrossRef]
  139. Sandesh, S.; Shankar, K. Application of a hybrid of particle swarm and genetic algorithm for structural damage detection. Inverse Probl. Sci. Eng. 2010, 18, 997–1021. [Google Scholar] [CrossRef]
  140. Tam, J.H.; Ong, Z.C.; Ismail, Z.; Ang, B.C.; Khoo, S.Y.; Li, W.L. Inverse identification of elastic properties of composite materials using hybrid GA-ACO-PSO algorithm. Inverse Probl. Sci. Eng. 2018, 26, 1432–1463. [Google Scholar] [CrossRef]
  141. Alkayem, N.F.; Cao, M. Damage identification in three-dimensional structures using single-objective evolutionary algorithms and finite element model updating: evaluation and comparison. Eng. Optim. 2018, 50, 1695–1714. [Google Scholar] [CrossRef]
  142. Erdogan, Y.S.; Catbas, F.N.; Bakir, P.G. Structural identification (St-Id) using finite element models for optimum sensor configuration and uncertainty quantification. Finite Elem. Anal. Des. 2014, 81, 1–13. [Google Scholar] [CrossRef]
  143. Kaveh, A.; Javadi, S.M.; Maniat, M. Damage assessment via model data with a mixed particle swarm strategy, ray optimizer, and harmony search. Asian J. Civil Eng. 2014, 15, 95–106. [Google Scholar]
  144. Tang, H.; Zhou, J.; Xue, S.; Xie, L. Big Bang-Big Crunch optimization for parameter estimation in structural systems. Mech. Syst. Signal Process. 2010, 24, 2888–2897. [Google Scholar] [CrossRef]
  145. Tabrizian, Z.; Afshari, E.; Amiri, G.G.; Ali Beigy, M.H.; Nejad, S.M.P. A new damage detection method: Big bang-big crunch (BB-BC) algorithm. Shock Vib. 2013, 20, 643–648. [Google Scholar] [CrossRef]
  146. Altomare, A.; Corriero, N.; Cuocci, C.; Moliterni, A.; Rizzi, R. The hybrid big bang-big crunch method for solving crystal structure from powder diffraction data. J. Appl. Cryst. 2013, 46, 779–787. [Google Scholar] [CrossRef]
  147. El Hafidi, A.; Martin, B.; Ersoy, S. Identification of viscoelastic material properties based on big bang-big crunch optimization method. Vibroeng. Proc. 2016, 10, 102–107. [Google Scholar]
  148. Huang, J.L.; Lu, Z.R. BB-BC optimization algorithm for structural damage detection using measured acceleration responses. Struct. Eng. Mech. 2017, 64, 353–360. [Google Scholar]
  149. Yin, Z.; Liu, J.; Luo, W.; Lui, Z. An improved Big Bang-Big Crunch algorithm for structural damage detection. Struct. Eng. Mech. 2018, 68, 735–745. [Google Scholar]
  150. Bagheri, A.; Razeghi, H.R.; Ghodrati Amiri, G. Detection and estimation of damage in structures using imperialist competitive algorithm. Shock Vib. 2012, 19, 405–419. [Google Scholar] [CrossRef]
  151. Hosseinzadeh, A.Z.; Razzaghi, S.A.S.; Ghodrati Amiri, G. An iterated IRS technique for cross-sectional damage modelling and identification in beams using limited sensors measurement. Inverse Probl. Sci. Eng. 2019, 27, 1145–1169. [Google Scholar] [CrossRef]
  152. Majumdar, A.; De, A.; Maity, D.; Maiti, D.K. Damage assessment of beams from changes in natural frequencies using ant colony optimization. Struct. Eng. Mech. 2013, 45, 387–406. [Google Scholar] [CrossRef]
  153. Chen, J.; Chen, X.; Liu, W. Complete inverse method using ant colony optimization algorithm for structural parameters and excitation identification from output only measurements. Math. Probl. Eng. 2014, 185487. [Google Scholar] [CrossRef]
  154. Du, D.-C.; Vinh, H.-H.; Trung, V.-D.; Quyen, N.-T.H.; Trung, N.-T. Efficiency of Jaya algorithm for solving the optimization based structural damage identification problem based on a hybrid objective function. Eng. Optim. 2018, 50, 1233–1251. [Google Scholar] [CrossRef]
  155. Dinh-Cong, D.; Vo-Duy, T.; Ho-Huu, V.; Nguyen-Thoi, T. Damage assessment in plate-like structures using a two-stage method based on modal strain energy change and Jaya algorithm. Inverse Probl. Sci. Eng. 2019, 27, 166–189. [Google Scholar] [CrossRef]
  156. Liu, R.; Kumar, A.; Chen, Z.; Agrawal, A.; Sundararaghavan, V.; Choudhary, A. A predictive machine learning approach for microstructure optimization and materials design. Sci. Rep. 2015, 5, 11551. [Google Scholar] [CrossRef] [Green Version]
  157. Yang, X.S. Metaheuristic algorithms for inverse problems. Int. J. Innov. Comput. Appl. 2013, 5, 76–84. [Google Scholar] [CrossRef]
  158. Gandomi, A.H.; Yang, X.S.; Talatahari, S.; Alavi, A.H. (Eds.) Metaheuristic Applications in Structures and Infrastructures; Elsevier: Waltham, MA, USA, 2013. [Google Scholar]
  159. Kaveh, A. Applications of Metaheuristic Optimization Algorithms in Civil Engineering; Springer International Publishing: Basel, Switzerland, 2017. [Google Scholar]
  160. Mendes Platt, G.; Yang, X.S.; Silva Neto, A.J. Computational Intelligence Optimization and Inverse Problems with Applications in Engineering, 1st ed.; Springer: New York, NY, USA, 2018. [Google Scholar]
  161. Alkayem, N.F.; Cao, M.; Su, Z. Structural damage detection using finite element model updating with evolutionary algorithms: a survey. Neural Comput. Appl. 2018, 30, 389–411. [Google Scholar] [CrossRef] [PubMed]
  162. Lamberti, L.; Pappalettere, C. Weight optimization of skeletal structures with multipoint simulated annealing. Comput. Model. Eng. Sci. 2007, 18, 183–221. [Google Scholar]
  163. Lamberti, L. An efficient simulated annealing algorithm for design optimization of truss structures. Comput. Struct. 2008, 86, 1936–1953. [Google Scholar] [CrossRef]
  164. Lamberti, L.; Pappalettere, C. An improved harmony-search algorithm for truss structure optimization. In Proceedings of the Twelfth International Conference on Civil, Structural and Environmental Engineering Computing, Funchal, Portugal, 1–4 September 2009. [Google Scholar]
  165. Lamberti, L.; Pappalettere, C. Truss weight minimization using hybrid Harmony Search and Big Bang-Big Crunch algorithms. In Metaheuristic Applications in Structures and Infrastructures; Gandomi, A.H., Yang, X.S., Talatahari, S., Alavi, A.H., Eds.; Elsevier: Waltham, MA, USA, 2013; Chapter 9; pp. 207–240. [Google Scholar]
  166. Degertekin, S.O.; Lamberti, L. Comparison of hybrid metaheuristic algorithms for truss weight optimization. In Proceedings of the Third International Conference on Soft Computing Technology in Civil, Structural and Environmental Engineering, Cagliari, Italy, 3–6 September 2013. [Google Scholar]
  167. Lamberti, L.; Pappalettere, C. A fast big bang-big crunch optimization algorithm for weight minimization of truss structures. In Proceedings of the Second International Conference on Soft Computing Technology in Civil, Structural and Environmental Engineering, Chania, Greece, 6–9 September 2011. [Google Scholar]
  168. Carbas, S.; Saka, M.P. Optimum topology design of various geometrically nonlinear latticed domes using improved harmony search method. Struct. Multidiscip. Optim. 2012, 45, 377–399. [Google Scholar] [CrossRef]
  169. Kazemzadeh Azad, S.; Kazemzadeh Azad, S.; Hasançebi, O. Structural optimization using big bang-big crunch algorithm: a review. Int. J. Optim. Civil Eng. 2016, 6, 433–445. [Google Scholar]
  170. Hasancebi, O.; Erdal, F.; Saka, M.P. Adaptive harmony search method for structural optimization. ASCE J. Struct. Eng. 2010, 136, 419–431. [Google Scholar] [CrossRef]
  171. Degertekin, S.O. Improved harmony search algorithms for sizing optimization of truss structures. Comput. Struct. 2012, 92–93, 229–241. [Google Scholar] [CrossRef]
  172. Kazemzadeh Azad, S.; Hasançebi, O.; Kazemzadeh Azad, S. Upper bound strategy in optimum design of truss structures: a big bang-big crunch algorithm based application. Adv. Struct. Eng. 2013, 16, 1035–1046. [Google Scholar] [CrossRef]
  173. MATLAB®; Release 2018b; The MathWorks: Natick, MA, USA, 2018.
  174. ANSYS®; Version 14.5; Theory and User’s Manual; ANSYS Inc.: Canonsburg, PA, USA, 2012.
  175. Degertekin, S.O.; Lamberti, L.; Ugur, B.I. Sizing, layout and topology design optimization of truss structures using the Jaya algorithm. Appl. Soft Comput. 2018, 70, 903–928. [Google Scholar] [CrossRef]
  176. Degertekin, S.O.; Lamberti, L.; Ugur, B.I. Discrete sizing/layout/topology optimization of truss structures with an advanced Jaya algorithm. Appl. Soft Comput. 2019, 79, 363–390. [Google Scholar] [CrossRef]
  177. Sciammarella, C.A.; Lamberti, L.; Sciammarella, F.M. Optical holography reconstruction of nano-objects. In Holography, Research and Technologies; Rosen, J., Ed.; Intech Open: Rijeka, Croatia, 2011; Chapter 9; pp. 191–216. [Google Scholar]
  178. de Monsabert, B.G.; Rao, G.; Gay, A.; Berton, E.; Vigouroux, L. A scaling method to individualise muscle force capacities in musculoskeletal models of the hand and wrist using isometric strength measurements. Med. Biol. Eng. Comput. 2017, 55, 2227–2244. [Google Scholar] [CrossRef] [PubMed]
  179. Ribeiro, R.F.; Ritto, T.G.; Campos Velho, H.F.; Herskovits, J. Damage identification in a multi-DOF system under uncertainties using optimization algorithms. J. Appl. Comput. Mech. 2018, 4, 365–374. [Google Scholar]
  180. Genovese, K.; Lamberti, L.; Pappalettere, C. A comprehensive ESPI based system for combined measurement of shape and deformation of electronic components. Opt. Lasers Eng. 2004, 42, 543–562. [Google Scholar] [CrossRef]
  181. Sciammarella, C.A.; Lamberti, L.; Boccaccio, A. A general model for moiré contouring. Part I: Theory. Opt. Eng. 2008, 47, 033605. [Google Scholar] [CrossRef]
  182. Sciammarella, C.A.; Lamberti, L.; Boccaccio, A.; Cosola, E.; Posa, D. A general model for moiré contouring. Part II: Applications. Opt. Eng. 2008, 47, 033606. [Google Scholar] [CrossRef]
  183. Holo-Moiré Strain Analyzer (HoloStrain), Version 2.0; General Stress Optics, Inc.: Chicago, IL, USA, 2013.
Figure 1. Flow chart of the HFSA algorithm developed in this research.
Figure 2. Flow chart of the HFHS algorithm developed in this research.
Figure 3. Determination of sensitivities for HFHS and HFBBBC. Red dots represent designs stored in the population. Since the vectors (Xk − XOPT) are non-descent directions, i.e., (Xk − XOPT)T∇Ω(XOPT) > 0, the opposite vectors are considered in order to perturb the design along descent directions.
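As an illustration of the rule stated in this caption, the short Python sketch below flips any direction (Xk − XOPT) whose dot product with the gradient of Ω at XOPT is positive, so that designs are perturbed along descent directions. It is only a schematic of this single step under the assumption that the gradient at XOPT is available; it does not reproduce the full HFHS/HFBBBC sensitivity procedure.

```python
import numpy as np

def descent_directions(population, x_opt, grad_opt):
    # For each design X_k stored in the population, build the direction
    # (X_k - X_OPT). If it is a non-descent direction, i.e. its dot product
    # with the gradient of Omega at X_OPT is positive, take the opposite
    # vector so that the design is perturbed along a descent direction.
    directions = []
    for x_k in population:
        d = np.asarray(x_k, dtype=float) - np.asarray(x_opt, dtype=float)
        if float(np.dot(d, grad_opt)) > 0.0:
            d = -d
        directions.append(d)
    return directions
```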
Figure 4. Flow chart of the HFBBBC algorithm developed in this research.
Figure 5. Determination of trial points for hybrid HFBBBC when Ω(XCM) > Ω(XOPT). The red dots represent designs stored in the population.
Figure 6. (a) Woven composite laminate to be characterized; (b) Schematic of ESPI setup sensitive to u-displacements; (c) Phase pattern of ESPI fringes; (d) Finite element model simulating the experiment (control paths are also indicated).
Figure 7. Comparison of convergence behavior of different optimizers in the woven composite laminate identification problem.
Figure 8. (a) Experimental setup used in the identification problem of the axially compressed composite panel; (b) Phase pattern at the onset of buckling; (c) Finite element model simulating the experiment (control path AB is also indicated).
Figure 9. Comparison of convergence behavior of different optimizers in the axially compressed panel identification problem.
Figure 10. (a) Experimental setup used in the identification problem of the bovine pericardium patch; (b) Modulation of printed and projected gratings from specimen deformation; (c) Finite element model simulating the experiments (control paths hc and vc are also indicated).
Figure 11. Comparison of convergence behavior of different optimizers in the bovine pericardium patch identification problem.
Figure 12. Sensitivity of HFSA, HFHS and HFBBBC to initial population/starting point for the bovine pericardium patch identification problem.
Table 1. Results of the identification process carried out for the woven composite laminate.

| Material Properties | HFSA [Present] | SA-NGR | SA-Grad [77] | HFHS [Present] | HS-NGR | HFBBBC [Present] | BBBC-NGR |
|---|---|---|---|---|---|---|---|
| Ex | 24953 | 24838 | 25043 | 25034 | 25074 | 25047 | 25051 |
| Ey | 22051 | 22207 | 22034 | 21995 | 21913 | 21978 | 21861 |
| νxy | 0.280 | 0.278 | 0.279 | 0.279 | 0.279 | 0.279 | 0.279 |
| Gxy | 5008 | 5010 | 5000 | 5005 | 5017 | 5003 | 4994 |
| Error on material properties (%) | Aver: 0.145; Max: 0.232 | Aver: 0.626; Max: 0.941 | Aver: 0.171; Max: 0.357 | Aver: 0.154; Max: 0.357 | Aver: 0.347; Max: 0.395 | Aver: 0.176; Max: 0.357 | Aver: 0.328; Max: 0.632 |
| Residual error on displacements (%) | Aver: 0.545; Max: 2.532 | Aver: 0.591; Max: 2.788 | Aver: 0.643; Max: 2.736 | Aver: 0.580; Max: 2.689 | Aver: 0.622; Max: 2.903 | Aver: 0.640; Max: 2.734 | Aver: 0.626; Max: 2.899 |
| Optimization iterations | 14 (1-D SA = 1) | 17 (1-D SA = 5) | 22 (1-D SA = 10) | 13 | 15 | 12 (Nexp = 1) | 13 (Nexp = 4) |
| FE analyses | 257 | 332 | 388 | 210 | 242 | 222 | 274 |
Table 2. Results of the identification process carried out for the axially compressed composite panel.

| Material Properties | HFSA [Present] | SA-NGR | SA-Grad [77] | HFHS [Present] | HS-NGR | HFBBBC [Present] | BBBC-NGR |
|---|---|---|---|---|---|---|---|
| Ex | 148708 | 149626 | 148730 | 147928 | 146831 | 148343 | 147406 |
| Ey | 7479 | 7403 | 7516 | 7491 | 7464 | 7499 | 7523 |
| νxy | 0.01508 | 0.01502 | 0.01520 | 0.01510 | 0.01527 | 0.01509 | 0.01517 |
| Gxy | 4140 | 4107 | 4107 | 4145 | 4160 | 4160 | 4163 |
| θ0 | 0.001021 | 0.001022 | 0.0009930 | 0.003492 | 0.003599 | 0.003243 | 0.0009077 |
| θ90 | 90.223 | 89.347 | 90.800 | 89.904 | 89.720 | 90.274 | 90.685 |
| θ45 | 44.938 | 45.055 | 44.975 | 45.113 | 45.359 | 44.975 | 44.905 |
| Error (%) on parameters | Aver: 0.197; Max: 0.389 | Aver: 0.617; Max: 0.894 | Aver: 0.597; Max: 0.889 | Aver: 0.213; Max: 0.550 | Aver: 0.649; Max: 1.126 | Aver: 0.266; Max: 0.658 | Aver: 0.596; Max: 0.980 |
| Error on mode shape (%) | Aver: 1.554; Max: 2.281 | Aver: 2.088; Max: 3.453 | Aver: 2.107; Max: 3.580 | Aver: 1.680; Max: 2.504 | Aver: 2.072; Max: 3.426 | Aver: 1.889; Max: 2.735 | Aver: 2.103; Max: 3.497 |
| Optimization iterations | 13 (1-D SA = 2) | 16 (1-D SA = 5) | 20 (1-D SA = 6) | 14 | 18 | 15 (Nexp = 2) | 20 (Nexp = 11) |
| FE analyses | 596 | 863 | 935 | 478 | 672 | 431 | 660 |
Table 3. Results of the identification process carried out for the bovine pericardium patch.

| Material Properties | HFSA [Present] | SA-NGR | SA-Grad Derived from [83,84] | HFHS [Present] | HS-NGR | HFBBBC [Present] | BBBC-NGR |
|---|---|---|---|---|---|---|---|
| a1 | 198.961 | 198.787 | 199.255 | 198.837 | 200.989 | 199.246 | 199.362 |
| a2 | 126.885 | 127.046 | 126.110 | 125.858 | 127.761 | 125.361 | 127.680 |
| a3 | 135.601 | 134.819 | 135.758 | 135.587 | 135.349 | 136.194 | 136.205 |
| b1 | 386.003 | 385.440 | 388.077 | 388.455 | 390.833 | 388.284 | 387.069 |
| b2 | 169.684 | 170.439 | 169.234 | 169.245 | 169.568 | 168.627 | 170.263 |
| b3 | 187.298 | 185.346 | 187.116 | 187.383 | 188.867 | 187.568 | 187.567 |
| c2 | 197.367 | 197.407 | 197.506 | 197.783 | 197.959 | 197.896 | 198.615 |
| c3 | 89.111 | 89.568 | 89.359 | 89.192 | 89.594 | 89.064 | 89.661 |
| c4 | 173.820 | 173.634 | 174.382 | 174.239 | 174.382 | 173.938 | 174.938 |
| c5 | 169.835 | 171.097 | 169.645 | 170.031 | 170.209 | 169.607 | 169.833 |
| c6 | 147.876 | 149.953 | 148.225 | 148.261 | 147.013 | 148.060 | 148.043 |
| d2 | 159.260 | 159.449 | 158.541 | 158.699 | 158.268 | 158.595 | 158.417 |
| d3 | 21.692 | 21.794 | 21.608 | 21.705 | 21.599 | 21.525 | 21.512 |
| d4 | 69.112 | 69.275 | 69.229 | 69.063 | 69.682 | 69.041 | 69.280 |
| d5 | 167.501 | 166.533 | 168.032 | 169.194 | 168.956 | 168.035 | 166.372 |
| d6 | 102.293 | 102.551 | 102.076 | 101.687 | 101.573 | 102.410 | 102.654 |
| cosθf | 0.6838 | 0.6837 | 0.6837 | 0.6839 | 0.6776 | 0.6837 | 0.6834 |
| Errors on properties (%) | Aver: 0.256; Max: 0.615 | Aver: 0.565; Max: 1.166 | N/A | Aver: 0.196; Max: 0.692 | Aver: 0.516; Max: 1.309 | Aver: 0.206; Max: 0.594 | Aver: 0.376; Max: 1.245 |
| Residual errors on utot (%) | Aver: 1.141; Max: 2.845 | Aver: 1.666; Max: 3.254 | Aver: 1.560; Max: 3.167 | Aver: 1.070; Max: 2.903 | Aver: 1.533; Max: 3.089 | Aver: 1.097; Max: 2.904 | Aver: 1.658; Max: 3.226 |
| Optimization iterations | 25 (1-D SA = 2) | 27 (1-D SA = 8) | 34 (1-D SA = 12) | 24 | 31 | 25 (Nexp = 2) | 33 (Nexp = 9) |
| FE analyses | 1373 | 1829 | 1972 | 1223 | 1807 | 1294 | 1668 |
Table 4. Bovine pericardium patch identification problem: sensitivity of HFHS, HFBBBC and HFSA algorithms to initial design/population.

| Material Properties | HFSA (NPOP = 30) | HFSA (NPOP = 90) | HFHS (NPOP = 30) | HFHS (NPOP = 90) | HFBBBC (NPOP = 30) | HFBBBC (NPOP = 90) |
|---|---|---|---|---|---|---|
| a1 | 198.961 | 198.493 | 198.837 | 199.408 | 199.246 | 198.978 |
| a2 | 126.885 | 126.010 | 125.858 | 126.093 | 125.361 | 125.888 |
| a3 | 135.601 | 136.301 | 135.587 | 135.970 | 136.194 | 135.857 |
| b1 | 386.003 | 387.236 | 388.455 | 388.961 | 388.284 | 390.563 |
| b2 | 169.684 | 169.151 | 169.245 | 168.943 | 168.627 | 168.873 |
| b3 | 187.298 | 186.850 | 187.383 | 186.345 | 187.568 | 185.810 |
| c2 | 197.367 | 197.909 | 197.783 | 197.181 | 197.896 | 197.948 |
| c3 | 89.111 | 89.247 | 89.192 | 89.809 | 89.064 | 89.264 |
| c4 | 173.820 | 174.274 | 174.239 | 174.325 | 173.938 | 174.047 |
| c5 | 169.835 | 169.719 | 170.031 | 169.600 | 169.607 | 169.458 |
| c6 | 147.876 | 149.097 | 148.261 | 148.307 | 148.060 | 148.189 |
| d2 | 159.260 | 157.591 | 158.699 | 157.380 | 158.595 | 158.459 |
| d3 | 21.692 | 21.697 | 21.705 | 21.725 | 21.525 | 21.714 |
| d4 | 69.112 | 69.182 | 69.063 | 69.354 | 69.041 | 69.287 |
| d5 | 167.501 | 168.849 | 169.194 | 168.994 | 168.035 | 169.337 |
| d6 | 102.293 | 102.550 | 101.687 | 102.141 | 102.410 | 102.204 |
| cosθf | 0.6838 | 0.6851 | 0.6839 | 0.6837 | 0.6837 | 0.6846 |
| Errors on properties (%) | Aver: 0.256; Max: 0.615 | Aver: 0.266; Max: 0.599 | Aver: 0.196; Max: 0.692 | Aver: 0.231; Max: 0.732 | Aver: 0.206; Max: 0.594 | Aver: 0.250; Max: 0.777 |
| Residual errors on utot (%) | Aver: 1.141; Max: 2.845 | Aver: 1.257; Max: 2.896 | Aver: 1.070; Max: 2.903 | Aver: 1.188; Max: 2.907 | Aver: 1.097; Max: 2.904 | Aver: 1.201; Max: 2.962 |
| Optimization iterations | 25 (1-D SA = 2) | 24 (1-D SA = 2) | 24 | 25 | 25 (Nexp = 2) | 26 (Nexp = 2) |
| FE analyses | 1373 | 1396 | 1223 | 1414 | 1294 | 1377 |
