Article

Impact Location and Quantification on an Aluminum Sandwich Panel Using Principal Component Analysis and Linear Approximation with Maximum Entropy

Department of Mechanical Engineering, Universidad de Chile, Beauchef 851, Santiago 8370456, Chile
* Author to whom correspondence should be addressed.
Entropy 2017, 19(4), 137; https://doi.org/10.3390/e19040137
Submission received: 4 January 2017 / Revised: 7 March 2017 / Accepted: 19 March 2017 / Published: 25 March 2017
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

Abstract

To avoid structural failures, it is of critical importance to detect, locate, and quantify impact damage as soon as it occurs. This can be achieved by impact identification methodologies, which continuously monitor the structure, detecting, locating, and quantifying impacts as they occur. This article presents an improved impact identification algorithm that uses principal component analysis (PCA) to extract features from the monitored signals and an algorithm based on linear approximation with maximum entropy to estimate the impacts. The proposed methodology is validated with two experimental applications, which include an aluminum plate and an aluminum sandwich panel. The results are compared with those of other impact identification algorithms available in the literature, demonstrating that the proposed method outperforms these algorithms.

1. Introduction

Their high stiffness and strength at minimum weight make sandwich structures attractive for applications where weight saving is critical. Consequently, in recent years the applications of sandwich structures have increased rapidly and now include satellites, spacecraft, aircraft, ships, automobiles, rail cars, wind energy systems, and bridge construction, among others [1]. Despite these advantages, such structures can experience damage due to impact loads. To avoid catastrophic failures, it is of critical importance to detect the presence of impact damage as soon as it occurs. However, this type of damage, known as barely visible impact damage (BVID), is usually internal and leaves no visible indication of its presence on the surface.
BVID can be detected by non-destructive testing (NDT) techniques such as X-ray inspection, C-scan, or visual inspection. Nevertheless, these techniques are time-consuming, require access to the portion of the structure being inspected, and can only be performed when the system is out of service, which can be impractical in some cases. Impact identification methodologies have been proposed as a complement to NDT. These methodologies continuously monitor the structure, detecting, locating, and quantifying impacts as they occur. The extent of impact damage is correlated with the impact energy. Therefore, by locating and quantifying impacts over a structure it is feasible to predict possible damage locations, which allows inspections to be scheduled only when they are necessary and to be restricted to localized searches, saving inspection time.
Impact identification methodologies can be divided into two groups: model-based and data-driven algorithms. Model-based impact identification involves the solution of a nonlinear inverse problem that requires the numerical model to be evaluated several times [2,3], which can be exceedingly slow for real-time applications. In addition, model-based algorithms rely on the precision of the numerical model, and any error in the numerical model is interpreted as an impact identification error. Within data-driven algorithms, methodologies based on classification, pattern recognition, and regression have been proposed, with artificial neural networks (ANNs) being the most frequently used. Worden and Staszewski [4] and Staszewski et al. [5] used two feed-forward multi-layer perceptron (MLP) networks to identify impacts on a composite plate. The first network was trained to detect the impact location, whereas the second quantified the impact magnitude. Haywood et al. [6] investigated two approaches to locate impacts in a composite panel with embedded piezoceramic sensors: an MLP network and a triangulation procedure based on a genetic algorithm (GA). They concluded that both approaches provide a similar degree of accuracy. LeClerc et al. [7] applied a two-step impact detection algorithm to an aircraft component: first, a classification network finds the region of the impact, and afterwards another network localizes its position. With this methodology the researchers obtained better results than those of a single neural network. Sharif-Khodaei et al. [8] trained a neural network to predict the impact location on a composite-stiffened panel. The network was trained and tested with data obtained from a finite element model of the structure, and the numerical model was validated with experimental data.
Although ANNs can process data very quickly, their slow learning speed and the large number of parameters that must be tuned during training are drawbacks in their application. The parametric study performed by Sharif-Khodaei et al. [8] demonstrated that the performance of a neural network for impact localization strongly depends on the network architecture (number of layers and number of hidden nodes) and network properties (transfer functions and training algorithm). In addition, ANNs have the disadvantages of over-fitting and of getting stuck in local minima. An alternative is support vector machines (SVMs), which exhibit the advantage of global optimization and higher generalization capabilities than ANNs. Least squares support vector machines (LS-SVMs) further simplify the regression to a problem that can be solved from a set of linear equations [9]. Xu [10] implemented an LS-SVM to locate and quantify impacts on an aluminum plate and compared the results with those of an ANN approach, demonstrating that LS-SVMs reach more accurate results. Fu and Xu [11] proposed a two-layer SVM to predict the location of impacts on an aluminum plate structure; the input data were obtained from a principal component analysis (PCA) of the strain-time signals of piezoelectric sensors located over the plate. The results were compared with those of an ANN, concluding that SVMs are capable of accomplishing better impact localization accuracy than ANNs. To overcome the slow learning speed of ANNs, Huang et al. [12] proposed a new learning algorithm called the extreme learning machine (ELM), which is suitable for single-layer feed-forward networks. This algorithm provides good generalization at fast learning speeds, and the only parameter that needs to be tuned is the number of hidden nodes. Xu [13] compared the performance of a basic ELM, a kernel-ELM, and LS-SVMs for impact localization on an aluminum plate, concluding that the kernel-ELM is as precise as LS-SVMs, with lower training and evaluation times. Fu et al. [14] implemented a kernel-ELM for impact localization using PCA to extract features; the accuracy is similar to that of the SVM, but the kernel-ELM is faster, making it suitable for real-time applications. The time-reversal approach has also been presented as a precise alternative for impact localization [15,16,17,18]. This is a one-class nearest-neighbor algorithm in which the correlation between different impacts is measured by the convolution of their signals.
Meruane and Ortiz-Bernardin [19] presented a supervised learning algorithm that uses a linear approximation handled by a statistical inference model based on the maximum-entropy principle, denoted linear approximation with maximum entropy (LME). The merits of this approach are threefold: only one parameter needs to be selected, the regression is determined by solving a convex optimization problem that has a unique solution, and the data are processed in a period of time comparable to that of other regression algorithms such as ANNs and SVMs. In addition, LME does not require classical training, in which the algorithm is trained once and the training data are then discarded; instead, it performs a linear approximation using the training data to estimate each impact. The main advantage is that new data can easily be incorporated into the training database without re-training the algorithm, as is required for ANNs and SVMs. The LME algorithm was originally developed for damage assessment [19,20], and Sanchez et al. [21] applied it to impact identification, demonstrating that, in this case, LME provides better performance than ANNs and SVMs.
The principle of an impact identification algorithm is to detect, locate, and quantify an impact force with a passive system consisting of piezoelectric sensors distributed over the structure. Nonetheless, the amount of strain-time data collected by the sensors is too large to be used directly in a classification or regression algorithm. Therefore, preprocessing of the data is necessary to extract relevant features. In the literature, different features extracted from the signals in the time domain have been studied; some examples are the time of arrival, the time and amplitude of the first peak, and the time and amplitude of the maximum of the signal, among others [21]. Fu et al. [11,14] showed that using PCA to extract features from the time signals greatly improves the impact localization results.
The primary contribution of this research is the development of a novel impact identification algorithm that uses LME in conjunction with PCA. The algorithm implemented by Sanchez et al. [21] uses features extracted from the time signal, such as the time of arrival, the signal amplitude, and information related to the first peak; the main difference between the proposed algorithm and the one presented in [21] is therefore the set of input features. In addition, our algorithm is evaluated with a stiffened aluminum sandwich panel, which represents a more challenging impact identification problem than the aluminum plate used in references [11,14,21]. The results are compared with those of other impact identification algorithms available in the literature [10,11,13,14,15,21], demonstrating that the proposed method outperforms these algorithms.
The remainder of this work is structured as follows: Section 2 presents the proposed impact identification methodology. Section 3 describes the experimental applications and results. Section 4 compares the results that are delivered by our proposed method with those of other impact identification algorithms available in literature. Finally, conclusions and forthcoming work are presented in Section 5.

2. Impact Identification Methodology

2.1. Data Acquisition

Figure 1 illustrates a typical experimental setup for impact identification, where piezoelectric sensors bonded to the structure detect surface stress waves generated by the impact. Figure 2 shows an example of a strain-time signal obtained from an impact test and the signal envelope obtained through the Hilbert transform [22].
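As a minimal sketch (function and variable names are illustrative and not taken from the authors' released code), the envelope of a strain-time signal can be computed with the Hilbert transform as follows:

```python
import numpy as np
from scipy.signal import hilbert

def signal_envelope(z: np.ndarray) -> np.ndarray:
    """Envelope of a 1-D strain-time signal z, via the Hilbert transform."""
    analytic = hilbert(z)      # analytic signal z + i*H{z}
    return np.abs(analytic)    # its magnitude is the signal envelope
```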
The training data set consists of the time response envelopes of the sensors for impacts at different locations on the structure. Let us consider $n_s$ piezoelectric sensors distributed over the structure, which measure the response at $q$ training impacts. During an impact, the time response of each sensor is recorded at $p$ data points and the envelope is computed through the Hilbert transform. The response data correspond to the voltages provided by the sensors; since all sensors are identical, the response data are in the same scale and units for all sensors. The information of the $i$-th sensor is arranged in the matrix $\mathbf{Z}_i \in \mathbb{R}^{p \times q}$ as follows,
$$\mathbf{Z}_i = \begin{bmatrix} z_{i1}(1) & z_{i2}(1) & z_{i3}(1) & \cdots & z_{iq}(1) \\ z_{i1}(2) & z_{i2}(2) & z_{i3}(2) & \cdots & z_{iq}(2) \\ z_{i1}(3) & z_{i2}(3) & z_{i3}(3) & \cdots & z_{iq}(3) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ z_{i1}(p) & z_{i2}(p) & z_{i3}(p) & \cdots & z_{iq}(p) \end{bmatrix}, \qquad (1)$$
where $z_{ij}(k)$ is the $k$-th response envelope data point of the $i$-th sensor at the $j$-th impact. Since the number of data points is too large to be used directly in a regression algorithm, preprocessing of the data using principal component analysis (PCA) is performed.

2.2. Principal Component Analysis

The objective of PCA is to reduce the dimensionality of a data set, while retaining as much as possible the variation present in the data set. This is achieved by transforming it to a new set of uncorrelated variables known as the principal components (PCs).
PCA is defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by any projection lies on the first PC, the second greatest variance on the second PC, and so on.
Let $\mathbf{Z}_i \in \mathbb{R}^{p \times q}$ be a data matrix with the strain-time response of the $i$-th sensor, as described in Equation (1). In general, it is recommended to normalize the data before performing PCA, because variables with a large variance would otherwise dominate the first PCs. Nevertheless, in this case the scale of the variables matters: it is related to the location and magnitude of the impacts, and it is important to retain that information. Therefore, we have chosen not to normalize the data.
The goal of PCA is to find a matrix $\mathbf{P}_i \in \mathbb{R}^{n_i \times p}$ that provides a linear mapping from the original dimension $p$ to a lower dimension $n_i$. A new matrix $\mathbf{S}_i \in \mathbb{R}^{n_i \times q}$, called the score matrix, is obtained from:
$$\mathbf{S}_i = \mathbf{P}_i \mathbf{Z}_i. \qquad (2)$$
The $n_i$ rows of $\mathbf{P}_i$ are the principal components of $\mathbf{Z}_i$. $\mathbf{P}_i$ can be calculated by extracting the leading $n_i$ eigenvectors of the covariance matrix of $\mathbf{Z}_i$, $\mathbf{C}_i$, which is given by:
$$\mathbf{C}_i = \frac{1}{n-1} \mathbf{Z}_i \mathbf{Z}_i^{T}. \qquad (3)$$
The score matrix, $\mathbf{S}_i$, contains the features associated with the $i$-th sensor. The feature matrix, $\mathbf{S} \in \mathbb{R}^{n \times q}$, is obtained by assembling the features from all the sensors as:
$$\mathbf{S} = \begin{bmatrix} \mathbf{S}_1 \\ \mathbf{S}_2 \\ \vdots \\ \mathbf{S}_{n_s} \end{bmatrix} = \begin{bmatrix} \mathbf{X}_1 & \mathbf{X}_2 & \mathbf{X}_3 & \cdots & \mathbf{X}_q \end{bmatrix}, \qquad (4)$$
where $n_s$ is the number of piezoelectric sensors and $n$ is the total number of features. The $j$-th column of $\mathbf{S}$, $\mathbf{X}_j$, corresponds to the PCs associated with the $j$-th impact.
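A minimal sketch of this feature extraction step is given below (assuming NumPy; the function names are illustrative and do not come from the authors' code). Each sensor's principal components are obtained from the eigenvectors of the uncentred, unnormalized covariance-like matrix of Equation (3), and the per-sensor score matrices are stacked into the feature matrix of Equation (4).

```python
import numpy as np

def sensor_scores(Z: np.ndarray, n_pc: int):
    """Z: (p, q) envelope data of one sensor. Returns (P_i, S_i) with S_i = P_i @ Z."""
    C = Z @ Z.T / (Z.shape[1] - 1)              # covariance-like matrix of Equation (3)
    eigval, eigvec = np.linalg.eigh(C)          # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1][:n_pc]     # indices of the n_pc largest eigenvalues
    P = eigvec[:, order].T                      # (n_pc, p) principal components
    return P, P @ Z                             # score matrix S_i, Equation (2)

def feature_matrix(Z_list, n_pc_list):
    """Stack the score matrices of all sensors into the feature matrix S (Equation (4))."""
    return np.vstack([sensor_scores(Z, n)[1] for Z, n in zip(Z_list, n_pc_list)])
```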

2.3. Linear Approximation with Maximum Entropy

The observation vector, defined as $\mathbf{Y}_j = \{Y_{1j}, Y_{2j}, Y_{3j}\} \in \mathbb{R}^3$, contains the information related to the impact location and magnitude, where $Y_{1j}, Y_{2j}$ are the $x$ and $y$ coordinates of the force location in mm and $Y_{3j}$ is the force magnitude in N. The feature vector $\mathbf{X}_j = \{X_{1j}, X_{2j}, \ldots, X_{nj}\} \in \mathbb{R}^n$, which is obtained from the $j$-th column of $\mathbf{S}$ in Equation (4), represents the set of PCs associated with the $j$-th observation vector $\mathbf{Y}_j$. Assuming that a database with a set of $N$ impacts is constructed, each impact is characterized by an observation vector and a feature vector. Therefore the database is formed by $N$ pairs of observation and feature vectors: $(\mathbf{X}_1, \mathbf{Y}_1), (\mathbf{X}_2, \mathbf{Y}_2), \ldots, (\mathbf{X}_N, \mathbf{Y}_N)$. The central problem in impact identification is: given a certain feature vector $\mathbf{X}$, estimate the corresponding observation $\mathbf{Y}$. The nearest neighbor regression estimate of $\mathbf{Y}$ is given by:
$$\hat{\mathbf{Y}} = \sum_{j=1}^{k} w_j(\mathbf{X}) \, \mathbf{Y}_j, \qquad (5)$$
where $\mathbf{Y}_1, \mathbf{Y}_2, \ldots, \mathbf{Y}_k$ are the observation vectors associated with the $k$ closest neighbors to the test vector $\mathbf{X}$, and $w_1(\mathbf{X}), w_2(\mathbf{X}), \ldots, w_k(\mathbf{X})$ are weighting functions. The $k$ nearest neighbor ($k$-NN) algorithm weights each neighbor equally, thus $w_i(\mathbf{X}) = 1/k$ for $i = 1$ to $k$. A kernel nearest neighbor algorithm bases the weightings on the distance from the test vector $\mathbf{X}$ to each vector in the database [23].
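For reference, a minimal sketch of this equally weighted $k$-NN baseline (not the proposed method; names are illustrative) is:

```python
import numpy as np

def knn_estimate(X, X_db, Y_db, k):
    """Equation (5) with w_i = 1/k: average of the k nearest database observations."""
    d = np.linalg.norm(X_db - X, axis=1)   # Euclidean distances to the test vector
    idx = np.argsort(d)[:k]                # indices of the k closest neighbors
    return Y_db[idx].mean(axis=0)          # equal weighting
```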
On the other hand, linear approximation takes the $N$ feature vectors in the database and uses a linear combination of them to represent $\mathbf{X}$ as [24]:
$$\mathbf{X} = \sum_{j=1}^{N} w_j(\mathbf{X}) \, \mathbf{X}_j, \qquad \sum_{j=1}^{N} w_j(\mathbf{X}) = 1, \qquad (6)$$
where $\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_N$ are the feature vectors in the database. Once the weighting functions are determined, $\mathbf{Y}$ is estimated from Equation (5) with $k = N$. Typically, Equation (6) is underdetermined, and its solution can be tackled via an unconstrained optimization technique of the least-squares family. However, these methods produce some negative weights, which lack physical meaning. An alternative that produces positive weights is obtained via the maximum-entropy (max-ent) variational principle [25].
The notion of entropy in information theory was introduced by Shannon as a measure of uncertainty [26]. Later on, Jaynes [25] postulated the maximum-entropy principle as a rational means for least-biased statistical inference when insufficient information is available. The maximum-entropy principle is suitable for finding the least-biased probability distribution when there are fewer constraints than unknowns and is posed as follows:
Consider a set of $N$ discrete events $\{x_1, \ldots, x_N\}$. The probability of each event is $p_i = p(x_i) \in [0, 1]$, with uncertainty $-\ln p_i$. The Shannon entropy $H(\mathbf{p}) = -\sum_{i=1}^{N} p_i \ln p_i$ is the amount of uncertainty represented by the distribution $\{p_1, \ldots, p_N\}$. The least-biased probability distribution, and the one most likely to occur, is obtained via the solution of the following optimization problem (maximum-entropy principle):
$$\max_{\mathbf{p} \in \mathbb{R}_+^N} \; H(\mathbf{p}) = -\sum_{i=1}^{N} p_i \ln p_i, \qquad (7a)$$
subject to the constraints:
$$\sum_{i=1}^{N} p_i = 1, \qquad \sum_{i=1}^{N} p_i \, g_r(x_i) = \langle g_r(x) \rangle, \qquad (7b)$$
where $\mathbb{R}_+^N$ is the non-negative orthant and $\langle g_r(x) \rangle$ is the known expected value of the functions $g_r(x)$ ($r = 0, 1, \ldots, m$). Comparison between Equations (6) and (7b) clearly reveals the analogy between linear approximation and the constraints to which the maximum entropy is subjected. Thus, on replacing the probability $p_i$ by the weighting function $w_i$, the function $g_r(x_i)$ by the feature vector $\mathbf{X}_i$, and the known expected value $\langle g_r(x) \rangle$ by the test vector $\mathbf{X}$, we have a method to compute the weighting functions that are needed to completely determine the impact estimate $\hat{\mathbf{Y}}$ in Equation (5).
The optimization problem in Equation (7) assigns probabilities to every $x_i$ in the set. Now, assume that the probability $p_i$ has an initial guess $m_i$, known as a prior, which reduces its uncertainty to $-\ln p_i + \ln m_i = -\ln(p_i/m_i)$. An alternative problem can be formulated by using this prior in Equation (7) [27]:
$$\max_{\mathbf{p} \in \mathbb{R}_+^N} \; H(\mathbf{p}) = -\sum_{i=1}^{N} p_i \ln \frac{p_i}{m_i}, \qquad (8a)$$
subject to the constraints:
$$\sum_{i=1}^{N} p_i = 1, \qquad \sum_{i=1}^{N} p_i \, g_r(x_i) = \langle g_r(x) \rangle. \qquad (8b)$$
In Equation (8), the variational principle associated with $\sum_{i=1}^{N} p_i \ln (p_i/m_i)$ is known as the principle of minimum relative (cross) entropy [28,29]. Depending upon the prior employed, the optimization problem in Equation (8) favors some $x_i$ in the set by assigning more probability to them, and eventually assigns non-zero probability ($p_i > 0$) to only a selected subset of the $x_i$ in the set. It can easily be seen that if the prior is constant, the Shannon-Jaynes entropy functional in Equation (7) is recovered as a particular case.
The database that is constructed for the impact identification method is usually formed by a large number of data elements. Data elements that are far from the impact estimate are not relevant and only introduce noise into the system. Therefore, we need a method to disregard such data elements. To this end, the relative entropy approach is suitable and thus we adopt it for our method. We use the prior function to cut off the data elements that are not relevant for the impact estimate. We also replace the probability $p_i$ and the discrete event $x_i$ in Equation (8) with the weighting function $w_i$ and the feature vector $\mathbf{X}_i$ of the linear approximation problem posed in Equation (6), respectively. Finally, the optimization problem that builds our impact identification algorithm reads:
$$\max_{\mathbf{w} \in \mathbb{R}_+^N} \; H(\mathbf{w}) = -\sum_{i=1}^{N} w_i(\mathbf{X}) \ln \frac{w_i(\mathbf{X})}{m_i(\mathbf{X})}, \qquad (9a)$$
subject to the constraints:
$$\sum_{i=1}^{N} w_i(\mathbf{X}) \, \tilde{\mathbf{X}}_i = \mathbf{0}, \qquad \sum_{i=1}^{N} w_i(\mathbf{X}) = 1, \qquad (9b)$$
where $\tilde{\mathbf{X}}_i = \mathbf{X}_i - \mathbf{X}$ has been introduced as a shifted measure for stability purposes. Different prior distributions can be used; typical choices are Gaussian, cubic spline, quartic spline, or constant [30]. Here we tested the four distributions and the best performance was obtained with a smooth Gaussian,
$$m_i(\mathbf{X}) = \exp\!\left( -\beta_i \, \| \tilde{\mathbf{X}}_i \|^2 \right), \qquad (10)$$
where $\beta_i = \gamma / h_i^2$; $\gamma$ is a parameter that controls the support of the Gaussian prior at $\mathbf{X}_i$, and therefore its associated weight function; and $h_i$ is a characteristic $n$-dimensional Euclidean distance between neighbors that can be distinct for each $\mathbf{X}_i$. In view of the optimization problem posed in Equation (9) for supervised learning, maximizing the entropy chooses the weight solution that commits the least to any single sample in the database [31].
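A possible implementation of these prior parameters is sketched below; taking $h_i$ as the distance from $\mathbf{X}_i$ to its nearest neighbor in the database is our illustrative choice, and the value of $\gamma$ is assumed given.

```python
import numpy as np

def prior_parameters(X_db: np.ndarray, gamma: float) -> np.ndarray:
    """beta_i = gamma / h_i^2, with h_i taken here as the nearest-neighbor distance of X_i."""
    D = np.linalg.norm(X_db[:, None, :] - X_db[None, :, :], axis=2)  # pairwise distances
    np.fill_diagonal(D, np.inf)                                      # ignore self-distances
    h = D.min(axis=1)                                                # characteristic distance h_i
    return gamma / h**2
```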
The solution of the max-ent optimization problem is obtained by the method of Lagrange multipliers, which yields [27]:
$$w_i(\mathbf{X}) = \frac{Z_i(\mathbf{X}; \boldsymbol{\lambda}^*)}{Z(\mathbf{X}; \boldsymbol{\lambda}^*)}, \qquad Z_i(\mathbf{X}; \boldsymbol{\lambda}^*) = m_i(\mathbf{X}) \exp\!\left( \boldsymbol{\lambda}^* \cdot \tilde{\mathbf{X}}_i \right), \qquad (11)$$
where $Z(\mathbf{X}; \boldsymbol{\lambda}^*) = \sum_j Z_j(\mathbf{X}; \boldsymbol{\lambda}^*)$, $\tilde{\mathbf{X}}_i = [\tilde{X}_{1i} \; \cdots \; \tilde{X}_{ni}]^T$, and $\boldsymbol{\lambda}^* = [\lambda_1^* \; \cdots \; \lambda_n^*]^T$ are the converged Lagrange multipliers.
In Equation (11), the Lagrange multiplier vector $\boldsymbol{\lambda}^*$ is the minimizer of the dual optimization problem posed in Equation (12) [27]:
$$\boldsymbol{\lambda}^* = \arg\min_{\boldsymbol{\lambda} \in \mathbb{R}^n} \; \ln Z(\mathbf{X}; \boldsymbol{\lambda}), \qquad (12)$$
which gives rise to the following system of nonlinear equations:
$$\mathbf{f}(\boldsymbol{\lambda}) = \nabla_{\boldsymbol{\lambda}} \ln Z(\boldsymbol{\lambda}) = \sum_{i=1}^{N} w_i(\mathbf{X}) \, \tilde{\mathbf{X}}_i = \mathbf{0}, \qquad (13)$$
where $\nabla_{\boldsymbol{\lambda}}$ stands for the gradient with respect to $\boldsymbol{\lambda}$. Once the converged $\boldsymbol{\lambda}^*$ is found, the weight functions are computed from Equation (11) and the impact force is estimated from Equation (5) with $k = N$.
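The following is a minimal sketch of this step (the function names, and the use of a general-purpose BFGS minimizer for the convex dual problem of Equation (12), are our assumptions rather than the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def lme_weights(X, X_db, beta):
    """X: (n,) test feature vector; X_db: (N, n) database features; beta: (N,) prior parameters.
    Returns the LME weights w_i(X) of Equation (11)."""
    Xt = X_db - X                              # shifted features, X~_i = X_i - X
    log_m = -beta * np.sum(Xt**2, axis=1)      # log of the Gaussian prior, Equation (10)

    def log_Z(lam):                            # dual objective ln Z(X; lambda), Equation (12)
        return logsumexp(log_m + Xt @ lam)

    def grad(lam):                             # gradient = sum_i w_i(X) X~_i, Equation (13)
        a = log_m + Xt @ lam
        w = np.exp(a - logsumexp(a))
        return Xt.T @ w

    res = minimize(log_Z, np.zeros(X.shape[0]), jac=grad, method="BFGS")
    a = log_m + Xt @ res.x
    return np.exp(a - logsumexp(a))            # converged weights

# The impact estimate of Equation (5) with k = N is then, for database observations Y_db (N, 3):
# Y_hat = lme_weights(X, X_db, beta) @ Y_db
```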

2.4. Impact Identification

The impact identification methodology consists of three main parts: building of the databases, selection of parameters and evaluation of the algorithm.

2.4.1. Building of the Databases

Three sets of impact data are acquired: one for training, one to set up the parameters of the identification algorithm, and one to evaluate the algorithm. In the three cases, each point is impacted once using an instrumented impact hammer. The structure is linear and, as a consequence, the response is proportional to the magnitude of the force. Therefore, the response to impacts of different magnitudes can be determined by simply multiplying the measured response by scaling factors. With this methodology, the initial training set is expanded to a new set with impacts of magnitudes between 5 N and 250 N.
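A sketch of this expansion is shown below (the scale factors are illustrative; the actual factors used to span 5 N to 250 N are not reproduced here). Because the Hilbert envelope of a scaled signal is the scaled envelope, the stored envelopes can be scaled directly.

```python
import numpy as np

def expand_training(Z: np.ndarray, F: np.ndarray, scales=(0.5, 1.0, 2.0)):
    """Z: (p, q) response envelopes of one sensor; F: (q,) measured impact forces in N.
    Returns scaled copies emulating impacts of different magnitudes."""
    Z_new = np.hstack([s * Z for s in scales])        # scaled response envelopes
    F_new = np.concatenate([s * F for s in scales])   # corresponding force magnitudes
    return Z_new, F_new
```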
Figure 3 presents a scheme of the procedure followed to build the training, setting-up, and testing databases. In the three cases, the information related to the impact locations and magnitudes is stored in observation vectors, and the time responses of the piezoelectric sensors go into the PCA process described in Section 2.2. First, for the training data, the PCA is performed using $n_i = p$ and the cumulative percentage variance as a function of the number of PCs is computed. The number of PCs is selected to ensure a cumulative percentage variance of 99.99%. Once $n_i$ has been selected, the matrix $\mathbf{P}_i$ is constructed. Using $\mathbf{P}_i$, the feature matrices for the training, setting-up, and testing databases are computed according to Equation (4), from which the feature vectors are extracted (columns of the feature matrices).
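A sketch of this selection rule (an assumed implementation, consistent with the covariance of Equation (3)) is:

```python
import numpy as np

def select_n_pc(Z: np.ndarray, target: float = 99.99) -> int:
    """Smallest number of PCs whose cumulative percentage variance reaches the target."""
    C = Z @ Z.T / (Z.shape[1] - 1)                      # covariance-like matrix, Equation (3)
    eigval = np.sort(np.linalg.eigvalsh(C))[::-1]       # variances in descending order
    cum_var = 100.0 * np.cumsum(eigval) / np.sum(eigval)
    return int(np.searchsorted(cum_var, target) + 1)    # first count reaching the target
```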

2.4.2. Selection of Parameters

The only parameter that needs to be tuned is the number of neighbors that contribute to the solution, $k$, which is selected to optimize the performance of the algorithm. To quantify this performance, the following error functions are defined:
$$E_x = \frac{1}{n_t} \sum_{j=1}^{n_t} \left| \hat{Y}_{1j} - Y_{1j} \right|, \qquad (14)$$

$$E_y = \frac{1}{n_t} \sum_{j=1}^{n_t} \left| \hat{Y}_{2j} - Y_{2j} \right|, \qquad (15)$$

$$E_F = \frac{1}{n_t} \sum_{j=1}^{n_t} \frac{\left| \hat{Y}_{3j} - Y_{3j} \right|}{Y_{3j}} \times 100, \qquad (16)$$

$$E_A = \frac{E_x \times E_y}{A} \times 100 = \frac{\sum_{j=1}^{n_t} \left| \hat{Y}_{1j} - Y_{1j} \right| \, \sum_{j=1}^{n_t} \left| \hat{Y}_{2j} - Y_{2j} \right|}{n_t^2 \, A} \times 100, \qquad (17)$$
where $n_t$ is the number of elements in the database; $A$ is the area of the plate; $E_x$ and $E_y$ are the mean errors in the estimation of the force location in the $x$ and $y$ coordinates; $E_F$ is the percentage error in the estimation of the force magnitude; and $E_A$ is the percentage area localization error. The normalized impact identification error is defined as,
$$E_I = \frac{E_A}{E_{A0}} \times \frac{E_F}{E_{F0}}, \qquad (18)$$
where $E_{A0}$ and $E_{F0}$ are the area and force errors obtained for the initial value of $k$.
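A minimal sketch of these error functions (names are illustrative) is:

```python
import numpy as np

def identification_errors(Y_hat: np.ndarray, Y: np.ndarray, A: float):
    """Y_hat, Y: (n_t, 3) arrays of [x (mm), y (mm), force (N)]; A: plate area in mm^2.
    Returns (E_x, E_y, E_F, E_A) as in Equations (14)-(17)."""
    Ex = np.mean(np.abs(Y_hat[:, 0] - Y[:, 0]))                      # mean x error, mm
    Ey = np.mean(np.abs(Y_hat[:, 1] - Y[:, 1]))                      # mean y error, mm
    EF = 100.0 * np.mean(np.abs(Y_hat[:, 2] - Y[:, 2]) / Y[:, 2])    # force error, %
    EA = 100.0 * Ex * Ey / A                                         # area error, %
    return Ex, Ey, EF, EA
```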
Figure 4a illustrates the procedure to select the optimum value of $k$. The impact identification algorithm is evaluated for different values $k = k_1, k_2, \ldots, k_{n_k}$, and the parameter $k^*$ that provides the lowest normalized impact identification error is selected.

2.4.3. Evaluation of the Algorithm

The last step is to evaluate the performance of the algorithm using the testing database. Figure 4b presents the procedure followed to identify the impacts, which consists of the following steps (a code sketch of this loop is given after the list):
  1. Extract a feature vector from the testing database.
  2. Select the parameter $\beta_i$ in Equation (10) so that $k^*$ neighbors contribute to the solution.
  3. Solve the system of nonlinear equations presented in Equation (13).
  4. Compute the weight functions using Equation (11).
  5. Read the observation vectors in the database and estimate the experimental impact using Equation (5).
  6. Compute the area and force errors using Equations (16) and (17).
  7. Repeat steps 1 to 6 for all the feature vectors in the testing database.
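A sketch of this evaluation loop, reusing the functions defined above, is given below; the helper `beta_for_k_neighbors`, which tunes $\beta_i$ so that $k^*$ neighbors receive non-negligible weight, is hypothetical.

```python
import numpy as np

def evaluate(X_test, Y_test, X_db, Y_db, k_star, A):
    """Steps 1-7 above: estimate every test impact and return the error measures."""
    Y_hat = np.zeros_like(Y_test)
    for j, X in enumerate(X_test):                     # step 1: each test feature vector
        beta = beta_for_k_neighbors(X, X_db, k_star)   # step 2: hypothetical helper for beta_i
        w = lme_weights(X, X_db, beta)                 # steps 3-4: dual solve and weights
        Y_hat[j] = w @ Y_db                            # step 5: Equation (5) with k = N
    return identification_errors(Y_hat, Y_test, A)     # step 6: errors of Equations (14)-(17)
```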

3. Experimental Applications

3.1. Aluminum Plate

Figure 5 presents the experimental setup, which corresponds to an aluminum plate with dimensions 490 mm × 390 mm × 2.5 mm that is simply supported by four screws. The plate is excited by an instrumented impact hammer and the response is captured by four piezoelectric discs bonded to the surface. Table 1 summarizes the specifications of the experimental equipment.
Data from the four sensors and the impact hammer are recorded with a sampling rate of 24 kHz. The hammer is used as the trigger, and 500 data points before the impact and 3000 data points after the impact are recorded. The signal envelope is computed through the Hilbert transform, and no further post-processing of the data is performed before PCA.
Two training sets, consisting of uniform grids of 117 and 61 points, are evaluated, as shown in Figure 6a,b, respectively. The setting-up and testing sets consist of 20 and 35 random impacts distributed over the plate, as shown in Figure 6c,d, respectively.
Figure 7a presents the cumulative percentage variance as a function of the number of PCs, $n_i$, for training database 1. With $n_i = 110$, all sensors have a cumulative percentage variance of 99.99%. Figure 7b shows the normalized impact identification error as a function of $k$ when $n_i = 110$; the minimum error is obtained for $k = 120$. Figure 8 presents the impact identification results using these parameters. The area error, $E_A$, is 0.009% and the percentage force error, $E_F$, is 5.84%.
The setting-up results for the second training database are presented in Figure 9. The cumulative percentage variance of 99.99% is reached with $n_i = 61$ and the minimum normalized impact identification error is obtained with $k = 80$. Figure 10 presents the impact identification results using these parameters. The area error, $E_A$, is 0.028% and the percentage force error, $E_F$, is 6.53%.

3.2. Aluminum Sandwich Panel

The structure consists of a sandwich panel of 700 mm × 400 mm × 24 mm made of an aluminum core bonded to two aluminum skins. The core consists of triangular stiffeners that run through the panel in the two principal directions. The thickness of the skins is 2 mm and the thickness of the stiffeners is 1 mm. The skins are bonded to the stiffeners with an epoxy adhesive cured in a vacuum bagging system. Figure 11a,b show the internal structure and the assembled panel, respectively.
Figure 12 presents the experimental setup. The panel, which is clamped on two edges, is excited by an instrumented impact hammer and the response is captured by six piezoelectric discs bonded to the surface. The specifications of the experimental equipment are presented in Table 1.
Data from the six sensors and the impact hammer are recorded with a sampling rate of 24 kHz. The hammer is used as the trigger, and 500 data points before the impact and 3000 data points after the impact are recorded. The signal envelope is computed through the Hilbert transform, and no further post-processing of the data is performed before PCA.
The training set consists of a uniform grid of 91 points, as shown in Figure 13a. The setting-up and testing sets consist of 20 and 40 random impacts distributed over the panel, as shown in Figure 13b,c, respectively.
Figure 14a presents the cumulative percentage variance as a function of the number of PCs, $n_i$. With $n_i = 84$, all sensors have a cumulative percentage variance of 99.99%. Figure 14b shows the normalized impact identification error as a function of $k$ when $n_i = 84$; the minimum error is obtained for $k = 190$. Figure 15 presents the impact identification results with this combination of parameters. The area error, $E_A$, is 0.031% and the percentage force error, $E_F$, is 12.39%.
The experimental data and impact identification algorithm for this application case are available for download at [32].

4. Discussion

Table 2 compares the results obtained for the aluminum plate with those of other algorithms available in the literature. These algorithms are evaluated using the same structure and boundary conditions, i.e., an aluminum plate simply supported by four screws. The results demonstrate that the proposed methodology achieves much better precision than these algorithms.
Furthermore, Table 3 summarizes the testing results for the two experimental cases. In the case of the aluminum plate, the first training database is used. The results are compared with those of the algorithm presented by Sanchez et al. [21], which uses LME with features extracted from the time signal, such as the time of arrival, the signal amplitude, and information related to the first peak. Therefore, the only difference lies in the features fed to the LME algorithm. Both algorithms, LME and LME + PCA, have been tested with the same experimental data. The results show that when PCA is used, the precision of the impact identification method improves considerably.
It can also be observed that the results for the aluminum plate are better than those for the sandwich panel. This is expected, since the stiffeners in the sandwich panel act as obstacles to the propagation of waves, generating reflections that hinder the impact identification procedure.

5. Conclusions

This article presented a new impact identification algorithm that uses principal component analysis (PCA) and linear approximation with maximum entropy (LME). The performance of the proposed methodology was validated by considering two experimental cases, an aluminum plate and an aluminum sandwich panel. Time-varying strain data were measured using piezoceramic sensors bonded to the structures.
To demonstrate the potential of the proposed approach over existing ones, the results were compared with those of other impact identification algorithms available in the literature; much better performance is achieved with the proposed algorithm. The comparison of LME using classical time-domain features versus features obtained with PCA shows that the PCA features greatly improve the results.
The second experimental case is representative of a stiffened sandwich panel such as those used, for example, in aeronautical applications. The results show that, with a low number of sensors, it is possible to accurately locate and quantify impacts on realistic structures. Nevertheless, it is still necessary to study the performance on structures with more complex geometries.

Acknowledgments

The authors acknowledge the partial financial support of the Chilean National Fund for Scientific and Technological Development (Fondecyt) under Grant No. 1160494.

Author Contributions

Viviana Meruane and Pablo Véliz designed the experiment, processed and analyzed the experimental data, and tested the impact identification algorithm. Alejandro Ortiz-Bernardin programmed the max-ent linear approximation algorithm and adapted it to the application case. Enrique López Droguett helped with the implementation of the principal component analysis. The article was written by Viviana Meruane, Alejandro Ortiz-Bernardin and Enrique López Droguett.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vinson, J.R. Sandwich structures: Past, present, and future. In Sandwich Structures 7: Advancing with Sandwich Structures and Materials; Springer: New York, NY, USA, 2005. [Google Scholar]
  2. Seydel, R.; Chang, F.K. Impact identification of stiffened composite panels: I. System development. Smart Mater. Struct. 2001, 10, 354–369. [Google Scholar] [CrossRef]
  3. Seydel, R.; Chang, F.K. Impact identification of stiffened composite panels: II. Implementation studies. Smart Mater. Struct. 2001, 10, 370–379. [Google Scholar] [CrossRef]
  4. Worden, K.; Staszewski, W. Impact location and quantification on a composite panel using neural networks and a genetic algorithm. Strain 2000, 36, 61–68. [Google Scholar] [CrossRef]
  5. Staszewski, W.; Worden, K.; Wardle, R.; Tomlinson, G. Fail-Safe sensor distributions for impact detection in composite materials. Smart Mater. Struct. 2000, 9, 298–303. [Google Scholar] [CrossRef]
  6. Haywood, J.; Coverley, P.; Staszewski, W.; Worden, K. An automatic impact monitor for a composite panel employing smart sensor technology. Smart Mater. Struct. 2005, 14, 265–271. [Google Scholar] [CrossRef]
  7. LeClerc, J.; Worden, K.; Staszewski, W.; Haywood, J. Impact detection in an aircraft composite panel—A neural-network approach. J. Sound Vib. 2007, 299, 672–682. [Google Scholar] [CrossRef]
  8. Sharif-Khodaei, Z.; Ghajari, M.; Aliabadi, M. Determination of impact location on composite stiffened panels. Smart Mater. Struct. 2012, 21, 105026. [Google Scholar] [CrossRef]
  9. Gestel, T.V.; Suykens, J.; Baesens, B.; Viaene, S.; Vanthienen, J.; Dedene, G.; Moor, B.D.; Vandewalle, J. Benchmarking least squares support vector machine classifiers. Mach. Learn. 2004, 54, 5–32. [Google Scholar] [CrossRef]
  10. Xu, Q. Impact detection and location for a plate structure using least squares support vector machines. Struct. Health Monit. 2014, 31, 5–18. [Google Scholar] [CrossRef]
  11. Fu, H.; Xu, Q. Locating impact on structural plate using principal component analysis and support vector machines. Math. Probl. Eng. 2013, 2013, 352149. [Google Scholar] [CrossRef]
  12. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: A new learning scheme of feedforward neural networks. In Proceedings of the IEEE International Joint Conference on Neural Networks, Budapest, Hungary, 25–29 July 2004; Volume 2, pp. 985–990. [Google Scholar]
  13. Xu, Q. A Comparison study of extreme learning machine and least squares support vector machine for structural impact localization. Math. Probl. Eng. 2014, 2014, 906732. [Google Scholar] [CrossRef]
  14. Fu, H.; Vong, C.M.; Wong, P.K.; Yang, Z. Fast detection of impact location using kernel extreme learning machine. Neural Comput. Appl. 2016, 27, 121–130. [Google Scholar] [CrossRef]
  15. Ing, R.K.; Quieffin, N.; Catheline, S.; Fink, M. In solid localization of finger impacts using acoustic time-reversal process. Appl. Phys. Lett. 2005, 87, 204104. [Google Scholar] [CrossRef]
  16. Ribay, G.; Catheline, S.; Clorennec, D.; Ing, R.K.; Quieffin, N.; Fink, M. Acoustic impact localization in plates: Properties and stability to temperature variation. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2007, 54, 378–385. [Google Scholar] [CrossRef] [PubMed]
  17. Park, B.; Sohn, H.; Olson, S.E.; DeSimio, M.P.; Brown, K.S.; Derriso, M.M. Impact localization in complex structures using laser-based time reversal. Struct. Health Monit. 2012, 11, 577–588. [Google Scholar] [CrossRef]
  18. Ciampa, F.; Meo, M. Impact detection in anisotropic materials using a time reversal approach. Struct. Health Monit. 2012, 11, 43–49. [Google Scholar] [CrossRef]
  19. Meruane, V.; Ortiz-Bernardin, A. Structural damage assessment using linear approximation with maximum entropy and transmissibility data. Mech. Syst. Signal Process. 2015, 54, 210–223. [Google Scholar] [CrossRef]
  20. Meruane, V.; Fierro, V.D.; Ortiz-Bernardin, A. A Maximum Entropy Approach to Assess Debonding in Honeycomb Aluminum Plates. Entropy 2014, 16, 2869–2889. [Google Scholar] [CrossRef]
  21. Sanchez, N.; Meruane, V.; Ortiz-Bernardin, A. A novel impact identification algorithm based on a linear approximation with maximum entropy. Smart Mater. Struct. 2016, 25, 095050. [Google Scholar] [CrossRef]
  22. Ulrich, T. Envelope Calculation From the Hilbert Transform; Technical Report; Los Alamos National Laboratory: Los Alamos, NM, USA, 2006. [Google Scholar]
  23. Devroye, L.; Gyorfi, L.; Krzyzak, A.; Lugosi, G. On the strong universal consistency of nearest neighbor regression function estimates. Ann. Stat. 1994, 22, 1371–1385. [Google Scholar] [CrossRef]
  24. Rovatti, R.; Borgatti, M.; Guerrieri, R. A geometric approach to maximum-speed n-dimensional continuous linear interpolation in rectangular grids. IEEE Trans. Comput. 1998, 47, 894–899. [Google Scholar] [CrossRef]
  25. Jaynes, E. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  26. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  27. Sukumar, N.; Wright, R. Overview and construction of meshfree basis functions: From moving least squares to entropy approximants. Int. J. Numer. Methods Eng. 2007, 70, 181–205. [Google Scholar] [CrossRef]
  28. Kullback, S. Information Theory and Statistics; Wiley: New York, NY, USA, 1959. [Google Scholar]
  29. Shore, J.; Johnson, R. Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Trans. Inf. Theory 1980, 26, 26–36. [Google Scholar] [CrossRef]
  30. Arroyo, M.; Ortiz, M. Local maximum-entropy approximation schemes: A seamless bridge between finite elements and meshfree methods. Int. J. Numer. Methods Eng. 2006, 65, 2167–2202. [Google Scholar] [CrossRef]
  31. Gupta, M.R.; Gray, R.M.; Olshen, R.A. Nonparametric supervised learning by linear interpolation with maximum entropy. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 766–781. [Google Scholar] [CrossRef] [PubMed]
  32. Meruane, V. Source Code for Impact Location and Quantification on an Aluminum Sandwich Panel Using PCA and LME. Available online: http://viviana.meruane.com/des_en.htm (accessed on 21 March 2017).
Figure 1. Example of an experimental setup for impact identification.
Figure 2. Example of a strain–time signal along with its envelope. (a) Strain–time signal; (b) Signal envelope.
Figure 3. Construction of the training, setting up and testing databases. (a) Training database; (b) Setting up/testing database. PC: principal components; PCA: principal component analysis.
Figure 4. Schemes of the procedure to select the optimum value of $k$ and to identify an impact. (a) Selection of $k^*$; (b) Impact identification.
Figure 5. Experimental setup for the aluminum plate.
Figure 6. Location of the experimental impacts applied to the aluminum plate. (a) Training 1; (b) Training 2; (c) Setting up; (d) Testing.
Figure 7. Cumulative percentage variance as a function of $n_i$ and normalized impact identification error as a function of $k$ for the first training database. (a) Cumulative percentage variance; (b) Normalized impact identification error.
Figure 8. Testing results for the aluminum plate using the linear approximation with maximum entropy (LME) + PCA algorithm and training database 1. (a) Force amplitude; (b) X coordinate; (c) Y coordinate; (d) Localization.
Figure 9. Cumulative percentage variance as a function of $n_i$ and normalized impact identification error as a function of $k$ for the second training database. (a) Cumulative percentage variance; (b) Normalized impact identification error.
Figure 10. Testing results for the aluminum plate using the LME+PCA algorithm and training database 2. (a) Force amplitude; (b) X coordinate; (c) Y coordinate; (d) Localization.
Figure 11. Aluminum sandwich panel. (a) Internal structure; (b) Panel.
Figure 12. Experimental setup for the aluminum sandwich panel.
Figure 13. Location of the experimental impacts applied to the aluminum sandwich panel. (a) Training; (b) Setting up; (c) Testing.
Figure 14. Cumulative percentage variance as a function of n i and normalized impact identification error as a function of k. (a) Cumulative percentage variance; (b) Normalized impact identification error.
Figure 15. Testing results for the aluminum sandwich panel using the LME + PCA algorithm. (a) Force amplitude; (b) X coordinate; (c) Y coordinate; (d) Localization.
Table 1. Specifications of the experimental equipment.

Piezoelectric Discs
  Model: 7BB-20-6L0
  Resonant frequency: 6 kHz
  Disc size: 20 mm
  Thickness: 0.42 mm

Impact Hammer
  Model: LC-01A
  Sensitivity: 4 mV/N
  Max. shock force: 2 kN
  Tip material: Nylon
  Force transducer: CL-YD-303

Data Acquisition System
  Model: ECON MI-7016
  Resolution: 24 bit
  Channels: 16
  Max. sampling rate: 96 kHz
Table 2. Comparison between impact identification algorithms available in the literature and the current work. SVM: support vector machine; LS-SVM: least squares SVM; ELM: extreme learning machine.

Reference | Algorithm | Plate Size (mm²) | Number of Sensors | Number of Training Impact Points | Area Error (%) | Force Error (%)
Xu [10] | LS-SVM | 490 × 390 | 4 | 63 | 1.06 | 51.2
Fu and Xu [11] | PCA + SVM | 490 × 390 | 4 | 63 | 0.13 | -
Xu [13] | Kernel-ELM | 490 × 390 | 4 | 63 | 0.74 | -
Fu et al. [14] | PCA + Kernel-ELM | 490 × 390 | 4 | 63 | 0.24 | -
Current work | LME + PCA | 490 × 390 | 4 | 61 | 0.028 | 6.53
Table 3. Performance of the impact identification algorithm.

Experimental Case | Error Measure | LME | LME + PCA
Aluminum plate | Area Error (%) | 0.12 | 0.009
Aluminum plate | Force Error (%) | 7.18 | 5.84
Sandwich panel | Area Error (%) | 0.15 | 0.031
Sandwich panel | Force Error (%) | 27.42 | 12.39
